Tutorial #23: Taming the beast (Dell XPS 7590 Core i9)


One of the significant purchases I made over the past 6 months is the Dell XPS 7590 with Intel's Coffee Lake Core i9-9980HK, Samsung's 32 GB DDR4-2667 RAM, Toshiba's 1 TB SSD, Nvidia's GTX 1650 and the crème de la crème that is the 4K OLED panel made by Samsung. But before you get any ideas, this is not a device that I would have otherwise purchased but for the fact that I found a single piece listed on Amazon 2 days prior to the device's official launch at a price lower than that of the 2019 Acer Helios 300. The risk was worth it as it came sealed with valid 12-month Premium Plus support from Dell. There are instances in one's life where one doesn't mind getting lucky and this was certainly one of those.

Normally, I would be prompt in reviewing devices within the first few weeks of purchase. However, in this case I think I am too biased towards the device to put up a worthwhile review. Hence, I thought it better to post a tutorial that would be of some assistance to fellow users. One thing I am certain of is that the hardware has outgrown the XPS chassis design over the years, and the Core i9 pushes things a bit too far in terms of what the chassis can handle thermally. Hence, I went on an optimisation quest with the intention of lowering the temperatures and increasing the overall efficiency of the device. I will own up to the fact that I don't intend to use the device on battery unless I am forced to, but for that eventuality I decided to find a compromise that would at least provide stock performance at lower battery consumption, as against higher performance when operating directly on AC.

The tool of choice in this case for the CPU is Throttlestop, which offers significantly more tweaking potential than Intel's Extreme Tuning Utility. As for the GPU, the mainstream tool to use is MSI Afterburner. However, in the case of this GPU, I found that the temperature limit setting on MSI AB was locked for some reason even after unlocking all the advanced options, and the Auto Overclocker resulted in far too frequent game crashes. Hence, I instead went ahead with Asus GPU Tweak II, which allowed the GPU temperature target to be set upfront. By default, this is set to 75 Celsius and I instead bumped it to the stock value for the GTX 1650, which is 87 Celsius. However, the idea in general is to still not exceed 75 Celsius during the most strenuous tasks while providing the headroom to exceed that if needed.

With this background, in the interest of time, I have decided to simply post screenshots of the various screens from the tools, since further elaboration on each parameter can be found on their respective forums. In the case of the GPU, I eventually stuck with simply pushing up the clocks by 10%, as undervolting using the frequency curve resulted in far too many instability issues. Is this the most optimal setting possible? Most probably not. However, I believe this is the best setting I could identify through trial and error, as attested by the 88 unexpected reboots on record. I could certainly push the clocks and voltages quite a bit more but in general it led to instability and I am certainly no fan of BSODs. Another point to note is that while Asus GPU Tweak II can be set to start on reboot, Throttlestop requires additional effort in setting up the Task Scheduler, which is what I have indicated below.

Starting Throttlestop on Login:
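The screenshots cover the Task Scheduler GUI route; for reference, the same logon task can also be created from an elevated command prompt. This is only a sketch: the install path below is an assumption, so adjust it to wherever your ThrottleStop.exe actually lives.

```shell
:: Create a Task Scheduler entry that launches Throttlestop at logon
:: with highest privileges (the path is an assumed install location).
schtasks /Create /TN "ThrottleStop" ^
  /TR "\"C:\Program Files (x86)\ThrottleStop\ThrottleStop.exe\"" ^
  /SC ONLOGON /RL HIGHEST /F
```

Running the task with highest privileges matters, as Throttlestop needs administrator rights to apply its settings silently at login.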

Throttlestop settings for AC profile:

Throttlestop settings for Battery profile:

Now to focus on the fruits of the labour, or the proof of the pudding so to say. I am not a fan of benchmarks in general but in this case I needed something to comparatively measure the impact of the changes, and a few basic benchmarks provide the easiest reference. Note that I ran all the benchmarks with only the discrete GPU enabled along with the overclock settings, so this represents the worst possible scenario in terms of thermals.

UserBenchmark:
This might not be the first benchmark utility that springs to mind but for the fact that it allows comparative analysis of similar hardware components and is of considerably short duration. In this case, the CPU came up at the 97th percentile and the GPU at the 100th percentile which, considering that it is mostly going up against much bulkier gaming laptops with much better thermals, is noteworthy. Overall, the CPU efficiency is excellent with the tweaks providing higher performance at lower power. The discrete GPU however doesn't scale up in terms of efficiency and while it is possible to get more performance out of it, it comes at a significant cost in terms of power and heat.

Cinebench:
Cinebench really pushes the CPU and is thus a good test of its ultimate performance. A sequence of 2 consecutive runs also pushes the CPU to its thermal limits. Not surprisingly then, the 1st run score of 3684 is more than 20% better than stock and even the 2nd consecutive run scores better than the stock settings with lower average temperatures.
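As a quick sanity check of the claimed gain, only the tweaked score of 3684 is a measured figure from this post; the stock score itself isn't listed, but the "more than 20% better" claim pins down its ceiling:

```python
# Back-of-the-envelope check: a >20% gain over stock means the
# stock score must sit below tweaked / 1.20.
tweaked = 3684
implied_stock_ceiling = tweaked / 1.20
print(f"Implied stock score: below {implied_stock_ceiling:.0f}")  # below 3070
```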

Heaven:
This benchmark was run at the Extreme preset. As I have already mentioned, pushing the GPU doesn't really yield huge benefits in this constrained form factor as any performance benefits come with equally higher power consumption and heat generation. However, as can be seen in the results, a 3% performance boost in Heaven comes with lower CPU temperatures and the GPU power consumption is lower even though it hasn't been undervolted. So, a win-win overall.

Lastly, how do these modifications fare with a modern game? I happen to have Hitman 2 installed at present, so I thought I'd give it a go with the in-built benchmarks, which I frankly didn't find to be entirely consistent across different runs. But I believe they should at least give an idea of what the laptop is now capable of, even though it is not meant to be a gaming laptop.
I set all the details to the maximum possible apart from lowering 'Level of Detail', 'SSAO' and 'Shadow Quality' a notch to 'High', besides turning 'Motion Blur' off. The Mumbai benchmark produced a score of 70.95 FPS with the CPU averaging 79°C and the GPU 70°C. The more demanding Miami benchmark churned out 54.04 FPS with CPU/GPU temperatures averaging 78°C/69°C respectively. A more than serviceable gaming machine, if I may say so.

Musing #60: PC Overclocking



Having grown up through the megahertz and subsequently the gigahertz wars, I can only say that speed matters. Over the years, I fought to get the last ounce of performance out of the system that was "machinely" possible. This was the case until Sandy Bridge arrived. On one hand, it offered the most value for money in an eternity and on the other, it set a trend where overclocking meant buying into the most expensive processors and motherboards.

Hence, it was a practical decision at the time to go with the i5-3470, a processor with a locked multiplier, along with an H77 chipset motherboard that was not meant to assist overclocking. It still offered the option to run all the cores at the turbo frequency of 3.6 GHz instead of the base frequency of 3.2 GHz and that is how it ran for nearly 6 years. It met every requirement I had of the system and a bit more, so I was never concerned about upgrading.

However, as is always the case, my hand was forced, like it was in the past when I upgraded to the GTX 1060. Only this time, I had no intention of upgrading the trio of processor, motherboard and RAM considering the inflated memory prices as well as with AMD's Zen 2 and Intel's 10nm processors around the corner. For the first time, I was left in a rather peculiar situation where I needed to change a component for a platform that has been discontinued for years.

Luckily, there is always the web that one can turn to. Scouring the tech forums for a desired motherboard is akin to playing the lottery and sure enough I had no such luck. Then, I decided to go with one of the B75 chipset motherboards that were still mysteriously available on Amazon, only to discover that they were OEM boards with a locked BIOS and lacking compatibility with my RAM. So, after I made the most of Amazon's gracious return policy, I decided on the final resort and went ahead with the purchase of a used motherboard, admittedly with my fingers crossed, on AliExpress.

The shipment had its fair share of drama over a period of 3 weeks but finally made its way through and was surprisingly well packaged. The absence of dust was a welcome sight, though the rusted socket screws immediately gave away the fact that the board was used. All things considered, the motherboard was in good condition and thankfully the mounting bracket was included.


The board, an Asus P8Z77-V LX, opened up CPU overclocking opportunities for the first time in ages, albeit limited ones on account of my existing hardware. Overclocking can't be thought of in isolation as due consideration needs to be given to heat. Intel's stock cooler is anything but the perfect foil for overclocking and hence I had to first stock up (pun intended) on an after-market cooler. For this, I again first turned to the used market and amazingly found an open-box Deepcool Gammaxx 300 for INR 1200 ($17) as opposed to a new unit price of INR 2000 ($29). It isn't something on any ardent overclocker's wishlist but it gets the job done with its 3 heat pipes and a ginormous 120 mm fan.


To capture the difference that even a budget after-market cooler can make, I ran the stock cooler back-to-back with the Gammaxx 300 on the exposed motherboard. To check the stress temperatures, I simply bumped up the CPU multiplier over the default settings. Even in this setup, the Gammaxx 300 lowered the temperatures by over 20 degrees when under load while also ensuring a much lower idle temperature.


The bigger test however is ensuring lower temperatures in a constrained environment. In that sense, my cabinet (a generic old one at that) is not located in the most optimal position due to cabling constraints. Hence, I was expecting the temperatures to be much worse than they actually turned out to be. It also indicates that using the stock cooler was not even an option, unless you are looking for fried eggs and expensive paperweights.


Being out of the overclocking game for so long, I read up on the motherboard's features while the board was still in transit to fathom some of the newer terms and pretty much decided on a list of settings I would go about changing in my pursuit of performance with the lowest power consumption and heat generation. Thankfully, up until Ivy Bridge, Intel provided limited unlocked multipliers 4 bins above the maximum turbo frequency. This meant that my i5-3470 with a base multiplier of 32 and a turbo multiplier of 36 was capable of being run at a multiplier of 40. This doesn't imply that all 4 cores can simultaneously hit the 4 GHz mark as that is limited to 3.8 GHz by design. However, it means that the processor can certainly hit the magical 4G mark when only one or two of the cores are loaded. I suppose there is some satisfaction in finally getting an old horse to learn new tricks.
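The multiplier arithmetic above can be sketched out in a few lines, using the i5-3470's 100 MHz base clock and the figures already mentioned:

```python
# Frequencies implied by the i5-3470's multiplier limits:
# base 32, turbo 36, plus Intel's 4 "limited unlocked" bins.
BCLK_MHZ = 100
base_mult, turbo_mult, overclock_bins = 32, 36, 4

max_mult = turbo_mult + overclock_bins  # 36 + 4 = 40
print(f"Base clock:        {base_mult * BCLK_MHZ / 1000:.1f} GHz")   # 3.2 GHz
print(f"Stock max turbo:   {turbo_mult * BCLK_MHZ / 1000:.1f} GHz")  # 3.6 GHz
print(f"1-2 core OC limit: {max_mult * BCLK_MHZ / 1000:.1f} GHz")    # 4.0 GHz
# With all four cores loaded, the chip is capped two bins lower by design:
print(f"All-core OC limit: {(max_mult - 2) * BCLK_MHZ / 1000:.1f} GHz")  # 3.8 GHz
```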


Setting the multiplier at its maximum is easy and can even be done using the Auto or XMP overclock option. The difficult part is controlling the temperatures while also finding the limits of the RAM. To that end, I found the Load-Line Calibration to be an indispensable tool in tightening up the voltages and thereby lowering the offset. After much trial and error, I was able to set a stable CPU offset of -0.045V with the high (50%) LLC option which lowered the temperatures by a few more degrees and ensured next to no vDroop.
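To illustrate why LLC helps with the undervolt, here is a purely illustrative model of vDroop: under load the voltage sags by roughly current times load-line resistance, and LLC compensates a fraction of that sag. All the numbers below are assumptions for the sake of the example, not measurements from my board:

```python
# Illustrative vDroop model: loaded voltage = set voltage minus the
# uncompensated portion of (current * load-line resistance).
def loaded_voltage(v_set, current_a, r_loadline_ohm, llc_compensation):
    droop = current_a * r_loadline_ohm * (1 - llc_compensation)
    return v_set - droop

V_SET = 1.10   # volts, assumed
I_LOAD = 60    # amps, assumed
R_LL = 0.002   # ohms, assumed load-line resistance

print(f"No LLC (0%):    {loaded_voltage(V_SET, I_LOAD, R_LL, 0.0):.3f} V")  # 0.980 V
print(f"High LLC (50%): {loaded_voltage(V_SET, I_LOAD, R_LL, 0.5):.3f} V")  # 1.040 V
```

The less the voltage sags under load, the less idle-voltage headroom is wasted, which is what allows a tighter negative offset to remain stable.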

Running four sticks of RAM from different manufacturers in dual channel is always a tricky proposition, even when the timings are the same. I had my initial CAS 9, DDR3-1600, 2 x 4 GB Corsair Vengeance kit teamed up with a similar G.Skill RipjawsX set from 4 years later. This meant the job of overclocking the RAM was anything but easy and involved numerous failed boots. Eventually, I was able to get them to run stably at 1800 MHz, CAS 10 with only a minor bump in voltage to 1.53V. However, the positive impact on memory performance was not insignificant.
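The trade of CAS 9 for CAS 10 is less of a regression than it looks, since absolute latency depends on the transfer rate too. A quick sketch using the standard formula (CL × 2000 / rate in MT/s):

```python
# True (absolute) CAS latency in nanoseconds for DDR memory.
def cas_latency_ns(cl, rate_mts):
    return cl * 2000 / rate_mts

before = cas_latency_ns(9, 1600)    # DDR3-1600 CL9
after = cas_latency_ns(10, 1800)    # DDR3-1800 CL10
print(f"DDR3-1600 CL9:  {before:.2f} ns")   # 11.25 ns
print(f"DDR3-1800 CL10: {after:.2f} ns")    # 11.11 ns
print(f"Bandwidth gain: {(1800 / 1600 - 1) * 100:.1f}%")  # 12.5%
```

In other words, the absolute latency holds steady (in fact marginally improves) while the bandwidth goes up by 12.5%.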

I suppose it makes sense to go all-in when you have entered the game. Hence, I decided to overclock my GPU as well. For over 2 years, I never overclocked the Zotac GTX 1060 Mini, it being a single-fan device. Size can be misleading though, and the added CPU cooler certainly aids the overall air flow. It didn't take me long to figure out that the memory wasn't going to be up to the task, which is understandable considering it is not protected by a heat sink. In the end, I conservatively increased the memory clock by 100 MHz and the core clock by 200 MHz without touching the voltage.

A final tool available for pushing the clocks even further is the base clock. Unfortunately, after setting up the overclock for all the other components, I found that increasing the base clock to even 101 MHz caused significant instability. Increasing the CPU and RAM voltages brought some modicum of stability but inexplicably reduced performance across all benchmarks while simultaneously raising the temperatures. Thus, there was no use pursuing this path any further.

The performance comparison of the overclocked system with the default one certainly provides some satisfaction. The XMP overclock is set to use the maximum CPU multiplier of 40 but it was unable to run the RAM at 1800 MHz at the same time. Going by the considerably higher temperatures, it is obvious that the XMP overclock pushes the voltages a lot higher. The only upside here is that it is capable of running all the cores simultaneously at 4 GHz, which produces a minuscule performance advantage. However, the manual settings are more than a match and come with a significant uptick in memory performance along with much better thermals.


While the uptick in CPU and RAM performance is quite evident looking at the table, the GPU performance is not. As it happens, PCMark doesn't stress the GPU much whereas Hitman seems to be constrained by the CPU. Thus, the need of the hour was a GPU-intensive benchmark, which came in the form of Heaven. As can be seen in the results, the overclock results in an FPS improvement of over 8% compared to the stock speeds. At the same time, it makes sense to set a custom fan curve as it can keep the temperatures down under full load.


To round up the post, no overclock is worth its salt without a stress and torture test. The idle CPU temperature of 27°C is pushed up to 63°C by AIDA64's stress test and then stratospherically to 77°C by Prime95's torture test. However, this is well within the processor's specifications and represents a worst-case scenario that doesn't normally manifest itself in even the most taxing of daily use cases.


To conclude, this entire episode was brought about by an unforeseen failure in ageing hardware and hence the overclocking exercise is strictly incidental, but the thrill of it is as much as anyone would get when setting up a new system.

P.S.: If you followed my earlier post on Meltdown and Spectre, then you'd know it is something I thought of when buying the motherboard. As with the ASRock boards, there was a helpful soul patching the unsupported Asus boards as well. However, when I went about flashing the BIOS, I found it to be incompatible due to the way it was packaged. Thankfully, Microsoft has fully patched Windows to support the latest microcodes from Intel (1F in the case of the i5-3470). It wasn't auto-installed over Windows Update and I had to manually install the KB4100347 patch for Spectre.

Musing #48: Impact of Spectre/Meltdown patch (With Intel's March Microcode Update)


Spectre and Meltdown have been all over the news in the past few days. While the seriousness of the bugs cannot be overstated, the speculation on the performance impact of the patch, especially on older processors, has been particularly worrisome. Google and Intel have put forth some assurances, but the end result is yet to be seen.

As my desktop is equipped with the generations-old i5-3470, I have to brace for whatever performance degradation comes with the patch. Unfortunately, with ASRock having released the last BIOS update for my motherboard in 2013, one can hardly hope to receive an official update. For the time being, the only option is to rely on Microsoft's Windows 10 patch which only partially mitigates the issue.

Even then, it offers a first glimpse at the performance that has to be sacrificed in the name of security. Intel has stated that the impact will vary based on the task and hence there is no easy way to determine the impact of the patch. I went with Cinebench R15 and CrystalDiskMark to quickly capture the impact on some everyday tasks.

As can be seen in the screenshot below, the performance impact seems to be quite significant with the post-patch score being nearly 7% lower. This is by all means a huge impact and cannot be disregarded.

Musing #26: Ryzen to the challenge


I have never owned an AMD device till date, being entrenched in the Intel and Nvidia camps as it were. However, I can't help but root for AMD's Ryzen this time around. It may just be a case of supporting the underdog but no one loves an underdog that doesn't put up a fight. Hence, it was pleasantly surprising to see AMD rise (Ryzen!) from the ashes.

As always, AnandTech has put up the most comprehensive review and is worth a read if you can take in the details. Ars Technica has a more mainstream and comprehensible review. However, whichever review you read, one thing is certain - AMD is back in the CPU game. The Ryzen 7 seems to lag behind Kaby Lake in IPC and hence it doesn't look as good from the gaming perspective, especially for those holding off for Cannonlake. However, it holds its own and even beats Intel in the multi-core, multi-thread game, which contributes immensely to content creation rather than consumption. It is admirable to see a less resourced team come up with such an impressive architecture, considering the debacle that was Bulldozer. A newer micro-architecture also means that a lot of performance is yet to be unearthed through optimization and that bodes well for future iterations.

While absolute performance is lacking at present, AMD's value proposition is performance per buck. There is nothing to say that Intel wouldn't cut prices to compete but it might be more prudent for it to first see the impact that AMD has on the market. As it stands, Ryzen 7 is focussed on the high performance desktop market, which can't be termed vibrant by any metric when compared to its mobile counterpart. The gaming sub-section of this segment too might not be inclined to take the plunge in favour of AMD just yet. Hence, it will most probably be a waiting game for now.

I remember my first processor being the PIII-450 and I was horrified at the pace of development that followed, envious of the fact that AMD broke through the 1 GHz barrier first with the Athlon, within a year of my purchase. Intel's juggernaut meant AMD was relegated to the back stage as we moved further into the new millennium, with even its legacy fading as the moniker AMD64 failed to attain the ubiquity that it should have. It is unlikely that we shall ever see a processor war of such proportions again, but somewhere even Intel's engineers might be rubbing their hands in glee rather than twiddling their thumbs, thanks to Ryzen.