Tutorial #23: Taming the beast (Dell XPS 7590 Core i9)


One of the significant purchases I made over the past 6 months is the Dell XPS 7590 with Intel's Coffee Lake Core i9-9980HK, Samsung's 32 GB DDR4-2667 RAM, Toshiba's 1 TB SSD, Nvidia's GTX 1650 and the crème de la crème that is the 4K OLED panel made by Samsung. But before you get any ideas, this is not a device that I would otherwise have purchased, but for the fact that I found a single unit listed on Amazon 2 days prior to the device's official launch at a price lower than that of the 2019 Acer Helios 300. The risk was worth it as it came sealed with valid 12-month Premium Plus support from Dell. There are instances in one's life where one doesn't mind getting lucky and this was certainly one of those.

Normally, I would be prompt in reviewing devices within the first few weeks of purchase. In this case, however, I think I am too biased towards the device to put up a worthwhile review. Hence, I thought it better to post a tutorial that would be of some assistance to fellow users. One thing that I am certain of is that the hardware has outgrown the XPS chassis design over the years, and the Core i9 pushes things a bit too far in terms of what the chassis can handle thermally. Hence, I went on an optimisation quest with the intention of lowering the temperatures and increasing the overall efficiency of the device. I will own up to the fact that I don't intend to use the device on battery at all unless I am forced to, but for that eventuality I decided to find a compromise: stock performance at lower power consumption on battery, as against higher performance when operating directly on AC.

The tool of choice in this case for the CPU is ThrottleStop, which offers significantly more tweaking potential than Intel's Extreme Tuning Utility. As for the GPU, the mainstream tool to use is MSI Afterburner. However, in the case of this GPU, I found that the temperature limit setting in MSI AB was locked for some reason even after unlocking all the advanced options, and the Auto Overclocker resulted in far too frequent game crashes. Hence, I instead went ahead with Asus GPU Tweak II, which allowed the GPU temperature target to be set upfront. By default, this is set to 75°C and I instead bumped it to the stock value for the GTX 1650, which is 87°C. The idea, however, is still not to exceed 75°C during the most strenuous tasks, but to have the headroom to exceed that if needed.

With this background, in the interest of time, I have decided to simply post screenshots of the various screens from the tools, since further elaboration on each parameter can be found on their respective forums. In the case of the GPU, I eventually stuck with simply pushing up the clocks by 10%, as undervolting via the frequency curve resulted in far too many instability issues. Is this the optimal setting possible? Most probably not. However, I believe it is the best setting I could identify through trial and error, as attested by the 88 unexpected reboots on record. I could certainly push the clocks and voltages quite a bit more, but in general doing so led to instability and I am certainly no fan of BSODs. Another point to note is that while Asus GPU Tweak II can be set to start on reboot, ThrottleStop requires the additional effort of setting up a Task Scheduler entry, which is what I have indicated below.

Starting ThrottleStop on Login:
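For those who prefer the command line over clicking through the Task Scheduler UI, the same logon task can be created with Windows' built-in schtasks utility. Below is a minimal sketch driving it from Python; the install path is a hypothetical one, so adjust it to wherever your ThrottleStop.exe actually lives.

```python
import subprocess

# Hypothetical install path -- adjust to your actual ThrottleStop location.
THROTTLESTOP_EXE = r"C:\Tools\ThrottleStop\ThrottleStop.exe"

# Create a Task Scheduler entry that launches ThrottleStop at logon with
# the highest privileges (needed for MSR access). Run from an elevated prompt.
subprocess.run(
    [
        "schtasks", "/Create",
        "/TN", "ThrottleStop",      # task name
        "/TR", THROTTLESTOP_EXE,    # program to run
        "/SC", "ONLOGON",           # trigger: user logon
        "/RL", "HIGHEST",           # run with highest privileges
        "/F",                       # overwrite if the task already exists
    ],
    check=True,
)
```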

ThrottleStop settings for AC profile:

ThrottleStop settings for Battery profile:

Now to focus on the fruits of the labour, or the proof of the pudding so to speak. I am not a fan of benchmarks in general but in this case, I needed something to comparatively measure the impact of the changes, and a few basic benchmarks provide the easiest reference. Note that I ran all the benchmarks with only the discrete GPU enabled and the overclock settings applied, so they represent the worst-case scenario for thermals.

UserBenchmark:
This might not be the first benchmark utility that springs to mind, but it allows comparative analysis across similar hardware components and is of considerably short duration. In this case, the CPU came in at the 97th percentile and the GPU at the 100th percentile which, considering that it is mostly going up against much bulkier gaming laptops with much better thermals, is noteworthy. Overall, CPU efficiency is excellent, with the tweaks providing higher performance at lower power. The discrete GPU, however, doesn't scale up in terms of efficiency: while it is possible to get more performance out of it, it comes at a significant cost in terms of power and heat.

Cinebench:
Cinebench really pushes the CPU and is thus a good test of its ultimate performance. A sequence of 2 consecutive runs also pushes the CPU to its thermal limits. Not surprisingly then, the 1st run score of 3684 is more than 20% better than stock, and even the 2nd consecutive run scores better than the stock settings, with lower average temperatures.

Heaven:
This benchmark was run at the Extreme preset. As I have already mentioned, pushing the GPU doesn't really yield huge benefits in this constrained form factor, as any performance gains come with equally higher power consumption and heat generation. However, as can be seen in the results, a 3% performance boost in Heaven comes with lower CPU temperatures, and GPU power consumption is lower even though it hasn't been undervolted. So, a win-win overall.

Lastly, how do these modifications fare with a modern game? I happen to have Hitman 2 installed at present, so I thought I'd give it a go with the built-in benchmarks, which I frankly didn't find to be entirely consistent across different runs. But I believe they should at least give an idea of what the laptop is now capable of, even though it is not meant to be a gaming laptop.
I set all the details to the maximum possible, apart from lowering 'Level of Detail', 'SSAO' and 'Shadow Quality' a notch to 'High' and turning 'Motion Blur' off. The Mumbai benchmark produced a score of 70.95 FPS with the CPU averaging 79°C and the GPU 70°C. The more demanding Miami benchmark churned out 54.04 FPS with CPU/GPU temperatures averaging 78°C/69°C respectively. A more than serviceable gaming machine, if I may say so.

Musing #60: PC Overclocking



Having grown up through the megahertz and subsequently the gigahertz wars, I can only say that speed matters. Over the years, I fought to get the last ounce of performance out of the system that was "machinely" possible. This was the case until Sandy Bridge arrived. On one hand, it offered the most value for money in an eternity; on the other, it set a trend where overclocking meant buying into the most expensive processors and motherboards.

Hence, it was a practical decision at the time to go with the i5-3470, a processor with a locked multiplier, along with an H77 chipset motherboard that was not meant to assist overclocking. It still offered the option to run all the cores at the turbo frequency of 3.6 GHz instead of the base frequency of 3.2 GHz, and that is how it ran for nearly 6 years. It met every requirement I had of the system, and a bit more, so I was never concerned about upgrading.

However, as is always the case, my hand was forced, like it was in the past when I upgraded to the GTX 1060. Only this time, I had no intention of upgrading the trio of processor, motherboard and RAM, considering the inflated memory prices as well as AMD's Zen 2 and Intel's 10 nm processors being around the corner. For the first time, I was left in the rather peculiar situation of needing to replace a component for a platform that had been discontinued for years.

Luckily, there is always the web that one can turn to. Scouring the tech forums for a desired motherboard is akin to playing the lottery and, sure enough, I had no luck. Then, I decided to go with one of the B75 chipset motherboards that were still mysteriously available on Amazon, only to discover that they were OEM boards with a locked BIOS and lacking compatibility with my RAM. So, after I made the most of Amazon's gracious return policy, I decided on the final resort and went ahead with the purchase of a used motherboard, admittedly with my fingers crossed, on AliExpress.

The shipment had its fair share of drama over a period of 3 weeks but finally made its way through and was surprisingly well packaged. The absence of dust was a welcome sight, though the rusted socket screws immediately gave away the fact that the board was used. All things considered, the motherboard was in good condition and thankfully the mounting bracket was included.


The board, an Asus P8Z77-V LX, opened up my first CPU overclocking opportunity in ages, albeit a limited one on account of my existing hardware. Overclocking can't be thought of in isolation, as due consideration needs to be given to heat. Intel's stock cooler is anything but the perfect foil for overclocking, and hence I had to first stock up (pun intended) on an after-market cooler. For this, I again first turned to the used market and amazingly found an open-box Deepcool Gammaxx 300 for INR 1200 ($17), as opposed to a new unit price of INR 2000 ($29). It isn't something on any ardent overclocker's wishlist, but it gets the job done with its 3 heat pipes and a ginormous 120 mm fan.


To capture the difference that even a budget after-market cooler can make, I ran the stock cooler back-to-back with the Gammaxx 300 on the exposed motherboard. To check the stress temperatures, I simply bumped up the CPU multiplier over the default settings. Even in this setup, the Gammaxx 300 lowered the temperatures by over 20 degrees when under load while also ensuring a much lower idle temperature.


The bigger test, however, is ensuring lower temperatures in a constrained environment. In that sense, my cabinet (a generic old one at that) is not located in the most optimal position due to cabling constraints. Hence, I was expecting the temperatures to be much worse than they actually turned out to be. It also indicates that using the stock cooler was not even an option, unless you are looking for fried eggs and expensive paperweights.


Being out of the overclocking game for so long, I read up on the motherboard's features while the board was still in transit to fathom some of the newer terms, and pretty much decided on a list of settings I would go about changing in my pursuit of performance with the lowest power consumption and heat generation. Thankfully, up until Ivy Bridge, Intel provided limited unlocked multipliers 4 bins above the maximum turbo frequency. This meant that my i5-3470, with a base multiplier of 32 and a turbo multiplier of 36, was capable of being run at a multiplier of 40. This doesn't imply that all 4 cores can simultaneously hit the 4 GHz mark, as the all-core turbo is limited to 3.8 GHz by design. However, it does mean the CPU can certainly hit the magical 4 GHz mark when one or two of the cores are loaded. I suppose there is some satisfaction in finally teaching an old horse new tricks.
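To put numbers on it, the core frequency is simply the 100 MHz base clock multiplied by the active multiplier; a trivial sketch of the arithmetic, with the per-core limits as described above:

```python
BCLK_MHZ = 100  # Ivy Bridge base clock

# Multiplier limits for the i5-3470: base, stock max turbo, and the
# "+4 bins" partially unlocked ceiling (1-2 cores loaded only;
# the all-core turbo tops out at 38 by design).
for label, multiplier in [
    ("base", 32),
    ("stock max turbo", 36),
    ("+4 bins, 1-2 cores loaded", 40),
    ("+4 bins, all-core limit", 38),
]:
    print(f"{label}: {BCLK_MHZ * multiplier / 1000:.1f} GHz")
```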


Setting the multiplier at its maximum is easy and can even be done using the Auto or XMP overclock option. The difficult part is controlling the temperatures while also finding the limits of the RAM. To that end, I found Load-Line Calibration to be an indispensable tool in tightening up the voltages and thereby allowing a lower offset. After much trial and error, I was able to set a stable CPU voltage offset of -0.045 V with the High (50%) LLC option, which lowered the temperatures by a few more degrees and ensured next to no vDroop.

Running four DIMMs from different manufacturers in dual channel is always a tricky proposition, even when the timings are the same. I had my initial CAS 9, DDR3-1600, 2 x 4 GB Corsair Vengeance kit teamed up with a similar G.Skill RipjawsX set from 4 years later. This meant the job of overclocking the RAM was anything but easy and involved numerous failed boots. Eventually, I was able to get them to run stably at 1800 MHz, CAS 10, with only a minor bump in voltage to 1.53 V. The impact on memory performance, however, was not insignificant.
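The trade-off is easy to quantify: the first-word latency in nanoseconds is the CAS count divided by the I/O clock (half the DDR transfer rate), so the looser CAS 10 timing is almost exactly offset by the higher frequency while bandwidth rises by 12.5%. A quick sketch of the arithmetic:

```python
def first_word_latency_ns(ddr_rate_mt_s: float, cas: int) -> float:
    """CAS latency in ns: cycles divided by the I/O clock (half the DDR rate)."""
    io_clock_mhz = ddr_rate_mt_s / 2
    return cas / io_clock_mhz * 1000

print(first_word_latency_ns(1600, 9))   # ~11.25 ns at DDR3-1600 CL9
print(first_word_latency_ns(1800, 10))  # ~11.11 ns at DDR3-1800 CL10
```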

I suppose it makes sense to go all-in once you have entered the game. Hence, I decided to overclock my GPU as well. For over 2 years, I had never overclocked the Zotac GTX 1060 Mini, it being a single-fan device. Size can be misleading though, and the added CPU cooler certainly aids the overall airflow. It didn't take me long to figure out that the memory wasn't going to be up to the task, which is understandable considering it is not protected by a heat sink. In the end, I conservatively increased the memory clock by 100 MHz and the core clock by 200 MHz without touching the voltage.

A final tool available for pushing the clocks even further is the base clock. Unfortunately, after setting up the overclock for all the other components, I found that increasing the base clock to even 101 MHz caused significant instability. Increasing the CPU and RAM voltages brought some modicum of stability but inexplicably reduced performance across all benchmarks while simultaneously raising the temperatures. Thus, there was no use pursuing this path any further.

The performance comparison of the overclocked system with the default one certainly provides some satisfaction. The XMP overclock is set to use the maximum CPU multiplier of 40, but it was unable to run the RAM at 1800 MHz at the same time. Going by the considerably higher temperatures, it is obvious that the XMP overclock pushes the voltages a lot higher. The only upside there is that it is capable of running all the cores simultaneously at 4 GHz, which produces a minuscule performance advantage. However, the manual settings are more than a match and come with a significant uptick in memory performance with much better thermals.


While the uptick in CPU and RAM performance is quite evident looking at the table, the GPU performance is not. As it happens, PCMark doesn't stress the GPU much, whereas Hitman seems to be constrained by the CPU. Thus, the need of the hour was a GPU-intensive benchmark, which came in the form of Heaven. As can be seen in the results, the overclock yields an FPS improvement of over 8% compared to the stock speeds. At the same time, it makes sense to set a custom fan curve as it can keep the temperatures down under full load.


To round up the post, no overclock is worth its salt without a stress and torture test. The idle CPU temperature of 27°C is pushed up to 63°C by AIDA64's stress test and then stratospherically to 77°C by Prime95's torture test. However, this is well within the processor's specifications and represents a worst-case scenario that doesn't normally manifest itself even in the most taxing of daily use cases.


To conclude, this entire episode was brought about by an unforeseen failure in ageing hardware and hence the overclocking exercise is strictly incidental, but the thrill of it is as much as anyone would get when setting up a new system.

P.S.: If you followed my earlier post on Meltdown and Spectre, then you'd know it is something I thought of when buying the motherboard. As with the ASRock boards, there was a helpful soul patching the unsupported Asus boards as well. However, when I went about flashing the BIOS, I found it to be incompatible due to the way it was packaged. Thankfully, Microsoft has fully patched Windows to support the latest microcodes from Intel (1F in the case of the i5-3470). It wasn't auto-installed over Windows Update and I had to manually install the KB4100347 patch for Spectre.
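For anyone wanting to verify that the microcode update actually took, Windows exposes the loaded revision in the registry. Below is a minimal sketch, assuming the usual CentralProcessor\0 key layout where the revision sits in the upper DWORD of the 8-byte binary value:

```python
import winreg

# The loaded and BIOS-provided microcode revisions are stored as 8-byte
# REG_BINARY values; the revision itself sits in the upper DWORD.
KEY = r"HARDWARE\DESCRIPTION\System\CentralProcessor\0"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in ("Update Revision", "Previous Update Revision"):
        raw, _ = winreg.QueryValueEx(key, name)
        revision = int.from_bytes(raw[4:8], "little")
        print(f"{name}: 0x{revision:X}")  # expect 0x1F on a patched i5-3470
```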

Review #19: Zotac GTX 1060 Mini (6 GB)


This purchase was never on my radar until the day I made it. My existing GPU, the GTX 660, was serving me well enough for the few hours I spent on gaming every weekend. It is true that I was using Medium to High details on recent games and perhaps not hitting 60 FPS, but it was nowhere near as distracting as the experience of running an underpowered GPU more than a decade back. However, the 660 made the decision for me when it simply burnt out at the stroke of midnight on Independence Day (the irony!). It was strange to see burn marks on the PCB along with broken capacitors, but I wasn't too perturbed because I guess I secretly did want to upgrade. I had been sheepishly keeping an eye on the AMD RX 480 and the Nvidia GTX 1060 since their launches, unable to convince myself to jump the fence, so the (un)timely demise of the GTX 660 wasn't much of a shock, especially as I was at the sweet 3-generation gap between GPU purchases.

When it came to deciding between the RX 480 and the GTX 1060, I feel the only thing the RX 480 really had going for it was the price, which isn't a factor at all in India due to some absurd pricing. The RX 480 does seem to have better DX12 async compute performance as per current benchmarks, but it has a much higher power consumption and certainly runs hotter, even though AMD has apparently fixed some issues through drivers. This was important for me as I have been using the same Corsair 450W power supply for 8 years now and was in no mood to change it. The GTX 1060 in fact has a lower TDP than the GTX 660, so I was reducing my power consumption while getting much higher performance. Another thing AMD has going for it is CrossFire support, but I never have had and never will get a dual-GPU setup; it is simply unrealistic at this price point. Setting objectivity aside, I must admit that I feel Nvidia is more invested in the PC after losing out to AMD in the console arena. The driver support is much better, and features like Ansel and Simultaneous Multi-Projection indicate that they are totally invested in the PC. Also, I have borne allegiance to Nvidia for over 17 years now, starting with the Riva TNT2 M64 and then subsequently moving on to the FX 5200, 9600 GT, GTX 660 and now the GTX 1060. So perhaps, the decision was already made even before I started to make it.

Coming to the GPU itself, the form factor is indeed small compared to the GTX 660 I discarded. It has a single fan with a direct-contact aluminium heatsink, which on the face of it seems to be a downgrade compared to the dual fans and copper heat pipes I had on the GTX 660, though looks can be deceiving. On the flip side, it meant much better spacing in my cabinet, which I hope will afford better ventilation throughout. It might seem illogical to go for the mini-ITX form factor with an ATX cabinet but, to be honest, I had been on the lookout for the cheapest GTX 1060 I could find, and the Zotac Mini at about 3k less than the AMP edition fit the bill perfectly. Also, the 2 + 3 years warranty on registration is simply phenomenal and I can certainly have some peace of mind knowing that I am covered should anything go wrong like it did with my GTX 660. The packaging is barebones and comes with literally nothing, so make sure you already possess any screws needed for installation. It runs on a single 6-pin power connector, so almost any decent power supply unit should have you covered. The card itself has 1x DVI, 1x HDMI 2.0b and 3x DisplayPort 1.4.
Since the package doesn't come with a disc, you have to rely on GeForce Experience for the drivers, until which point Windows shows something generic like 'Microsoft Display Device'. Zotac makes its own GPU tool called Firestorm and I decided to give it a go before trying out something else like MSI Afterburner. Firestorm seems to have the tools needed for some basic tweaking but doesn't look to be the most elegant. At idle, after installation, my GPU reported a temperature of 35°C with a core clock of 139 MHz and a memory clock of 405 MHz. The base (1506 MHz), boost (1708 MHz) and memory (2002 MHz) frequencies are at reference values, so there is no overclocking out of the box, unlike the AMP edition.
While I have no need to run benchmarks since I have nothing to compare against, I decided to give 3DMark a go to see how the combination of this GPU and my CPU (i5-3470) performs with respect to some review rigs running this GPU. The Graphics score of 13,123 in Fire Strike (Performance) compares favourably with the GTX 1060 Founders Edition scores that can be found on the web. The Physics score of 6120 indicates that my CPU may be the weak link in my setup. However, the combined score of 4592 is only about 2-3% lower than reviews with much beefier CPUs, so I wouldn't deem the CPU to be much of a bottleneck as far as gaming is concerned. Taking a quick peek at games, it was simply a pleasure to see the GeForce Experience optimisation turn up all the settings to 'Extra High' in MGSV: The Phantom Pain. I am one of those people who have a huge backlog on Steam, because of which I don't purchase any big-ticket games on release, but there was no way I could resist pre-ordering Deus Ex: Mankind Divided and I am salivating at the prospect of maxing that one out too using DX12.
A key point of this card is the single-fan setup, so it is imperative to keep an eye on the temperatures. I decided to stress test the GPU using FurMark to see how well it copes with the pressure. From an idle temperature of 35°C, the card hit 80°C shortly after 3 minutes, but thereafter it stayed stable at that temperature till the 5-minute mark, with the fan running at 72% of its maximum speed. Interestingly, the core clock was constantly over 1800 MHz at full load, which is higher than the stated boost frequency of 1708 MHz. A custom fan profile that bumps up the fan speed at higher temperatures ought to reduce the temperature compared to the 'Auto' mode I used, and will mostly be essential when overclocking. As for me, I can't justify a need for overclocking at this moment considering that all my needs at 1080p are taken care of.
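For illustration, a custom fan profile is usually just a piecewise-linear mapping from temperature to fan duty; here is a minimal sketch of the idea, where the breakpoints are made-up values for illustration rather than Firestorm's defaults:

```python
# Piecewise-linear fan curve: (temperature in deg C, fan duty in %).
# Breakpoints are illustrative, not Firestorm's defaults.
CURVE = [(30, 30), (60, 45), (70, 65), (80, 90), (90, 100)]

def fan_duty(temp_c: float) -> float:
    """Linearly interpolate the fan duty for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_duty(75))  # 77.5% -- ramps harder than the ~72% Auto gave at 80 deg C
```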
As you can tell, I am extremely pleased with this purchase. I think Nvidia has hit the nail on the head with the release of the GTX 1060. It felt great to be able to purchase it just a month after its global release, something that was unheard of when I made my GPU purchases in the past. The price may be a bit all over the place at the moment depending on the seller, but I got mine for a shade less than 20.4k through a local seller on eBay, taking advantage of eBay's high-value 12% discount coupon. When you think about it, the premium paid with all the import duties isn't as obnoxious as it used to be in the past. Hence, I can heartily recommend this to anyone looking for a VR-ready card that is going to max out absolutely anything at 1080p (and maybe at 1440p) for years to come.

Update (Aug 29): I ran the DX:MD benchmark and it yielded the following results:

Very High preset (DX11): AVG: 45.1; MIN: 37.4; MAX: 57.4
Geforce Experience Optimal (DX11): AVG: 39; MIN: 30; MAX: 50

When compared to the results available online for a GTX 1080 with the same CPU, the 1060 offers 80% of the performance, which is simply phenomenal. Also, for those interested in the temperatures, the card hit a maximum of 69°C during the benchmark with the fan at 61%. The idle temperature was 34°C and the average was 49°C.