Tutorial #27: TWEAKING THE ASUS TUF GAMING A15 - PART TROIS

 


Update (June 1): 
The curve I ended with in the original article turned out to be unstable in scenarios where the frequency dropped into the 1600s after the thermal limit was breached. Hence, I tweaked the curve further to increase the voltage at the lower frequencies. It turns out this is not straightforward, as even minor voltage-frequency changes at the knife's edge can result in crashes. Eventually, I settled on the curve below, which happens to be captured at an ambient temperature of 41 degrees Celsius and is thus levelling off at 1800 MHz; above 50 degrees it comes down by 15 MHz, making the maximum frequency the same as in the original curve.

Despite the reduction in undervolting, it actually led to an increase in overall performance at a lower maximum GPU temperature. Nice!

Finally, the comparison with stock performance. Efficiency is on par with stock as diminishing returns kick in for the GPU at its limit. Considering that the GPU TGP is now 25W higher, the chassis is doing a good job of handling the additional heat. At the same time, CPU performance is back up to stock, but at lower power and temperature values. Overall, this is a really nice setting, as performance has increased by nearly 15% over stock and inches ever closer to desktop performance, at a much lower TGP.


Original Article:

Tinkering with a device can be a progressive exercise. Part 1 was about getting more performance out of the box, whereas Part 2 was a more holistic change aimed at efficiency. However, there is always more to be found by pushing the limits, and that is what this article is all about.

In my previous article, I had lamented the 90W TGP limit on the GPU, especially as I wasn't close to hitting the GPU's thermal limit after the tweak. As it turns out, extending the TGP limit to the 115W that this GPU is capable of is quite a trivial task.

Even though I say trivial, it carries a certain element of risk, and hence anyone attempting the same is forewarned about the consequences. Having said so, it is a rather quick and easy process. Firstly, you will need NVFlash to take a backup of the current VGA BIOS and also to flash a new one. Secondly, you will have to find the correct replacement BIOS to flash. Any of the BIOS files with a boost clock of 1560 MHz are the 115W ones, but in my case the one that worked was not an Asus one but the MSI one listed here, which resulted in the USB-C DP output not working. Note that for the above BIOS file, you have to use NVFlash with the "-6" option to override the PCI Subsystem ID mismatch error, as follows:

nvflash -6 232273.rom
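For reference, the backup mentioned earlier is taken with NVFlash's save switch before flashing anything; a sketch (the filename is arbitrary, and both commands should be run from an elevated command prompt):

nvflash --save original_vbios.rom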

Voila! You now have the full TGP unlocked. At first, I decided to give the official overclocking method a go, which can be accessed from within GeForce Experience by enabling "experimental features" in the Settings and then selecting the "Performance" option from the overlay (Alt + Z). The automatic performance tuning option displayed a boost of +75 MHz, though in GPU-Z I could see that all the clocks had been increased by 100 MHz. At the end of the day, it produced a Time Spy score of 6840 which, you will recall, is higher than the maximum score from last time. The GPU can be observed to now use the full 115W, though it also hits the thermal limit, which was not possible with the TGP at 90W. All things considered, this overclock is on the conservative side.

The better solution then is to use the legacy scanner, which pushes closer to the edge of performance besides allowing a more generous memory overclock. One thing I failed to mention previously is that the OC Scanner in MSI Afterburner now uses the official NVAPI method of overclocking, which does not detect cards older than the 3000 series. Hence, to get the voltage-frequency curve, you will need to use the legacy OC Scanner instead, which can easily be enabled by editing the MSIAfterburner.cfg file in the root installation folder ("C:\Program Files (x86)\MSI Afterburner" by default) as follows:

LegacyOCScanner = 1
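If you'd rather script the edit than open the file by hand, a PowerShell one-liner along these lines should do the trick, assuming the key is already present in the file set to 0 (run it from an elevated prompt since the file lives under Program Files; either way, restart Afterburner for the change to take effect):

powershell -Command "$f = 'C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.cfg'; (Get-Content $f) -replace 'LegacyOCScanner\s*=\s*0', 'LegacyOCScanner = 1' | Set-Content $f"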

Running the scanner again with this in place immediately produced a better result that breached the 7000 mark with the +500 MHz memory overclock that I usually run the card at. It wouldn't surprise you to know then that it hit the GPU thermal limit along with the power and utilisation limits.


The next step then was to see if I could extract the same performance at a lower temperature and power consumption. As you can imagine, it was quite a time-consuming exercise to find the ideal voltage-frequency curve. I did manage to get the maximum Time Spy score of 7155 with a maximum clock of 1800 MHz at 825 mV, but that too hit the thermal limit, besides being unstable. Eventually, I found the sweet spot at a maximum frequency of 1785 MHz at 818 mV; going any lower resulted in instability. I also experimented with a lower frequency of 1770 MHz at lower voltages, but that brought down the overall score, besides surprisingly resulting in a higher peak temperature, albeit at a lower power consumption.


Drum roll, then, as I reveal the final results for the above curve, which happens to be my (final?) choice for the Turbo fan profile. A score of 7148 with an average CPU and GPU power consumption of around 16W and 61W respectively is quite good in my opinion. At the same time, it did not even hit the thermal limit during the test. All in all, the best profile I could hope for at present for this hardware combination.

However, an important thing to note is that an absolute performance profile does not work well with lower fan profiles. Case in point: when I switched to the Performance mode with this profile, the score dropped to 6581 points, which is far too low. Turns out that the higher voltage-frequency combination is far too much for the lower fan RPM.

Thus, for the Performance profile, I ran the OC Scanner again and applied the +500 MHz memory overclock to come up with a score of 7045, which isn't too far from my absolute profile in the Turbo mode. However, it comes with a huge spike in GPU power consumption compared to my Turbo mode settings, besides still hitting the thermal limit.


I tried to tweak the curve further for the Performance mode, but the lower fan RPM requires the clocks to be dropped significantly. From my tests, this would mean dropping the max clock to the 1600s at maybe mid-700 mV, but that also drops the score quite a lot, into the region of 6800-6900. That might be a feasible solution, but for now I have left it at the default OC Scanner curve as I won't be using the Performance mode often.
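On a related note, since this effectively means maintaining separate Afterburner profiles for the Turbo and Performance modes, it helps that Afterburner can apply a saved profile from the command line, making shortcut-based switching painless. A sketch, assuming the Performance-mode curve is saved in profile slot 1:

"C:\Program Files (x86)\MSI Afterburner\MSIAfterburner.exe" -Profile1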

A final word on the clock speeds reported in GPU-Z. The official GeForce Experience performance tuning with NVAPI displayed higher GPU clocks in GPU-Z, so you may be misled into thinking it offers better performance. The screenshot below shows the clock speeds reported for my final profile; they don't look that significant, yet the profile delivers significantly more performance. It is all about the voltage-frequency combination rather than the reported figures, so don't simply go by the higher numbers, as they don't always translate to the best performance.


With this, I hope I have finally closed the chapter on tweaking the hardware of the Asus TUF A15. Before I go, here is the final comparison with the stock settings...


…and also the previous tweak at 90W. As I have mentioned previously, incremental GPU performance comes at a comparatively higher power consumption, and hence efficiency is still highest for the 90W tweak. At the same time, the 115W tweak still offers better efficiency than stock, and hence it is a win-win in my opinion. Hope you have a great time tweaking your hardware, and let me know if you have any queries.

 

Tutorial #25: TWEAKING THE ASUS TUF GAMING A15 - Part Deux

 



My previous post about the A15 was about plucking the low-hanging fruit of performance. However, there is always scope for optimising the settings further to gain the greatest benefit at the lowest cost. That is what I have been up to, on and off, since the last post, and having reached a satisfactory result, I have decided to share it for anyone trying to squeeze that little bit extra from this hardware.

The base concept is still the same: to get more out of the GPU at the expense of the CPU within the permitted power and thermal budget. To that end, I went through the process as follows:

1. Reduced the CPU temperature limit and the Normal/Short/Long TDP limits to 85 and 25/35/45 respectively in Ryzen Controller to provide further headroom to the GPU (a command-line equivalent is sketched below).
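For those who prefer the command line, RyzenAdj (the CLI sibling of Ryzen Controller) should be able to apply roughly the same limits. A sketch, assuming the usual mapping of the STAPM limit to the normal TDP and the fast/slow limits to the short/long boost TDPs (values in mW):

ryzenadj --tctl-temp=85 --stapm-limit=25000 --fast-limit=35000 --slow-limit=45000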

2. Reset the MSI Afterburner settings to stock, which had the following curve for my RTX 2060.

3. Ran an actual game as I would like to play it (in this case, Dishonored 2 at 2K Ultra with HBAO and Triple Buffer) and noted the average and maximum GPU frequencies attained in the middle of the game.

4. A few observations first. Neither the CPU nor the GPU is thermally throttled in any way. Instead, the GPU hits the power limit, which in the case of the A15 is 90W. Note that the included GPU is the RTX 2060 Notebook Refresh and is thus a 110W TGP part. This indicates that the laptop does have thermal capacity to spare, especially as I conducted these tests at an ambient temperature in the mid-30s degree Celsius. Having the option to push the GPU power further would have been great, but with that being an impossibility with a locked BIOS, the next step was to figure out how to extract the most from the hand I had been dealt.

To that end, I noted the frequencies which yielded the sustained performance (1560 MHz @ 812 mV) and the peak performance (1755 MHz @ 918 mV) in-game.


This concluded the stock performance analysis. Now, there are multiple guides out there that put forth different suggestions as to how you should proceed with undervolting or overclocking, but I decided to use these figures to set a target that I wanted to attain. In this case, it was to push the stock sustained performance to the lowest voltage (i.e. 1560@700) and the peak stock performance to the sustained voltage (i.e. 1755@812). Doing so manually with a smooth curve was going to be quite a challenge, so I decided to take a bit of a shortcut in attaining this objective as follows:

5. Executed the Nvidia OC Scanner within MSI Afterburner to produce an OC curve. The resulting curves are not always the same, so I executed it a few times, also at slightly different CPU TDPs, to come up with the curve that resulted in the highest boost frequency. In this case, it was as indicated below.


Looking at the frequencies at the voltages of interest, they are 1515@700 and 1755@812. Thus, it seems I had almost attained the target I set out for without doing much.

 
Taking a look at the HWiNFO figures again with the OC curve, it can be seen that the sustained frequency has jumped to 1725 MHz from 1560 MHz, which is a decent OC. Also, the peak frequency is now 2040 MHz, an even bigger leap, but it comes at a much higher voltage (1006 mV). The effect of this, however, is that the GPU is now hitting all the performance limits apart from the thermal one.

Almost there, but "almost" is not good enough, so I had to push it a bit further.


6. At this point, I decided to try to move the curve to the left, in effect overclocking the OC curve even further to see how much more performance could be extracted from it. Shifting the curve left means each frequency now runs at a lower voltage; for example, a point that previously needed 825 mV would sit at 812.5 mV after a 12.5 mV shift. I started by shifting the curve to the left by 25 mV, but soon after, I started encountering artifacts within the game, indicating that I had pushed it a bit too far. As a result, I shifted the curve by 12.5 mV instead and found it to be perfectly stable.

The other change I made was to flatten the curve at the halfway mark of the complete voltage range, which is at 975 mV. There are various reasons to do so, the primary one being that the GPU never really reaches the frequencies associated with voltages beyond that point, and when it does, as stated above, it is for a fraction of a second. Consequently, it also saves the futile effort of manually adjusting points that will never be used. An argument could be made that the curve can be flattened even earlier to essentially attain an undervolt, but I wanted to allow the GPU to boost to its practical maximum as much as possible.

With the above, after smoothing out the double frequency jumps (so that a single voltage step moves the clock by 15 MHz instead of 30 MHz), I was left with the curve indicated below.
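To make the smoothing step concrete, here is how a hypothetical pair of adjacent voltage points would be adjusted (illustrative numbers only, based on the 15 MHz clock bins):

Before: point N at 1980 MHz, point N+1 at 2010 MHz (a 30 MHz jump)
After: point N at 1980 MHz, point N+1 at 1995 MHz (a single 15 MHz step)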


It starts at 1560@700 (surprise!), reaches 1755@800 and peaks at 2010@975. So how does this curve now fare within the game?


Firstly, we are back to only hitting the power and utilisation limits. The sustained in-game frequency is now 1755 MHz, a further 30 MHz boost over the default OC Scanner curve. The peak frequency is now 1965 MHz as against 2040 MHz earlier but, as I mentioned previously, it is transient; and if you look at the average GPU power, it has come down to 69.4W compared to 70.3W for the OC Scanner curve and 71.3W for the default curve. Amusingly, the maximum power consumption was over 97W with the stock curve, and I also observed it breaching the 100W barrier in an intermediate test. Maybe it is due to some quirk in HWiNFO, or otherwise the card is indeed capable of going over its locked TGP of 90W in some cases, though without much benefit.

7. With the GPU OC sorted, I next wanted to see if I could push the CPU a bit more in coordination with this curve. You will have to take my word for it, but I tried increments and decrements for all the TDP values while keeping the temperature limit at 85 degrees Celsius, and I finally found the best performance at a Normal/Long/Short TDP of 25/40/50 respectively.


The proof, of course, is in the pudding. Thus, I present to you the comparison between the stock performance and the performance after the CPU/GPU tweak. The HWiNFO figures span from the launch of the Time Spy test to the calculation of the score.

Stock:

Post tweak:

A good jump, and also a slightly higher score than the tweak in my last article (6703). What it doesn't show, though, is that the power consumption is lower than last time.

8. One last thing! I didn't at any point mention the GPU memory overclock because I kept it for last. After trying out different increments, I settled on a boost of +500 MHz as it was stable and didn't lead to any noticeable increase in power consumption or thermals. With that, here is the final result.


To put things in perspective then, this is how the tweak stacks up against the stock setting.








Tutorial #24: Tweaking the Asus Tuf Gaming A15


Previously, in my review of the laptop, the only tweaking I had undertaken was an auto-overclock of the GPU which, as per expectation, yielded a performance improvement of around 6% overall with only a slight loss in CPU performance, purely on the basis of the additional available thermal headroom.


At the time, I had left the CPU untouched because AMD does not officially support tweaking on laptops and Ryzen Controller did not work for me then. However, I later came across Renoir Mobile Tuning and found it to be operational for this laptop, albeit with a few bugs. I then switched back to Ryzen Controller and found that it too now worked well for Renoir, with the additional benefit of applying the settings automatically on boot.

With a CPU tuning tool in place, the next thing was determining what to do with it. While these tools often end up as overclocking utilities, my intention couldn't be further from that. The idea was to effectively underclock the system without losing performance, i.e. to reduce the temperatures while still maintaining a performance boost over the stock settings.

To cut a long story short, I played around with various combinations of settings to finally settle on one that seems to work the best. Not that this is an exhaustive analysis; it is rather the most practical among the ones I tried. Note that I only experimented with the boost TDPs and the temperature limit. The boost duration seemed pretty logical and I did not want to introduce yet another variable that muddied up the testing. Eventually, this resulted in the following changes:
  • Temperature Limit: 90 degrees Celsius
  • Long Boost TDP: 54W
  • Short Boost TDP: 50W
For reference, the default temperature limit is 95 degrees Celsius, with long and short boost TDPs of 60W and 54W respectively. Also, I auto-overclocked the GPU again to make the most of any benefit available from the reduction in CPU performance. So, how did this theoretical reduction in CPU performance impact the benchmark scores for Fire Strike and Time Spy compared to the ones from the review?


As expected, this has quite an impact on the CPU score, which has dropped by nearly 5%; on the other hand, the graphics score has jumped by 1%, resulting in an overall gain of 0.7% on Fire Strike and taking it past 16,000 for the first time. However, the result for Time Spy was more interesting, as there was a minor overall loss instead, indicating the underclock has more of an impact on DX12 than it does on DX11, which is probably not unexpected. Note that this gain comes on top of the gain already achieved by overclocking the GPU originally, so the incremental gain is still worth it.

Lastly, the laptop has a secret weapon up its sleeve. Until now, all the tests were conducted using the default Performance mode. However, there is also a Turbo mode which sets the fans whirring at possibly their maximum setting under full load. Yes, it boosts the scores even further. Below, I have again attached a comparison of the Turbo mode performance with the stock CPU settings against the underclocked ones, and the pattern is much the same as earlier: while the DX11 performance is higher with the underclock, it is lower by an equal proportion in the case of DX12.

It has to be kept in mind, though, that apart from the scores, the underclock has the additional benefit of reducing the overall temperatures and thereby prolonging the life of the components. Also, with the combination of the 4800H and the RTX 2060, it is the latter that is going to hit its limit rather easily compared to the former, so sacrificing CPU performance for a GPU gain makes a lot more sense.

Finally, I leave you with a comparison of the current profile, comprising a GPU overclock and a CPU underclock on Turbo, with the stock GPU and CPU settings.

A jump of 7.8% on DX11 and 6.6% on DX12 with lower overall temperatures to boot is nothing shoddy. Seems something called a free lunch does exist after all.

Review #65: Asus TUF Gaming A15 Laptop (Ryzen 7 4800H | RTX 2060) ★★★★✭

 Team Red + Team Green - A killer combination!


Introduction:

Ever since Y2K, when AMD stole the limelight for a bit by breaking the 1 GHz barrier and releasing AMD64, AMD as a company had failed to impress on me the need to purchase its products. I had opted for Intel just prior to the Athlon breakthrough, and every 4-5 year upgrade cycle since led me to opt for Intel again. Hence, I was simply enthralled at switching to Team Red after nearly two decades of being stuck with Team Blue. My GPU has always been Team Green, but with the integrated Vega 7, there is a dash of Red over there as well.

The Choice:

During the 2020 holiday sale, it was between this and the Acer Predator Helios 300 for the princely sum of "not quite" one lakh INR. I could see the reviews racking up for the Core i7 variant on Flipkart, and I had even purchased the same but cancelled it as soon as I came across this Renoir masterpiece. It helped that Amazon also offered a much higher exchange price for an old laptop that was lying around, compared to Flipkart.

To put it straight, the Helios 300 has only one thing going for it compared to this one, and that is the screen. On the flip side, this comes with a monster CPU, DDR4-3200, a 2000+ MB/s 1 TB SSD from Western Digital, a large 90 Wh battery, lighter weight, a higher-travel keyboard and about as good a cooling solution as the Helios. It also looks more professional than the Helios, so you can use it in formal environments without having people snickering at you. So overall, it is a win for the A15 over the Helios 300.

Display:

To address the elephant in the room: Asus gimped on the screen, using a Panda panel that has only about 65% sRGB colour gamut and a >20 ms response time, with quite some screen flex. It pales (no pun intended) in comparison to the 90% sRGB panel with a 3 ms response time on the Helios, but that is about it. I still managed to get popping colours out of it by increasing the saturation in Radeon Software and calibrating the display from within Windows. Sure, it throws accuracy out of the window in favour of something eye-pleasing, but I am not looking to do any colour work on it, and even otherwise, I am looking to connect it to my 120 Hz 4K TV at home for gaming. I am unsure about it, but with the HDMI or DisplayPort output being driven by Vega, it should also support FreeSync directly, unlike laptops that have the output routed through the Intel GPU.

Hardware:


The primary reason for getting this laptop is the Ryzen 4000 series. The 4800H puts the Core i7 to shame. I ran Cinebench after updating the system and without any tweaks. It registered nearly 500 on the single-core test and 4386 on the multi-core test, scores that even the Core i9-9980HK can't touch in most laptops, it being a blast furnace rather than a processor. The 4800H did not even touch 80 degrees in the Cinebench multi-core test. It did go past 90 in Fire Strike, but it never thermally throttled, whereas the undervolted 9980HK in my earlier laptop hit 100 degrees within seconds and throttled like it was being asphyxiated.

The RTX 2060 is also the 2020 "refresh" variant with the 1.25V GDDR6 and a higher TDP. It passed 15,000 in Fire Strike on the first run, and with the CPU running much cooler, it opens up the possibility of overclocking the GPU further than you can on an Intel machine.

Among other points, the machine ran without much noise during the benchmarks, but I expect it to reach whirring heights with demanding games, something that is to be expected of most gaming laptops. I haven't checked the battery life and probably never will over the life of the laptop, as I always use it plugged in, but the 90 Wh battery with the 4800H should provide a longer battery life than any Intel gaming laptop. The lonely USB 2.0 port on the right-hand side is a bit of a letdown, but I have my fingerprint reader permanently plugged in there so that I can use Windows Hello. Not having TB3 is also disappointing, but I can't see myself needing it over the lifespan of the laptop, as DLSS will most probably help with higher resolutions in the near future.

Tweaking:

As expected, the UEFI on the laptop is barebones. AMD also doesn't support Ryzen Master on laptops, leaving it to OEMs to decide on the thermal envelope. That leaves Ryzen Controller as the tool of choice as it has experimental 4000-series support, but with it currently being limited to STAPM settings, it is more suited to extracting more performance than to lowering temperatures, and thus is not the need of the hour.

However, as I mentioned previously, there is light at the end of the tunnel in terms of extracting more performance from the GPU. As the following 3DMark screenshots indicate, the GPU is able to provide 6-7% more performance using Auto-Overclock at the loss of less than 1% CPU performance. The GPU temperatures too are similar, though the CPU temperature does go up by 4-5 degrees at idle and 2-3 degrees at full load, but it still does not throttle.

Warranty:

The unit received from Amazon had been manufactured just two weeks earlier, going by the warranty registration date. The registration date can be changed to the invoice date by providing Asus with the invoice and a photo of the laptop's serial number. An additional year of warranty, after using the 10% off code provided with the laptop, costs about $35, which is quite respectable.

Conclusion:

To sum it up, at the sale price, you can only go wrong with a gaming laptop if you choose Intel. Asus got most things right apart from the screen which is gut-wrenching but not a deal breaker, especially if you use a monitor or TV. In this case, it is what’s inside that counts and this thing is as TUF as it gets.

P.S.: It comes with a huge 16A plug that would probably go well with a microwave in the kitchen. Thankfully, the power adapter has the same standard connector as a desktop PSU, so I was able to connect a 16A cable with a regular-sized plug. You can probably also get away with a lower amperage cable, but it is best to get a 16A one if you can.

Tutorial #23: Taming the beast (Dell XPS 7590 Core i9)


One of the significant purchases I made over the past 6 months is the Dell XPS 7590 with Intel's Coffee Lake Core i9-9980HK, Samsung's 32 GB of DDR4-2667 RAM, Toshiba's 1 TB SSD, Nvidia's GTX 1650 and the crème de la crème that is the 4K OLED panel made by Samsung. But before you get any ideas, this is not a device that I would otherwise have purchased, but for the fact that I found a single piece listed on Amazon two days prior to the device's official launch, at a price lower than that of the 2019 Acer Helios 300. The risk was worth it as it came sealed, with valid 12-month Premium Plus support from Dell. There are instances in one's life where one doesn't mind getting lucky, and this was certainly one of those.

Normally, I would be prompt in reviewing devices within the first few weeks of purchase. However, in this case, I think I am too biased towards the device to put up a worthwhile review. Hence, I thought it better to post a tutorial that would be of some assistance to fellow users. One thing I am certain of is that the hardware has outgrown the XPS chassis design over the years, and the Core i9 pushes things a bit too far in terms of what the chassis can handle thermally. Hence, I went on an optimisation quest with the intention of lowering the temperatures and increasing the overall efficiency of the device. I will own up to the fact that I don't intend to use the device on battery at all unless I am forced to, but for that eventuality I decided to find a compromise which would at least provide stock performance at lower power consumption on battery, as against higher performance when operating directly on AC.

The tool of choice in this case for the CPU is ThrottleStop, which offers significantly more tweaking potential than Intel's Extreme Tuning Utility. As for the GPU, the mainstream tool to use is MSI Afterburner. However, in the case of this GPU, I found that the temperature limit setting in MSI AB was locked for some reason, even after unlocking all the advanced options, and the auto-overclocker resulted in far too frequent game crashes. Hence, I instead went ahead with Asus GPU Tweak II, which allowed the GPU temperature target to be set upfront. By default, this is set to 75 Celsius; I instead bumped it to the stock value of the GTX 1650, which is 87 Celsius. However, the idea in general is to still not exceed 75 Celsius during the most strenuous tasks, while providing the headroom to exceed that if needed.

With this background, in the interest of time, I have decided to simply post screenshots of the various screens from the tools, since further elaboration on each parameter can be found on their respective forums. In the case of the GPU, I eventually stuck with simply pushing up the clocks by 10%, as undervolting using the frequency curve resulted in far too many instability issues. Is this the most optimal setting possible? Most probably not. However, I believe it is the best setting I could identify through trial and error, as attested by the 88 unexpected reboots on record. I could certainly push the clocks and voltages quite a bit more, but in general it led to instability and I am certainly no fan of BSODs. Another point to note is that while Asus GPU Tweak II can be set to start on reboot, ThrottleStop requires the additional effort of setting up a Task Scheduler entry, which is what I have indicated below.

Starting ThrottleStop on login:
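If you would rather skip clicking through the Task Scheduler GUI, the same task can be created from an elevated command prompt with schtasks; a sketch, assuming a typical install path (adjust it to wherever ThrottleStop.exe actually lives; /RL HIGHEST makes it run with the elevated rights ThrottleStop needs):

schtasks /Create /TN "ThrottleStop" /TR "\"C:\Program Files\ThrottleStop\ThrottleStop.exe\"" /SC ONLOGON /RL HIGHEST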

ThrottleStop settings for the AC profile:

ThrottleStop settings for the Battery profile:

Now to focus on the fruits of the labour, or the pudding, so to say. I am not a fan of benchmarks in general, but in this case I needed something to comparatively measure the impact of the changes, and a few basic benchmarks provide the easiest reference. Note that I ran all the benchmarks with only the discrete GPU enabled and with the overclock settings applied, so they represent the worst possible scenario in terms of thermals.

UserBenchmark:
This might not be the first benchmark utility that springs to mind, but it allows comparative analysis across similar hardware components and is of considerably short duration. In this case, the CPU came in at the 97th percentile and the GPU at the 100th percentile which, considering that it is mostly going up against much bulkier gaming laptops with much better thermals, is noteworthy. Overall, the CPU efficiency is excellent, with the tweaks providing higher performance at lower power. The discrete GPU, however, doesn't scale up in terms of efficiency, and while it is possible to get more performance out of it, it comes at a significant cost in terms of power and heat.

Cinebench:
Cinebench really pushes the CPU and is thus a good test of its ultimate performance. A sequence of two consecutive runs also pushes the CPU to its thermal limits. Not surprisingly then, the first-run score of 3684 is more than 20% better than stock, and even the second consecutive run scores better than the stock settings, with lower average temperatures.

Heaven:
This benchmark was run at the Extreme preset. As I have already mentioned, pushing the GPU doesn't really yield huge benefits in this constrained form factor, as any performance benefit comes with equally higher power consumption and heat generation. However, as can be seen in the results, a 3% performance boost in Heaven comes with lower CPU temperatures, and the GPU power consumption is lower even though it hasn't been undervolted. So, a win-win overall.

Lastly, how do these modifications fare with a modern game? I happen to have Hitman 2 installed at present, so I thought I'd give it a go with the built-in benchmarks, which I frankly didn't find to be entirely consistent across different runs. But I believe they should at least give an idea of what the laptop is now capable of, even though it is not meant to be a gaming laptop.
I set all the details to the maximum possible, apart from lowering 'Level of Detail', 'SSAO' and 'Shadow Quality' a notch to 'High' and turning 'Motion Blur' to 'Off'. The Mumbai benchmark produced a score of 70.95 FPS with the CPU averaging 79C and the GPU 70C. The more demanding Miami benchmark churned out 54.04 FPS with CPU/GPU temperatures averaging 78C/69C respectively. A more than serviceable gaming machine, if I may say so.

Review #39: AmazonBasics USB 3.0 Extension Cable (1 meter, 3.3 feet)

The pursuit of (not) back-breaking speed!
I can't imagine this cable being useful to anyone other than the poor souls whose only option for experiencing USB 3.0 speeds is through the ports on the motherboard at the back of the desktop cabinet. I have had to call my gymnastic skills into action on a number of occasions, re-enacting Mission Impossible style laser-grid scenarios simply to plug in USB 3.0 devices. External hard drive manufacturers must be in cahoots with cabinet and motherboard manufacturers, providing the shortest USB 3.0 cables possible and thereby exacerbating the situation.

In my case, I wouldn't blame the motherboard manufacturer, for they have provided front USB 3.0 headers. But I can't for the life of me convince myself to buy a new "box", and hence the decade-old cabinet with a USB 2.0 front panel continues to thrive. I had alleviated this situation a few years ago by purchasing a USB 3.0 hub but, as if by design, its cable barely made it to the top of my table. Thus, it has been a constant tussle with gravity when using the hub. Moreover, the infatuation of laptop and tablet manufacturers with including only a single Type-A USB port ends up making the hub a travel companion, and thus subject to frequent unplugging from the desktop. Thus materialised my decision to purchase the USB 3.0 extension cable.

In the past, I have had really troublesome experiences purchasing cables online as well as offline, since cables are difficult to judge by appearance alone. It is true that you can filter out the worst of them on the basis of the thickness of the cable and the moulding of the ports, but beyond that it is complete guesswork. Hence, AmazonBasics has become a go-to brand for me for cables, as it offers a modicum of peace of mind in terms of quality. Going by the quality of other AmazonBasics products, I can expect it to be a barebones product that does the job. It is always true to the specifications, even if durability remains a question mark over the long term. But that is true of any cable, and AmazonBasics is the best of the bunch in that regard. I imagine a rotten apple slipping through once in a while, but in all other cases, there simply isn't anything better for the price you pay.