HW News – Steam Says No One Uses AMD or RTX 20 GPUs, NVIDIA Gaining Power, 15.3TB SSDs

05:14|AMD & RTX GPUs Almost Non-Existent in Steam Survey

The most recent Steam hardware survey from August is now available, and in light of Nvidia's upcoming RTX 3000-series, it seems clear why the Turing cards got snubbed at Nvidia's recent event. Turing was a slow burn that never got off the ground the way Pascal did, and looking at Steam's survey data reinforces that.

Outside of that, these surveys also tend to highlight other interesting trends – or lack thereof – among PC users. While the survey doesn't reveal the entire picture, primarily because it can't see the number of RGB LEDs in your system... yet, Steam does poll thousands of users for their hardware configurations, and the results provide a meaningful picture. Here are some key takeaways.

Quad-core CPUs are still king, representing 45.76% of the survey.
The plurality of users (41.21%) are still running 16GB of RAM, with just ~9% running any configuration beyond 16GB. 8GB is now at 32%.
The most popular resolution is still 1920 x 1080, which Steam says accounts for over 65% of its users. The next most popular is 2560 x 1440, at a distant 6.59%. 1440p never quite got the same focus as 1080p and 4K, and so has been a strange middle ground. 3840 x 2160 is just 2.24% of the market, up 0.01% since the last period. Ultrawide resolutions haven't moved much, judging by the 3440 x 1440 result. Perhaps unsurprisingly, 1366 x 768 still accounts for about 10% of the market, with very little movement. This is indicative of a lot of laptop users on Steam, either playing lighter-weight games or simply installing it for the friends list.
8GB GPUs have surpassed 6GB GPUs, with a reported 22.73% of users having 8191MB of VRAM. 6GB GPUs (6143MB) came in at a close second, making up 21.69% of surveyed users.
The most popular GPU is still Nvidia's GTX 1060, with 10.75% of Steam users owning the card. The most popular RTX 20-series card is the RTX 2060, at a distant 2.88%. The RTX 2080 Ti still accounts for less than 1% of Steam users.
AMD is barely in the top 10, with the RX 580 at rank 9, and its next card is the RX 570, at rank 16. As for the RX 5700 XT, it doesn't appear until way down the list, somehow below the 2080 Ti despite being about 1/3 of its price. The 5700 XT has about 0.88% of the market.

Source: https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam

09:56|The First Round of Displays Supporting Nvidia Reflex

As part of Nvidia's RTX 3000-series announcement, it mentioned the new Nvidia Reflex feature, which is focused on minimizing end-to-end system latency. While Nvidia didn't offer a lot of information on this at first, we speculated that it's an initiative to reduce overall end-to-end system latency as raw FPS scales to the point of being less important (i.e., 240 FPS vs 360 FPS).

After learning more about Nvidia Reflex, we now know it's composed of two main parts: the Nvidia Reflex SDK and the Nvidia Reflex Latency Analyzer. Of the SDK, Nvidia states that this is "A new set of APIs for game developers to reduce and measure rendering latency."

Nvidia made it clear that the new Reflex feature would coincide with bringing new 360Hz G-Sync displays to market. Now, Nvidia and its main display partners – Asus, Acer, Alienware, and MSI – are co-announcing that the first round of those displays is coming later this fall. These are as follows: MSI Oculux NXG253R, Asus PG259QN, Alienware 25, and Acer Predator X25.

https://www.nvidia.com/en-us/geforce/news/g-sync-360hz-gaming-monitors

At a high level, it appears that this SDK effectively takes what Nvidia's Ultra Low Latency Mode does at the driver level and bakes it directly into a game. The idea here is to speed up the overall rendering pipeline.
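To make the latency angle concrete, here is a rough toy model of why keeping the CPU from queuing frames ahead of the GPU cuts input latency (this is our own illustration of the general concept, not Nvidia's SDK or its actual API):

```python
# Toy model of render-queue latency -- our illustration of the concept Reflex targets,
# not Nvidia's SDK. Assumes a GPU-bound game where the GPU takes 10 ms per frame.

FRAME_TIME_MS = 10.0  # ~100 FPS


def queue_latency_ms(max_queued_frames: int) -> float:
    """Approximate time from 'input sampled' to 'frame using that input is displayed'
    when the CPU is allowed to run max_queued_frames ahead of the GPU."""
    # The frame carrying the new input waits behind every frame already queued,
    # then takes one more frame time to render itself.
    return (max_queued_frames + 1) * FRAME_TIME_MS


if __name__ == "__main__":
    for depth in (3, 2, 1):
        print(f"render queue depth {depth}: ~{queue_latency_ms(depth):.0f} ms of queue latency")
    # Average FPS is identical in every case; a queue depth of 1 roughly halves this
    # slice of end-to-end latency versus a depth of 3, which is the effect Reflex is after.
```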

The Nvidia Reflex Latency Analyzer is a feature that will be baked straight into the G-Sync boards on G-Sync displays. G-Sync displays supporting Nvidia Reflex will come with a “Reflex Latency Analyzer USB port” for users to plug their mouse into.

Furthermore, peripheral makers such as Asus, Logitech, Razer, and SteelSeries will be offering mice that enable measuring mouse latency. Nvidia will also be making a public database of average mouse latencies, and Nvidia Reflex software metrics will be integrated into the GeForce Experience suite.

Source: https://www.theverge.com/2020/9/2/21418080/nvidia-g-sync-reflex-360hz-refresh-rate-gaming-monitors-alienware-msi-asus-acer

14:04|HDMI Single-Cable 8K Support

NVIDIA made a big deal about 8K gaming for its RTX 3090, something we'll hopefully be testing, but that brought about questions of cable support. In a follow-up architecture day event, NVIDIA noted that HDMI 2.1 enablement allows a single cable for 8K HDR TV support at 60Hz.
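For a rough sense of why HDMI 2.1 is the enabler here (our own back-of-the-envelope math, not an NVIDIA or HDMI Forum figure): 8K60 with 10-bit HDR color amounts to roughly 60 Gbps of raw pixel data before blanking overhead, more than even HDMI 2.1's 48 Gbps link carries uncompressed, so single-cable 8K60 HDR leans on the spec's Display Stream Compression (or chroma subsampling) to fit.

```python
# Back-of-the-envelope check on single-cable 8K60 HDR bandwidth (our math,
# not an official NVIDIA or HDMI Forum figure).

width, height = 7680, 4320   # 8K UHD
refresh_hz = 60
bits_per_pixel = 30          # 10-bit-per-channel RGB for HDR

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
hdmi_2_1_link_gbps = 48      # HDMI 2.1 FRL maximum data rate

print(f"Raw 8K60 HDR pixel data: ~{raw_gbps:.1f} Gbps (before blanking overhead)")
print(f"HDMI 2.1 link rate:      {hdmi_2_1_link_gbps} Gbps")
# ~59.7 Gbps of pixel data vs. a 48 Gbps link, hence DSC (or 4:2:0) for 8K60 HDR.
```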

https://www.hdmi.org/spec/hdmi2_1.

15:18|PC-Specific CPU Optimizations for Marvel's Avengers

Source: https://www.intel.com/content/www/us/en/gaming/play-avengers.html

Marvel's Avengers launched in full swing this past week, and Intel recently revealed that it collaborated with Crystal Dynamics to make CPU-specific optimizations for the game. The optimizations no doubt focus on the game's physics engine, with Intel's marketing highlighting a few key points (quoted below).

Neither Intel nor Crystal Dynamics disclosed any real technical details. There's a video that shows the enhanced enemy and environmental destruction, as well as the improved water simulation. The video compares frames rendered with these settings on versus off, and the improvements seem to be noteworthy.

We've seen Intel include features like this in the past, like self-shading shadows in some racing games.

https://www.theverge.com/2020/8/31/21408911/intel-pc-exclusive-avengers-graphics-improvements-cpu-gpu-physics-water

"The force and shockwave of each special move will create more detailed debris and particles."
"With every successful blow, stomp, smash or blast, you'll see more persistent armor shards, in more detail, flying in more pieces and more places."
"With the right balance of cores, threads and frequency, any interaction with water becomes a richer, more responsive experience. Water splashes and reacts as it naturally would in the real world."

Intel is also promising to support Marvel's Avengers for the next two years, and is planning to work on GPU-specific optimizations for the game as well, presumably for its upcoming Xe graphics GPUs.

17:18|Mellanox and Cumulus Become Nvidia Networking

Source: https://www.tomshardware.com/news/mellanox-technologies-absorbed-and-rebranded-as-nvidia-networking

Nvidia recently completed its $7B acquisition of Mellanox, and it seems to have wasted no time integrating it into the company and rebranding it. Mellanox will now be known as Nvidia Networking, and while Nvidia hasn't expressly confirmed this, Mellanox's updated website and Twitter account essentially do.

Nvidia also has a new Nvidia Networking Twitter account that combines Mellanox Technologies and Cumulus Networks. Nvidia acquired Cumulus Networks for an undisclosed amount back in May. Cumulus offers the Cumulus Linux OS for networking switches, which Mellanox has been using in its Spectrum line of switches since 2016.

With both Mellanox and Cumulus under Nvidia's roof, Nvidia can scale its HPC platform across not only its chips, but also software and networking.

"NVIDIA Networking, formerly Mellanox Technologies and Cumulus Networks. Ethernet and InfiniBand solutions that are turning the data center into one compute unit," reads the Nvidia Networking Twitter page.

18:59|RTX 3000-Series Recap

The biggest news of the week was Nvidia finally taking the wraps off of its RTX 3000-series cards. We won't go back over all the details here; we have both a video and an article on this. We will quickly review the essentials, however, which have mostly been aggregated into the table below.

Model | GeForce RTX 3090 | GeForce RTX 3080 | GeForce RTX 3070
CUDA Cores | 10496 | 8704 | 5888
Base Clock | 1.4 GHz | 1.44 GHz | 1.5 GHz
Boost Clock | 1.7 GHz | 1.71 GHz | 1.73 GHz
VRAM | 24GB GDDR6X, 19.5Gbps | 10GB GDDR6X, 19Gbps | 8GB GDDR6, 16Gbps
Memory Bandwidth | 935.8 GB/s | 760 GB/s | 512 GB/s
Bus Width | 384-bit | 320-bit | 256-bit
RT Cores | 82 | 68 | 46
Tensor Cores | 328 | 272 | 184
ROPs | 96 | 88 | 64
Architecture | Ampere | Ampere | Ampere
Manufacturing | Custom Samsung 8nm | Custom Samsung 8nm | Custom Samsung 8nm
Graphics Card Power* | 350W | 320W | 220W
Recommended PSU | 750W | 750W | 650W
PCIe Gen 4 | Yes | Yes | Yes
Nvidia DLSS | Yes | Yes | Yes
DirectX 12 Support | Yes | Yes | Yes
Launch Date | 09/24/2020 | 09/17/2020 | 10/2020
MSRP | $1,500 | $700 | $500

Source: GN

*Nvidia lists power specs as "Graphics Card Power," which isn't necessarily the same as TGP or TDP. This number could represent GPU-only or GPU + memory, but we will confirm this in our review. All numbers above are from NVIDIA's official materials and have not yet been independently verified.

The RTX 3000-series will see Nvidia rearrange its product stack to re-establish the RTX 3080 as the series flagship at $700, while the RTX 3090 will be a Titan-class card with a titanic $1,500 price to match. The RTX 3070 is supposed to be more powerful than the RTX 2080 Ti, for $500. Nvidia recently showed some footage of Doom Eternal running at 4K with framerates well above 100FPS, captured on an RTX 3080.

The most interesting aspect – to us, at least – is the new cooling design for FE cards, and what implications it will have. We think it will add a new angle to the air versus liquid cooling debate, and we plan to test this extensively.

Numerous AIB partners have also revealed some of their initial designs. These will have more traditional cooling solutions.
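As a quick sanity check on the bandwidth row above: peak memory bandwidth is simply the per-pin data rate multiplied by the bus width in bytes, and plugging in the listed figures reproduces the table's numbers (small rounding aside).

```python
# Peak memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8.
# Uses the data rates and bus widths listed in the table above.

cards = {
    "RTX 3090": (19.5, 384),  # GDDR6X data rate in Gbps, bus width in bits
    "RTX 3080": (19.0, 320),
    "RTX 3070": (16.0, 256),
}

for name, (gbps, bus_bits) in cards.items():
    bandwidth_gb_s = gbps * bus_bits / 8
    print(f"{name}: {bandwidth_gb_s:.0f} GB/s")
# RTX 3090: 936 GB/s (Nvidia lists 935.8, i.e. the 19.5 Gbps figure is rounded),
# RTX 3080: 760 GB/s, RTX 3070: 512 GB/s.
```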

21:54|Intel Tiger Lake and Rebranding

Intel finally took the wraps off its much-rumored 11th-gen (Tiger Lake) mobile CPUs this past week. While the chips seemed impressive enough on their own, Intel couldn't seem to steer the conversation away from AMD and its Ryzen 4000-series of APUs. We also have a video on the Tiger Lake announcement, so we'll keep this section short.

Intel's naming paradigms continue to get more complicated, as it's introducing even more product identifiers with Tiger Lake. To begin, Tiger Lake will come in one of two packages, depending on the power target: UP3 or UP4. Each package leverages a 10nm SuperFin processor die and a 14nm I/O die.

Package | Model | GPU | EUs | C/T | Cache | Memory | Power | Base Freq | Single-Core Turbo | All-Core Turbo | GPU Freq
UP3 | i7-1185G7 | Xe LP | 96 | 4/8 | 12MB | DDR4-3200 / LPDDR4x-4266 | 12W-28W | 3.0 GHz | 4.8 GHz | 4.3 GHz | 1.35 GHz
UP3 | i7-1165G7 | Xe LP | 96 | 4/8 | 12MB | DDR4-3200 / LPDDR4x-4266 | 12W-28W | 2.8 GHz | 4.7 GHz | 4.1 GHz | 1.30 GHz
UP3 | i5-1135G7 | Xe LP | 80 | 4/8 | 8MB | DDR4-3200 / LPDDR4x-4266 | 12W-28W | 2.4 GHz | 4.2 GHz | 3.8 GHz | 1.30 GHz
UP3 | i3-1125G4 | UHD | 48 | 4/8 | 8MB | DDR4-3200 / LPDDR4x-3733 | 12W-28W | 2.0 GHz | 3.7 GHz | 3.3 GHz | 1.25 GHz
UP3 | i3-1115G4 | UHD | 48 | 2/4 | 6MB | DDR4-3200 / LPDDR4x-3733 | 12W-28W | 3.0 GHz | 4.1 GHz | 4.1 GHz | 1.25 GHz
UP4 | i7-1160G7 | Xe LP | 96 | 4/8 | 12MB | LPDDR4x-4266 | 7W-15W | 1.2 GHz | 4.4 GHz | 3.6 GHz | 1.1 GHz
UP4 | i5-1130G7 | Xe LP | 80 | 4/8 | 8MB | LPDDR4x-4266 | 7W-15W | 1.1 GHz | 4.0 GHz | 3.4 GHz | 1.1 GHz
UP4 | i3-1120G4 | UHD | 48 | 4/8 | 8MB | LPDDR4x-4266 | 7W-15W | 1.1 GHz | 3.5 GHz | 3.0 GHz | 1.1 GHz
UP4 | i3-1110G4 | UHD | 48 | 2/4 | 6MB | LPDDR4x-4266 | 7W-15W | 1.8 GHz | 3.9 GHz | 3.9 GHz | 1.1 GHz

Since AMD beat Intel to the punch when it came to PCIe 4.0 on desktops, Intel was eager to highlight Tiger Lake as the market's first PCIe 4.0 platform for laptops. Other connectivity options include WiFi 6 and Thunderbolt 4. Tiger Lake focuses on a number of improvements over Ice Lake, particularly higher sustained frequencies, improved memory support, better graphics performance, and a renewed focus on power efficiency aimed at improving battery life.

The arrival of Tiger Lake also marked some substantial rebranding for Intel. Intel has essentially rebranded the U- and Y-series, and trotted out new logos for its Iris and Core i-series brands. It also debuted its Evo brand, which replaces Project Athena. Intel has also switched up its corporate logo, noting that this was only the third time it has ever revamped its logo in such a way.

Source: https://newsroom.intel.com/news-releases/11th-gen-tiger-lake-evo/#gs.f9p5wl

24:44|TeamGroup Launches 15.3 TB SSD at $4,000

TeamGroup has now officially taken the most expensive "consumer" SSD crown. The company announced its latest line of spacious SSDs, the QX series. The inaugural TeamGroup QX SSD will offer a capacity of 15.3 TB in a 2.5" form factor with a SATA III interface.

The QX SSD will use 3D QLC NAND from an unspecified supplier and a Phison E12DC controller, combined with an SLC cache mode and DRAM buffer. TeamGroup is claiming speeds of 560 MB/s (read) and 480 MB/s (write), and a write life of 2,560 TBW with a three-year warranty.

TeamGroup's QX SSDs will be made to order, for an estimated price of $3,990.

Source: https://www.teamgroupinc.com/en/news/ins.php?index_id=140

https://www.techradar.com/news/teamgroup-launches-consumer-ssd-with-not-so-consumer-capacity
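To put the QX's 2,560 TBW endurance rating in perspective (our own arithmetic, based on the figures above), it works out to roughly 167 full drive writes, or about 0.15 drive writes per day over the three-year warranty:

```python
# Rough endurance math for the 15.3 TB TeamGroup QX, using the figures above.

capacity_tb = 15.3
endurance_tbw = 2560
warranty_days = 3 * 365

full_drive_writes = endurance_tbw / capacity_tb      # ~167 complete fills
dwpd = full_drive_writes / warranty_days             # drive writes per day
tb_per_day = endurance_tbw / warranty_days           # average TB written per day

print(f"~{full_drive_writes:.0f} full drive writes over the warranty period")
print(f"~{dwpd:.2f} drive writes per day (~{tb_per_day:.1f} TB written per day)")
```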

25:54|Nvidia's RTX 3000-Series Reddit Q&A


Nvidia recently hosted a Q&A on Reddit, where it answered some common questions about its upcoming RTX 3000-series cards. Below, we'll detail a few of the more interesting points.

Source: https://www.reddit.com/r/nvidia/comments/ilhao8/nvidia_rtx_30series_you_asked_we_answered/

One of the larger sticking points we've seen surrounds the fact that the RTX 3080 will feature a 10GB VRAM buffer, which is 1GB less than the GTX 1080 Ti. Nvidia responded:

"We're constantly evaluating memory requirements of the latest games and frequently review with game developers to understand their memory needs for current and upcoming games. The goal of 3080 is to give you great performance at up to 4K resolution with all the settings maxed out at the best possible price. In order to do this, you need a very powerful GPU with high speed memory and enough memory to meet the needs of the games. A few examples – if you look at Shadow of the Tomb Raider, Assassin's Creed Odyssey, Metro Exodus, Wolfenstein: Youngblood, Gears of War 5, Borderlands 3 and Red Dead Redemption 2 running on a 3080 at 4K with Max settings (including any applicable high res texture packs) and RTX On, when the game supports it, you get in the range of 60-100fps and use anywhere from 4GB to 6GB of memory. Extra memory is always nice to have but it would increase the price of the graphics card, so we need to find the right balance."

Nvidia also provided some detail on whether Nvidia Reflex is software, hardware, or both:

"The NVIDIA Reflex Latency Analyzer is an innovative new addition to the G-SYNC Processor that enables end to end system latency measurement. Additionally, NVIDIA Reflex SDK is integrated into games and enables a Low Latency mode that can be used by GeForce GTX 900 GPUs and up to lower system latency."

With the RTX 3000-series, Nvidia and Microsoft will be bringing some improved I/O features to the PC, similar to what the Xbox Series X and PS5 will leverage. Nvidia addressed a number of questions about RTX I/O, including how it will work with SSDs and Microsoft's DirectStorage API. Nvidia on support for RTX I/O and the DirectStorage API:

"RTX IO and DirectStorage will require applications to support those features by incorporating the new APIs. Microsoft is targeting a developer preview of DirectStorage for Windows for game developers next year, and NVIDIA RTX gamers will be able to take advantage of RTX IO enhanced games as soon as they become available."

Nvidia also noted that while there isn't a minimum SSD requirement to take advantage of RTX I/O, faster SSDs will undoubtedly produce better results:

"RTX IO allows reading data from SSDs at much higher speed than traditional methods, and allows the data to be stored and read in a compressed format by the GPU, for decompression and use by the GPU. It does not allow the SSD to replace frame buffer memory, but it allows the data from the SSD to get to the GPU, and GPU memory much faster, with much less CPU overhead."
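As a toy illustration of the data path described there (this is not the RTX IO or DirectStorage API, just a sketch of the idea using zlib), the snippet below compares how many bytes have to move toward the GPU when the CPU decompresses an asset first versus when the GPU ingests the compressed stream and decompresses it itself:

```python
# Toy model of the I/O paths described above -- not the actual RTX IO / DirectStorage API.
# The point: if the GPU can decompress the asset itself, only the smaller compressed
# payload crosses the bus, and the CPU is never involved in decompression.
import zlib

asset = b"vertex and texture data " * 40_000     # stand-in for a ~1 MB game asset
compressed = zlib.compress(asset, level=6)       # how the asset would sit on the SSD

# Traditional path: CPU decompresses, then the full-size data is copied toward the GPU.
cpu_decompressed = zlib.decompress(compressed)
traditional_bytes_over_bus = len(cpu_decompressed)

# RTX IO-style path: the compressed payload goes toward the GPU, which decompresses it.
gpu_path_bytes_over_bus = len(compressed)

print(f"asset size:          {len(asset):>9,} bytes")
print(f"CPU-decompress path: {traditional_bytes_over_bus:>9,} bytes copied")
print(f"GPU-decompress path: {gpu_path_bytes_over_bus:>9,} bytes copied")
```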