PlayStation 5 Pro with multi-GPU tech outlined in new Sony patent – TweakTown

A brand-new Sony patent could mean some exciting things for the PlayStation 5 and the PlayStation service as a whole. The patent, which was filed in January 2019 and published just days ago, outlines a variety of different possibilities. The most interesting one is the mention of dual-GPU (and even dual-APU) consoles for massively increased performance. Sony could be setting itself up for a PS5 refresh or PS5 Pro with multi-GPU support. This patent isn't that simple, though. There's mention of cloud gaming tossed into the mix, so it's not all just based around local hardware.
Before we dive in, I want to remind everyone that this is a patent and not finalized information, meaning nothing has been confirmed or announced. There's no verification that the PS5 Pro is even real or that it will have multiple GPUs, and most importantly, the tech/info laid out herein may never be developed or released for (or in) a commercially available system.

FIG. 3 illustrates an example non-uniform memory access (NUMA) architecture, in which a single fabric 300 holds two APUs 302, 304 on a single die or on respective dies, it being understood that the NUMA architecture may be implemented by more than two APUs. When implemented on respective die chips on the same fabric 300, communication paths, which may be referred to as "busses" for generality, may be established through layers of the fabric.
As shown, each APU may include one or more CPUs 304 and one or more GPUs 306, typically one CPU and one GPU per APU. Each APU 302 may be associated with its own respective memory controller 308 that controls access to memory 310 such as random-access memory (RAM). Communication between APUs may be effected by one or more communication paths 312, referred to herein for convenience as "busses".
Thus, each APU (or each GPU) has its own memory controller and hence its own dedicated memory, such as RAM. There can be a (cache-coherent) shared bus between the GPUs, enabling one GPU to access the memory of the other GPU.
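To make that NUMA layout a bit more concrete, here's a minimal C++ sketch of the idea (my own illustration, not anything from the patent): two APU nodes, each with its own local RAM pool, where touching your own memory is cheap and reaching across the shared bus to the other APU's memory costs more. The names and cost numbers are made up.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical model of one APU node: a CPU/GPU pair with its own RAM pool.
struct ApuNode {
    int id;
    std::vector<uint8_t> local_ram;   // memory behind this APU's own controller

    ApuNode(int id_, size_t bytes) : id(id_), local_ram(bytes, 0) {}
};

// Simulated access cost: local reads are cheap, remote reads cross the
// cache-coherent bus between APUs and cost more (illustrative numbers only).
int access_cost(const ApuNode& requester, const ApuNode& owner) {
    return (requester.id == owner.id) ? 1 : 4;
}

int main() {
    ApuNode apu0(0, 1024), apu1(1, 1024);

    // APU 0 touching its own RAM vs. APU 1's RAM over the shared bus.
    std::printf("local access cost:  %d\n", access_cost(apu0, apu0));
    std::printf("remote access cost: %d\n", access_cost(apu0, apu1));
}
```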

PlayStation 5 Pro

Sony filed a new patent that suggests a higher-end PS5 Pro console with dual GPUs (as well as a myriad of other possibilities).

FIG. 4 is a block diagram of a shared memory architecture in which two APUs each including a CPU 400 and GPU 402 are shown with each CPU and each GPU being implemented on its own respective die, it being understood that the architecture may be implemented on fewer dies or even one die, and that more than two APUs may be implemented.
The APUs share a common memory controller 404 that controls a memory 406, and the APUs may communicate with each other and with the memory controller over respective communication paths.

The $100 PlayStation console “stick”.

Accordingly, in some embodiments the server 80 may be an Internet server or an entire "server farm" and may include and perform "cloud" functions such that the devices of the system 10 may access a "cloud" environment via the server 80 in example embodiments for, e.g., network gaming applications. Or, the server 80 may be implemented by one or more game consoles or other computers in the same room as the other devices shown in FIG. 1, or nearby.

In other examples, each GPU is programmed to render all of some, but not all, lines of a frame of video, with the lines of a frame rendered by one GPU being different from the lines of the frame rendered by the other GPU, to provide a respective output. In another example, the first GPU includes at least one scanout unit pointing only to buffers managed by the first GPU and is programmed to receive frames of the video from the second GPU via direct memory access (DMA) to output a complete sequence of frames of the video.
In yet another example approach, the first GPU includes at least one scanout unit pointing to at least a first buffer managed by the first GPU and a second buffer managed by the second GPU. Again, the first GPU can include at least one scanout unit pointing to at least a first buffer managed by the first GPU and not to a second buffer managed by the second GPU. In this implementation, the first GPU may be configured to cycle through buffers to output a complete sequence of frames of video using lines 1-N associated with the first buffer and lines (N+1)-M associated with the second buffer and received by the first GPU via direct memory access (DMA).
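Here's a rough C++ sketch of that line-splitting idea (again my own illustration, with made-up names and a plain memory copy standing in for the DMA transfer the patent describes): GPU 0 renders lines 1 through N into its buffer, GPU 1 renders lines N+1 through M into its buffer, and GPU 0 stitches the full frame together for scanout.

```cpp
#include <cstdio>
#include <vector>

constexpr int kWidth = 8;    // toy frame dimensions for illustration
constexpr int kHeight = 6;
constexpr int kSplit = 3;    // "N": GPU 0 owns lines [0, kSplit), GPU 1 owns the rest

using Frame = std::vector<std::vector<int>>;

// Stand-in for a GPU rendering only its assigned range of lines into its buffer.
Frame render_lines(int gpu_id, int first_line, int last_line) {
    Frame buffer(kHeight, std::vector<int>(kWidth, 0));
    for (int y = first_line; y < last_line; ++y)
        for (int x = 0; x < kWidth; ++x)
            buffer[y][x] = gpu_id + 1;   // tag each pixel with the GPU that drew it
    return buffer;
}

int main() {
    Frame gpu0 = render_lines(0, 0, kSplit);        // lines 1..N
    Frame gpu1 = render_lines(1, kSplit, kHeight);  // lines N+1..M

    // Stand-in for the DMA step: GPU 0 pulls GPU 1's lines into the frame it scans out.
    Frame out = gpu0;
    for (int y = kSplit; y < kHeight; ++y) out[y] = gpu1[y];

    for (const auto& row : out) {
        for (int px : row) std::printf("%d ", px);
        std::printf("\n");
    }
}
```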

PS Now server farms could boost console power and improve in-game performance and resolution. Sony could effectively rent access to its beefy servers (which are powered by Microsoft Azure, I might add), similar to NVIDIA's GeForce Now service.
This could be relevant to the PlayStation 5 and even an affordable digital-only box, which leads us into our next section.

The patent kicks things off with an interesting background and summary section that talks about the benefits of using multiple GPUs and linking them together. The summary section explicitly discusses a light version of a console (presumably the base PS5) that could use a single SoC, and a high-end version (the PS5 Pro) that could use multiple SoCs.
Remember, the PlayStation 5 uses a single 7nm SoC from AMD outfitted with a Navi GPU and a Zen 2 CPU.
The patent's primary objective is to present the possibility of a multi-GPU console with both local and network access to the second GPU. There are two main ways this could work: a physical console that contains two GPUs, whether that's two SoCs/APUs or one SoC and a second GPU; or using a GPU from a cloud server network.
The latter is how the PlayStation Now service is powered.
The patent acknowledges there are significant hurdles to tackle, such as framebuffer management for rendering, when it comes to physical dual-GPU setups. The patent is detailed and extremely varied, and it aims to cover all the bases for a solution to this issue.
Rather than describing every single possible solution, we'll give you the gist (we'll also include a full copy of the summary at the bottom of the article for your perusal).
Some of the embodiments have the rendered video split up between the GPUs. One GPU renders one part, the other renders the other part, and the system uses multiplexing to combine the rendered images and output them to a screen.
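The patent covers splitting work both by lines of a frame and by alternating whole frames. The sketch below (hypothetical, not Sony's design) shows the frame-alternation flavor, with a trivial "multiplexer" interleaving the two GPUs' outputs back into display order.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for a GPU that only renders the frames assigned to it.
std::vector<std::string> render_frames(int gpu_id, int total_frames) {
    std::vector<std::string> out;
    for (int f = 0; f < total_frames; ++f)
        if (f % 2 == gpu_id)   // GPU 0 takes even frames, GPU 1 takes odd frames
            out.push_back("frame " + std::to_string(f) + " (GPU " + std::to_string(gpu_id) + ")");
    return out;
}

int main() {
    const int total = 6;
    auto gpu0 = render_frames(0, total);
    auto gpu1 = render_frames(1, total);

    // "Multiplexer": interleave the two outputs back into display order.
    for (int f = 0; f < total; ++f) {
        const auto& src = (f % 2 == 0) ? gpu0 : gpu1;
        std::printf("%s\n", src[f / 2].c_str());
    }
}
```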

Theoretically, Sony could make a base PlayStation 5 with a single SoC and a PlayStation 5 Pro with dual SoCs or dual-GPU tech for more power.
This beefed-up design would theoretically be better suited for high-end 8K gaming, but there are also thermals to consider: the PS5's Navi GPU and PCIe 4.0 could get rather hot, and adding another GPU could be problematic. However, this patent isn't necessarily concerned with thermals.
The patent also mentions how power consumption increases with frequency, and points out that the system will need to vary either power use or clock frequencies. For reference, the PlayStation 5's redesigned SoC architecture uses variable frequencies while keeping power consumption roughly constant, in an effort to minimize fan noise and focus on heat mitigation.
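As a back-of-the-envelope illustration (this formula isn't spelled out in the patent), dynamic chip power is commonly approximated as capacitance times voltage squared times frequency, so at a fixed voltage, power tracks the clock roughly linearly:

```cpp
#include <cstdio>

// First-order dynamic power model: P ~= C * V^2 * f (illustrative numbers only).
double dynamic_power(double capacitance, double voltage, double freq_ghz) {
    return capacitance * voltage * voltage * freq_ghz;
}

int main() {
    const double c = 1.0;     // arbitrary effective switched capacitance
    const double v = 1.0;     // nominal voltage (held fixed here)

    // Raising the clock 10% raises power ~10% at fixed voltage; in practice
    // higher clocks often need more voltage too, which makes it worse than linear.
    std::printf("P @ 2.0 GHz: %.2f\n", dynamic_power(c, v, 2.0));
    std::printf("P @ 2.2 GHz: %.2f\n", dynamic_power(c, v, 2.2));
}
```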
At the same time, the patent is careful to note that this extra GPU access could be provided by PlayStation Now servers rather than local hardware.

A couple of years back, Microsoft was working on Project Hobart, a streaming-only “stick” that plugged directly into your television.
The idea was that Hobart would be a mini-console with very minimal power and capabilities. Instead of using an onboard SoC fit for gaming, Hobart would draw its power from Microsoft's mighty cloud server banks. The hardware was abandoned because Microsoft didn't have a game streaming infrastructure at the time (it does now, and it's called Project xCloud).
Sony has its PlayStation Now service, and this patent could see that service pumping out serious gaming power to low-cost boxes or even streaming sticks for massively accessible gaming.
By emulating Project Hobart, Sony could effectively introduce the cheapest PlayStation console ever. It'd be a small, low-power box or HDMI stick that plugs right into a television and leverages PlayStation Now servers for game performance and streaming. Sony could outfit its PS Now servers with PlayStation 5 consoles to provide PS5-esque performance to these boxes over the cloud.
This could also let Sony natively add PlayStation gaming to its line of premium Bravia TVs. These UHDTVs could be equipped with these boxes and connect directly to PlayStation Now to enable built-in PS4 and PS5 cloud gaming.
An older Sony patent also seems relevant here. In it, Sony laid out how the PlayStation 5's DualSense controller could connect directly to the PlayStation Now network without requiring a console. This would let consumers connect their next-gen DualSense controllers directly to the low-cost box (or even the aforementioned console-free Bravia TVs) for cloud gaming.

From our understanding, this patent has lots of potential for brand-new PlayStation hardware that goes beyond the PS5's next-generation capabilities. There are multiple embodiments (or permutations, different ways of presenting the idea) that describe a variety of different methodologies for the tech. And these embodiments are quite extensive.
The most obvious is a PlayStation 5 Pro, but there are also hints that Sony could launch a smaller, more affordable PlayStation "stick" with a limited-power APU or SoC that's specifically designed for PlayStation Now. The stick would basically be only powerful enough to display and process basic images, and the brunt of the image rendering and processing would be done server-side.
First, let's dive into the PlayStation 5 Pro possibilities.

FIG. 5 is a block diagram of a shared memory architecture in which two APUs (each including a respective CPU 502 and GPU 504) are shown with each APU being implemented on its own respective die 500 and with a shared memory controller 506 being implemented on one of the dies 500, it being understood that the architecture may be implemented on one die and that more than two APUs may be implemented. The shared memory controller 506 controls access to a memory 508, and the APUs may communicate with each other and with the memory controller 506 over one or more communication paths 510.
The patent also states the console hardware could have two GPUs on the same die sharing the same memory controller and RAM pool, which is unusual for consoles.
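For contrast with the NUMA sketch earlier, here's another purely hypothetical illustration of the shared-pool idea: both GPUs request their framebuffers from one common pool behind a single memory controller instead of each owning its own RAM. Every name here is made up.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical single memory controller arbitrating one shared RAM pool.
class SharedMemoryController {
public:
    explicit SharedMemoryController(size_t bytes) : pool_(bytes), next_free_(0) {}

    // Bump-allocate a region of the shared pool for whichever GPU asks.
    size_t allocate(int gpu_id, size_t bytes) {
        size_t offset = next_free_;
        next_free_ += bytes;
        std::printf("GPU %d got %zu bytes at offset %zu\n", gpu_id, bytes, offset);
        return offset;
    }

private:
    std::vector<unsigned char> pool_;
    size_t next_free_;
};

int main() {
    SharedMemoryController ctrl(16 * 1024 * 1024);  // one 16 MB pool for both GPUs
    ctrl.allocate(/*gpu_id=*/0, 4 * 1024 * 1024);   // GPU 0 framebuffer
    ctrl.allocate(/*gpu_id=*/1, 4 * 1024 * 1024);   // GPU 1 framebuffer
}
```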

BACKGROUND.
Simulation consoles such as computer game consoles typically use a single chip, referred to as a "system on a chip" (SoC), that includes a central processing unit (CPU) and a graphics processing unit (GPU). Due to semiconductor scaling challenges and yield concerns, multiple smaller chips can be connected by high-speed coherent busses to form large chips. While such a scaling solution is slightly less optimal in performance compared to building a large monolithic chip, it is cheaper.
SUMMARY.
As understood herein, SoC technology can be applied to video simulation consoles such as game consoles, and in particular a single SoC may be provided for a "light" version of the console while plural SoCs may be used to provide a "high-end" version of the console with greater processing and storage capability than the "light" version. The "high-end" system can also include more memory such as random-access memory (RAM) and other features, and may also be used for a cloud-optimized version using the same game console chip with higher performance.
As further understood herein, however, such a "high-end" multiple-SoC design presents challenges to the software and simulation (video game) design, which must scale accordingly. As an example, challenges arise related to non-uniform memory access (NUMA) and thread management, as well as providing hints to software to use the hardware in the best way. In the case of GPUs operating in concert, framebuffer management and control of the high-definition multimedia interface (HDMI) output may be addressed. Other challenges may also be addressed herein.
Accordingly, a device includes at least a first graphics processing unit (GPU), and at least a second GPU communicatively coupled to the first GPU. The GPUs are programmed to render respective portions of video, such that the first GPU renders first portions of the video and the second GPU renders second portions of the video, with the first and second portions being different from each other.
Stated differently, the first GPU may be configured for rendering first frames of video to provide a first output, while the second GPU is configured for rendering some, but not all, frames of the video to provide a second output.
The frames rendered by the second GPU are different from the frames rendered by the first GPU. The first and second outputs may be combined to render the video.
In addition, or alternatively, the first GPU may be configured for rendering all of some, but not all, lines of a frame of video to provide a first line output, and the second GPU may be programmed for rendering some, but not all, lines of the frame of the video to provide a second line output. The lines rendered by the second GPU are different from the lines rendered by the first GPU. The first and second line outputs can be combined to render the frame.
In some embodiments, the first and second GPUs are implemented on a common die. In other embodiments, the first and second GPUs are implemented on respective first and second dies. The first GPU may be associated with a first central processing unit (CPU) and the second GPU may be associated with a second CPU.
In some implementations, a first memory controller and first memory are associated with the first GPU and a second memory controller and second memory are associated with the second GPU. In other implementations, the GPUs share a common memory controller managing a common memory.
In some examples, each GPU is programmed to render all of some, but not all, frames of video, different from the frames of the video rendered by the other GPU, to provide a respective output. In other examples, each GPU is programmed to render all of some, but not all, lines of a frame of video, with the lines of a frame rendered by one GPU being different from the lines of the frame rendered by the other GPU, to provide a respective output.
In an example approach, the first GPU includes at least one scanout unit pointing to at least one buffer managed by the second GPU. The first GPU can be configured to cycle through buffers to output a complete sequence of frames of the video. In another example, the first GPU includes at least one scanout unit pointing only to buffers managed by the first GPU and is programmed to receive frames of the video from the second GPU via direct memory access (DMA) to output a complete sequence of frames of the video.
In yet another example approach, the first GPU includes at least one scanout unit pointing to at least a first buffer managed by the first GPU and a second buffer managed by the second GPU. In this approach, the first GPU is programmed to cycle through buffers to output a complete sequence of frames of video using lines 1-N associated with the first buffer and lines (N+1)-M associated with the second buffer. The 1-N lines and (N+1)-M lines are different lines of the same frame.
Again, the first GPU can include at least one scanout unit pointing to at least a first buffer managed by the first GPU and not to a second buffer managed by the second GPU. In this implementation, the first GPU may be configured to cycle through buffers to output a complete sequence of frames of video using lines 1-N associated with the first buffer and lines (N+1)-M associated with the second buffer and received by the first GPU via direct memory access (DMA). The 1-N lines and (N+1)-M lines are different lines of the frame of video.
In still another approach, the first GPU includes at least one scanout unit pointing to at least a first buffer communicating with the common memory controller. The second GPU includes a second buffer communicating with the common memory controller. The first GPU is programmed for rendering lines 1-N associated with the first buffer and the second GPU is programmed for rendering lines (N+1)-M associated with the second buffer.
In some examples, the first GPU controls the video data output from the first and second GPUs. This may be effected by physically connecting an HDMI port to the first GPU. In other examples, the GPUs output video data to a multiplexer that multiplexes the frames and/or lines from the respective GPUs together to output video.
In another aspect, in a multi-graphics processing unit (GPU) simulation environment, a method includes causing plural GPUs to render respective frames of video, or to render respective portions of each frame of video, or both to render respective frames and respective portions of frames of video. The method includes controlling frame output using a first one of the GPUs receiving frame information from at least one other of the GPU(s), or multiplexing outputs of the GPUs together, or both using a first one of the GPUs receiving frame information from at least one other of the GPU(s) and multiplexing outputs of the GPUs together.
In another aspect, a computer simulation apparatus includes at least a first graphics processing unit (GPU) configured for rendering a respective first portion of simulation video, and at least a second GPU programmed for rendering a respective second portion of simulation video. At least the first GPU is programmed to combine the first and second portions and to render an output establishing a complete simulation video.

"Similarly, a user may be allocated more dies (and hence more APUs) on a cloud server by paying additional fees, with only a single die being allocated to lower-paying users.
"This may be done when an application starts by programming an API to request system metrics and determine and spawn thread quality of service based on the metrics. System metrics can be filtered for lower-paying users who are allocated fewer dies. Higher-paying users wanting the advantage of a multi-threaded game with simultaneous processing can be allocated more dies than lower-paying users."
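As a rough illustration of that tiering idea (my own sketch, not Sony's API), the code below pretends to query how many dies the user's tier was granted and spawns that many worker threads; `query_allocated_dies` and everything else here is hypothetical.

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for querying a platform metric: how many dies/APUs
// the service has granted this user's tier.
int query_allocated_dies(bool premium_tier) {
    return premium_tier ? 2 : 1;   // e.g., paying users get an extra die
}

int main() {
    const int dies = query_allocated_dies(/*premium_tier=*/true);

    // Spawn one worker per granted die; a real title would scale its job
    // system or render workload to the hardware it was actually given.
    std::vector<std::thread> workers;
    for (int d = 0; d < dies; ++d)
        workers.emplace_back([d] { std::printf("worker running on die %d\n", d); });

    for (auto& w : workers) w.join();
}
```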

As we said earlier, remember that this is a patent. There's no concrete evidence Sony is making a PS5 Pro with multiple GPUs. There are lots of issues with a dual-SoC or dual-APU system to consider, and we'll most likely see more patents pertaining to this in the future.
But for now, we at least know Sony is trying to cover its bases and future-proof its plans for the PlayStation brand. There's no telling what'll come of this patent.
The PlayStation 5 is due out Holiday 2020. No pricing has been revealed.
Check below for the full specs of the PS5, along with a complete background/summary of the covered patent:

It could also use two different SoCs with two different CPUs attached to the same die and using NUMA access, a memory architecture that allows faster access to RAM data. The dual SoCs would also have their own RAM pools and memory controllers, and would be coordinated through a communication bus.

Wrap-Up.