Image source: Getty Images.
British chip designer Graphcore recently unveiled the Colossus MK2, also known as the GC200 IPU (Intelligence Processing Unit), which it calls the world's most complex chip for AI applications. The chip offers eight times the performance of its predecessor, the Colossus MK1, and packs 59.4 billion transistors, surpassing the 54 billion transistors in NVIDIA's (NASDAQ: NVDA) latest top-tier A100 data center GPU.
Graphcore plans to install four GC200 IPUs into a new device called the M2000, which is roughly the size of a pizza box and delivers one petaflop of computing power. By itself, the system is slower than NVIDIA's DGX A100, which can handle five petaflops on its own.
Graphcore's M2000 is a plug-and-play system that lets users connect up to 64,000 IPUs together for 16 exaflops (each exaflop equals 1,000 petaflops) of processing power. To put that into perspective, a human would need to perform one calculation every second for almost 31.7 billion years to match what a one-exaflop system can do in a single second.
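That figure checks out with some quick arithmetic. The Python sketch below reproduces it, assuming (as the article implies but does not state) that one exaflop means 10^18 operations per second.

```python
# Back-of-the-envelope check of the exaflop comparison above.
# Assumption: 1 exaflop = 10**18 operations per second.
ops_per_second_exaflop = 10**18
human_ops_per_second = 1                  # one calculation per second, as in the article
seconds_per_year = 365.25 * 24 * 60 * 60

years = ops_per_second_exaflop / human_ops_per_second / seconds_per_year
print(f"{years / 1e9:.1f} billion years")  # ~31.7 billion years
```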
The GC200 and A100 are both clearly very powerful chips, but Graphcore enjoys three distinct advantages over NVIDIA in the growing AI market.
1. Graphcore is designing custom chips for AI tasks
Unlike NVIDIA, which expanded its GPUs beyond gaming and professional visualization into the AI market, Graphcore designs custom IPUs, which differ from CPUs and GPUs, specifically for machine learning tasks.
On its website, Graphcore states: "CPUs were designed for office apps, GPUs for graphics, and IPUs for machine intelligence." It explains that CPUs are built for "scalar" processing, which handles one piece of data at a time, while GPUs are built for "vector" processing, which handles large arrays of integers and floating-point numbers simultaneously.
Graphcore's IPU technology uses "graph" processing, which processes all of the data mapped across a single graph at once. It claims this IPU architecture handles machine-learning tasks more efficiently than CPUs and GPUs. Many machine-learning frameworks, including TensorFlow, MXNet, and Caffe, already support graph processing.
Graphcore claims the vector processing model used by GPUs is "far more restrictive" than the graph model, which can allow researchers to "explore new models or re-explore areas" in AI research.
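For readers unfamiliar with the terminology, here is a minimal sketch of what "graph" execution looks like in TensorFlow, one of the frameworks named above: the tf.function decorator traces a Python function into a computation graph, so the framework sees the whole mapped computation at once rather than executing one operation at a time. This is generic framework code, not Graphcore's IPU software, and the layer shapes are arbitrary.

```python
import tensorflow as tf

# tf.function traces this Python function into a computation graph;
# the runtime can then schedule the whole graph at once rather than
# running each operation eagerly, one at a time.
@tf.function
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

# Arbitrary example shapes: a batch of 8 inputs with 4 features each.
x = tf.random.normal([8, 4])
w = tf.random.normal([4, 16])
b = tf.zeros([16])

print(dense_layer(x, w, b).shape)  # (8, 16)
```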
2. Graphcore's GC200 offers cheaper per-petaflop processing power
NVIDIA's DGX A100 system costs $199,000, which works out to $39,800 per petaflop. Graphcore's M2000 system provides one petaflop of processing power for $32,450. That difference of $7,350 per petaflop could generate millions of dollars in savings for multi-exaflop data center systems.
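Those per-petaflop figures follow directly from the list prices quoted above; a quick sketch, assuming the $199,000 DGX A100 delivers the five petaflops cited earlier, is below.

```python
# Rough check of the per-petaflop pricing cited above.
# Assumptions: DGX A100 at $199,000 for 5 petaflops; M2000 at $32,450 for 1 petaflop.
dgx_a100_price, dgx_a100_petaflops = 199_000, 5
m2000_price, m2000_petaflops = 32_450, 1

dgx_per_pflop = dgx_a100_price / dgx_a100_petaflops    # $39,800
m2000_per_pflop = m2000_price / m2000_petaflops        # $32,450
savings_per_pflop = dgx_per_pflop - m2000_per_pflop    # $7,350

# At multi-exaflop scale (1 exaflop = 1,000 petaflops), the gap adds up quickly.
print(f"Savings per exaflop: ${savings_per_pflop * 1_000:,.0f}")  # $7,350,000
```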
That could spell trouble for NVIDIA's data center business, which grew its revenue 80% year over year to $1.14 billion last quarter and accounted for 37% of the chipmaker's top line. NVIDIA recently acquired data center networking equipment maker Mellanox to strengthen that business, but that added scale might not deter Graphcore's disruptive efforts.
3. Graphcore is backed by venture capital
Unlike NVIDIA, a publicly traded chipmaker that is regularly scrutinized over its spending practices, Graphcore is a private start-up that can focus on research and development (R&D) and growth instead of short-term profits.
Graphcore was founded just four years ago, but it was already valued at $1.95 billion after its last funding round in February. Its backers include investment firms like Merian Chrysalis and Amadeus Capital Partners, as well as large companies like Microsoft (NASDAQ: MSFT). Microsoft already uses Graphcore's IPUs to process machine learning workloads on its Azure cloud computing platform, and other cloud giants could follow its lead over the next few years.
Should NVIDIA investors be worried?
NVIDIA enjoyed a first-mover's advantage in data center GPUs, but it faces a growing list of challengers, including first-party chips from Amazon, Facebook, and Alphabet's Google. Graphcore represents another looming threat, and NVIDIA's investors should be wary of its new chips, which appear to offer a cheaper, more streamlined, and more flexible approach to tackling machine learning and AI tasks.