It’s easy to look at the benchmark numbers of Apple’s home-grown processor with wide, astonished eyes—and some heart-felt expletives, too. The M1 is no doubt impressive enough to capture the interest of the most die-hard of die-hard PC users, and it’s clear that Apple’s gamble with making its own in-house chips is already paying off. It might not pay dividends overnight, but those already entrenched in the Mac ecosystem will reap a lot of benefits.
But as amazing as Apple’s M1 is, and as great a value as the new MacBook Air and MacBook Pro are, it’s nearly impossible to do a direct comparison to any Windows-based Intel and AMD system—or even any macOS-based Intel system—because of the architectural differences between the processors. What the M1 excels at and where it falls short compared to its competition varies by program. Yes, it’s extremely fucking fast, according to Cinebench. And yes, some staunch PC users may even switch to Mac the next time they need a new laptop. But the M1 isn’t a clear winner over Intel and AMD. It exists in its own world—kind of like Apple as a whole.
I have a much more in-depth explanation of the differences between Intel/AMD and Apple’s processors further down, which will help interpret some of these benchmark results, but I wanted to start with those results up-front. In addition to the usual benchmark suite we run on all laptops, I included a few more tests to get a better idea of how Apple’s M1 performs compared to a few Intel and AMD models when it comes to different tasks. I included a mix of synthetic and real benchmarks, because synthetic benchmarks don’t always tell the full story. This is especially true with the M1, due to some of the programs running via Apple’s Rosetta 2 instead of natively, which could have a performance impact depending on how well the program translates the code.
I ran the same tests on four separate laptops:
- Apple MacBook Pro 13-inch: M1 processor @ 3.20 GHz, 8-cores (4 “big,” 4 “little”), 16GB DRAM
- MSI Prestige 14 Evo: Intel Core i7-1185G7 @ 3.00 GHz, 4-cores/8-threads, with Iris Xe Graphics, 16GB DRAM
- Lenovo Yoga 7i 14-inch Evo: Intel Core i5-1135G7 @ 2.4 GHz, 4-cores/8-threads, with Iris Xe Graphics, 12GB DRAM
- Lenovo IdeaPad Slim 7: AMD Ryzen 7 4800U @ 1.8-4.2 GHz, 8-cores/16-threads, with Radeon Graphics, 16GB DRAM
The M1 dominates Intel and AMD for the most part when it comes to synthetic tests. There are a few exceptions: It couldn’t keep pace with the Intel Core i7-1185G7 in the Geekbench 5 GPU compute test, it fell behind the AMD Ryzen 7 4800U in the Cinebench R23 multi-core test, and it was a few frames behind both the Core i7-1185G7 and the Ryzen 7 4800U in the GFXBench test.
All that is pretty straightforward, but when we move away from synthetic tests, the picture becomes more complicated. In the real world, the M1 is neither “better” nor “worse” than its competitors. It’s better or worse at different tasks, and what it’s better at seems to depend largely on whether a program runs natively on the M1, and on how good a job Rosetta 2 does translating code from x86 (Intel and AMD) to ARM (M1).
And all that has to do with the way these CPUs process information, which is fundamentally different. Apple’s M1 is a Reduced Instruction Set Computer (RISC) chip, while Intel and AMD’s processors are Complex Instruction Set Computer (CISC) chips. Modern CISC chips have RISC qualities and vice versa, but I’ll get into that in a minute.
Clock cycles aren’t necessarily the most important thing to the M1’s architecture. RISC processors use only simple instructions that can each be executed within one clock cycle, or a single electronic pulse. On the surface, that seems like an inefficient way to process information, but if a program is compiled properly, a RISC chip can finish a task in about the same amount of time as a CISC chip. That means more of the processing responsibility falls on the software, rather than the hardware. It also means a RISC chip needs fewer transistors to decode its instructions, which keeps the die small and makes it a perfect candidate for a System on a Chip (SoC).
In the case of Apple’s M1, the CPU, GPU, and DRAM are all on the same integrated circuit, which not only saves space, but also allows each component to communicate more efficiently for faster processing times and lower latency, and also reduces power consumption.
However, one of the downsides of RISC architecture is that it takes more instructions to do the same task as a CISC chip. More instructions generally mean this kind of processor relies on more DRAM to store them. It also pushes more work onto the software, which has to break complex operations down into those longer instruction sequences.
CISC processors can do the exact same work with fewer, denser instructions. As such, less DRAM is needed to store instructions, because there are fewer of them. They’re also good at handling high-level statements, or statements that read like a fully formed sentence (e.g. if this happens, then do this): complex code, basically.
But one of the downsides to CISC architecture is that it relies on hardware, specifically the transistors, for storing complex instructions. That means the chip itself is usually much larger than a RISC chip. Not only that, but it could take several clock cycles just to process one instruction. For processors with slow clock speeds (generally anything under 3.5 GHz today is slow), that means it could take a while to open a program or load a scene in a game. That’s not ideal.
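The trade-off described over the last few paragraphs can be sketched in a toy model. The cycle counts are purely illustrative (real CPUs pipeline, reorder, and cache in ways this ignores), but they show the shape of the thing: the same “add a value to a number in memory” operation becomes one multi-cycle instruction on a CISC design and three single-cycle instructions on a RISC design.

```python
# Toy model of the RISC/CISC trade-off described above. Cycle counts
# are made up for illustration; real processors don't work this simply.

# CISC style: one dense instruction does load + add + store,
# and takes several clock cycles to finish.
cisc_program = [("ADD_TO_MEMORY", 4)]  # (instruction, cycles)

# RISC style: three simple instructions, each done in one cycle.
risc_program = [("LOAD", 1), ("ADD", 1), ("STORE", 1)]

def total_cycles(program):
    """Sum the cycle cost of every instruction in a program listing."""
    return sum(cycles for _, cycles in program)

# The RISC listing is longer (more instructions to fetch and store,
# hence the extra DRAM pressure), but each one finishes in a cycle.
print(len(cisc_program), total_cycles(cisc_program))  # prints "1 4"
print(len(risc_program), total_cycles(risc_program))  # prints "3 3"
```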
Historically, it took a long time before RISC chips took off in the consumer tech world, largely due to a lack of software support. Back when Apple’s Power Macintosh line was around, few companies were willing to take a chance on RISC, a new chip architecture at the time. But then gadgets like iPods, smartphones, smartwatches, and a bunch of other pocket-sized tech devices emerged, all with RISC chips. The world wasn’t ready for a RISC-based Apple computer 30 years ago, or even 10 years ago, but it is now.
CISC and RISC are more alike today thanks to how small process nodes have gotten over the years. The smaller the process node, the more transistors that can fit on a chip. Apple’s M1 is on a 5nm process—which is a smaller process than AMD (7nm) and Intel (10nm and 14nm)—with 16 billion transistors. If a RISC processor has more transistors, it doesn’t need to rely so much on DRAM and can process more high-level, CISC-like commands. Also, now that processor speeds have increased tremendously, CISC chips can execute more than one instruction per clock cycle.
There are still distinct differences between the two architectures, though. Apple’s M1 does not use threads, whereas Intel and AMD chips do. (Threads allow a single core to process two separate tasks at the same time—or, rather, threads switch between tasks so fast it looks like they’re being processed at the same time.) Apple’s M1 is also part of ARM’s big.LITTLE processor family, meaning it has separate, dedicated cores for heavy workloads and light workloads. In the case of the M1 in the MacBook Pro, four big cores handle power-intensive tasks and four little cores are designed for power efficiency rather than performance. Intel and AMD processors don’t make that distinction.
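On recent versions of macOS you can actually see that big/little split from the command line: running `sysctl hw.perflevel0.logicalcpu hw.perflevel1.logicalcpu` reports the performance-core and efficiency-core counts. Those sysctl keys are Apple’s; the small parsing helper below is my own sketch, fed with sample output rather than a live query.

```python
def parse_sysctl(output: str) -> dict:
    """Turn `sysctl` "key: value" lines into a dict of ints."""
    counts = {}
    for line in output.strip().splitlines():
        key, _, value = line.partition(":")
        counts[key.strip()] = int(value)
    return counts

# Sample output as it would appear on an M1 (4 big cores, 4 little cores).
sample = """\
hw.perflevel0.logicalcpu: 4
hw.perflevel1.logicalcpu: 4"""

print(parse_sysctl(sample))
# prints "{'hw.perflevel0.logicalcpu': 4, 'hw.perflevel1.logicalcpu': 4}"
```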
All of that said, synthetic benchmarks are not a definitive measure of performance anymore. Yes, we run them to get an idea of how something can perform, especially when it comes to gaming, but for the average computer user, it’s more a question of, “How fast will this file convert?” or, “How long will it take to open this program?” The biggest barrier for Apple Silicon right now is software support.
The above benchmarks cover a range of everyday tasks, from exporting a Word document to a PDF, to rendering a 3D image in Blender, to exporting video. This is where it becomes apparent that the new Apple Silicon is not better or worse than Intel and AMD. It’s just different.
I started with Microsoft Office tasks: exporting an 802-page Word document to a PDF, a 10,000+ row Excel spreadsheet to a PDF, and a 200-slide PowerPoint deck to a PDF.
While it’s not such a huge deal that the MacBook was, on average, 10 to 20 seconds slower than the rest of the laptops, it absolutely killed the Excel task. It converted that massive spreadsheet to a PDF in a little over a minute, while the rest of the systems took anywhere from two to four times as long.
The Blender, Handbrake, and Adobe Premiere Pro results all come from running those programs through Rosetta 2, and they varied wildly. The MacBook Pro easily beat the Intel systems in both the Blender CPU and GPU compute rendering tests, but it was not faster than the AMD system. The Mac struggled quite a bit to convert a 45-second 4K video from MP4 to HEVC in Premiere. In Handbrake, it was only faster than the Intel Core i5 system.
However, running the same transcoding test with the beta version of Handbrake built specifically for the M1 cut the transcoding time nearly in half, from 13.6 minutes to 7.8 minutes. (The result is notated on the graph with “beta.”) That’s a massive testament to how efficiently a program can run on the M1 when it’s optimized to run natively instead of through Rosetta 2.
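Working out the arithmetic on those two timings:

```python
rosetta_minutes = 13.6  # x86 Handbrake build, translated by Rosetta 2
native_minutes = 7.8    # M1-native Handbrake beta

speedup = rosetta_minutes / native_minutes
time_saved = 1 - native_minutes / rosetta_minutes

print(f"{speedup:.2f}x faster")       # prints "1.74x faster"
print(f"{time_saved:.0%} less time")  # prints "43% less time"
```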
The gaming benchmarks are, thankfully, more straightforward and mostly in line with the Geekbench and GFXBench results. Both Civilization VI and Shadow of the Tomb Raider were run via Rosetta 2 as well.
The Intel i7 system (MSI Prestige 14 Evo) barely edges past the MacBook Pro in the Civilization VI AI test, which measures CPU performance. The GFX benchmarks are inflated compared to the real frames per second I measured on Shadow of the Tomb Raider, but frame rate is always going to differ by game. The above scores reflect a 1080p resolution (or equivalent on the Mac, because it uses a different aspect ratio) on the low graphics settings. Intel’s Iris Xe graphics on the Core i7 pulls ahead of the MacBook Pro’s integrated GPU, but just by several frames. The MacBook Pro is miles ahead of the AMD system and Core i5 system.
I doubt most people interested in the newest MacBook Pro will buy it to play games, even occasionally. How the M1 handles software that has and hasn’t been ported to the ARM-RISC architecture is much more important. The faster things load, render, or convert, the faster anyone can get their work done, even if it’s only 30 seconds faster. No one likes to stare at a loading bar for a long time.
And I would be remiss not to mention battery life, which is another huge selling point for M1 laptops. The M1 MacBook Air clocked in with an impressive 14 hours of battery life in our video rundown test, and the MacBook Pro handily outlasted it with an 18-hour battery life. Compared to the Intel-based 13-inch MacBook Pro that dropped earlier this year, which died after 8 hours and 10 minutes, this is a massive improvement and proves the M1 is seriously power-efficient.
Yes, Apple’s latest computers and laptops have Rosetta 2, which automatically translates programs coded for Intel and AMD processors into a language that the M1 understands. But Rosetta 2 isn’t a magic cure-all—not all apps are guaranteed to work with it. The number of programs that run natively on Apple Silicon are still few and far between, and the ones that do seem to have a few kinks to work out. Adobe, for instance, has only released beta versions of Photoshop and Lightroom for Apple Silicon—and those versions don’t even have their full feature set. A native version of Premiere Pro is still in the works.
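If you’re curious whether a program is running natively or being translated, Apple provides a `sysctl.proc_translated` key for exactly that check (1 means translated, 0 means native; the key doesn’t exist on Intel Macs). The wrapper below is a sketch: it assumes a `sysctl` binary on the PATH and treats a missing key as “not translated.”

```python
import subprocess

def running_under_rosetta() -> bool:
    """Return True if this process is being translated by Rosetta 2.

    Queries the `sysctl.proc_translated` key Apple provides for this
    check. On Intel Macs and non-Mac systems the key (or the sysctl
    binary itself) is absent, which we treat as "not translated."
    """
    try:
        result = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=False,
        )
    except FileNotFoundError:  # no sysctl binary at all
        return False
    return result.stdout.strip() == "1"

print(running_under_rosetta())  # True only inside a Rosetta-translated process
```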
The problems that plagued Apple’s RISC processors a long time ago have not gone away, but there is a much greater acceptance of the architecture now. More developers of popular software are willing to create versions of their apps specifically for the M1, and that’s good news for the longevity of the processor. Depending on what you need a Mac for, though, I’d wait until more major software developers have finalized ARM versions of their software before upgrading, to see a more definitive boost to your workflow.