Why E cores make Apple silicon fast

2026-02-08 11:31, eclecticlight.co

Apple silicon architecture is designed to get background processes out of the way of our apps running in the foreground, by using the E cores.

If you use an Apple silicon Mac, I’m sure you have been impressed by its performance. Whether you’re working with images, audio or video, or building software, these Macs have had a new turn of speed since the first M1. While most attribute that to the Performance cores, as the name suggests, much of it is in truth down to the unsung Efficiency cores, and how they keep background tasks where they belong.

To see what I mean, start your Apple silicon Mac up from cold, and open Activity Monitor in its CPU view, with its CPU History window open as well. For the first five to ten minutes you’ll see its E cores as a wall of red and green, with Spotlight’s indexing services, CGPDFService, mediaanalysisd, BackgroundShortcutRunner, Siri components, the initial Time Machine backup, and often an XProtect Remediator scan. Meanwhile its P cores remain largely idle, and if you were to dive straight into your working apps, there’s plenty of capacity for them to run unaffected by all that background mayhem.

It’s this stage that scares those still accustomed to using Intel Macs. Seeing processes using more than 100% CPU is alarming, because they know Intel cores can struggle under that much load, affecting user apps. But on an Apple silicon Mac, who notices or cares that there are over a dozen mdworker processes each taking a good 50% CPU simultaneously? After all, this is what the Apple silicon architecture is designed for. Admittedly, the impression isn’t helped by a dreadful piece of psychology: those E cores shown at 100% are probably running at around a quarter of the frequency of P cores shown at the same 100%, so the visual comparison is thoroughly misleading.

This is nothing new. Apple brought it to the iPhone 7 in 2016, in its first SoC with separate P and E cores. That was an implementation of Arm’s big.LITTLE architecture, announced in 2011 and building on development work at Cray and elsewhere over the previous decade. What makes the difference in Apple silicon Macs is how threads are allocated to the two CPU core types, on the basis of a metric known as Quality of Service, or QoS.

As with so much in today’s Macs, QoS has been around since OS X 10.10 Yosemite, six years before it became so central to performance. When all CPU cores are the same, it has limited usefulness over more traditional controls such as POSIX’s nice scheduling priority. All those background tasks still have to be completed, and giving them a lower priority only prolongs the time they take on the CPU cores, and the period in which the user’s apps have to compete with them for CPU cycles.

With the experience gained from its iPhones and other devices, Apple’s engineers had a better solution for future Macs. In addition to providing priority-based queues, QoS makes a fundamental distinction between threads run in the foreground and those run in the background. While foreground threads are run on P cores when those are available, they can also be scheduled on E cores when necessary. But background threads aren’t normally allowed to run on P cores, even when they’re delayed by the load on the E cores they’re restricted to. We know this from our inability to promote existing background threads to run on P cores using St. Clair Software’s App Tamer or the taskpolicy command tool.
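
Here’s a minimal sketch of how that distinction is expressed in code, using Grand Central Dispatch in Swift. The two worker functions are hypothetical stand-ins for real app code; the QoS classes .userInitiated and .background are what the scheduler uses to decide which cluster a thread may run on.

    import Foundation

    // Hypothetical stand-ins for real app work.
    func renderThumbnail()    { Thread.sleep(forTimeInterval: 0.1) }  // something the user is waiting for
    func rebuildSearchIndex() { Thread.sleep(forTimeInterval: 0.1) }  // housekeeping nobody is watching

    let group = DispatchGroup()

    // User-initiated work: eligible for the P cores, spilling onto E cores only when necessary.
    DispatchQueue.global(qos: .userInitiated).async(group: group) {
        renderThumbnail()
    }

    // Background work: on Apple silicon this stays on the E cluster,
    // even while the P cores sit idle.
    DispatchQueue.global(qos: .background).async(group: group) {
        rebuildSearchIndex()
    }

    group.wait()

Run something like that on an Apple silicon Mac and the second closure’s thread stays on the E cluster however idle the P cores are, which you can confirm in Activity Monitor’s CPU History window.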

This is why, even if you sit and watch all those background processes loading the E cores immediately after startup while the P cores stay mostly idle, macOS won’t try running them on its P cores. If it did, even at your request, the distinction between foreground and background, and between P and E cores, would start to fall apart, our apps would suffer as a consequence, and battery endurance would decline. Gone are the days of crashing mdworker processes bringing our Macs to their knees with a spinning beachball every few seconds.

If seeing all those processes at high % CPU looks scary, the consequence for software architecture might seem terrifying. Rather than being built as monoliths, apps now break many of their tasks out into discrete processes run in the background on demand, on the E cores when appropriate. The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news, and the more of those that run on the E cores, the faster our apps will be. The first and last M-series chips to have only two E cores were the M1 Pro and Max; since then every one has had at least four E cores, and some as many as six or eight.
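
As a rough illustration of that pattern, rather than any particular app’s implementation, here’s the same idea with an OperationQueue whose quality of service is set to background; the queue name and the work in the closure are invented for the example.

    import Foundation

    // An invented maintenance queue; real apps often go further and split
    // such work into separate helper processes or XPC services.
    let maintenance = OperationQueue()
    maintenance.name = "com.example.maintenance"   // hypothetical label
    maintenance.qualityOfService = .background     // its threads are E-core only on Apple silicon

    maintenance.addOperation {
        // e.g. pruning caches or re-indexing; the thread's QoS should
        // report background (raw value 9).
        print("maintenance thread QoS:", Thread.current.qualityOfService.rawValue)
    }

    maintenance.waitUntilAllOperationsAreFinished()

Whatever happens inside that closure, the scheduler keeps it off the cores your foreground apps are using, which is the whole point of the architecture.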

Because Efficiency cores get the background threads off the cores we need for performance.



Comments

  • By ricardobeat 2026-02-08 13:18

    > The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news

    Not when one of those decides to wreak havoc - Spotlight indexing issues slowly eating away your disk space, iCloud sync spinning over and over and hanging any app that tries to read your Documents folder, Photos sync pegging all cores at 100%… it feels like things might be getting a little out of hand. How can anyone model/predict system behaviour with so many moving parts?

  • By roomey 2026-02-08 13:16

    Genuine question, when people talk about Apple silicon being fast, is the comparison to Windows Intel laptops, or to Intel Mac architecture?

    Because, when running a Linux Intel laptop, even with CrowdStrike and a LOT of corporate ware, there is no slowness.

    When blogs talk about "fast" like this I always assumed it was for heavy lifting, such as video editing or AI stuff, not just day to day regular stuff.

    I'm confused, is there a speed difference in day to day corporate work between new Macs and new Linux laptops?

    Thank you

  • By tyleo 2026-02-08 12:27 (2 replies)

    These processors are good all around. The P cores kick butt too.

    I ran a performance test back in October comparing M4 laptops against high-end Windows desktops, and the results showed the M-series chips coming out on top.

    https://www.tyleo.com/blog/compiler-performance-on-2025-devi...

    • By murderfs 2026-02-08 12:39 (1 reply)

      This is likely more of a Windows filesystem benchmark than anything else: there are fundamental restrictions on how fast file access can be on Windows due to filesystem filter drivers. I would bet that if you tried again with Linux (or even in WSL2, as long as you stay in the WSL filesystem image), you'd see significantly improved results.

      • By philistine 2026-02-08 13:04

        Which still wouldn’t beat the Apple Silicon chips. Apple rules the roost.

    • By cubefox 2026-02-08 13:15

      Here is a more recent comparison with Intel's new Panther Lake chips: https://www.tomsguide.com/computing/cpus/panther-lake-is-int...
