Intel’s CEO has pulled back the curtain on a new Xeon Scalable processor, code-named “Sierra Forest”, that has yet to receive a commercial name. Unlike “Emerald Rapids”, the upcoming 5th-generation Xeon built on performance cores, Sierra Forest packs 288 efficiency cores (E-cores) and, since E-cores forgo hyper-threading, 288 threads, a clear attempt to capitalize on the cloud-native workloads that hyperscalers are so fond of.
The company’s press release states that the part will deliver 2.5x better rack density and 2.5x higher performance per watt compared to 4th-generation Intel Xeon, and those figures apply to the 288-core model as well.
AMD (128 cores), Amazon (64 cores), Ampere (192 cores) and others are also focusing on many-core products aimed at the same cloud-native market, but with lower core counts than Intel’s new part.
Paul Alcorn of Tom’s Hardware reckons that Intel is using two chiplets of 144 E-cores each, and posits that a third chiplet could push the total to a staggering 432 cores. By comparison, the Xeon Phi 7295, Intel’s last attempt at a manycore product, topped out at 72 cores and 288 threads.
Such a large number of cores packed tightly together generates its own set of problems: power consumption and heat dissipation, memory bandwidth, cache coherence, clock speeds, and so on. What we do know is that this new Xeon is built on the new Intel 3 manufacturing process alongside the Intel 7 one.
The cloud computing market is moving toward a gamut of products that combine more powerful and less powerful cores. Arm pioneered this approach, which it coined big.LITTLE, more than a decade ago; only recently has the paradigm reached the data center and the desktop, driven by growing concerns about energy consumption and the push for higher-density computing.
AMD and Intel took different routes to the same audience. Intel went Arm’s way with two entirely different core designs, while AMD developed a density-optimized variant of its existing Zen 4 core, differentiated only by marginal changes (cache, I/O). Yesterday’s launch of its Siena EPYC 8004 processors points in that direction.