AMD confirms that hybrid client and server CPUs are on the way, and will continue to push core numbers forward as well.



In an interview with Tom's Hardware, AMD CTO Mark Papermaster shared some of the company's plans for the future, which include hybrid chip designs, increasing core counts, and relying on AI in chip design and manufacturing.

Papermaster points out that we have reached the point where one chip no longer fits all needs, a trend most prevalent in the server sector, which is why the company offers a variety of solutions in its Zen 4 EPYC lineup: the classic Zen 4 Genoa, the 3D V-Cache-equipped Genoa-X, and the Zen 4C-based Bergamo and Siena parts for TCO- and power-optimized platforms. Recent rumors point to an even more diverse lineup of EPYC products in the upcoming Zen 5 and Zen 5C generation.

According to AMD, we will not only see differences in core density, e.g. Zen 4 versus Zen 4C, but also differences in the types of cores themselves. This is similar to the approach Intel and Apple take in their current-generation CPUs, mixing high-performance cores with low-power cores optimized for maximum efficiency. A hybrid approach would also allow AMD to stack multiple 3D layers containing either cache or accelerators tailored to specific workloads.

AMD also reaffirms that the technology enabling core-count expansion will continue to move forward, but that is not the only path future chips will take. Higher core counts may matter to one customer, while another may want the same core count but with added acceleration, as described above. Papermaster stresses that the current Ryzen 7040 CPUs are a first taste of this hybrid approach and that we will see more of it in the future.

But what you will also see is more variation in the cores themselves; you will see high-performance cores mixed with power-saving cores mixed with acceleration. So where we are going now, Paul, is not just differences in core density, but differences in the type of cores and how the cores are configured. Not just the way you optimize for performance or power efficiency, but the stacked cache and apps it can take advantage of and the accelerators you put around it.

When you go to the data center, you'll also see variance. Certain workloads move more slowly; you might have a business where you haven't adopted AI yet and you're running transaction processing, you're closing your books every cycle, you're running an organization, you're not in the cloud, and you might have a fairly stable core count. You might be in that sweet spot of 16-32 cores on a server. But many companies are already adding AI applications and analytics. AI is no longer confined to just the cloud; that's where extensive training and large language model inference will continue, but you'll also see AI applications at the edge and, as you know, in enterprise data centers as well. Those will also need different core counts and accelerators.

I really think I can sum it up by saying that we see technology continuing to enable core counts to grow, but that's not the only path to meeting customer needs. It depends on the application, and you must be able to provide customers with the variety of core counts that they need, along with different types of CPUs and accelerators. And you need to give them flexibility in how they select that solution based on the applications they're running.

Paul Alcorn: So it is probably safe to say that hybrid architectures will come to the client [consumer PCs] at some point?

Mark Papermaster: Definitely. It's already out there today, and you'll see more to come.

AMD CTO Mark Papermaster (via Tom's Hardware)

As for the possibility of AMD using artificial intelligence to help design and develop its chips, Papermaster said the company already uses software to aid chip design, and while it will not necessarily replace the engineering work done by humans, it can certainly help create better designs. NVIDIA has said much the same in the past and is also leveraging AI to help build next-generation chips, implementing new and advanced technologies to simplify and speed up their production.

AMD CEO Dr. Lisa Su revealed the fourth generation of EPYC Genoa CPUs featuring Zen 4 architecture. (Image credits: PC-Watch)

The short answer to your question is that we will solve all of these limitations, and you will see more and more AI used in the chip design process itself. It is used in point applications today. But as an industry, over the next couple of years, one to two years really, I think we'll have the appropriate constraints in place to protect IP, and you'll start to see production generative AI applications speed up the design process.

It won’t replace designers, but I think it has a huge potential to speed up design.

Will it speed up future chip designs? Definitely. But we do have some hurdles to overcome in the short term.

AMD CTO Mark Papermaster (via Tom's Hardware)

AMD has already made AI its number-one strategic priority, and with hybrid designs on the way, it looks like AI will become a major focus for AMD going forward. The company has the potential to become one of the biggest names in the AI sector, and we can't wait to see what it has in store in the coming years.

