Ampere's 128-Core Processor Challenges Intel and AMD in a Cloud-Based Processor Showdown

Ampere recently announced that its 128-core cloud-native processor will lead the way in addressing the demanding workloads found in data centers.

Because data center workloads are rough on servers, processor speed and adaptability are critical to preventing bottlenecks. These workloads run simultaneously and span analytics, high-capacity management, and application testing and verification. Ampere's Altra processors drive efficiency across a data center's infrastructure workloads through a specialized design methodology and precise EDA tools.

Earlier this year, Ampere designed the world's first cloud-native processor, a system built for cloud computing around a modern 64-bit Arm server architecture. Ampere's Altra processor family aims to keep giving users the freedom to accelerate the delivery of all cloud-computing applications. Ampere recently shared news of its Altra Max, a 7 nm, 128-core processor launching at the end of this year, an effort to relieve pressure on data centers.

Ampere initially launched an 80-core processor called Altra earlier this year. That processor is said to address the infrastructure workloads found in data centers. Ampere's Altra Max expands the newly released Altra family.

The Altra Max processor will be useful for applications that take advantage of scale-out and elastic cloud architectures. The highly anticipated 128-core processor will also be compatible with the robust rack servers built for its predecessor.

One big question lingers: how will Ampere's new processors stand up against competitors such as Intel and AMD?

Intel's Xeon Gold 6238R processor provides 28 cores and 56 threads. When these cores and threads are combined, the device offers performance comparable to an 84-core processor. Hardware threads act as a support line for each core: if the workload running on a core stalls on a memory access, a second thread can start executing on that core's idle resources with only a minor setback.
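
For readers who want to see the core-versus-thread distinction on their own machine, here is a minimal Python sketch (assuming the third-party psutil package is installed, which is an illustration choice and not something any of the vendors above ship) that compares the physical core count with the hardware-thread count.

    # Minimal sketch: compare physical cores to hardware threads (logical CPUs).
    # Assumes the third-party psutil package is installed (pip install psutil).
    import psutil

    physical = psutil.cpu_count(logical=False)  # physical cores only
    logical = psutil.cpu_count(logical=True)    # hardware threads (logical CPUs)

    print(f"Physical cores:   {physical}")
    print(f"Hardware threads: {logical}")
    # On an SMT/Hyper-Threading part such as the Xeon Gold 6238R, the thread count
    # is roughly double the core count; on Ampere Altra the two numbers match.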

Ampere's Altra avoids hyper-threading, Intel's method of compensating for a lower core count by splitting each physical core into virtual ones to increase performance. However, for the cloud workloads and requirements found in data centers, there is little to no room for the errors or setbacks that can come from leaning on threads more than on physical cores.
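
As a hedged illustration of what favoring physical cores over threads can look like in practice, the Linux-only sketch below pins the current process to one hardware thread per physical core by reading the topology files the kernel exposes under /sys; the paths and calls are generic Linux facilities, not anything specific to Ampere, Intel, or AMD.

    # Linux-only sketch: restrict this process to one hardware thread per physical
    # core, using the sibling lists the kernel publishes under /sys. Illustrative only.
    import glob
    import os
    import re

    chosen, seen_cores = set(), set()
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")):
        with open(path) as f:
            siblings = f.read().strip()   # e.g. "0,64" or "2-3"; used only as a dedup key
        if siblings not in seen_cores:    # keep the first logical CPU seen for each core
            seen_cores.add(siblings)
            chosen.add(int(re.search(r"cpu(\d+)/topology", path).group(1)))

    os.sched_setaffinity(0, chosen)       # Linux-only call; pins the current process
    print(f"Pinned to {len(chosen)} of {os.cpu_count()} logical CPUs")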

Intel's 3rd generation of processors is built specifically to run complex artificial intelligence (AI) workloads on the same hardware as existing workloads, stepping up embedded hardware performance. Ampere's Altra family claims to improve cloud workloads by utilizing all 128 high-performing cores and high memory bandwidth while its power management works toward low consumption.

AMD's EPYC 7662, a second-gen 64-core processor with 128 threads, is the real competition in addressing data center workloads.

AMD's EPYC processors offer a consistent set of features across the product line, allowing you to optimize the number of cores required for the workload without sacrificing features like memory channels, memory capacity, or I/O lanes.

Ampere executive Jeff Wittich states, "If you can scale out to a ton of cores with 128, Ampere Altra Max is going to give you the highest socket performance and the highest overall performance for those applications." Regardless of the number of physical cores per socket, AMD and Ampere surpass Intel in per-core performance.

AMD holds a slight advantage in cache memory, which keeps copies of data close to the processor's cores: AMD's EPYC offers 4 MB of level 1 cache across the chip, while Ampere's Altra offers 64 KB per core. There are typically three levels of cache memory; each level is larger but slower than the one before it and backs up the smaller, faster levels above it. Overall, AMD's EPYC has more cache space in which to hold frequently used programs and data than Ampere's Altra.
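
The cost of missing those caches is easy to demonstrate even from a high-level language. The rough Python sketch below (assuming NumPy is available; the array size is an arbitrary choice, and this is an illustration rather than a rigorous benchmark) reads the same data sequentially and in a random order, with the random order defeating the caches and prefetchers and running noticeably slower.

    # Rough illustration of why cache behavior matters (not a rigorous benchmark).
    # Allocating these arrays needs roughly 2 GB of RAM.
    import time
    import numpy as np

    N = 50_000_000                            # ~400 MB of float64, larger than any cache level
    data = np.ones(N)
    orders = {
        "sequential": np.arange(N),           # cache- and prefetcher-friendly order
        "random": np.random.permutation(N),   # scattered accesses, frequent cache misses
    }

    for name, idx in orders.items():
        start = time.perf_counter()
        total = data[idx].sum()               # gather the elements in the given order, then sum
        print(f"{name:>10}: {time.perf_counter() - start:.2f} s  (sum = {total:.0f})")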

Ampere's Altra addresses many data center workloads, including data analytics, artificial intelligence (AI), database storage, edge computing, and web hosting. It is an apt choice for data centers since it avoids relying on threads by offering a higher number of physical cores.

However, for AI-based workloads, Intel's Xeon Platinum and 3rd-gen scalable processors provide accelerated inference performance for these deep-learning workloads.

Ampere, AMD, and Intel all have processors pushing the boundaries to provide clients with dependable, responsive, high-performing parts. Each manufacturer has processors designed for high-performance computing workloads supported by eight channels of DDR4-3200 memory. But for addressing the demanding workloads of data centers, weighing memory capacity, speed, and per-core performance, Ampere and AMD may be the most fruitful options.
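
As a quick sanity check on that shared memory specification, the theoretical peak bandwidth of an eight-channel DDR4-3200 configuration can be worked out in a few lines of Python (a back-of-the-envelope figure; sustained real-world bandwidth is lower).

    # Theoretical peak bandwidth of eight-channel DDR4-3200 (back-of-the-envelope).
    channels = 8
    transfers_per_second = 3200e6   # DDR4-3200 performs 3,200 million transfers per second
    bytes_per_transfer = 8          # each channel is 64 bits (8 bytes) wide

    peak_gb_per_s = channels * transfers_per_second * bytes_per_transfer / 1e9
    print(f"Theoretical peak memory bandwidth: {peak_gb_per_s:.1f} GB/s")   # 204.8 GB/s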
