While generative AI tools like ChatGPT and DALL-E have sparked much recent publicity, organizations in many industries are also beginning to leverage AI internally with much success. These advances, however, have been driven by more than innovative software development. Intel, for its part, has introduced its 4th Gen Intel Xeon Scalable Processors and supporting hardware technologies to make AI acceleration possible.
For general compute, 4th Gen Intel Xeon Scalable Processors offer up to a 1.53x average performance gain over the third generation, so no matter the workload, the 4th gen is prepared to perform. For vRAN workloads, you can expect up to 2x capacity at the same power envelope gen-over-gen. And to support data analytics, the 4th gen offers up to 1.6x higher input/output operations per second (IOPS) and up to 37% lower latency for large-packet sequential reads. As a whole, these improvements enable your environment to deliver greater results while remaining efficient.
Other New Capabilities:
PCI Express Gen5 (PCIe 5.0) - Enjoy up to 80 lanes of I/O bandwidth for maximum CPU throughput, with double the per-lane transfer rate of PCIe 4.0.
DDR5 - Eliminate data bottlenecks with up to a 1.5x bandwidth improvement over DDR4.
CXL - Compute Express Link, a new cache-coherent interconnect protocol supported by 4th Gen Intel Xeon Scalable Processors, lowers the total cost of ownership and reduces compute latency.
All of the above improvements set the stage for Intel's latest advances in AI computing. In the 4th generation, Intel offers easy-to-use, fast cores with large memory capacity, which empower organizations to build their AI applications end-to-end on Xeon and leverage up to an 8x AI performance increase over the prior generation.
Before discussing the latest accelerations, it's important to note the ones that have already set Intel Xeon Scalable Processors apart.
For example, Intel Advanced Vector Extensions 512 (Intel AVX-512) is a single-instruction, multiple-data (SIMD) instruction set that applies the same operation to multiple data elements with a single instruction. This capability enables CPUs to take on large integer arrays and process workloads like 3D modeling and scientific simulation. Intel AVX-512 and other acceleration features were packaged as part of a suite of technologies known as Intel Deep Learning Boost (Intel DL Boost).
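The SIMD idea behind Intel AVX-512 can be illustrated in Python with NumPy, whose vectorized operations dispatch to whatever SIMD instructions the host CPU provides. This is a minimal sketch of the concept, not of the AVX-512 instruction set itself; the array sizes are illustrative.

```python
import numpy as np

# Eight 64-bit doubles -- the number of elements a single 512-bit
# AVX-512 register can hold at once.
data = np.arange(8, dtype=np.float64)

# Scalar-style processing: one element handled per loop iteration.
scalar_sum = 0.0
for x in data:
    scalar_sum += x * 2.0

# Vectorized processing: the multiply is applied to the whole array
# in one operation, the way a single AVX-512 instruction operates on
# all eight doubles in a register simultaneously.
vector_result = data * 2.0

print(scalar_sum)           # 56.0
print(vector_result.sum())  # 56.0
```

Both paths compute the same result; the difference is that the vectorized form expresses the work as one operation over many elements, which is exactly the pattern SIMD hardware accelerates.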
While AI inferencing receives most of the attention in the traditional AI pipeline, data preprocessing often demands the most processor activity. In response, Intel created Intel Advanced Matrix Extensions (Intel AMX) to balance the processing load between these two activities.
As a result, 4th Gen Intel Xeon Scalable Processors offer from 3.5x up to 10x higher gen-over-gen AI training performance, and at the same time provide from 5.7x up to 10x higher real-time inference performance. Organizations can therefore benefit on both ends of their AI pipeline, allowing them to introduce higher-performing AI applications.
And beyond AI pipeline performance, Intel AMX allows the same cores to handle both general compute and AI workloads. According to Intel, it's like having a vehicle that can handle both city driving and the Formula One circuit.
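The matrix work that Intel AMX accelerates follows a tiled multiply-accumulate pattern: matrices are processed in small two-dimensional tiles, with each tile product accumulated into the result. This is a hypothetical NumPy sketch of that pattern only; the tile size here is illustrative and does not reflect the actual AMX tile dimensions or data types.

```python
import numpy as np

TILE = 4  # illustrative tile size, not the real AMX tile geometry


def tiled_matmul(a, b):
    """Blocked matrix multiply: process the inputs tile by tile."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            for p in range(0, k, TILE):
                # Multiply one tile of A by one tile of B and accumulate
                # into the corresponding tile of C -- the core operation
                # AMX performs in hardware on its tile registers.
                c[i:i + TILE, j:j + TILE] += (
                    a[i:i + TILE, p:p + TILE] @ b[p:p + TILE, j:j + TILE]
                )
    return c


a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
# The tiled result matches an ordinary matrix multiply.
assert np.allclose(tiled_matmul(a, b), a @ b)
```

Because deep learning training and inference are dominated by exactly this kind of matrix multiply-accumulate work, accelerating the tile operation in hardware speeds up both ends of the AI pipeline.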
Intel understands that great AI is not the result of superior development or high-performing hardware alone but of the marriage of the two. Therefore, they've ensured that the AI development community has as much access to their hardware as possible. Intel oneAPI is an open specification standard with a library of end-to-end tools that gives developers what they need to leverage the full power of Intel CPUs.
As AI usage has increased, the associated models have grown in complexity and diversity. By using them, companies and their customers are accomplishing more than ever.
Some examples include:
eBay - As one of the world's largest online marketplaces, eBay depends on delivering the best possible product search experience. Leveraging Intel AI acceleration, it achieved a 2.5x speedup in both single-query latency (for single product searches) and system throughput (the number of searches that can be handled simultaneously).
Tencent - For Tencent, a large tech and entertainment conglomerate, Intel Xeon Scalable Processors improved FeatherTTS speech-synthesis performance by 4x. They also delivered a 15.2x speedup in the performance of Honor of Kings, its popular MOBA game.
Alibaba - For Alibaba, a popular online marketplace and global rival to Amazon, 4th Gen Intel Xeon Scalable Processors provided 15.9x performance gains over the 3rd generation.
Based on the performance of 4th Gen Intel Xeon Scalable Processors and their many AI accelerations, the time has come to consider Intel your end-to-end processing provider - and UNICOM Engineering your value-add system integrator.
As an Intel Technology Provider and Dell Technologies Titanium OEM partner, UNICOM Engineering stands ready to design, build, and deploy the right hardware solution for your next AI, Deep Learning, or HPC initiative. Our deep technical expertise can drive your transitions to next-gen platforms and provide the flexibility and agility required to bring your solutions to market.
Leading technology providers trust UNICOM Engineering as their application deployment and systems integration partner. And our global footprint allows your solutions to be built and supported worldwide by a single company. Schedule a consultation today to learn how UNICOM Engineering can assist your business.