Storage dilemmas have been around for years, limiting the applications organizations can run and slowing down critical operations, but recent technological advances are making those issues diminish. What happens when you pair Intel's new class of memory technology (Intel® Optane™ DC Persistent Memory) with its newly released 2nd Generation Intel® Xeon® Scalable processors? We'll tell you. But first, let's take a closer look at Intel Xeon Scalable and Intel Optane individually.
2nd Generation Intel Xeon Scalable Processors
The 2nd Generation Intel Xeon Scalable Processor, code-named Cascade Lake, made its debut in April. The latest version of the processor features all kinds of optimizations for those running parallel workloads. While the microarchitecture remains the same as the first-generation Skylake-based processors, it delivers a few surprises, such as the new Intel® Deep Learning Boost (Intel® DL Boost), also known as Vector Neural Network Instructions (VNNI). The parallel workloads behind many HPC and AI applications run much faster because VNNI fuses the low-precision multiply-and-accumulate operations at the heart of deep learning inference into a single instruction. This exciting news is backed by Intel's report that the new technology can increase AI/deep learning inference performance in some applications by 17 times1 compared with Intel Xeon Platinum processors at their 2017 debut.
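To make that concrete, here is a minimal sketch (illustrative only, not Intel's benchmark code) of what DL Boost looks like to a developer. It assumes a compiler and CPU with AVX-512 VNNI support and uses the _mm512_dpbusd_epi32 intrinsic, which performs in one instruction the 8-bit multiply-and-accumulate that earlier processors needed three separate instructions to complete.

/* Minimal INT8 dot-product sketch using the AVX-512 VNNI
 * (Intel DL Boost) intrinsic _mm512_dpbusd_epi32.
 * Example build (assumes a VNNI-capable host and GCC 9+):
 *   gcc -O2 -march=cascadelake vnni_dot.c -o vnni_dot
 */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of unsigned 8-bit a[] with signed 8-bit b[];
 * n must be a multiple of 64 for this simple sketch. */
static int32_t dot_u8s8(const uint8_t *a, const int8_t *b, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        /* One VNNI instruction: multiply u8 x s8 pairs, sum groups of
         * four products, and accumulate into 32-bit lanes. Pre-VNNI
         * CPUs need VPMADDUBSW + VPMADDWD + VPADDD for the same work. */
        acc = _mm512_dpbusd_epi32(acc, va, vb);
    }
    return _mm512_reduce_add_epi32(acc);
}

int main(void)
{
    uint8_t a[64];
    int8_t  b[64];
    for (int i = 0; i < 64; i++) { a[i] = 2; b[i] = 3; }
    printf("dot = %d\n", dot_u8s8(a, b, 64)); /* expect 64 * 2 * 3 = 384 */
    return 0;
}

In practice, frameworks built on Intel's optimized math libraries typically emit this instruction automatically, so most users benefit from DL Boost without writing intrinsics themselves.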
Intel® Optane™ DC Persistent Memory
Intel® Optane™ DC Persistent Memory is Intel’s next-generation memory technology. Optane DC Persistent Memory comes in a DDR4 form factor and works with 2nd Generation Intel Xeon Scalable Processors, enabling up to 6 TB of memory in a dual-socket platform. Optane DC PM allows for a much higher memory density per socket and will be available in three module sizes: 128 GB, 256 GB, and 512 GB. Optane will not completely replace DDR4, since at least one standard DDR4 module must be present on any memory channel that carries Optane. Even so, a system that combines 128 GB of DDR4 with 512 GB of Optane provides 640 GB of total memory, which might be less expensive and higher performing than 256 GB of pure DDR4 backed by NVMe storage.
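Beyond enlarging system memory, Optane DC PM can also be programmed directly as byte-addressable persistent storage. The sketch below is illustrative rather than taken from Intel documentation: it assumes PMDK's libpmem and a DAX-mounted persistent-memory filesystem, and the /mnt/pmem/example path is hypothetical.

/* Minimal sketch of writing to persistent memory through PMDK's
 * libpmem (link with -lpmem). Assumes a DAX-mounted pmem filesystem;
 * the path /mnt/pmem/example is hypothetical. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define PMEM_LEN 4096

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (or open) a file on the pmem-aware filesystem and map it. */
    char *addr = pmem_map_file("/mnt/pmem/example", PMEM_LEN,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Store data directly into the mapped persistent memory... */
    strcpy(addr, "hello, persistent memory");

    /* ...then flush it to the media. On real pmem this uses CPU cache
     * flushes and fences; otherwise fall back to msync-based flushing. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}

The point of the sketch is that software sees Optane DC PM as ordinary memory-mapped addresses, with persistence guaranteed by an explicit flush rather than by a block-storage I/O path.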
Intel Optane & Xeon Scalable Processors Working Together
Individually, Optane and Xeon Scalable processors are powerful technologies. When they are used together, you can not only optimize existing workloads but also create and deploy applications with deep learning capabilities that could enhance your bottom line.
In an article entitled “Intel Xeon Scalable and Optane: Transforming the data centre,” Justin Wheeler, technical architect at Intel’s Non-Volatile Memory Solutions Group (NSG), sums it up: “the R&D is designed around optimizing Optane storage for and with Xeon Scalable Processors and actively looking at the architecture of the CPU to increase the efficiency, the capacity, and the performance of the storage.” In short, these technologies were developed in tandem, each providing what the other needs to perform.
Intel and Dell Collaborate to Give HPC Solutions a Boost
Both Intel and Dell EMC see the power and potential of AI, and their latest HPC innovations are designed to help customers innovate faster with their existing applications. Intel also announced two solution architectures at the launch, its HPC & AI Converged Clusters, which focus on augmenting resource managers to support broader workload co-existence and workflow convergence across simulation and modeling, analytics, and AI. Meanwhile, Dell EMC PowerEdge servers featuring the 2nd Generation Intel Xeon Scalable processors are being tested at Dell EMC HPC and AI Centers of Excellence; one example targets extreme-scale science workloads in fields ranging from medicine and materials design to natural disasters and climate change. To learn more, read Dell EMC’s blog entitled “Deep Learning Gets a Boost with New Intel Processor.”
Are Your Workloads Ready for the Power of 2nd Gen Intel Xeon Scalable and Intel Optane?
As an Intel Technology Provider and Dell Technologies Titanium OEM partner, UNICOM Engineering works diligently to prepare for the transition to next-gen embedded platforms and the launch of enabling technology. In addition to the deep technical expertise that drives those transitions, we are flexible and nimble in bringing your solutions to market. And our global footprint allows your solutions to be built and supported around the world by a single company. That is why leading technology providers trust UNICOM Engineering as their application deployment partner.
To learn more about 2nd Generation Intel Xeon Scalable processors, Intel Optane, and to see how UNICOM Engineering can streamline your speed to market, visit us at www.unicomengineering.com or contact us by telephone at (800) 977-1010.
1 DL Inference: Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, a dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use the newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l”. Tested by Intel as of July 11, 2017, compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/7/2018.