Intel Data-Centric Innovation Day 2019

One day in the first week of April at a swanky San Francisco hotel, Intel introduced a new line of products to the press. The Tech Field Day Exclusive delegates and a couple hundred other folks gathered to see what Intel was doing and then grill them on the technical details. The product announcement included the Ethernet 800 series adapter, Optane Solid State Drives (SSDs), Optane DC Persistent Memory (DCPM), the 2nd Generation Xeon Scalable processor, the Xeon D-1600, and the Agilex 10nm FPGA. Each of these products had one thing in mind: deliver performance improvements to the customer.

The keynote speeches were a packed three hours of discussions with Intel partners and demonstrations showcasing the “performance improvement” (cue the Intel chime) theme. Two products stood out to me as game-changing innovation. Well, maybe not “game-changing,” but certainly enhancing. The first is the Optane DC Persistent Memory modules, and the other is the 800 series Ethernet controllers. Intel seemed to be focused on CDNs and edge networks with these products.

Optane DC Persistent Memory

128GB Intel Optane DC Persistent Memory

The Optane DCPM generation seemed to be the brainchild of Mohamed Arafa, Sr. Principal Engineer, Datacenter Engineering & Architecture, Data Center Group. I got to know Mohamed at dinner the previous evening, and he is a thoughtful, quiet, and amicable man. He has spent much of his career developing this new memory technology, and it was quite an accomplishment for him to see it released.

In an over-simplified way, Optane acts as both a hard drive and RAM, so CPUs aren’t required to broker transactions between your physical storage on one hand and memory on the other. You can pre-load application data into the memory regions of your Optane modules, and that data will survive a reboot. Think about the performance improvement you may see with applications like SAP HANA, or by moving many of your databases to in-memory locations. From the performance numbers, reads are fantastic… writes, not anything to “write” home about.
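In practice, applications get at persistent memory through memory-mapped files, typically on a DAX-mounted namespace via Intel’s PMDK libraries. Here’s a rough sketch of that programming model, using an ordinary file and Python’s `mmap` to stand in for a real persistent-memory mapping (the file name and sizes are my own, purely for illustration):

```python
import mmap
import os

PATH = "appdata.pmem"  # stands in for a file on a DAX-mounted pmem namespace
SIZE = 4096

# Create and size the backing file (a real DCPM setup would use fsdax + PMDK).
with open(PATH, "a+b") as f:
    f.truncate(SIZE)

# First "run": store application state directly through the mapping.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[0:5] = b"hello"
        m.flush()  # on real pmem this is a cache flush, not a disk write

# Second "run": the state is still there, with no deserialization step.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        state = bytes(m[0:5])
        print(state)  # b'hello'

os.remove(PATH)
```

On actual DCPM hardware the flush is a CPU cache-line operation rather than a trip through a storage stack, which is where the latency win over an SSD comes from.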

What about the security of your data that is no longer in volatile memory? What if someone removed the DCPM module and took it home to get the secret sauce of the company product? Needless to say, your in-memory data is encrypted at rest. Two keys are generated and stored on the DCPM and the system. These are re-keyed manually or after a power cycle, so pulling the module isn’t going to help the bad guy much; it also means it won’t help you if you have some kind of catastrophic system failure. Definitely read the white papers on this. I’d also recommend reading Enrico Signoretti’s discussion of this product; his article “DATA CENTER OPTANIZATION!” can get you up to speed on the ins and outs of Optane DC Persistent Memory.


Ethernet 800 Series

The 800 series line of Ethernet controllers, codenamed Columbiaville, is designed to bring your systems’ Ethernet connectivity up to 100Gbps. The SKUs include 10, 40, and 100Gbps cards, with the target market certainly being service providers and CDNs, based on Intel’s keynote. But so what? There is an abundance of Ethernet controllers that can do that, and if you’re an enterprise, carrier, or CDN… if you need it, you need it. For this Intel product, it isn’t just the speed to the server; it’s the additional programmability of Application Device Queues (ADQ), which Intel believes will lower latency and improve throughput, especially when paired with their Optane and 2nd gen Xeon processors.

Performance Improvements with ADQ/DDP

ADQ took shape when Intel engineers began to look at the ultimate causes of latency, jitter, and other network-based performance problems. They looked at the problem from a different perspective and noticed that the issue at the hardware level was predictability. The variable traffic patterns of applications passing through a network controller make it tough to distribute traffic evenly across the hardware queues. If you can’t scale the applications hitting the CPU cores, then performance suffers.

The ADQ technology allows systems engineers to apply application filtering to dedicated lanes within the Ethernet controller. It’s basically QoS on a per-application basis within the chassis. The idea isn’t very new, and neither is the implementation. When you think about it, offloading work from the CPU onto a “card” has been around a while. We do this with GPUs and have for years. Why not do it for network traffic, especially when talking about the speeds and feeds of modern application processing?

The ADQ has about 2,000 programmable queues for classifying applications. Couple this programmability with the available CPU cores, and virtual cores, in the new Intel Xeon Scalable processors, and you have the ability to send predictable streams to the processor and smooth out response times. If one application is hammering the CPU for processing time, ADQ carves out lanes for the other applications so they won’t suffer alongside the misbehaving process. Clearly the ADQ feature is meant to leverage the suite of new Intel releases.
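On Linux the real configuration happens through `tc` filters against the E810 driver, but the classification idea itself can be modeled in a few lines. This is purely an illustrative toy, not Intel’s API; the queue counts, port numbers, and `QueueGroup` shape are all my own inventions. Traffic is matched by a filter rule and steered into the slice of hardware queues reserved for its application:

```python
from dataclasses import dataclass

@dataclass
class QueueGroup:
    name: str
    first_queue: int  # first hardware queue in this application's slice
    count: int        # number of queues reserved for the application

# Hypothetical carve-up of the controller's queues by application.
DEFAULT = QueueGroup("default", 0, 2)
GROUPS = [
    (6379, QueueGroup("redis", 2, 8)),   # TCP dst port -> dedicated queues 2..9
    (9092, QueueGroup("kafka", 10, 8)),  # queues 10..17
]

def classify(dst_port: int) -> QueueGroup:
    """Mimics a filter rule steering a flow to its application's queue slice."""
    for port, group in GROUPS:
        if dst_port == port:
            return group
    return DEFAULT

def pick_queue(dst_port: int, flow_hash: int) -> int:
    """Hash the flow across the queues reserved for its application."""
    g = classify(dst_port)
    return g.first_queue + (flow_hash % g.count)

print(pick_queue(6379, 12345))  # lands somewhere in the redis slice, queues 2..9
```

The point of the design is the isolation: however busy the “redis” queues get, “kafka” flows still land in their own slice, which is what smooths out tail latency for the quieter application.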

Parting Thoughts

There were quite a few system-related products in the announcement, and it’s not really in my wheelhouse to dig into the finer points of the silicon. Ping me on Twitter and I’ll put you in touch with folks who bathe in that topic. The Ethernet 800 Series was very intriguing because of the ADQs, and for me that was a differentiator. I immediately started thinking about how the network, application, and systems teams could work together to improve application response times. Assuming these teams can break out of their respective silos, there could be some awesome user-experience enhancements.

The application team says they need some upgraded equipment because of slow response times, and they’ve adjusted code to be more efficient. The server team tells the app team they are upgrading and may be able to help, because the Ethernet controller is programmable and can smooth out NIC-to-CPU interrupts. The network team gets wind of it and says they can modify the end-to-end QoS settings across the MPLS to accommodate the troubled app. The final product is performance enhancement throughout the enterprise, and the IT customer (read: the end user) is happy. And by happy I mean they don’t call you. Sounds like Valhalla, but it takes a lot of work.

IT does its best work when we aren’t noticed. We’re the ninjas of the business, and I believe this recent Intel release may help us stay in the shadows.

Learn more by watching the TFDx stream.

[fvplayer id="5"]