Enhancing Cloud Efficiency: Intel's Processor Technology

Yellowbrick
5 Min Read
A Deep Dive into Intel® Advancements and Optimization Opportunities

In this webinar series, industry experts Arijit Bandyopadhyay from Intel®, Doug Henschen from Constellation Research, and Mark Cusack and Heather Brodbeck from Yellowbrick Data explore cloud cost optimization and data warehousing.

Episode 4 features Arijit Bandyopadhyay, CTO – Enterprise Analytics & AI, Head of Strategy and M&A – Enterprise & Cloud (DCAI Group) at Intel® Corporation.

Discover how Intel® is pushing the boundaries of processor technology to meet the evolving demands of cloud computing.

In this video, Arijit explores the latest trends, strategies, and solutions from Intel®, including:

  • Advancements in Intel® processor technology and opportunities for cost optimization in the cloud, including new processor lines like Ice Lake, Sapphire Rapids, Emerald Rapids, and Granite Rapids.
  • Silicon accelerators for tasks like in-memory acceleration, data streaming, and AI integration.
  • Intel® processor lines with efficient cores like the Sierra Forest line, and vector-based or AI capabilities like the Advanced Matrix Extensions.
  • Powerful security features like Software Guard Extension which address the critical need for data protection in today’s digital landscape.
  • Software-based optimizations, including language-based optimizations, Kubernetes and cloud-native capabilities, and Intel®-optimized libraries.
Intel®'s Revolutionary Processor Technology

Learn how you can leverage Intel®’s cutting-edge solutions to drive innovation, enhance decision-making, and gain a competitive edge.

For more best practices on performance optimization, download the Why Data Warehouses Are Ground Zero for Cloud Cost Optimization report.

Transcript:

Heather Brodbeck:
I think historically, we used to always talk about Moore’s Law and advances in technology capabilities. Is that true with cloud computing or is it really just a volume commodity play in the cloud today?

Arijit Bandyopadhyay:
We are doing a lot at Intel with respect to data warehousing companies, so I'll cover this in three parts.

One is the processor itself. There are obviously new evolutions of performance cores and efficient cores which will come up.

We have Ice Lake, where the existing data warehouses are already running. Then we have the newer one, Sapphire Rapids, which has already been launched and is expected in the cloud at any moment. So that is another angle.

There are also the newer, upcoming processor lines from Intel, which are Emerald Rapids and Granite Rapids. These are on the performance side of the cores.

There is also an evolution coming up from Intel on the efficient cores, with a platform line known as the Sierra Forest line. That will give another vector of how things can be looked at. But that is just the processor; on the Intel platform, there are others.

When we look at a particular workload, we look at the in-silicon accelerators that we have. There are a number of them; if you go to the Intel website and look into the Sapphire Rapids capabilities, you can get into the accelerator line.

One is something known as in-memory analytics acceleration, or what we internally call IAA. There are certain workloads which are mapped to that.

The Data Streaming Accelerator is another one, which maps to certain workloads in the case of data warehouses as well. Then there is anything which is more vector-based or AI capabilities, AI integrated into databases and data warehousing.

Then there is something from Intel coming up in the Sapphire Rapids platform known as Advanced Matrix Extensions, or AMX. And it is clubbed with BF16- or VNNI-based DL Boost improvements.

AVX-512 is a vector which is being looked at with Yellowbrick for optimization. So in-silicon accelerators are one angle, which improves performance, reduces costs, and brings in an element of efficiency.
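
For illustration, here is a minimal Python sketch of how one might check whether a Linux host exposes the AVX-512 and AMX capabilities mentioned above. The flag names are the usual Linux /proc/cpuinfo flags; the exact set varies by kernel version and processor generation, so treat this as a rough check rather than a definitive capability test.

```python
# Minimal sketch: read /proc/cpuinfo on Linux and report whether the
# AVX-512 foundation and AMX tile flags are exposed. Flag names are the
# common Linux ones; availability varies by kernel and processor generation.
def cpu_flags(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
print("AVX-512F present:", "avx512f" in flags)
print("AMX tiles present:", "amx_tile" in flags)
```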

There is also the aspect of security and privacy vectors. Intel platforms have something known as SGX and TDX. SGX is Software Guard Extensions, which brings into the infrastructure an element of what we call confidential computing.

There is also an advent of Project Amber, which you’ll see from certain hyperscalers when you deploy them. That’s an angle.

For network optimization, there is QuickAssist Technology, or what we call QAT, and the Dynamic Load Balancer, or DLB. And then there are certain databases.

Obviously, it depends on the workload. In certain cases it applies; in others it doesn't.

And that’s what we do when we look at the workload and optimize from the Intel Disruptor program.

From an engineering perspective, we look at whether that particular database, analytics engine, or data warehouse fits in. There are technologies like VBMI and VMD which help.

So that is as far as the core processor line itself goes when we engage with an ISV for technology or engineering optimization.

Second is the work that we do with our hyperscalers on the core construct of the instances. We also look at cost optimization from the aspect of selecting which instances would be better for a particular workload.

Now, in this particular case, we talked about storage-optimized or I/O-optimized instances like I3en and I4i, but it could vary from the C series, which is compute-optimized, to the memory-optimized R series.

R7iz is a preview of Sapphire Rapids which is available. ISVs of choice or customers of choice can come to the Intel Disruptor website or to Intel and request it, and that can be provided.

So the bottom line is you can get Sapphire Rapids access, or the standard ones, which is the M series. And then there are some dedicated special-purpose instances as well. So that's an angle.
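
To make the instance-selection point concrete, here is a small, purely illustrative Python sketch that maps a workload profile to candidate instance families along the lines described above. The mapping itself is an assumption for illustration only, not an official Intel or hyperscaler recommendation; always validate against current ISV and cloud provider guidance.

```python
# Illustrative only: a rough mapping from workload profile to candidate
# AWS instance families, loosely following the families mentioned above.
INSTANCE_GUIDE = {
    "storage-or-io-bound": ["i3en", "i4i"],   # storage/IO-optimized
    "compute-bound":       ["c6i", "c7i"],    # compute-optimized (C series)
    "memory-bound":        ["r6i", "r7iz"],   # memory-optimized (R series)
    "general-purpose":     ["m6i", "m7i"],    # standard (M series)
}

def suggest_families(profile: str) -> list[str]:
    """Return candidate instance families for a given workload profile."""
    return INSTANCE_GUIDE.get(profile, INSTANCE_GUIDE["general-purpose"])

print(suggest_families("memory-bound"))  # ['r6i', 'r7iz']
```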

In the case of… we do another level of optimizations in certain cases here. The Data Plane Development Kit, which is what we call the Intel DPDK, is being enabled for Yellowbrick. That has been done for better optimization. So this is on the cloud instance side and also the processor side.

Besides this, there is also the angle of various software-based optimizations that we do, from the angle of languages, from the angle of Kubernetes, or cloud-native capabilities.

There is an asset that Intel has which is known as Granulate, which does certain language-based optimizations or workload-based optimizations as an add-on, as a service. So it depends.

We have some partnerships on the FinOps side of the fence, whether it's Densify or Vantage or any of those vendors. There is the Intel cost optimizer, which we apply or provide to customers and ISVs as part of this engagement itself.

There are certain Intel-optimized libraries, like the Intel Distribution for Python, and all those libraries are there. So depending on what language the particular workload has been developed in, whether it's Java or the JDK versions, they are optimized specifically for Intel platforms.

And so whether it's Go or Java or Rust or Python, or XGBoost or any of the libraries and frameworks mapped to that, we do that as well. AVX-512, obviously, we talked about, along with the associated software optimizations.
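
As one concrete example of the Intel-optimized library path, the sketch below assumes the scikit-learn-intelex package is installed (pip install scikit-learn-intelex). Patching scikit-learn lets supported estimators run on Intel's optimized backend; whether a given workload actually speeds up depends on the estimator, the data, and the hardware.

```python
# Sketch assuming scikit-learn-intelex is available.
# patch_sklearn() must run before the scikit-learn estimators are imported.
from sklearnex import patch_sklearn
patch_sklearn()

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100_000, 16)   # synthetic data for illustration
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)
print(labels[:10])
```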

There is also something we have evolving in our software portfolio not only for the CPU line but also for our accelerator line, by which we mean Intel GPUs and also ASICs like Habana; that is the oneAPI platform.

The benefit of that is you do it for one software stack and it will scale across the board. It depends which workload it’s contextually relevant to, but it’s an angle that we do from an engineering perspective.

And then there is obviously similar work that we do, not only from the cloud perspective with the center of excellence, but also with the OEMs, mapped to the pay-per-use IaaS capabilities that the OEMs have.

Whether it's Dell APEX or HPE GreenLake and things of that nature. So it's a 360-degree angle: we work with the ISVs for the best core processor and software capability add-ons, and we work for the best cost optimization.

So it is important that the customer understands all this and makes a selection mapped to it, rather than just going in with whatever is visible.

If you have any questions on it, please reach out to the ISV of your choice or reach out to Intel, and we will be able to guide you across the board in understanding your workload and what would be best for you, even in the context of whether you have already selected a particular ISV.

So in this case, for Yellowbrick or any other data warehouse, we would help you understand and give you guidance on how you should deploy it and what other things to look at, and also triangulate with the ISV of choice and the hyperscaler of choice for what is best for the customer.
