SaaS: How AWS Graviton Moves Money From Snowflake Customers’ Wallets to Shareholders’ Pockets

Fat cat on Snowflake logo with a happy AWS smile

I’ve been contemplating whether to write this blog for a while. It’s a bit controversial, and there’s certainly some “connecting the dots based on assumptions” involved. However, it’s clear that some things in the SaaS and technology world are changing, and not in ways that favor consumers of technology.

The goal here is to talk about how Moore’s Law should still be reducing compute costs for customers of SaaS services, but is instead being held hostage by large service providers that don’t pass the economic benefits of innovation on to their customers.

Why Moore’s Law (Or Something Like It) Is Good

The technical essence of the “law” observed by Intel co-founder Gordon Moore was that the number of transistors on an integrated circuit would double every couple of years. Although that exponential growth has decelerated, more transistors have consistently meant more compute power for less money, and that remains true today. In processors, this has enabled higher clock rates, bigger caches, and more CPU cores, as well as all sorts of special-purpose accelerators for graphics, AI, networking, and whatnot.
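The compounding effect of that doubling cadence is easy to underestimate. Here is a small illustrative sketch; the starting transistor count and time span are arbitrary assumptions, chosen only to show how quickly doubling compounds:

```python
# Illustrative sketch of Moore's Law: transistor counts doubling
# roughly every two years. Start count and span are made-up inputs.

def transistors(start: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start * 2 ** (years // doubling_period)

# Over one decade at the classic two-year cadence, a 1-billion-transistor
# design grows 32x.
print(transistors(1_000_000_000, 10))  # 32000000000
```

Five doublings in ten years is a 32x increase, which is why even a slowed-down Moore’s Law still moves the cost of compute dramatically.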

In the early days of the cloud and before, new processors came out from Intel and other vendors that offered more compute capability per dollar spent. This was passed on to customers by server OEMs and cloud providers, resulting in us being able to do more for less: Less money spent for the same amount of compute, or more compute for the money. It’s what’s driven the innovation enabling the processor in my mobile phone to be more powerful than many full-sized PCs from not long ago.

About the Graviton

The cloud vendors now have more technical capability than the server OEMs ever had. Although some OEMs tried to design their own processors, they were not particularly successful at it. Now we see our favorite cloud vendors building network adapters, AI accelerators, switches, virtualization layers, SSDs, controllers, and heaven knows what else. Because the services are consumed at such a high level, as an AWS customer I’m not really bothered about what machinery is keeping things ticking, which gives the cloud vendor the opportunity to re-implement the technology behind the same high-level interface.

The AWS Graviton3 is a real CPU contender, designed in-house using off-the-shelf Arm IP and manufactured (of course) by TSMC. It’s a pretty substantial beast: eight high-bandwidth DDR5 memory channels, PCIe Gen5, and 64 CPU cores with quite wide pipelines, implemented in 55 billion transistors.

Intel still has the best CPU cores, and a Graviton core is nowhere near as capable. However, AWS isn’t buying these parts through a third-party CPU vendor, so there’s less margin stacking. That lets them sell each Graviton core at roughly half the price of an x86 core. Graviton cores aren’t hyper-threaded: AWS bills per vCPU, and each Intel core presents 2 vCPUs while each Graviton core presents 1. So although the price per vCPU is roughly the same, for many workloads a Graviton vCPU actually gets more work done than an Intel vCPU, since hyper-threading isn’t in play. In many benchmarks, a 64-vCPU Graviton instance (with 64 cores) outperforms a 64-vCPU x86 instance (with 32 cores) for less money.
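The vCPU arithmetic is worth spelling out. This back-of-the-envelope sketch uses only the core-to-vCPU ratios from the text; no real AWS prices are assumed:

```python
# Back-of-the-envelope sketch of the vCPU accounting described above.
# Only the core-to-vCPU ratios come from the text; everything else is
# generic arithmetic.

def physical_cores(vcpus: int, vcpus_per_core: int) -> int:
    """Physical cores behind an instance, given its vCPU count."""
    return vcpus // vcpus_per_core

# A 64-vCPU x86 instance: 2 vCPUs per hyper-threaded core -> 32 cores.
# A 64-vCPU Graviton instance: 1 vCPU per core -> 64 cores.
print(physical_cores(64, vcpus_per_core=2))  # 32
print(physical_cores(64, vcpus_per_core=1))  # 64
```

At roughly equal price per vCPU, the Graviton instance delivers twice the physical cores, which is where the “more work per vCPU” claim comes from.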

Workloads that run best on Graviton are those that don’t do HPC-style vector supercomputing at scale, and those that parallelize well across large numbers of cores with very little inter-core chatter. For example, workloads that chop data into lots of independent chunks and process them as parallel as possible, with minimal shuffling and coordination. Such workloads sound a lot like parallel analytic databases or data warehouses. No wonder SAP HANA and Snowflake have aggressively ported to it for their clouds.

Bringing Us to Snowflake

In Snowflake’s earnings call in March 2023, they noted that they had completed porting their platform to the Graviton, at that time the Graviton2, for all their active commercial AWS deployments. This came along with a new AWS partnership agreement, with Snowflake agreeing to spend $2.5bn over the following 5 years. AWS is keen to motivate customers to migrate to Graviton: they don’t have to pay Intel margin, and they get to lock customers in even more, since Graviton isn’t available in Azure, GCP, or on-premises.

The same earnings call announced a 76% product gross margin – up from 70% just over a year earlier. Furthermore, they announced approval for a $2bn stock buyback program.

Although this involves putting some things together, I posit that much of the savings from Graviton isn’t resulting in substantially lower prices for Snowflake customers (in fact, the ones I talk to are rapidly becoming discouraged by the cost of the service); instead, it’s going onto their balance sheet to buy back shares, driving up the stock price for their shareholders.

This isn’t how Moore’s Law should work for customers: they should be able to compute and analyze more for less. The same story will likely play out with SAP’s HANA Cloud (again, customers held hostage, complaining about the price).

What Does This Mean for SaaS?

Most SaaS companies are now feeling the end-of-free-money crunch, driven by a higher cost of capital and lower valuations. They need to reduce the cost of operating their services just as their customers’ SaaS budgets come under far more scrutiny. The SaaS vendors we buy from are all feeling the pressure, and companies such as ourselves are being far more frugal with our spend.

SaaS grew and thrived in a world of free money. Snowflake raised over $2bn in funding, much of which was used to buy cloud capacity and resell it to customers at an ever-increasing markup. Business models that depend on reselling cloud capacity will now be far harder to execute due to the increased cost of borrowing and lower valuations.

Before the free-money SaaS boom, companies purchased software and ran it on infrastructure they owned: A more efficient supply chain, instead of buying from SaaS vendors who have to mark up the infrastructure to sell it on.

Perhaps we will see a new world where the SaaS user experience is delivered running on a customer’s own cloud infrastructure: The vendor makes a higher margin and doesn’t have to buy large amounts of cloud infrastructure to resell, and at the same time the customers get to take advantage of Moore’s Law without a middle man pocketing the difference.

This is what we at Yellowbrick believe is the future: SaaS in your own cloud. The same user experience, the same support model, the goodness of Moore’s Law, fewer middlemen, and your money not going straight into shareholders’ pockets.
