So thanks everyone for joining today. We’re going to be talking about a variety of topics around cloud cost, vendors, and other things related to cloud cost. We’ve got some great folks here with us today, whom I’ll introduce first. First, Doug, Vice President and Principal Analyst at Constellation Research, is joining us.
He focuses on data-driven decision-making, analyzing how organizations employ data analysis to reimagine business models, gain a deeper understanding of customers, optimize operations, and minimize environmental impact.
Doug, would you like to add anything to introduce yourself?
No, just great to be here. I’ve been at Constellation Research for about eight years, and yeah, we see it as this continuum. It’s not just a silo of technology. So looking forward to today’s conversation.
Great, thank you. And joining Doug, we have Arijit. He is CTO of Enterprise Analytics and AI, and Head of Strategy and M&A for Enterprise and Cloud, at Intel®.
And Arijit, would you like to say a few words?
Yeah, nice to be here, Heather. I’ve been with Intel® for quite some time, and I take care of databases, data analytics, and enterprise AI.
I also head the strategy office for cloud and enterprise, and some elements of M&A. I have a long history with Intel® and Intel® Capital, and I also chair a program at Intel® known as the Intel® Disruptor Program, which we’ll talk about today.
So yeah, really nice to be here. I’m looking forward to this discussion.
Great, thank you. And our third participant is Mark Cusack, our very own CTO at Yellowbrick.
Before joining Yellowbrick, he was VP of Data and Analytics at Teradata, where he led a variety of product management and technology teams in data warehousing and advanced analytics groups.
Mark, would you like to say a few words?
Absolutely. Hi, Heather. Hi everybody. Yeah, very nice to be here. Looking forward to the discussion.
I’ve been very much in the data warehousing space for probably the last 20 years at startups, at large enterprises, and now at Yellowbrick.
Looking forward to this discussion around how we manage cloud costs and data warehousing today.
Okay, great. Thank you. So as we said, the conversation will revolve around how we get the most value out of the cloud and drive down cloud costs: who’s responsible, what customers should expect, a variety of things.
So I think we’ll start off with Doug on the topic of why data warehouses are ground zero for cloud cost optimization. And I think we have a few slides here for you.
Well, great. Thank you, Heather. Great to be here and to recap a bit of my report on Why Data Warehouse Services Are Ground Zero for Cloud Cost Optimization.
Now I’m happy to say that folks watching today’s discussion are going to get a free copy of this report. It’s loaded with recommendations on how you can take control of data warehouse costs, and we’re going to be discussing many of these points today. The first reason I started writing this report late last year was all the talk of recession and the growing complaints about cloud costs.
And if you turn to the next slide, Heather: you wouldn’t know we’re in a recession talking to CXOs about their 2023 budgets. We explored this topic as part of our Constellation CXO Confidence survey published last November and found that the majority of the 78 C-suite respondents in our survey overwhelmingly expected to spend as much or more on technology in 2023 as they did in 2022.
As you see here, 59%, nearly 60%, raised their 2023 tech budgets either slightly, moderately, or significantly. Yet paradoxically, when we asked these respondents whether 2023 would bring a better business climate than 2022, nearly 70% said no.
So the bottom line: businesses are preparing for worsening conditions, and the recent layoffs and this week’s banking crisis offer fresh evidence that these fears were well-founded. So tech budgets and cloud expenditures are going under the microscope.
If we turn to the next slide, still further evidence is offered by the Flexera State of the Cloud report. The 2022 installment of that venerable, respected report found that nearly 60% of cloud customers named optimizing existing use of the cloud as their number one priority.
So let’s face it, cloud expenditures have been growing for years. Companies are spending much more time on cloud cost optimization and the trend has given rise to the so-called FinOps movement.
Now, FinOps is not about financial operations, it’s a mashup of the words finance and DevOps, and it’s about finance, IT, and the business working together to plan their cloud migrations, plan their cloud workloads, and optimize their spend in the cloud.
Now if we turn to the next slide, we see yet more evidence that companies are frustrated with cloud costs and are even starting to move workloads back to on-premises data centers.
Now this study was released in February. It’s not my report; it was by the IT asset management vendor Device42, and I thought some of the findings were pretty startling. 50% of the 300 respondents said they’re going to continue to rely on on-premises data centers. Only 13% said that cloud had delivered on the promise of cost savings. 50% said they don’t even know exactly what type of infrastructure they have in the cloud.
80% said the cloud is expensive and getting more expensive. And this was the thing that really stunned me: 20% said they have moved or plan to move certain workloads back to on-premises data centers. And we’ve talked to some data warehouse practitioners, for example, who, when they have steady, consistent workloads, don’t need the elasticity, serverless, and all that. They’ve moved some of those workloads back on-premises. All of this is fertile fuel for today’s discussion.
Let’s turn to my final slide. And the second reason I wrote this research is that cloud data warehouses are not only storage and compute-intensive, they’re also quite popular, ranking as the number one platform as a service offering in use by cloud customers.
This is according to the Flexera study: 38% of light cloud users, 54% of moderate cloud users, and 60% of heavy cloud users use data warehouse cloud services. The second most popular choice of this nature is transactional databases.
So that’s the number one reason we’re talking about it. There’s much more in that report, including those recommendations, but I think these points set the stage for today’s conversation, and I know we’ll have some thought-provoking advice and comments to share. So back to you, Heather.
Okay, thanks, Doug. Appreciate that. All right, well let’s go over to Mark. Maybe you could share a little bit, is this resonating with you?
I know you spent a lot of time with Yellowbrick’s customers and also with their prospects, so what are your thoughts?
Yeah, it’s not a surprise to me at all that we are still seeing this trend of data warehousing being the most widely used component of the cloud data estate, because traditionally that’s always been the case. Think of the privileged position a data warehouse has within a business.
It’s usually tied to the most upstream data sources and the most downstream consumers within a business. It’s a critical piece. It’s the crown jewel in terms of data management in most enterprises. And as enterprises transition to the cloud, that sort of status of the data warehouse remains.
But I do think to Doug’s point, a lot of customers are at a crossroads right now in terms of their cloud spend. CFOs are taking those cloud line items and cloud data warehousing SaaS spend and scrutinizing them under the microscope.
As we go into the economic headwinds, we see higher interest rates. This is a really important component to address.
And I think one of the issues that arises, particularly in cloud data warehousing and managed services offerings, is that there are fewer levers you can pull and turn to tune and bring the costs of data warehousing in the cloud under control than there traditionally have been in an on-prem operating environment.
So I think CFOs will be drilling into the spend; they want more of these controls, and I think that’s what customers are telling me.
Okay, great. Thank you. All right, let’s see. I get some thoughts from Arijit on this. What are your thoughts on that and how does Intel® deal with cloud spend?
Yeah, thanks, Heather. So I think in any enterprise, if you look at it, the data warehouse, whether it’s in a hybrid mode, a cloud SaaS offering, a PaaS, or whichever the deployment model is, becomes the fundamental fulcrum of the enterprise’s standard data architecture, whether you treat it as a record of origin or a record of reference.
So managing that in terms of performance, in terms of cost savings, and in terms of overall power savings: Intel® helps provide all those things I talked about, as well as cost savings from hardware performance optimizations and capabilities, through the Intel® Disruptor Program, which Yellowbrick is a part of.
So we help in various ways, and then there is the work we do with the different hyperscalers or the OEMs of choice, through what we call centers of excellence between Intel® and our partners, and that also helps.
So it is the most important cost factor, and there are various ways to manage it. The customer and the ISV have to be intelligent in how they handle this, and understanding it is very, very important.
That’s what it is.
Great, thank you. Well, let’s maybe pivot to our next topic: saving money when things are packaged as a managed service. When buying cloud services, you inevitably buy some sort of packaged service, whether it’s virtual machines, operating systems, storage, security, or a data warehouse. So how do you start to save costs when so much seems to be outside of your control? Maybe Mark, we’ll start with you on that.
Yeah. And I think we’re getting to the heart of the matter here, really, to my last point about how CFOs can scrutinize and put better controls in place around software-as-a-service spend. If we focus on data warehousing in the cloud, and managed services around that in particular, there’s no transparency here.
And of course, from a user experience, it’s easy to consume this stuff, everything scales elastically behind the scenes. But of course, what’s happening under the hood is these cloud data warehouse SaaS vendors are essentially operating cloud infrastructure on your behalf and they’re going further than that.
They’re picking the cloud hardware, marking up those prices, and selling it back to you at a larger margin. So what you get with a lot of SaaS vendors is this margin-stacking effect, which at the end of the day impacts how much you are going to pay.
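To make the margin-stacking idea concrete, here is a small hypothetical sketch in Python. The markup percentages and infrastructure cost are invented for illustration only; they are not figures from any vendor mentioned here.

```python
# Hypothetical illustration of "margin stacking": a SaaS vendor runs cloud
# infrastructure on your behalf, and each layer in the chain applies its
# own markup. All rates below are invented for illustration.

def stacked_price(raw_infra_cost: float, markups: list[float]) -> float:
    """Apply each layer's markup (e.g. 0.30 = 30%) on top of the previous price."""
    price = raw_infra_cost
    for m in markups:
        price *= 1.0 + m
    return price

raw = 10_000.0  # monthly raw instance + storage cost (hypothetical)
# e.g. a cloud-provider margin, then a SaaS-vendor margin on top of it
price = stacked_price(raw, [0.30, 0.50])
print(f"Customer pays ${price:,.2f} for ${raw:,.2f} of raw infrastructure")
# Customer pays $19,500.00 for $10,000.00 of raw infrastructure
```

The point of the sketch is that markups compound: 30% and 50% stacked is 95% over raw cost, not 80%, and the customer has no visibility into either layer.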
And you have very, very little transparency and ability to understand what you are spending on here. Another trend I’m hearing about from a lot of prospects and customers is that cloud vendors in this space therefore have the freedom to reduce their own infrastructure overheads.
They’ll pick Graviton processors in AWS, for example, but they’re not necessarily passing the price-performance benefits on to the end SaaS customer, because all of that infrastructure is kept opaque. So they’re able to change the infrastructure and increase their margins at that level, but we are not seeing those cost savings passed on to end users because of that lack of transparency.
Okay, thanks. Doug, any thoughts to add on that topic?
Yeah, in addition to providing that transparency that Mark is talking about, it’s also helpful if vendors can give customers the option to procure infrastructure and or storage capacity through their public cloud provider.
Now sometimes vendors pass through storage costs, for example, but not always compute. Usually, as Mark mentioned, that’s kind of an abstracted scheme they keep for themselves. But buying directly from the cloud provider not only lets you take advantage of discounts, it also helps to meet any buying commitments you made in order to earn those discounts.
So in cases where customers can buy compute capacity within their own accounts, they also typically have more control, more visibility, more transparency, and the vendors then are typically offering advice on best-fit configurations.
I think this is really important. One of the arguments in the past against providing this, against pulling open the curtain and exposing a lot of this infrastructure, is that it increases complexity on the consumer’s, the customer’s, side of things. Customers get concerned: does this mean I have to change the skillsets in my company to include more cloud operations expertise, expertise in Kubernetes, and other things like this?
And I think we’ve come a long way in software maturity in cloud data warehousing. You can automate the deployment and provisioning of the data warehouse even though you get to see the underlying infrastructure; once you put automation layers around it, you can really lower the barrier to operating this yourself.
So I think you can have your cake and eat it here. You can have automated control around how you provision data warehousing, but you are paying for the infrastructure yourself, to Doug’s point. You are taking advantage of all the enterprise agreements you’ve got in place with the cloud service providers to help reduce your costs there.
Okay, great. Thanks. And we can move to Arijit, maybe on the same topic here: anything you can share on Intel® and your experience working with cloud providers?
Yeah, so we have been working with all the hyperscalers, and this extends into their varied offerings. Whether it’s Yellowbrick or Snowflake or Databricks, or companies like Rockset, the Intel® Disruptor Program is part and parcel of engaging with them and managing it from the angle of performance. And that is an important element to consider.
The second one is cost savings in general. We do optimizations with respect to our processors, but also with respect to the instance definitions. We basically look at the newer technologies and the newer evolutions in the workload, map those to our processors, extensions, and software optimizations, and also work with the hyperscalers, before this comes out, in terms of how things are optimized.
So what’s important, I think, is for the customers to understand this in a 360-degree way; don’t look at it from just one dimension.
Great. Thanks. And so Mark, maybe we’ll end with you on this topic and get into the next one, but so how do platform costs align with growth? Are there economies of scale here and how should people think about that?
Yeah, and I’m definitely seeing a trend in the conversations I’m having around customers who want to start small in the cloud and expand and grow as their business scales, that’s critically important.
They want to start at a low price point but know that whatever cloud data warehousing solution they have will scale as their business grows and they will invest in that data warehousing solution in line with business growth here.
And I think I’m having a lot of conversations with existing cloud data warehouse users today that are concerned about that starting point and the ability for some of these solutions to scale quickly as their business grows as well. So that’s what I’m seeing.
Customers aren’t necessarily willing to make an upfront, serious commitment. They want to see value from their cloud data warehousing solution, and they want to be able to scale that over time as their business grows.
Yeah, I got to agree with that one. I see oftentimes companies choose a tool expediently and then later on they find that it can’t scale to where they want to go.
So even if you’re starting small, you’ve got to have a vision about where you’re going, and that vision should recognize possibilities like mergers and acquisitions, or big pushes in marketing that you didn’t have previously. You’ve got to look long-term.
Okay, thanks, Doug. All right. The next topic, obviously related, is ease of use and cost. We are seeing different vendors out there that have this pay-for-what-you-use model.
Doug, could you talk a little bit about that and any risks related to that, and obviously the benefits that are there also?
Yeah, you hear about consumption-based pricing, you hear about serverless options or auto-scaling.
On the serverless side, it’s important to note that it’s an incomplete answer to cost control. Yes, there are oftentimes spiky data warehouse workloads where compute demands go up and down in ways you can’t really anticipate; that’s where serverless capacity or auto-scaling is a fit.
But whenever it’s easy to add more data, more users, and new workloads, scaling always seems to go in one direction, and that is up. And similarly, consumption-based pricing is not really about cost control. The storyline is pay only for what you use, but in reality it’s kind of an open invitation to use more and more.
As more data is added and as more users gain access to analytics, consumption tends to go in one direction, and again that is up. So what really helps with cost control, in our view, is a range of subscription models.
So you have choices and flexibility on what best fits your usage patterns. Per-query and capacity-based models are typical, and all the better if there are multiple discounts for time-based and capacity-based commitments, as well as some flexibility to exceed those levels, maybe temporarily or by a certain percentage, without facing punitive pricing.
Okay, thanks. Mark, anything to add on that?
Yeah, and we’re certainly seeing this in conversations at Yellowbrick, which was one of the reasons why we introduced a blended pricing plan of both on-demand and capacity-based subscription pricing that you could combine within the same deployment as well for the reasons Doug mentioned.
And I think not only are companies looking to reduce the magnitude of the overall spend, they want some cost predictability here, which is incredibly important for budgetary planning purposes as well.
I think to key in more on the serverless side of things as well, you really are exposed to a true black box here when you’re looking at serverless.
And it goes beyond just the unpredictability of being on a pure on-demand, highly reactive sort of spend profile. Some of the serverless cloud data warehousing solutions out there today will introduce more resources during the runtime of a single SQL query, and that’s kind of non-deterministic.
It becomes very, very difficult to run a query a couple of times and truly understand whether you’re going to pay the same cost per query each time you run it. So there’s the black-box aspect of it and the unpredictability aspect of it; these are real forces acting against any CFO’s department that’s looking to rein in the controls and get some predictability out of that.
So when I speak to customers, a typical way that we introduce Yellowbrick is we encourage them to start small. Perhaps they are using on-demand consumption while they’re characterizing their workload.
And to Doug’s point, when you see a fixed-capacity component of your workload coming out of that, move that piece to subscription. You get a much more favorable sort of spend when you do that, knowing that you can always burst out of that fixed capacity as and when your business requires it. That’s something that we are getting a lot of traction on at Yellowbrick.
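The blended approach Mark describes, putting the steady baseline on a discounted capacity subscription and using on-demand only for bursts, can be sketched as follows. The rates and usage numbers here are hypothetical, purely to show the shape of the calculation, not any vendor's actual pricing.

```python
# Hypothetical sketch of "blended pricing": commit to the steady baseline
# at a discounted subscription rate; pay on-demand only for bursts above it.
# All rates and usage figures are invented for illustration.

ON_DEMAND_RATE = 4.00     # $ per compute-unit-hour (hypothetical)
SUBSCRIPTION_RATE = 2.50  # $ per committed unit-hour (hypothetical discount)

def monthly_cost(hourly_usage: list[float], committed_units: float) -> float:
    """Committed units are paid for every hour; any overage is on-demand."""
    committed = committed_units * SUBSCRIPTION_RATE * len(hourly_usage)
    burst_units = sum(max(0.0, u - committed_units) for u in hourly_usage)
    return committed + burst_units * ON_DEMAND_RATE

# A workload with a steady baseline of 8 units and occasional spikes to 12.
usage = [8.0] * 700 + [12.0] * 20  # 720 hours in a 30-day month
pure_on_demand = sum(usage) * ON_DEMAND_RATE
blended = monthly_cost(usage, committed_units=8.0)
print(f"pure on-demand: ${pure_on_demand:,.0f}, blended: ${blended:,.0f}")
# pure on-demand: $23,360, blended: $14,720
```

With these invented numbers, committing to the characterized baseline and bursting on-demand comes out well under a pure on-demand plan, which is the pattern Mark describes: characterize the workload first, then move the fixed component to subscription.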
Great. Right. Arijit, back to you. Any thoughts on this topic and what you’re experiencing?
You get a choice in many of the deployments of choosing the instances in the hyperscalers. As an example, with Yellowbrick, the I3en or I4i instances are the instances of choice in AWS.
So when you are doing this, or the customer has a choice to do this, the selection of the instance is an important element, because that relates to IO-optimized performance, network-optimized performance, or compute cores. The selection of instances is one vector, and which Intel® platform you’re choosing is another vector.
Great, thank you. And to close out this topic before we get to the next one, Mark, I’ll just loop back to you.
If you could give a little bit of color: I know you talked earlier about how there are costs or savings that happen at the vendor level that don’t always pass through to the customer.
Can you talk a little bit about what the costs are for these serverless services, and do we see savings passing on to the customer when they occur? What does that look like?
Well, I think the answer is, frankly, no, because quite often what a lot of SaaS vendors in data warehousing are doing is massively over-provisioning on the AWS side or Azure side or whatever, and of course those costs are passed back to the customer at the end of the day as another sort of markup.
And Arijit opened the hood a little bit on the details of what we do at Yellowbrick, but to give you a little flavor of how we’ve approached this: when you spin up data warehousing elastically with Yellowbrick in AWS, Azure, or what have you, we curate the Intel® instances that we choose.
And we’ve deliberately chosen the AWS instances that we support to get the best price-performance out of those instances, and that’s completely transparent. We don’t mark up those prices.
You’re literally paying the AWS list price for the instance that’s underpinning our deployment. And again, we abstract the details away so the user experience is a SaaS-like experience, but you are running it yourself, which sounds like a massive contradiction in terms.
And maybe three or four years ago you really couldn’t get away with that. If you look at the history of running databases and data warehouses in the cloud four or five years ago, you’d be stitching together EC2 instances and having to manage the infrastructure yourself. With the advent of Kubernetes services like EKS and AKS, that elasticity is managed for us in Yellowbrick.
And we mask all of these details and provide a simple SQL interface to allow you to create new compute clusters, expand them, contract them, et cetera, et cetera.
So again, I think we’ve latched onto the technical developments both on the Intel® side and in the cloud most recently to provide a very, very simple, easy-to-consume data warehousing experience without exposing the details, but definitely ensuring that customers only pay for what they need to as far as infrastructure is concerned.
Okay. Great. Thank you. So that goes into our next topic around infrastructure advances and Arijit we can start with you.
I think historically we used to always talk about Moore’s law and advances in technology capabilities. Is that true with cloud computing, or is it really just a volume commodity play in the cloud today?
Yeah, we are doing a lot with respect to data warehousing companies from Intel®. So I’ll do this in three parts.
One is the aspect of the processor itself. There are obviously new evolutions of performance cores or efficient cores which will come out. We have Ice Lake, where the existing data warehouses are already running.
Then we have the newer one, Sapphire Rapids, which has already been launched and is expected in the cloud any moment. So that is another angle.
There is also the newer upcoming processor line from Intel®, Emerald Rapids and Granite Rapids. These are on the performance-core side.
There is also an evolution coming from Intel® on the efficient cores, with a platform line known as Sierra Forest, and that will give another vector for how things can be looked at.
But beyond Intel® as a platform, when we look at a particular workload, we look at the in-silicon accelerators that we have, and there are a number of them.
If you go to the Intel® website and look into the Sapphire Rapids capabilities, you’ll get into the accelerator line. One is known as in-memory analytics acceleration, or what we internally call IAA, and there are certain workloads mapped to that.
The data streaming accelerator, DSA, is another one, which maps to certain workloads in the case of data warehouses as well. Then there is anything that is more vector-based, or AI capabilities integrated into databases and data warehousing.
For that, there is something from Intel® coming up in the Sapphire Rapids platform known as Advanced Matrix Extensions, or AMX, and it is coupled with BF16- and VNNI-based DL Boost improvements.
AVX-512 is a vector being looked at with Yellowbrick for optimization. So the in-silicon accelerators are one angle; they improve performance, reduce costs, and bring in an element of efficiency.
There is an aspect of privacy and security vectors: Intel® platforms have something known as SGX and TDX. SGX is Software Guard Extensions, with which we bring into the infrastructure an element of what we call confidential computing.
There is also the advent of Project Amber, which you’ll see from certain hyperscalers when you deploy; that’s an angle. From a network-optimization angle there is QuickAssist Technology, or what we call QAT, and the Dynamic Load Balancer, or DLB.
Whether these apply obviously depends on the workload and the database; in certain cases it applies, in certain cases it doesn’t, and that’s what we evaluate when we look at the workload and optimize from the Intel® Disruptor Program.
From an engineering perspective, we look at whether that particular database, analytics engine, or data warehouse fits in, and there are technologies like VBMI and VMD which help. So that’s as far as the core processor line itself goes when we engage with an ISV for technology or engineering optimization.
Second is the work that we do with our hyperscalers on the core construct of the instances, where we also look at cost optimization from the aspect of selecting which instances are better for a particular workload.
Now in this particular case we talked about I3en and I4i for storage-optimized or IO-optimized workloads, but it could vary from the C series, which is compute-optimized, to the R series, which is memory-optimized.
For example, R7iz is a preview of Sapphire Rapids which is available; ISVs of choice or customers of choice can come to the Intel® Disruptor website or to Intel® and request it, and that can be provided.
So the bottom line is you get Sapphire Rapids access, or the standard ones, which is the M series, and then there are some dedicated special-purpose instances as well. So that’s another angle.
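As a rough illustration of the instance-selection "vector" Arijit describes, here is a hypothetical rule-of-thumb lookup in Python. The mapping from workload profile to AWS instance family is a simplification for illustration, not Intel® or AWS guidance, and the right choice always depends on benchmarking the actual workload.

```python
# A hypothetical rule-of-thumb lookup reflecting the instance families
# mentioned in the discussion: storage/IO-heavy work on the I series,
# compute-bound on C, memory-bound on R, general purpose on M.
# This mapping is illustrative only; benchmark before committing.

FAMILY_BY_PROFILE = {
    "storage_io": ["i3en", "i4i"],  # storage/IO-optimized
    "compute":    ["c6i", "c7i"],   # compute-optimized
    "memory":     ["r6i", "r7iz"],  # memory-optimized (r7iz: Sapphire Rapids)
    "general":    ["m6i", "m7i"],   # general purpose
}

def suggest_families(profile: str) -> list[str]:
    """Return candidate instance families for a coarse workload profile."""
    try:
        return FAMILY_BY_PROFILE[profile]
    except KeyError:
        raise ValueError(f"unknown workload profile: {profile!r}")

print(suggest_families("storage_io"))  # ['i3en', 'i4i']
```

In practice, the point of the panel's advice is that this selection should be made deliberately (and revisited as new generations like Sapphire Rapids instances appear), rather than accepting whatever a SaaS vendor provisions invisibly on your behalf.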
We do another level of optimization in certain cases here: the Data Plane Development Kit, what we call Intel® DPDK, has been enabled for Yellowbrick for better optimization. So that’s on the cloud instance side as well as the processor side.
Besides this, there is also an angle of various software-based optimizations that we do, from the angle of languages and from the angle of Kubernetes and cloud-native capabilities. There is an asset that Intel® has, known as Granulate, which does certain language-based or workload-based optimizations as an add-on service, so it depends.
We have some partnerships on the FinOps side of the fence, such as an Intel® cost optimizer, which we apply or provide to customers as part of this engagement with the ISVs, whether it’s Densify or Vantage or any of those vendors.
There are certain Intel®-optimized libraries, like the Intel® Distribution for Python. So depending on which language the particular workload has been developed in, whether it’s Java and which JDK version, those are optimized specifically for Intel® platforms.
And so whether it’s Go, or Java, or Rust, or Python, or XGBoost, or any of the libraries and frameworks mapped to that, we do that as well. AVX-512 we obviously talked about, and the associated software optimizations.
There is also something evolving in our software portfolio for not only the CPU line but also our accelerator line, meaning Intel® GPUs and also Habana: the oneAPI platform. The benefit of that is you do it for one software stack and it’ll scale across the board. It depends which workload it’s contextually relevant to, but it’s an angle we take from an engineering perspective.
And then there is the similar work that we do, not only from the cloud perspective with the centers of excellence, but also with the OEMs, mapped to the pay-per-use IaaS capabilities the OEMs have, whether it’s Dell APEX or HPE GreenLake and things of that nature. So it’s a 360-degree angle.
We work with the ISVs for the best core processor and software capabilities as an add-on, and we work for the best cost optimization. So it is important that the customer understands all this and makes the selection mapped to it, rather than just going with whatever is visible.
If you have any questions on it, please reach out to your ISV of choice, or reach out to Intel®, and we will be able to guide you across the board in understanding your workload and what would be best for you, even in the context of a particular ISV you’ve already selected.
So in this case, for Yellowbrick or any other data warehouse, we can help you understand and give you guidance on how you should deploy it and what else to look at, and also triangulate with the ISV of choice and the hyperscaler of choice for the best customer outcome.
All right, thank you. Mark, do you want to close us out on this topic around infrastructure advances, and then we can move on?
Yeah, very briefly. I’d echo Doug’s point from a second ago: we are not seeing cloud data warehouse vendors passing on the savings or performance from the successive generations of Intel® chipsets being released.
At Yellowbrick we’re very aggressive to move to the later generations. We already support Ice Lake, we previously supported Cascade Lake, and we’re going to be evaluating Sapphire Rapids soon. And what we do is pass those performance improvements directly onto the customer in the spirit of transparency.
And I think there’s a trend among some vendors to treat new generations of chipsets and improvements in performance as an opportunity to keep the performance the user experiences the same, but pocket the difference in terms of reduced infrastructure costs on their side. That’s what I’m hearing and seeing myself.
Okay. All right, thanks. And we’ll stay with you on the next topic. So what can we learn from the best-performing organizations?
I know you all spend a lot of time talking with different tech leaders. What do you see the most successful organizations doing with respect to cloud costs, and where is the pressure being applied, if there’s pressure to find savings in cloud?
You want to start Mark?
Yeah, and it kind of goes back to my point earlier. I think customers and companies these days are taking a really conservative, cautious approach to making changes to their cloud expenditure.
They want to see the business value and the return on investment in being realized before they make big commitments. And in fact, we’re seeing this, in fact, a lot of cloud data warehouse companies whose business really relies on consumption and on-demand pricing.
We’ve seen a drop-off in consumption and you look only have to look at some of the end-of-year reports from some of the other public publicly listed cloud vendors to see some of the messages that are around dropping off of consumption there as well. So I think the message is that the best practice is start small, grow your expenditure as your business grows.
There are going to be opportunities, and Doug mentioned it at the top of this call, around situations where you might want to move that workload back into your own data center in certain circumstances, and it will be interesting to see how that trend changes over the next few years.
But again, to build on Doug's point about putting in cost controls and guardrails, these are things we've built into Yellowbrick, so you can't burst beyond your cloud expenditure limits, and you can set alerts based on them. But more importantly, push your vendor for more transparency about where your money is going at the end of the day, and really, really scrutinize what value you're getting from that investment with them.
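As a rough illustration of the kind of expenditure guardrail being described, here is a minimal sketch in Python. The function name, thresholds, and budget figures are all hypothetical assumptions for illustration, not Yellowbrick's actual alerting mechanism:

```python
def spend_alerts(month_to_date: float, monthly_budget: float,
                 thresholds=(0.5, 0.8, 1.0)) -> list[str]:
    """Return an alert message for each budget threshold crossed.

    A hard spend cap, i.e. not being able to burst beyond the limit,
    would additionally block new compute; here we only raise alerts.
    """
    ratio = month_to_date / monthly_budget
    return [f"spend at {int(t * 100)}% of monthly budget"
            for t in thresholds if ratio >= t]
```

For example, with a $1,000 monthly budget and $850 spent so far, `spend_alerts(850.0, 1000.0)` reports the 50% and 80% thresholds as crossed.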
Okay, thanks. Arijit, we'll go back to you. I know you mentioned in our last topic that you recommend always having a method for continually reviewing your licensing model and whether it still fits your business model.
Do you want to expand at all on that, around what you see at Intel®?
Yeah, so: stay current, future-proof it, and understand the context, given how you are growing, with the ISV of your choice. I think you will then be able to map it to your CIO's requirements.
Okay, thank you. Doug, you work with a lot of organizations, any insights or advice for the group here?
Well, I think the most successful organizations have a cloud center of excellence of some sort, even if they don’t call it that, so I would establish one.
This is a centralized team that often takes a FinOps approach, with finance, IT, and the business represented, and it's there to provide guidelines, promote best practices, and consolidate purchasing.
Sometimes organizations don’t know across the organization what they’re purchasing and they’re not taking advantage of discounts they can get.
And the best of these organizations aren't bureaucratic departments of "no"; they're really helping to accelerate deployments and adoption, both through the processes they establish and the expertise they provide, and they're helping companies avoid unplanned adoption and poor post-merger integration.
I would also point out that the most successful companies are employing cost optimization policies and guardrails.
Examples include right-sizing compute instances, as we talked about, with vendors helping you with that; restricting permissible instance types, since we sometimes see runaway use of instance types that might be ill-advised; shutting down workloads automatically after hours, obviously making sure this isn't a planned, anticipated, and required after-hours workload; and setting up expiration dates for unused storage.
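To make two of those guardrails concrete, here is a minimal sketch of an approved-instance-type check and an after-hours shutdown rule. The type list, tag name, and business hours are illustrative assumptions, not any vendor's actual policy:

```python
from datetime import datetime

# Hypothetical policy values for illustration only.
APPROVED_INSTANCE_TYPES = {"m6i.large", "m6i.xlarge", "r6i.large"}
BUSINESS_HOURS = range(7, 19)  # 07:00-18:59 local time

def violates_type_policy(instance_type: str) -> bool:
    """Guardrail against runaway use of ill-advised instance types."""
    return instance_type not in APPROVED_INSTANCE_TYPES

def should_stop_after_hours(now: datetime, tags: dict) -> bool:
    """Shut workloads down outside business hours, unless they are
    explicitly tagged as planned, required after-hours jobs."""
    if tags.get("after-hours-approved", "").lower() == "true":
        return False
    return now.hour not in BUSINESS_HOURS
```

A scheduled job would apply checks like these to running instances and stop or flag offenders; expiration dates for unused storage could follow a similar tag-driven rule.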
I would also check for cost-saving options from public cloud providers or from third-party service providers. One example is NetApp Spot, a spend reduction service that can often yield sizable discounts because it resells unused reserved instance capacity.
So there's a lot going on in FinOps and cost reduction, and a lot to explore. And again, my report includes recommendations on some of these things.
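As a back-of-the-envelope sense of why resold reserved or spot capacity matters, here's a small hypothetical calculation; the rates and fractions are made up for illustration:

```python
def blended_hourly_rate(on_demand_rate: float, discounted_rate: float,
                        discounted_fraction: float) -> float:
    """Effective hourly rate when some fraction of capacity runs on
    discounted (spot or resold reserved) instances."""
    return (discounted_fraction * discounted_rate
            + (1.0 - discounted_fraction) * on_demand_rate)
```

Running half a fleet at a 70% discount, e.g. `blended_hourly_rate(1.00, 0.30, 0.5)`, cuts the effective rate by 35%.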
Great. Okay. Thank you. We have about 10 minutes left for our entire last topic here, which is really just takeaways from today's discussion.
So obviously not everybody is super technical and maybe doesn’t know all the right questions to ask, so any advice from the three of you?
What would maybe be the top three questions they should be asking and who should they be asking?
So, Doug, you went into some of that just now, but did you want to make it more pointed for the group?
Yeah, I'd say the only way to take control is to really get in there and get your hands dirty.
And you could start with the recommendations in my report, which include establishing that cloud center of excellence, setting benchmarks, reviewing architecture, modernizing for the cloud, making use of public cloud and third-party management tools, and employing cost optimization policies and guardrails.
And again, all of these recommendations are detailed in the report and anybody watching this discussion is going to get a copy of that.
Okay, great. Thanks. Arijit, over to you. Any recommendations for the group here of who to ask and what to ask?
The vectors of optimization here, performance, software, hardware, processors, cloud, make for quite a varied landscape.
Keep current, and reach out to your ISV or reach out to Intel®. As a neutral party, we will be able to guide you. It is to our benefit, and it's one of our goals, that when you are deploying any workload, and a data warehouse especially, you're getting the bang for the buck, getting the right performance, and doing it within the cost envelope that you want.
Okay, great. Thank you. Mark, anything that you’d like to add for the group on the call today?
Yeah, I mean, I know, Heather, you mentioned up front that managing the cloud budget can be a super technical exercise, but I think we're at the point where you have to roll your sleeves up and start to really drill into how much you're spending on cloud infrastructure to support your cloud data warehousing needs.
I mentioned earlier that there are very, very few levers, particularly with software-as-a-service-delivered cloud data warehousing, that you have control over.
So I would really ask: am I getting the best value for money out of my cloud data warehouse vendor today? How can I take control myself of where I spend at the cloud infrastructure level? How can I renegotiate my discounts with the cloud providers? That way I have a complete line of sight from the bare instances I'm spinning up to support my data warehousing, through to the value I'm getting out of it.
And I think we're at a point in the economic climate where it would be fiscally irresponsible not to take that level of scrutiny seriously, frankly.
And as I said, you really have to get into the details. I totally second Doug's mention of companies like NetApp Spot: a great opportunity to use external services and the secondary market for reselling cloud capacity, and take advantage of that to reduce your own costs as well.
But at the end of the day, really, really think about: am I getting the best price-performance? Am I pulling the most efficiency out of the infrastructure I'm deploying my data warehousing on? That's my overriding recommendation.
Okay. All right. Thank you. And I know we just have a few minutes left, so I’m not going to open up an additional topic, but just want to thank the three of you for joining today.
I learned a lot on this call, actually. It was great. Doug, I'm looking forward to getting your report and reading through it. And for anybody who joined, definitely, just like Arijit said, and I'm sure Mark and Doug agree, if there are any follow-up questions or you're looking for guidance on anything, feel free to reach out to any of us.
So I think we will end here. Thank you all.