The Benefits of Embedded Analytics for SaaS Solutions

In today’s digital world, businesses need to use data to their advantage. Embedded analytics are a valuable tool for businesses of all sizes. By providing users with real-time data insights, embedded analytics can help businesses to enhance user experiences, drive innovation, make data-driven decisions, and improve customer satisfaction.

Embedded Analytics: Cost-Effective Data Insights

In this webinar, our expert panel discusses the benefits of integrating data analytics capabilities within existing SaaS applications. They also share real-world examples of how businesses have used embedded analytics to improve their results.

Our panelists include:

  • Matthias Baumhof, CTO, LexisNexis Risk Solutions
  • Christian Garcia, Product Manager, Apptricity
  • Jim Curtis, Senior Research Analyst, 451 Research
  • Zandra Moore, Co-Founder and CEO, Panintelligence

How Embedded Analytics Can Help Your SaaS Solution

Our experts provide valuable insights on how embedded analytics can transform your SaaS solution by:

  • Improving decision-making: By providing users with valuable insights into their data, embedded analytics can help businesses to make better decisions faster.
  • Increasing customer satisfaction: Embedded analytics can help businesses better understand their customers and their needs, which can lead to improved customer satisfaction and loyalty.
  • Reducing costs: Embedded analytics can help businesses reduce costs by eliminating manual data entry and analysis.
 

Real-World Examples of Embedded Analytics

With real-life examples and deep industry knowledge, we discuss strategies for achieving cost-effective SaaS app analytics experiences and providing unique customer insights without the need for extensive developer resources. 

Watch the webinar to learn how you can deploy embedded analytics faster, more securely, and at a fraction of the cloud computing costs.

 

Transcript:

Heather Brodbeck:

All right. Hello everyone. Thanks for joining us today. We have a great panel here; I think there's a lot to learn from this group of experts, and a diverse group also. So just a reminder of what we're going to be going over today.

So Yellowbrick and Panintelligence have teamed up to provide embedded analytics underpinned by an ultra-fast, cost-effective data warehouse. In the panel, we'll learn from a diverse group about their experiences and insights on how to enable analytics with speed and agility at a fraction of the cost.

I’m Heather Brodbeck. I’m the VP of Rev Ops at Yellowbrick. So I’ll do my best to moderate this group today. And so we thought we’d just start with some intros for each of them.

So I'll start with Matthias Baumhof. He's the CTO of LexisNexis Risk Solutions and joins us to share strategies for achieving cost-effective in-app analytics experiences.

Matthias is an engineering and technology leader with 15 years of experience in security and the anti-fraud industry. At LexisNexis Risk Solutions, he leads technical strategy, execution, and program management for engineering and operations, with a focus on security and scalability.

Next, we have Christian Garcia. He's a Product Manager at Apptricity, and with him we'll discover practical tips and techniques to unlock unique customer insights without requiring extensive developer resources.

He's the product manager there for supply chain management, which comprises web and mobile solutions for asset management, inventory management, and field service management.

Next, we will hear from Jim Curtis, Senior Research Analyst at 451 Research, who will provide industry validation and insights based on his years of experience. He's a senior analyst for the data, AI, and analytics channel there, with experience covering the BI, reporting, and analytics sector, and currently covers Hadoop, NoSQL, and related analytic and operational database technologies.

And lastly, we have Zandra Moore here, co-founder and CEO of Panintelligence. She's a passionate tech leader and entrepreneur with over 20 years of experience in technology, and she leads a team focused on delivering analytics into the heart of SaaS applications, enabling users to access key information at the right moment and with the right focus.

So that was long-winded. I think this is the first time I’ve had four panelists on, but it’s a great group. I think we’ll hear a lot of great insights. I’m eager to get through it with you guys. 

So we’ll start off with our first topic, kind of general, but what do you do and what are the challenges that you face? So getting some insight into real-world applications and what you’re doing today. 

So Matthias, we could start with you. Maybe explain to us a little bit about why data is so central to your business, and a little bit about your journey and where you are today.

Matthias Baumhof:

Yes. Hi everybody. Good morning or good afternoon, wherever you are in the world. ThreatMetrix was an Australian startup, and that's also when I joined ThreatMetrix. It later moved its headquarters to the US, and in 2018 it was bought by LexisNexis Risk Solutions, so that's the global story.

ThreatMetrix is a fraud prevention and risk-based authentication platform, and now a product that basically supports digital transactions. You've all heard of the digital transformation, also fueled by COVID, where services have to be offered on digital channels.

ThreatMetrix is a product that answers the most pertinent question there: is the current transaction actually a good returning user, or is it a fraudster? And from the start, as a startup, we decided that everybody who wants to use our service has to call into a central database.

We wanted to offer a software as a service platform and not an on-prem database. And that’s where over the years, we have started to see a lot of transactions and we actually now have built one of the largest repositories of digital identities around the globe here. And we call it LexisNexis Digital Identity Network. 

And of course, our challenge has been simply scale. We went through three phases of 10x growth, and our challenge was to find technologies and technology partners who could move with us through those phases of 10x growth.

And we had quite an ambitious goal. We wanted to offer a risk API, where we would make a risk decision within milliseconds. But on the other hand, we would also offer a web-based portal, where our customers can do forensics and analytics, review transactions, and run freeform queries into a data set of billions, and they should only have to wait five seconds.

And that’s basically how we ended up with Yellowbrick, with these ambitious goals to offer such a service.

Heather Brodbeck:

Great. All right. Thank you, Matthias. Next, similar vein, Christian, we’ll go to you. Why was embedding reporting and analytics so important for Apptricity in your customers?

Christian Garcia:

Yeah, absolutely. Hey everyone, I’m Christian Garcia. I’m the Product Manager of what we call our supply chain management applications here at Apptricity. We are an enterprise SaaS company, and we really focus on the areas of asset management and inventory management, and with a real focus on integrating IoT tracking technology. 

And that's really been, in my five-plus years as product manager here at Apptricity, one of my key objectives: to integrate these tracking devices, collect data, and then have that data visualized in our software in a variety of ways.

And so in a nutshell, what we do is keep track of important assets and inventory, whether that's vehicles or tools or any sort of equipment that helps people operate their supply chain.

And so really, over the last five years of my work in this role, we’ve been integrating these different types of tags, RFID tags, GPS, Bluetooth, et cetera, and really connecting items that make up the supply chain to the internet of things.

And over the past year or so, as we started to realize we have all this data that we collect for our customers, what do we want to do with it? Because collecting data is just really the first step. 

As you progress, you want to start to analyze that data, see whether there are trends, and make suggestions to our customers: hey, if you did things this way, you would get a better return or better operational efficiency, things of that nature.

So what we decided was, we needed to start looking into data science and data analytics, and business intelligence, and how could we implement that into our solutions and then offer that to our customers as a way to just better visualize the trends that make up their business and their operations. 

And so about a year ago, we started a search for a business intelligence tool and formed a partnership with Panintelligence to start visualizing some of those trends and some of that data with the tool Panintelligence built. We use it both embedded within our application and as a cloud service hosted elsewhere, so we support both models.

But really, what we've been able to do with Panintelligence is reduce the time it takes to create those charts and analytics, allowing our development team to focus on new integrations with new tags and new sensor technology.

That means building out the key features of our supply chain management system and expanding our product functionality, while leaving the business intelligence side to a tool that's better suited for it.

So that’s really what we’ve done in the last year. We’ve continued to develop our core feature set out on the supply chain side and we’re excited to continue to build out the data analytics side.

Heather Brodbeck:

Great. All right, thank you. Zandra, I’ll go to you. I know you spend a lot of your time speaking with folks like Christian, product managers. What are some common themes that you come across, whether it’s challenges or just things that they’re all focused on?

Zandra Moore:

Thanks, Heather. And I think that’s a brilliant example of just the sort of things that we hear all the time, and product managers, like Christian, are often faced with three core challenges.

One, they have the commercial drivers of accelerating the innovation in their product to lean into those sort of new areas of data insight that the market is expecting their products to deliver for competitive advantage mostly.

And then you’ll have the sort of delivery team – as Christian’s really highlighted – that have to make decisions about their resources around core product features that differentiate their product versus utility features like data insights, which is sort of core to every platform and should be there. So that’s quite a complex prioritization task for a product manager to do.

And then lastly, there’s usually customer success teams that have customers that are asking for better sort of access to data and self-service, and really for retention and also creating more sticky in-app customer experiences, keeping them in the platform, that is really something that they’re also trying to deliver.

So those kinds of competitive insights, resource management for dev teams, and retention of customers. This is the challenge that those product managers are leaning into around data, and it tends to lead them into that kind of build-versus-buy conversation.

And when they look out to the market, they’re looking at quite a large array of different business intelligence vendors, some of which would be considered, like legacy.

And what I mean by that is they’re built for the enterprise, they’re built for inside reporting in enterprises, and those that are built specifically for embedding in SaaS applications.

And certainly for Panintelligence, that's where we absolutely focus: being an embedded, white-label solution that is core to enhancing a product, enabling those capabilities for their users, letting their users self-serve data insights, and freeing their dev teams' resources to focus on core product.

And the thing that matters most to those product managers is that it's fast. So rapid deployment of a product into their platform is super key.

Also that it’s secure, that they can be really confident the third-party product is securing that data as well as they are, and that they have real confidence around that, not just from data security, but user access control, those things, and making sure that’s authenticated.

And also that we grow together, so that kind of in it together, we grow when you grow. So a usage-based commercial model, where you’ve got real visibility of those costs. And that’s a real alignment for us with how Yellowbrick works and I think there’s some real synergy really. So that’s sort of the main themes that we hear for most product managers and SaaS vendors.

Heather Brodbeck:

Great. Thank you. Jim, we’ll go to you. Are you hearing similar things? I know you obviously speak with a lot of organizations in your role, so maybe you could add some to what Zandra just shared with us.

Jim Curtis:

Yeah, I think these are all interesting points. I mean, certainly I've been studying databases and data platform analytics for many years now. And so, not to take away from my esteemed colleagues who are in the throes of it, I do more sort of studying and identifying of market trends.

But it's interesting, one of the questions we ask in a lot of our research, and we ask this pretty regularly, is: how do you value data, and will you value it more?

Now certainly you would think everyone would value data highly, and that's primarily the case. Somewhere around the mid to low 80% of enterprises say, yes, we place a great deal of emphasis and value on data. However, we ask a separate question. We say, if that's the case, what percentage of your questions do you answer with that data?

And you would think that there would be a higher correlation, if you value data, you would effectively sort of use that in your decisions, but that’s not always the case.

I'm finding that probably less than 20% of enterprises use data for the majority or all of their decision-making. Most enterprises fall in this middle tier of saying, we use it a lot of the time or relatively frequently. And then you have some outliers, who are probably not very good at it.

Stripping away industry specifics, since certain industries may use data more than others or have more need to, generally there's this gap between how do we get from valuing data to actually using and utilizing it. And I think that's where others on this panel can speak directly to that.

But this is the challenge that enterprises have. There’s friction. There can be organizational challenges. There’s a whole host of things that sort of prevent people from doing that.

The flip side is that if you ask enterprises the same question, say if you did have data and you were able to use that for your decision making, and it was readily available to you, you had access to that, what do you envision that to be?

Most enterprises will say, we could likely engage better with our customers, we can have better sort of relationships with them, we can be more agile if there are changes in the economy or disruptions in supply chains, and things like that that we just saw, we’re better able to deal with some of these things that come along.

We’re also able to deliver on products and services that we might offer, having those be enhanced. So there’s a whole host of benefits, but it’s jumping that chasm, getting across that challenge, I think, really gets to the heart of the issue.

Heather Brodbeck:

Okay, great. Thank you. Maybe that’s a good segue. Christian, back to you. You talked a little bit about the journey that you’ve gone on implementing BI and analytics capabilities.

Maybe you could talk a little bit about some of the challenges you had or what you may have done differently, as you look back on that experience to date?

Christian Garcia:

Yeah, sure. So the first thing that really comes to mind in our BI journey is, one of our biggest commercial clients is Verizon. And what we do for them is we track inventory inside of their vehicles that go out and do installations for home DVRs or for home internet.

And as our account with Verizon has grown, we’ve grown to use the same type of RFID scanning infrastructure inside of warehouses, inside of retail stores, distribution centers, and other places like that.

And what we’re doing for Verizon is, we’re putting RFID tags… Well, we’re not putting them, but the manufacturers of the goods are putting RFID tags onto the products.

We’re doing an import into our system and our RFID scanning infrastructure is picking up where things are, whether they’re inside a vehicle, inside a warehouse, retail center, et cetera.

About two or three years into that project, we have weekly conversations with Verizon, hey, what are you guys looking for? What are ways that we can improve the product to make it better for you?

And the topic of data analytics came up and so we went into a discussion about what are some data analytics that would make a lot of sense for your team, things that can reduce downtime, make sure that the controllers are properly functioning, making sure that you have the right amount of inventory in your vehicles, what are the things that you guys want to see?

And what they came back with was a list of 20 charts, basically trends that they wanted to see, to be able to predict if maybe a controller was going to fail or understanding the utilization of their vehicles. Maybe they have vehicles that are scanning, but they’re never picking anything up because those vehicles are empty, because they’re not using them.

So there’s those types of insights that they’re looking to get so that they can better manage their fleet, manage the inventory stocking process.

And so our development team – we have a very talented development team that I work with daily. We spent about two months understanding the data, understanding how to create the proper data linkages to prepare these charts, and then we had a front-end development team, who went and actually generated the charts.

And so that was about a two-month process with another two to three weeks of testing, so all in about three months to get this project done. We are not a large organization, so we don't have the ability to halt all research and development to focus on just one thing. We're typically working on multiple things at a time, across both web and mobile platforms.

And so for us to point all of our development resources towards one project, it’s got to be a very lucrative project and there’s got to be a lot of evidence for why we should do that.

And so we finished the project and delivered it to Verizon, and it went great. They enjoyed it; it provided them with insight. And so we had to start asking the question: who do we do this for next? Obviously, this was a successful project, but how do we scale it and start offering it to other customers?

And the question was, well, are we going to have to spend three plus months every time just to deliver requirements for one customer? And that’s really where the discussion started to formulate, is there a better option than building it in-house?

And so we started to evaluate different BI vendors and had a demo with Panintelligence that we really connected on, both with the sales rep and with the technical lead who was on. And once we were in our 30-day trial period, they were able to build those same charts in two weeks versus three months.

And that’s a no-brainer for us because like I said, we’re not a huge organization. We don’t have multiple development teams working on multiple things, so we have to be very tactical in how we approach our development projects.

And so really, that build-versus-buy decision was so clear to us. And because we do a lot of government work, and so a lot of on-premise deployments, the ability to host it internally and embed it inside our application was incredibly important.

Because if you work with the government, everything that is cloud-hosted basically has to be vetted to make sure it's secure, as I'm sure you're all aware. But the ability to embed it inside our own application, and also host it in the cloud, was a huge bonus for us.

And so that’s essentially the Verizon project in a nutshell, and it really helped us understand if we’re going to scale this out and offer this as an add-on to our product, we need to be able to do so rapidly, and Panintelligence has allowed us to do that.

Heather Brodbeck:

Great. Thanks for that insight. Matthias, we can move over to you, I guess, similar question, what are some of the challenges?

I know you already alluded to some of that in the last segment, but maybe just talk to us a little bit about the challenges around how you needed to scale quickly and how you solved those.

Matthias Baumhof:

Yeah, of course. Let me just elaborate on how we’re using the ultra-fast database at Yellowbrick to solve actual customers’ problems.

Imagine you buy headphones at your e-commerce shop of choice, and the e-commerce shop also happens to be a customer of ThreatMetrix.

The e-commerce shop would then have no real clue who is interacting with them, because everything you type into the web form could come from a synthetic identity, or could be an identity that was stolen on the internet.

That's where we come in. We essentially place a JavaScript in the webpage and collect 200 properties. Later, the e-commerce shop would then make a risk API call into our system, essentially to make a risk decision.

But what we also do is copy the properties of the current transaction. So we have those 200 properties that we collected from the web, plus another thousand properties we have calculated over our digital identity network.

So you end up with a transaction of around 1600 properties that would be placed into a relational database. And why do we do that?

Because imagine you had used a proxy with your purchase of those headphones. This transaction would land in review, and somebody from the fraud team of the e-commerce shop would need to review it, and not only review the transaction, but also look for related transactions and for patterns with similar behavior, to be able to make a decision: do I actually ship those headphones or not? Essentially, to manage fraud.

And a lot of fraud patterns here translate to queries where you have to find the needle in the haystack: a few transactions across billions of transactions.

And that was our challenge, because we wanted to still offer a solution where you can actually have freeform queries into that data set, not pre-calculated queries.

We wanted to offer our customers, essentially, a WHERE clause that could be freeform text entered in the web page. And big customers of ours have multiple billions of transactions that you need to search through. You don't want to wait two minutes for that needle-in-the-haystack query; you want something that returns in five seconds.

And it is a very complicated thing to design a system, where you ingest a lot of transactions, but again, have the ability to do whatever query you have across billions of transactions.

And Yellowbrick was one partner of ours that could solve this very complicated use case. We essentially help our customers make the immediate decision, what do I do with this current transaction, by looking across billions of transactions, finding the pattern, and making sure you make the right decision here.
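The needle-in-the-haystack pattern Matthias describes, an analyst-supplied filter run as a freeform query over the transaction store, can be sketched roughly as follows. This is a minimal illustration only: the schema, table, and values are hypothetical (the real system stores on the order of 1,600 properties per transaction in an MPP warehouse), and an in-memory SQLite database stands in for the actual database.

```python
import sqlite3

# Hypothetical, radically simplified transaction store. In production this
# would be billions of rows in a data warehouse; SQLite is for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        txn_id TEXT, device_id TEXT, uses_proxy INTEGER, amount REAL
    )
""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?)",
    [
        ("t1", "dev-a", 0, 40.0),
        ("t2", "dev-b", 1, 250.0),  # suspicious: proxy plus high amount
        ("t3", "dev-b", 1, 310.0),  # same device, same pattern
        ("t4", "dev-c", 0, 15.0),
    ],
)

def find_related(device_id, min_amount):
    """Freeform forensics filter: the analyst picks the conditions, and
    the application binds them as parameters rather than interpolating
    raw text into the SQL."""
    rows = conn.execute(
        """
        SELECT txn_id FROM transactions
        WHERE device_id = ? AND uses_proxy = 1 AND amount >= ?
        ORDER BY txn_id
        """,
        (device_id, min_amount),
    ).fetchall()
    return [r[0] for r in rows]

print(find_related("dev-b", 200.0))  # → ['t2', 't3']
```

At billions of rows, the hard part is not the SQL itself but returning such an ad hoc filter in seconds without pre-aggregation, which is why the panel keeps coming back to the underlying warehouse.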

Heather Brodbeck:

Okay. All right. Thank you. Well, let's maybe bounce back to something that came up earlier around build versus buy. And maybe Jim, we can start with you, just to get a sense in general of what you're seeing out in the market. Obviously, there are different decision criteria to go one way or the other, so it'd be interesting to hear what you're seeing.

Jim Curtis:

Yeah, build-versus-buy is certainly the kind of topic that, if it doesn't get under the skin of a lot of people already, it certainly will, and there are certainly buy-side and build-side people.

Historically though, what's interesting is that enterprises have often preferred, at least our research shows, to make fewer buying decisions rather than more. They want to buy from one vendor and go with that.

We're sort of past those days a little bit, simply because we live in such an incredibly competitive market, and there are products that are very focused on the particular solution they serve.

We also see a lot of the cloud platform providers offering what's sometimes called an a la carte or blend-your-services model, alongside the all-in-one model.

Both are interesting approaches and can serve well, but in some ways, it's unlikely that a single vendor would be best in class in all areas, across the board, up and down the stack. And generally, people are making decisions on whether a product is good enough versus what they really care about.

And so what's interesting is that enterprises don't necessarily have a problem building something. They certainly have to have the resources. I think Christian mentioned that, luckily, his organization had them, but then they found out, hey, it might be nice to allocate them to a different area. And these are decisions that enterprises generally make: do you have the resources? Can you do that? But also, can you sustain those resources well into the future?

You have to have access to resources around you, to be able to hire these people as you grow. And these are all longer-tail decisions that people have to make, certainly on the build-versus-buy side. But what enterprises basically say is that if something's holding them back from being data-driven, from getting access to their data and their analytics to be able to make decisions and operate nimbly, security and privacy have always risen to the top.

But if you take those away for a minute, generally the next issue is integration. And enterprises are certainly less favorable toward a high administrative integration burden. They certainly want best in class, but if it carries a heavy integration element, sometimes that may outweigh the benefit. The other piece is, do I then have the resources to manage this thing once the integration is in place?

So what normally happens, what makes a nice pairing, is when organizations have integrations that are very frictionless, come together, and provide best in class in certain scenarios. This is certainly the ideal scenario for people, but it requires being architecturally integrated at certain levels.

But getting past these things, like friction, integration, and administrative overhead, certainly shapes how enterprises go into these build-versus-buy decisions.

Heather Brodbeck:

Okay, thanks, Jim. Maybe Zandra, we can build on that a little bit more with you. What are maybe some additional key factors that you see in that decision-making process?

Zandra Moore:

I think Jim put it really clearly, those two areas around security and integration, and I’ll talk a little bit to those and maybe some of the factors that we also hear.

Absolutely security; it's almost table stakes. If that's going to be a concern, then frankly, you're out of the picture as a vendor that should be considered for any platform that holds a huge amount of data that they're doing a great job of securing and controlling.

And security isn't just about how that data is accessed; it's also about the permissions, the roles, and standard authentication integration, for example, so you're not maintaining multiple user profiles, and you're making sure that you restrict what people can and can't do, and can and can't see.

And being able to do all that automatically, through roles, user settings, and permissions, so it's low maintenance and frictionless in the background. Being confident that you can mirror the security layer you have in your organization, whether that's your platform or your enterprise, and knowing that it's happening in the background, maintained and managed automatically, is key.

So I think that's one part of security. Another is that there's no need to move data anymore to do analytics. Embedded analytics tools can query high-performance databases and data warehouses, like Yellowbrick, live and in real time, and leverage the power of those systems to provide real-time insights and analytics.

We've moved on from the days of having to move data and worrying about the impact on the underlying databases. We've moved to a point where tools like embedded analytics are built for those high-volume, transactional environments and are performant in live, real-time querying scenarios.

So on security and data security: there is no need to move your data to do this anymore, and I think that's something not everybody really understands or is aware of.

I think if we think about integration, it goes a little bit further. So things like an API around the product, so that you can call the product. Standard APIs and integrations are what you should be expecting from any analytics platform today.

DevOps controls, so that you can do those sort of blue-green deployments and roll them out through your application, making it look and feel exactly like your business or your product, so CSS styling and white labeling. And not just once: every user can have a different experience when they log in.

And that isn't just about branding; that's also about things like accessibility. So if somebody is visually impaired and needs a larger font on the screen, the style sheet they get should default to that automatically. So the visual experience, not just the data they can see, all of those things should be part of the table stakes for a really good, frictionless experience for a user and for a company.
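As a rough sketch of the per-user white-labeling and accessibility layering Zandra describes, the theme-resolution logic might look something like the following. Every name and field here is hypothetical for illustration; this is not the Panintelligence API, just one way an embedded product could layer accessibility preferences on top of tenant branding.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    tenant: str                # which SaaS customer this user belongs to
    large_font: bool = False   # accessibility preference

# Hypothetical tenant branding registry (illustrative values only).
TENANT_THEMES = {
    "acme": {"css": "acme-brand.css", "logo": "acme.svg"},
}
DEFAULT_THEME = {"css": "default.css", "logo": "default.svg"}

def resolve_theme(user: UserProfile) -> dict:
    """Layer per-user accessibility overrides on top of tenant branding,
    so each user gets their own look without manual maintenance."""
    theme = dict(TENANT_THEMES.get(user.tenant, DEFAULT_THEME))
    overrides = []
    if user.large_font:
        # Visually impaired users get a larger default font automatically.
        overrides.append("large-font.css")
    theme["overrides"] = overrides
    return theme

print(resolve_theme(UserProfile("acme", large_font=True)))
```

The design point is the layering order: branding comes from the tenant, while accessibility comes from the individual user, so both can change independently with no per-user hand configuration.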

So I think security and integration sometimes go hand in glove. But I think the last thing to bring up is what we hear about having a partner that knows how to build best-in-class visualizations and analytics. Because they're doing it all the time, and in certain sectors, that can be a real value add for organizations that aren't essentially data engineers, data scientists, or business intelligence specialists.

So what are the best visual representations for data? What are the best ways to present information and allow a user to engage with that? Sometimes having a partner that can help you get to that really sort of first set of really engaging elements within your platform, can help you to make sure that whatever you are giving to your customers really does drive value and is based on that experience, as opposed to necessarily relying on having to draw that from the customer.

So those are some of the areas, but I think there's a lot of alignment with how Yellowbrick works, and I know that that sort of scalability is something we both share. That is key: our ability to scale over large data sets.

The performance, the security, and how we can look at cost-effective deployment, so that it's aligned to the use case and the volume of that use case, so that it's fair value, is something both Yellowbrick and Panintelligence share. We have a seamless partnership and go-to-market, and even better, a standard integration so that we can work together, connect, and make that frictionless from a technology point of view.

So I think there’s a mirroring of our kind of go-to-market, but also a mirroring around how we integrate as technology.

Heather Brodbeck:

Great, thank you. Christian, maybe you can give us some examples. When you made the decision to go with a BI vendor and you’re working with Panintelligence, what were some of the things that were critical to your team, that you really felt like you needed in a vendor partner for BI?

Christian Garcia:

Yeah, absolutely. So I’ll start by saying that I’ve been with Apptricity for, let’s see, about eight years now. And in my time, I have never written a line of code, and that’s not my background, that’s not what my educational background was.

I feel proficient in the ability to use the Panintelligence tool. And that’s kind of a micro example at a broader scale of the level of whether you want to call it coding background or any sort of technical background.

I do consider myself to be somewhat technically proficient, but I have the ability to be effective in that tool, without understanding how to write code.

And I think that that level of accessibility opens up a broader audience of people who can really own and operate the tool, discover their own insights, create their own dashboards.

We’re doing our best on the product side to understand what our customers want to see, but giving them the ability to really fine-tune exactly what they want was a huge decision-making point for us. Customers know what they want; we only think we know what they want until we talk to them.

And so we’re building our baseline set of reports and charts, and we’re showing them on demonstrations and we’re getting feedback, hey, are these the types of things that you’d like to see? If not, what are the types of things you’d like to see?

And so one of the things from the decision-making perspective, the ability for people internally at Apptricity – who weren’t developers – to become proficient in the tool was a huge value add for us.

Because we have professional services teams that help with customer deployments, and if they know how to assemble dashboards and charts or create new dashboards and charts based off of just basic database linkages, that enables our team to be able to deliver more rapidly.

And so that was probably one of the biggest decision-making points for us – how quick it was for us to link our database. We do still manage our own data warehousing and manage that on our end.

I would tell you, with the way that the scale of transactions is trending, we’re not on the millions and billions daily yet, but we are trending very quickly towards needing to implement a data warehousing solution.

So that decision is on the horizon and we’re starting to prepare for that. But the ability to link the database very seamlessly, to generate these custom charts based off of the data that we’re intimately familiar with, and to generate those analytics – it’s incredible how quick and easy it is to set up what we call internally “span of control”: who is allowed to see what? Creating the executive dashboards, creating the manager dashboards, and then giving people full access to see everything.

So span of control is huge for us, because we do operate in a lot of government environments. And so making sure that people only have access to certain data is paramount for our product. And being able to bring in a charting tool or an analytics tool that can just support that out of the box was very important for us.
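The span-of-control idea Christian describes – role-based filtering of what each user can see – can be sketched in a few lines. This is an illustrative example only, not Panintelligence’s or Apptricity’s actual implementation; the roles, records and ranking scheme are invented.

```python
# Hypothetical "span of control" sketch: each role sees data at its own
# level and everything below it. Roles and records are illustrative.

RECORDS = [
    {"asset": "Truck 7",  "site": "Dallas", "level": "manager"},
    {"asset": "Server 3", "site": "Austin", "level": "executive"},
    {"asset": "Pallet 9", "site": "Dallas", "level": "staff"},
]

# Higher rank = broader visibility.
ROLE_RANK = {"staff": 0, "manager": 1, "executive": 2}

def visible_records(role, records=RECORDS):
    """Return only the records a given role is allowed to see."""
    rank = ROLE_RANK[role]
    return [r for r in records if ROLE_RANK[r["level"]] <= rank]

print([r["asset"] for r in visible_records("manager")])  # Truck 7, Pallet 9
```

In a real deployment this filter would live in the database or BI layer (for example as row-level security), not in application code, so every chart and dashboard inherits it automatically.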

So those are a couple of ways. I could probably talk for another 15 minutes about it, but I know we probably have other people to get to.

Heather Brodbeck:

All right, no worries. Thank you. No, that was great. Appreciate that. I think that’s one of the things that’s been talked about for a long time, is self-serve analytics.

So people that don’t know how to write code can actually still get at the data and get the insights they’re looking for. I think the value out of that is tremendous. So that’s great.

Matthias, so over to you, similar type of question. So obviously you went through a process to select Yellowbrick, and I wasn’t around at the time, but glad you’re still with us. But I assume you’re looking at some of the other vendors out there, maybe like Oracle, Teradata, Snowflake.

What were some of the criteria that you looked at? I know we’ve definitely heard about the scalability and speed, but were there other factors that were important to you through that selection process?

Matthias Baumhof:

Sure, yes. We went through the whole build versus buy discussion. Right at the beginning, when we were quite small, we just used the Postgres database for our transactions, and it worked quite nice.

Then of course, we outgrew this Postgres box. We then engaged with Greenplum and we had the Greenplum database for a number of years, until we outgrew this. And then of course we thought, hey, open-source SQL over Hadoop is such a great idea.

You can do resource management. And we actually tried to build a huge data warehouse with our own engineering folks based on open-source tools, and that never really 100% worked and that was the time we engaged with Yellowbrick.

And what drew us to Yellowbrick was, of course, the problem of scalability, but actually it was also that they were local. We could just go and build a relationship with people; we could have the trust that they could solve very complicated, on-the-edge problems.

The other thing that was very promising was the simplicity of the Yellowbrick solution. It is essentially just a big Postgres box. You don’t have to care about compaction – with our Hadoop solution, you always had to compact data. You have to make sure that if you ingest several thousand transactions per second, you handle that and can query it a minute later. All those things were just built in with Yellowbrick.

Also, the whole maintenance. We had a whole team of people maintaining this huge Hadoop cluster, and with the Yellowbrick system, we could actually run the database with far fewer people.

So that is also something that you have to consider with build versus buy. If you partner with a good technology partner, you actually save a lot of headcount on your side, and that also has to flow into this decision.

And of course, other people were looking at Snowflake at the time. For us, it was a no-go to put every one of our transactions in the cloud, and Yellowbrick offered the on-prem solution, which, for us running on-prem in our own data center, was also a plus.

So essentially, they ticked all the right boxes for us, and once we had found a partner, we said, hey, we can create what we wanted to create. Again, for our offering for our customers to solve problems in the fraud world, we had to make the right choices in terms of technology. And with Yellowbrick, we’re still very happy.

Heather Brodbeck:

Great. All right, thank you. Okay, well I think maybe just an open-ended. I know we have about 20 minutes left. We’ll see if we have any questions from our audience. But maybe before we go there, just a question around just what’s next.

So maybe speaking from the different organizations that you work for – or, for Jim and Zandra, the different organizations you work with – give us some insight into what you think is next in terms of technology advancement.

I know AI’s got to come up at some point I’m sure. But maybe we just start, Matthias, we can just start with you on that. What’s next for ThreatMetrix?

Matthias Baumhof:

Very good question. In terms of databases, I think cloud and AI are of course big topics. At the moment, people with dedicated use cases for databases tend to have separate databases for separate use cases.

And I would envision, in the cloud, that it’s possible to have your data just once and have different query engines, so that you can actually build a system, a big data lake, where you can run your ETL loads, your analytics queries, and your ad hoc queries from portal use cases, all against one big backend.

And that would be something we would be looking forward to, if such a solution exists. And of course, preferably cloud technology, but also able to deliver on-prem performance. If you marry those two concepts, that would be something we would be looking forward to.

Heather Brodbeck:

Right. Great. Thanks. Christian, over to you. What’s next for Apptricity?

Christian Garcia:

So I think as the Internet of Things continues to grow, there are new opportunities for us to integrate with sensor tags.

And for those who aren’t familiar with sensor tags, I’m referring to RFID, Bluetooth, and cellular tags that basically collect environmental data – things like temperature, liquid fill level, shock, pressure, things of that nature. And so as we’re working with companies across a variety of industries, we’re integrating tags that collect this data.

And so as we continue to collect that data and start to identify some of these trends, there are really three phases of where we want to get to with providing intelligence to our user base, and the first is analyzing a failure event. Why did something go wrong? Why did this machine break down? Or why did this piece of equipment get stolen? So how do we analyze that situation? How do we then predict the events that lead up to that situation?

So really, what’s next for us is the predictive analytics side: instead of analyzing what went wrong and why it went wrong, how can we forewarn people that when these three factors are met, there’s a high probability that something’s going to get stolen or something’s going to break down? How do we prevent those things from happening?

And when you bring up AI, I think really where we see AI at Apptricity is being able to analyze that quickly, but then also make the decision, as opposed to having the manual interaction of making that decision.

So if I have a piece of equipment, or a vehicle that’s driving and about to break an established geofence around a project site, where I see AI is in understanding the trajectory, understanding this vehicle’s about to leave the appropriate zone, and shutting down the vehicle so that it doesn’t get out or break into a zone where this person’s not allowed to go.
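The geofence scenario Christian sketches – project a vehicle’s position forward and act before it leaves the allowed zone – can be illustrated with a simple circular fence. This is a hedged sketch: the coordinates, the 30-second horizon and the shutdown hook are all hypothetical, not Apptricity’s actual logic.

```python
import math

def projected_breach(pos, velocity, center, radius, horizon_s=30.0):
    """Project the position horizon_s seconds ahead and test whether it
    would fall outside a circular geofence of the given radius."""
    future_x = pos[0] + velocity[0] * horizon_s
    future_y = pos[1] + velocity[1] * horizon_s
    dist = math.hypot(future_x - center[0], future_y - center[1])
    return dist > radius

# Vehicle 90 m from the site centre, heading outward at 2 m/s:
# in 30 s it would be 150 m out, beyond the 100 m fence.
if projected_breach((90.0, 0.0), (2.0, 0.0), (0.0, 0.0), 100.0):
    print("breach predicted")  # stand-in for an automated shutdown or alert
```

A production system would use geodetic coordinates and polygon fences rather than a flat circle, but the decision step is the same: predict, compare, act.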

Access control is a big part of that as well. And so if you get to places where people don’t have the proper certifications to get to a particular part of a building or maybe an area where there’s some sort of environmental concerns, how do we prevent people from getting access?

And it’s that kind of automated workflow, but it’s understanding the events that lead up to that event. It’s understanding through the data, we can say, here’s an example of times where it went wrong and here are the defining factors of what happened.

And then it’s understanding that data to then prevent it in the future. But then also, on the backend, how do we have AI make that decision in real time, as opposed to having a user go in and say, okay, I’m going to restrict access or I’m going to send out an alert. It’s having those things happen autonomously as that data is being collected in real time.

So that’s really where we’re looking at Apptricity. I know that, to your point, Heather, AI is kind of the buzz term right now. And I think just understanding what it can do, understanding how it can help businesses, we’re still investigating that on our end.

Because I think AI, from the commercial side that everyone’s looking at, with things like art generation and ChatGPT, well, that doesn’t really help your business all that much when it comes to understanding your supply chain – at least from what we’ve seen.

So really taking a look at how do we implement AI to solve problems in the enterprise is really what we’re looking at next with that.

Heather Brodbeck:

Great. Thank you. All right, Jim, you want to give us some insight into what you’re seeing? You might have a broader perspective of what different organizations are looking at next.

Jim Curtis:

Yeah, I can certainly talk about this – we could talk about it for a while. But I want to build on what Matthias and Christian said, because it reflects what I’m seeing a lot now, at least in the database and data platform space.

And I’m going to broadly say operational databases, data warehouses, analytic databases. Matthias mentioned that they had gone through a series of platforms, products and open-source tools to land on something, and it sounds like Christian is leading up to having to make some of these decisions as well. And for a lot of the customers, vendors and people I speak to, there is this idea of preparing for and planning for the future.

There’s a term, future-proof – I don’t really like that term. But this idea of not only buying for now, but buying for something that you can eventually grow into – these are real issues.

And when it comes to databases and things, it’s a significant effort to sort of swap that thing out, because it often contains very critical operational data. There’s maybe customer data on there. And so these are not trivial decisions. And so the investment in a database now or these platforms now, as well as associated products with them, have lasting impacts, and so getting that right really matters. It also requires that the vendor is on board to have these visionary approaches.

So certainly we’re seeing the inflection point, I’m seeing it now, pressure on databases to be able to handle AI. A lot of databases now do in-database analytics, in-database machine learning.

These are, in some ways, versions of embedded analytics that we’re talking about here. They’re the same type of play as reduce the friction for me to allow me to do some of these things. But also, as you move forward, databases have to be able to handle different models and different things like this.

We’ll certainly start seeing things like vectors and vector searching, and all these things that sort of get into, can I use my existing infrastructure to drive some of these tools?
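The vector search Jim mentions can be reduced to a small sketch: store embeddings alongside your rows and rank them by similarity to a query vector. The vectors and document names here are invented for illustration, and real systems (pgvector, for example) push this ranking down into the database engine rather than doing it in application code.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "table" of document embeddings (illustrative values only).
DOCS = {
    "invoice":  [0.9, 0.1, 0.0],
    "shipment": [0.1, 0.9, 0.2],
    "contract": [0.8, 0.2, 0.1],
}

def nearest(query, docs=DOCS, k=2):
    """Return the k document ids most similar to the query vector."""
    return sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)[:k]

print(nearest([1.0, 0.0, 0.0]))  # ['invoice', 'contract']
```

The point of doing this inside the database, as Jim suggests, is exactly the friction reduction discussed earlier: the existing infrastructure serves the new workload.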

So certainly, I don’t necessarily have a crystal ball to say, hey, get these products. But it’s a journey not only for the vendor but also for the consumer: buy now, but buy for the future, because there’s money, resources and time riding on that.

Heather Brodbeck:

Great. All right, thanks, Jim. And Zandra, I think we’re probably going to end with you.

We didn’t get any questions other than one that’s actually specific, I think, to a Panintelligence product. And you might weave it into your end of the discussion here on what’s next, but I think it’s called pi Predict maybe.

Zandra Moore:

Yeah.

Heather Brodbeck:

All right. Take it away.

Zandra Moore:

Yeah, I mean I think technology companies today are as much going to be valued for their data assets as they will for the software that they build.

And that future-proofing point that Jim made is critical to ensuring that we understand the value that we are creating, both for our business and for the customers that interact with us.

And therefore, to lean into this agenda around machine learning, AI and predictive that Christian’s talking to, because that is the next generation on from real-time analytics and visualization.

Now Forbes, I think, published a statistic saying that 87% of all machine learning projects don’t actually make it into production. And that’s because just because something’s statistically significant doesn’t mean it’s useful.

And when these things are built in black boxes, and they’re not explainable and understandable, and can’t be interacted with by the domain experts who understand the use case and the problem, can the causal impact of those variables and that model be understood, and therefore iterated, to ensure that the model and application are useful in context?

And I think Christian gave a brilliant example around access to areas, with two causal factors being environmental conditions or certifications. Those are causal factors that the domain experts probably know, but are we clear that we have that data informing the model in a way that actually allows us to make the best predictions possible?

Then at Panintelligence, yes, pi Predict – which is very kind; I’ll just pop that in the chat. We’ve always called it a machine learning and predictive engine, but most of our customers start with the basics. Let’s be honest, table stakes is: can I understand what happened yesterday and today? People are still wrestling with that challenge – can I do that usefully, in real time, and iterate those insights for myself?

So we tend to start there, with data visualization and distributed reporting. But the next evolution is this machine learning – what we would call causal AI. And it’s the gap that we’ve missed, the step we’ve leaped over on the way to generative AI. We should have had this causal AI moment, where non-technical, non-data-science, citizen data scientist users could look at what models were doing, understand them and interact with them: explainable, understandable AI, where the causes and impacts of a model could be fully understood before it’s applied.
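The kind of transparency Zandra describes can be as simple as a model that reports each feature’s contribution alongside its score, so a domain expert can read and challenge it. The weights and feature names below are invented for illustration; this is not pi Predict’s method or a real fraud or failure model.

```python
# Hypothetical transparent scoring model: every feature's contribution
# ("reason code") is visible, so a domain expert can sanity-check it.
WEIGHTS = {"temperature_high": 2.0, "missed_service": 1.5, "age_years": 0.3}

def score_with_reasons(features):
    """Return a risk score plus each feature's weighted contribution."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, reasons = score_with_reasons(
    {"temperature_high": 1, "missed_service": 2, "age_years": 5})
for name, value in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.1f}")  # largest drivers first
```

A linear model like this trades some accuracy for exactly the property Zandra is after: anyone can see why the score is what it is, and argue with the weights before the model goes live.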

And it’s not just important to make those things useful and therefore adopted in the live environment operationally, but it’s also really important for reducing bias and ensuring that the ethical application of these models is understood, and also, in some cases probably needs to be regulated.

So for us, we’ve worked really hard to build something that allows the Christians of this world – who maybe have never written a line of code, who aren’t coders – to be able to build a model, understand a model, put a model into practice, and confidently talk to their customers about what that model is doing and get their input into it.

Just as you build a chart and say, hey, would you like this different? We should be able to do that with models. What is this model doing? Do we believe that this model is working on data that we believe is the right kind of data to feed that model, and are we confident in it?

So I think the future is yes, we’ve still got a lot of people just getting the basics, what’s happening today, what happened yesterday? And there’s still a lot to do in that area and we should be doing that with very large data sets, live in real time. And that should just be what we’re all expecting of our experiences around data.

But this route to AI, for me, is explainable AI, understandable AI, and making that visible. That is really the pull we’re now seeing from the market, ahead of ChatGPT-style, NLP-type experiences. It is: can I predict an outcome and be confident that it’s telling me something useful?

Heather Brodbeck:

Great. All right. Thank you. Well, no more questions from the audience, so I just wanted to thank each of you for the time today. This was super informative. I hope the audience agrees.

And also, just for anyone who joined, if you wanted to reach out to anyone on the call here, I’m sure they’d be happy to have a quick chat with you if you did have questions later. So I think we’ll end there. So thank you, everyone. All right. Bye.

Zandra Moore:

Thank you very much, everyone.

Jim Curtis:

Yeah, thanks guys.

Matthias Baumhof:

Thank you.

Zandra Moore:

Cheers. Take care.

Matthias Baumhof:

See you later.

Zandra Moore:

Thank you.
