Generating value from the data ‘trading floor’


Data is the modern currency for businesses, yet locking it away rarely yields the best results.

Leveraging data to build a corporate ‘memory’ that helps predict and understand the wider marketplace, consumers, and the internal organization undoubtedly drives higher returns.

Data-driven strategies have grown more complex with the emergence of native hyperscaler solutions, which have opened up an explosion of new, application-agnostic, data-centric technologies. If data is a currency, AWS is the new trading floor.

One of the biggest differences is that large commercial applications (such as SAP) tend to couple cost to data size: the more data there is, the more compute is needed to manage and report on it. The growing range of big data technologies breaks this expensive symbiotic relationship, allowing organizations to store petabytes of data with no compute running when it isn’t needed. This makes data one of the cheapest currencies to hold in the cloud ‘bank’.
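
To make that decoupling concrete, here is a minimal sketch using Python and boto3 against Amazon S3; the bucket name, prefix, and tiering schedule are illustrative assumptions, not a prescription. The data simply sits in object storage, lifecycle rules push colder objects onto cheaper storage classes, and no compute is billed until a query engine is actually pointed at it.

```python
import boto3

# Hypothetical bucket holding raw enterprise data; storage accrues cost,
# but no compute runs until something actually queries it.
s3 = boto3.client("s3")

# Tier colder objects down to cheaper storage classes over time, so
# petabytes can sit in the cloud 'bank' for a fraction of the cost of
# keeping an application platform sized to hold them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-enterprise-data-lake",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "raw/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```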

To achieve this, enterprise organizations must accept that data is an enterprise asset like any other, one that requires a long-term strategy. It sounds simple, but it is not always the case, even among the largest organizations out there.

When it comes to developing a data strategy, generally speaking, there are two schools of thought:

1. The temptation to do nothing: to continue focusing efforts on the bits and the boxes and the bytes, without really thinking about a true strategy for the organization. This stops the IT team from considering where enterprise data lives, grows, and dies, and directs attention instead to scouring the market for the cheapest platform to save a few bucks. These organizations aren’t really lifting their heads to look at the horizon.

2. Then there is almost the polar opposite: adopting the magpie’s approach of chasing the latest shiny thing. Beautification without a business case is only skin deep.

Beautiful, simple UIs can in many cases drive significant business improvements, and a cheaper VM can yield some cost benefits, but without answering the business ‘why’ question, these approaches are typically short-term in nature. The business case must be established first. That begins by asking what the pain point is and how technology can help, whether that is reducing costs, modernizing a business process, or mining data for discovery purposes.

Data lakes are one very pertinent example of this: the aim is to pull data together, but organizations often don’t then think about what to do with it. At its heart, a data lake is simply a set of modern technologies, from storage layers to query engines to AI, and none of that answers the ‘why’ question. Many organizations take the leap before really thinking about why, or what they are trying to fix.

The attraction is clear. Take SAP as an example: it’s a world-class platform, but it can also be something of a silo, traditionally keeping its data to itself and its own reporting tools.

The minute you want to go broader and look at data from across the enterprise, whether SAP or non-SAP, or from hundreds of other sources (public, private, or subscribed), it starts to get interesting. By building a data lake you bring all those sources together into a leverageable asset, which helps derive better insights, improve cross-application business processes, modernize enterprise reporting, and teach you new things about your company. Rather than going into SAP, collating data from different application sources, wrangling it into Excel, and delivering it in a clunky dashboard for the C-level, that process, which once took many painful weeks, can now be automated by pulling data into a neutral data lake, enabling you to model and report against your business pain point, or simply to discover new business insights (the ‘why’…).
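
As a rough illustration of that automation, the sketch below uses Python and boto3 to point an AWS Glue crawler at SAP and non-SAP extracts already landed in S3, so they become queryable side by side. Every name here (bucket, prefixes, IAM role, database) is an assumption for the example rather than a reference design.

```python
import boto3

glue = boto3.client("glue")

# Assumed layout: SAP extracts and non-SAP sources already landed in S3,
# e.g. s3://example-enterprise-data-lake/raw/sap/ and .../raw/crm/.
crawler_name = "lake-raw-crawler"  # illustrative name

glue.create_crawler(
    Name=crawler_name,
    Role="arn:aws:iam::123456789012:role/example-glue-role",  # assumed IAM role
    DatabaseName="enterprise_lake",                            # assumed Glue database
    Targets={"S3Targets": [
        {"Path": "s3://example-enterprise-data-lake/raw/sap/"},
        {"Path": "s3://example-enterprise-data-lake/raw/crm/"},
    ]},
)

# Each run infers schemas and registers tables in the Glue Data Catalog,
# so analysts can query SAP and non-SAP data together instead of
# wrangling extracts into Excel by hand.
glue.start_crawler(Name=crawler_name)
```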

Accessible data

From a technical perspective, when it comes to managing data, gravity can be pretty heavy. By adopting a data strategy built around a data lake and moving into the cloud, for example AWS, you can reduce the financial gravity of that data (platform, licensing, maintenance, etc.). There, it is cheaper to keep than it would be on good, old-fashioned offline media. We’ve really passed that evolutionary tipping point, meaning data should now always be accessible.

Take a retailer, for example: being able to query which Christmas promotions succeeded under similar conditions over the last two decades, looking at historical POS data, consumer loyalty behavior, price-point analysis, and so on. This is powerful stuff.
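
A hedged sketch of what such a query might look like, assuming the POS and promotion history has already been landed and cataloged in the lake; the table, column, and bucket names below are purely illustrative.

```python
import boto3

athena = boto3.client("athena")

# Assumed tables: pos_transactions and promotions in the enterprise_lake
# database, covering two decades of point-of-sale history.
query = """
SELECT p.promotion_name,
       year(t.transaction_date) AS yr,
       sum(t.net_sales)         AS promo_sales
FROM   enterprise_lake.pos_transactions t
JOIN   enterprise_lake.promotions p
  ON   t.promotion_id = p.promotion_id
WHERE  month(t.transaction_date) = 12          -- Christmas trading
GROUP  BY p.promotion_name, year(t.transaction_date)
ORDER  BY yr, promo_sales DESC
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "enterprise_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```

Because the query engine is serverless, the retailer pays only for the data scanned by this query, not for a warehouse sized to hold twenty years of history.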

It’s quite common for large enterprises to centralize data sources and simply want to go in and discover ‘stuff’ quickly and cheaply. They might find something, or they might not, but in an agile AWS world, failing fast (which is also a good result!) or finding the insights to build a business case can be achieved in just a few weeks. This is data discovery at its best: quick and cost-effective, so you can fail fast and move on to the next area without any noticeable business or cost impact.

Solving problems

Another way in which the enterprise benefits is in fixing broken business processes, using native cloud technologies to solve a particular problem. This means no longer having to buy yet more enterprise software, with its licenses, maintenance, underlying disk storage, and additional virtual assets to run and operate. Doing this through native cloud tooling empowers organizations to refactor business processes using data, which is extremely valuable to the organization at large.

Of course, one of the biggest benefits of cloud technology is reducing the financial cost of running systems such as SAP. In the SAP world, organic data growing at an exponential rate means buying another box and a slice of licensing to run it. Organizations can reduce costs simply by aging that data out of SAP and putting it into native Amazon technology, shrinking the system they have to buy, license, and run. Furthermore, it massively reduces the cost of maintaining and querying that data compared with the legacy world.
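
As a simple sketch of what ‘aging data out’ can look like in practice, assuming the cold SAP records have already been extracted into a DataFrame (the document values, bucket, and key below are made up for illustration): write them out as columnar Parquet and land them in S3, where they stay cheap to hold and can still be queried on demand.

```python
import boto3
import pandas as pd

# Assume the aged SAP records (e.g. closed financial documents older than
# the online retention period) have already been extracted into a DataFrame.
aged_documents = pd.DataFrame({
    "document_id": ["4900001234", "4900001235"],
    "fiscal_year": [2012, 2012],
    "amount": [1250.00, 310.50],
})

# Write Parquet locally (requires pyarrow or fastparquet), then land it in
# the lake; the SAP system can then be shrunk while the history remains
# accessible to lake query engines.
aged_documents.to_parquet("aged_fi_documents_2012.parquet", index=False)

boto3.client("s3").upload_file(
    "aged_fi_documents_2012.parquet",
    "example-enterprise-data-lake",  # assumed bucket
    "archive/sap/fi_documents/year=2012/part-000.parquet",
)
```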

We’re certainly seeing the emergence of a new type of business role, that of the Data Architect: somebody who understands business issues, pains, and processes, and who can apply data models and an algorithmic mindset to turn that understanding into results that directly impact top- and/or bottom-line improvements.

Eyes to the horizon

Most organizations would consider themselves to have a data strategy, but it’s often intrinsically linked to application data management rather than to enterprise-wide and open marketplace data relating to their industry vertical. Much of this depends on the organization’s maturity in data architecture and the skills it has in-house.

On the one hand, there are those who don’t think beyond the application when setting a strategy for their data (a newer silo, but still a silo!). Then there are others who look at business improvement through bringing data into a data lake and what can be achieved with it. Your typical operational manager of a large enterprise software solution is not going to think across those axis lines. It then becomes a question of finding someone to guide you through these scenarios and help the organization develop insights it has never had before. They say fortune favors the bold, and in an agile cloud model, being bold has never been so cheap and low risk.
