
Why an intelligent data cloud is key to digital transformation

Modern data strategies stuck in ancient data systems

Google Cloud customer The Home Depot (THD) has made a name for itself by going big: big stores, big product selection, and above all, big customer satisfaction. But over time, THD realized it had a problem, and of course, it was big: big data. While its success has largely been data-driven over the years, THD was looking for a way to modernize its approach. It needed to better integrate the complexities of related businesses such as tool rental and home services. It also wanted to better empower its data analysis teams and store associates with mobile computing devices, as well as leverage ecommerce and modern tools like artificial intelligence (AI) to meet customer needs.

Its existing on-premises data warehouse was proving too limited to handle contemporary pressures, overtaxed by the constant demand for analytics and struggling to manage increasingly complex use cases from its data analysts. This not only drove massive growth of the data warehouse, but also created challenges in managing priorities, performance, and cost.

If THD wanted to add capacity to the environment, it required a major planning, architecture, and testing effort. In one case, adding on-premises capacity took six months of planning and a three-day service outage. But the results were short-lived: within a year, capacity was again scarce, hurting performance and the ability to run all the required reporting and analytics workloads. The Home Depot also needed to modernize its operational databases in order to deploy applications faster for its teams and move away from managing infrastructure.

These challenges left THD without real-time visibility into the sales, product, and shipping metrics it needed to optimize the customer experience, product SKUs, and more, and ultimately to differentiate in an industry where a seamless customer experience is everything.

Sound familiar? These challenges are by now a common story across the enterprise. Most companies, like THD, are finding that they can no longer deliver a modern data strategy on top of legacy technology.

From gap to chasm: Why companies are failing to transform data into value

So what’s holding enterprises back?

Meanwhile, the pressure to understand, respond to, and sometimes even predict risks and opportunities buried in astronomical amounts of data is only growing. Every executive recognizes the massive potential of their data to drive competitive advantage and accelerate digital transformation. Done right, data intelligence can help shape delightful, personalized customer experiences, streamline business operations, improve demand forecasting, and drive innovative and impactful outcomes. But it requires the ability to put all that data to work and derive insights from it. Otherwise, you have all the ingredients but you’re cooking without the recipe: it might deliver results, but it will always fall short of the promised meal.

Unfortunately, achieving real-time data insights remains more of a pipe dream despite the exponential leaps forward in technology over the last few decades. Instead of being rocketed to new innovative heights, many companies find themselves staring down a widening gap between the value they have managed to deliver and the potential value they know can be achieved.

Here’s why it’s so hard for organizations to convert their data into value:

Data silos block businesses from getting insights.

Data silos are pervasive across organizations in every industry. These independent datasets are a consequence of logical, physical, technical, or cultural barriers, which typically lead to fragmented systems that are unable to communicate and share information in real time. For instance, human resources, finance, and other departments may collect overlapping data but use different systems and tools to store and manage it, leading to inconsistencies. Data silos prevent enterprises from achieving a consolidated view of their data, which makes it impossible to uncover hidden opportunities. Critically, inconsistencies also breed mistrust, which not only hurts collaboration today but discourages people from working with the data again.
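To make the inconsistency problem concrete, here is a minimal sketch (the departments, fields, and records are hypothetical, and pandas is assumed to be available) showing how two siloed copies of the same customer data can quietly drift apart and give conflicting answers to the same question:

```python
import pandas as pd

# Hypothetical extracts of the "same" customers, maintained separately
# by two departments in different systems.
finance = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "lifetime_value": [1200.0, 0.0, 560.0],
})
marketing = pd.DataFrame({
    "customer_id": [101, 102, 104],                            # 103 missing, 104 extra
    "email": ["a@example.com", "b@ex.com", "d@example.com"],   # one email differs
    "lifetime_value": [1150.0, 40.0, 90.0],                    # values drift between systems
})

# Join on the shared key and surface every record where the silos disagree.
merged = finance.merge(marketing, on="customer_id", how="outer",
                       suffixes=("_finance", "_marketing"), indicator=True)
conflicts = merged[
    (merged["_merge"] != "both")
    | (merged["email_finance"] != merged["email_marketing"])
    | (merged["lifetime_value_finance"] != merged["lifetime_value_marketing"])
]
print(conflicts)  # each row is a customer the two departments would report differently
```

Surfacing these conflicts is exactly the kind of work a consolidated view removes, because every team queries the same record instead of its own copy.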

On-premises infrastructure can’t scale fast enough to meet data growth.

Scaling on-premises infrastructure to keep up with growing customer demand and data growth has become untenable. Rigid legacy infrastructure struggles to scale fast enough to keep pace with fluctuations in data requirements. The days of overnight data operations are giving way to streaming and batch data processing running side by side, and legacy infrastructure simply can’t keep up. Hitting capacity limits ends up slowing users down and tying up database administrators, too.

IT dependency and operational overhead for managing infrastructure are costly.

Like other on-premises systems, databases follow the old-school model of paying for hardware and licensing costs, as well as the associated ongoing systems engineering. Updating and extending storage usually requires modifications to both hardware and software, forcing teams to waste time that would be better spent elsewhere. Furthermore, legacy BI tools rely on someone manually creating, running, and updating reports that are frequently outdated by the time they reach your inbox.

As a result, many companies feel they are always running to keep up with their data. Instead of planning ahead, businesses are left reacting to whatever just happened. This becomes particularly troubling when unforeseen factors or disruptions occur. If COVID-19 has taught the world anything, it’s that nothing is certain and the best way to prepare is to plan for change. 

AI (and managing data) is complicated. 

AI-powered predictive analytics can be intimidating and time-consuming. But the hardest part of AI and machine learning (ML) is data management. For instance, ML models are only as good as the data used to train them. This is the concept of “garbage in, garbage out” in action: AI doesn’t remove inaccuracies or inconsistencies, so poor data quality will in turn yield poor insights. In addition, machine learning requires collecting and labeling massive amounts of data. In some cases, data is a free byproduct of a system or product, but in many others, the data needed to train models is expensive and difficult to collect. Many organizations lack the skills to manage datasets and aren’t sure where to start investing when collecting data.
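The “garbage in, garbage out” effect is easy to see in practice. The sketch below (synthetic data and hypothetical noise rates, with scikit-learn assumed to be available) flips a growing fraction of training labels to simulate poor data quality and shows test accuracy falling even though the model and the evaluation data never change:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for business data: 2,000 rows, 20 features, binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_rate: float) -> float:
    """Flip a fraction of training labels to simulate bad data, then score on clean test data."""
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return accuracy_score(y_test, model.predict(X_test))

for rate in (0.0, 0.2, 0.4):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.2f}")
```

No amount of model tuning compensates for training data the business can’t trust, which is why the data management work has to come first.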

Transforming data to intelligence to value

By now, it’s clear that data alone is no longer the primary competitive differentiator; it’s what organizations are able to do with that data that matters. But according to Gartner, treating information and data as an asset is still in its early phases, which makes it a potential competitive advantage for organizations that invest in data transformation. Companies will need faster, forward-looking decisions to compete, and data and analytics capabilities will need to become core competencies for delivering enterprise value.

There are many factors to consider when it comes to achieving data transformation. Advances in data technology mean there are now options that provide broader access and easier manageability than legacy solutions. Still, managing data at scale is hard, even with today’s better process automation and tools for making sense of gathered data. And sometimes more access translates to more risk, bringing a new set of challenges around the security, quality, and interpretability of data.