For enterprise CIOs, cloud sovereignty is no longer optional—where your data is stored and processed now defines your ability to comply, compete, and control your digital destiny. While the public cloud offers unprecedented convenience and flexibility, it can come at the cost of increased regulatory exposure in certain sovereign domains. As organizations awaken to these hidden costs, many will embark upon a reorientation of data, or what some are calling geo-repatriation.
The rise of cloud computing has been a boon for companies, but there are downsides to running exclusively in public clouds. As organizations increasingly navigate legal exposure to emerging data sovereignty requirements, a distributed hybrid infrastructure can help them move with these tidal shifts, so they can continue to reap the benefits of cloud computing while minimizing risk.
Below, we’ll examine the top factors helping to drive the geo-repatriation of data, and how a distributed hybrid infrastructure is giving organizations the best of both worlds.
Increasing Cloud Costs
According to Gartner, total cloud spending is projected to reach a staggering $723.4 billion this year. Most companies have participated in this massive runup of cloud computing, whether they’re running on AWS, Azure, GCP, or one of the others. The speed and simplicity of procuring a sophisticated database or spinning up a GPU cluster has been a game-changer for how businesses consume technology.
But organizations are pushing back against the soaring costs of cloud computing. There is also the lock-in that can come from going all-in on a single provider's platform-as-a-service (PaaS) offerings, not to mention the moats that prevent companies from moving data out of software-as-a-service (SaaS) applications.
This is driving many organizations to invest in FinOps techniques to keep costs down. Some are taking drastic action by fully repatriating their data back on-premises, while others seek hybrid-cloud or multicloud solutions that allow them to mitigate cost increases and data lock-in. In any case, investing in a distributed hybrid infrastructure can pay big dividends down the road.
Open Standards and Open Source
Emerging open standards are also helping data and applications become more distributed. For instance, Kubernetes eliminates much of the burden of moving containerized applications, while Amazon S3 provides a global standard for object storage. These technological standards, combined with server-based infrastructure, allow enterprises to optimally store their data and run their applications wherever they can install a server: from the edge and on-premises data centers, all the way to the public cloud or multiple public clouds.
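The portability these standards enable can be sketched in a few lines. The snippet below is illustrative only: the endpoint URLs and bucket names are invented, and the tiny client class stands in for a real S3-compatible client (such as boto3 pointed at a custom `endpoint_url`). The point is that application code stays the same whether the objects live in a public cloud, an on-premises data center, or an edge site.

```python
from dataclasses import dataclass

# Hypothetical endpoints for illustration; any S3-compatible store
# (AWS S3, MinIO, Ceph RGW, etc.) exposes the same object API.
ENDPOINTS = {
    "aws": "https://s3.eu-central-1.amazonaws.com",
    "on_prem": "https://objects.datacenter.example.com",
    "edge": "https://edge-site-7.example.com:9000",
}

@dataclass
class ObjectStoreClient:
    """Minimal stand-in for an S3-compatible client; the application
    logic that uses it never changes across deployment targets."""
    endpoint: str

    def object_url(self, bucket: str, key: str) -> str:
        # Build the address of an object under this endpoint.
        return f"{self.endpoint}/{bucket}/{key}"

def make_client(location: str) -> ObjectStoreClient:
    # Swapping where data lives is a one-line configuration change.
    return ObjectStoreClient(ENDPOINTS[location])

cloud = make_client("aws")
local = make_client("on_prem")
# Same bucket/key logic regardless of where the data physically resides.
print(cloud.object_url("training-data", "2025/batch-01.parquet"))
print(local.object_url("training-data", "2025/batch-01.parquet"))
```

Because the storage interface is standardized, only the endpoint configuration changes when data moves between locations.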
AI and Regulatory Risk
Multinational companies are facing new pressures when it comes to data sovereignty, as governments enact regulations that require AI to be trained on data stored in specific geographic regions. Having a distributed hybrid infrastructure can enable this, while minimizing disruption to the rest of a company’s IT operations.
These novel regulations are impacting organizations’ decisions when it comes to running on-premises or in the cloud. The EU’s AI Act, in particular, is driving much of the geo-repatriation phenomenon. While the new law doesn’t explicitly force European companies to store training data and AI models locally, it certainly emphasizes the need for better data governance and heightened transparency of data sourcing and lineage.
While the data sovereignty issue may be strongest in Europe, other countries have enacted their own laws, often modeled on GDPR, that dictate how organizations store private data about their citizens. The push toward data sovereignty is global.
FinOps and Sustainability Initiatives
Just a decade ago, cost savings was one of the biggest drivers of public cloud adoption. But as customers’ bills have increased, many businesses—especially small and midsized companies and those with predictable IT demands—are realizing they don’t need the extreme flexibility public cloud computing provides. In response, they’re starting to bring some applications back on-premises to help rein in costs.
The enormous cost of training AI models is another significant reason companies are embracing FinOps initiatives. For example, a company may want to move its data and AI training workloads to a public cloud region that offers the most cost-effective data storage and processing.
Sustainability also factors into decisions about where and when to store and process data. For instance, a business may want to move its data storage and processing to the cloud region with the smallest electricity and water footprint. In either case, a distributed hybrid infrastructure makes managing this movement far easier.
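One way to think about this placement decision is as a weighted score across cost and carbon intensity. The sketch below uses made-up region names, prices, and emissions figures purely for illustration; a real FinOps workflow would pull live pricing and grid-intensity data.

```python
# Illustrative only: region names, prices, and carbon intensities
# are invented figures, not real provider data.
REGIONS = {
    "region-a": {"usd_per_tb_month": 21.0, "kg_co2_per_kwh": 0.45},
    "region-b": {"usd_per_tb_month": 23.0, "kg_co2_per_kwh": 0.12},
    "region-c": {"usd_per_tb_month": 19.0, "kg_co2_per_kwh": 0.60},
}

def pick_region(regions, cost_weight=0.5, carbon_weight=0.5):
    """Score each region on normalized cost and carbon intensity,
    then return the region with the lowest weighted score."""
    max_cost = max(r["usd_per_tb_month"] for r in regions.values())
    max_co2 = max(r["kg_co2_per_kwh"] for r in regions.values())

    def score(r):
        return (cost_weight * r["usd_per_tb_month"] / max_cost
                + carbon_weight * r["kg_co2_per_kwh"] / max_co2)

    return min(regions, key=lambda name: score(regions[name]))

# A cost-first policy and a sustainability-first policy can land the
# same data in different regions.
print(pick_region(REGIONS, cost_weight=1.0, carbon_weight=0.0))  # cheapest region
print(pick_region(REGIONS, cost_weight=0.0, carbon_weight=1.0))  # greenest region
```

With data and workloads already portable across locations, acting on the output of a policy like this becomes a routine operation rather than a migration project.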
Distributed Hybrid Infrastructure Helps
Distributed hybrid infrastructure is crucial for providing organizations the flexibility to store and process data wherever they see fit, whether that’s on-premises, at the edge, in the public cloud, or any combination thereof.
Distributed hybrid infrastructure can’t be purchased from a public cloud provider, as they’re not in the business of enabling their solutions to run on competitors’ infrastructure. While businesses could theoretically cobble together their own distributed hybrid infrastructure layer using open standards and open-source software, there are problems with this approach—particularly around data security and access control.
This might not be an issue if enterprises planned to centralize where their data resides, but the reality is that enterprise data is becoming increasingly scattered across the world. Some data resides on-premises, some on edge devices, and still more is distributed across public cloud data centers in foreign countries. Organizations need some level of standardized data governance and data management.
Having a distributed hybrid infrastructure gives organizations centralized control over their decentralized data storage and processing infrastructure. It gives them the power to set data governance policies in a central manner, and then push them down in a federated manner to provide the enforcement teeth where bits meet silicon. At a human level, distributed hybrid infrastructure also provides role-based access control (RBAC), which gives administrators fine-grained control over exactly how data can be accessed for specific roles.
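The centralized-policy, federated-enforcement pattern can be sketched as follows. This is a conceptual sketch, not any vendor's API: the role names, permission strings, and site names are all invented for illustration.

```python
# Centrally defined RBAC policy; role and permission names are
# hypothetical examples.
CENTRAL_POLICY = {
    "data-scientist": {"read:training-data"},
    "auditor": {"read:training-data", "read:audit-logs"},
    "pipeline": {"read:training-data", "write:training-data"},
}

class EnforcementPoint:
    """Runs at each location (on-premises, edge, cloud) and enforces
    the policy pushed down from the central control plane."""

    def __init__(self, site: str):
        self.site = site
        self.policy = {}

    def sync(self, policy):
        # Federated push: every site receives the same central policy.
        self.policy = {role: set(perms) for role, perms in policy.items()}

    def is_allowed(self, role: str, action: str) -> bool:
        # Local enforcement: the decision is made where the data lives.
        return action in self.policy.get(role, set())

sites = [EnforcementPoint("on-prem"), EnforcementPoint("eu-cloud")]
for site in sites:
    site.sync(CENTRAL_POLICY)

print(sites[0].is_allowed("data-scientist", "read:training-data"))   # True
print(sites[1].is_allowed("data-scientist", "write:training-data"))  # False
```

The design choice worth noting is that policy is authored once but evaluated locally at every site, so access decisions stay consistent even when the data itself is scattered across jurisdictions.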
The growth of public clouds and edge computing has truly democratized access to cutting-edge technology, and the impact continues to reverberate throughout the world. However, these changes have also brought with them a certain degree of technological chaos. The data center is no longer the center of data, for better or for worse.
As enterprise data becomes increasingly distributed across geographies and platforms, a hybrid infrastructure strategy should be top of mind for CIOs looking to maintain control and ensure sovereignty. This is especially important for new AI workloads where centralized governance and visibility will be key to future-proofing the enterprise in a fragmented digital landscape.