Category: Cloud & Infrastructure

Demystifying Industry Cloud

Microsoft recently rolled out its first industry cloud, Microsoft Cloud for Healthcare, combining capabilities across Dynamics 365, Microsoft 365, Power Platform, and Azure to help improve care management and health data insights. Not so long ago, SAP's CEO, Christian Klein, counted the company's newly launched industry cloud among its growth drivers, and rightly so. Since its launch, SAP's industry cloud has attracted a long line of partners eager to build vertical cloud offerings on top of it: Deloitte in June 2020, then Accenture, and, most recently, Wipro.

Having analyzed this specific market trend throughout 2020, we at Everest Group have realized that, with all kinds of cloud providers jumping onto the industry cloud bandwagon, confusion abounds about what an industry cloud truly is.

So, what’s the buzz about? What is an industry cloud?

Simply put, an industry-specific, or vertical, cloud is a cloud solution that has been optimized and customized to fit the typical nuances and specific needs of a particular industry vertical's customers. It is designed to tackle industry-specific constraints such as data protection and retention regulations and the operation of mission-critical applications.

We believe that a true industry cloud solution is characterized by five layers: the infrastructure and platform layers, followed by an application layer, which is further supplemented by differentiation and customization layers.

The infrastructure layer, dominated by industry-agnostic IaaS players such as AWS, provides the hardware, network, scalability, and compute resources. The platform layer, such as Azure's PaaS offering, is built over this infrastructure layer and provides the development environment for building and debugging applications.

The application layer comprises a horizontal cloud application, such as Salesforce CRM, and, in several cases, hosts development platforms, such as the Salesforce App Cloud, that serve as marketplaces for building additional functionality.

The differentiation layer adds vertical nuances, such as built-in industry regulatory compliance, to a horizontal application. It is here that we see the industry cloud taking shape, but the offerings are still standard and not customized to the needs of specific enterprises.

The customization layer brings in service providers or technology vendors with their vertical expertise and decades of experience working with enterprises. They partner with providers of differentiated cloud offerings and build tools and accelerators to further customize those offerings to enterprise needs, adding capabilities such as AI and security, personalized analytics dashboards, and integration services to create a truly industry-specific cloud offering.

We have illustrated this architecture in the exhibit below, taking the example of Veeva Systems, a popular provider of industry cloud solutions for life sciences. Veeva started as a CRM application built on the Salesforce platform, designed specifically for the pharmaceutical and biotechnology industries. Salesforce provided the infrastructure, platform, and application layers, while Veeva added a differentiation layer with a data model, application logic, and user interface tailored to support the pharmaceutical and biotechnology industries. It leveraged the standard Salesforce reporting, configuration, and maintenance capabilities.

Exhibit: Understanding the industry cloud architecture through Veeva industry cloud layers

Over time, Veeva has cultivated an ecosystem of partnerships, including service providers such as Accenture (Veeva CRM partner) and Cognizant (Veeva Vault partner). These partners leverage the Veeva development platform to build additional applications customized to enterprise needs, thereby adding the final customization layer to Veeva’s solutions suite.

Industry cloud is gaining significant traction among industries

Industries such as healthcare and banking – which require rapid and auditable deployment of new features or functionalities to comply with regulatory changes – are rapidly adopting industry cloud. Healthcare continues to lead the charge from a vertical standpoint, but many industries are experiencing an uptick in adoption, including manufacturing, financial services, and retail.

The key reasons for this growth are:

  • Lowered barriers to cloud adoption, along with a ready-to-use environment with tools and services tailored to a specific vertical’s operational requirements
  • Accelerated innovation, lower costs, and reduced risks
  • Efficient handling of data sources and workflows, and compliance with the industry’s unique standards
  • Support for industry-standard APIs, helping companies connect more easily and securely and accelerating the digital transformation (DX) economy (see the sketch after this list)
  • Access to industry insights or benchmarks through data aggregation from multiple clients within the same industry
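
To make the API point concrete, consider healthcare, where FHIR is the industry-standard data exchange API that a healthcare industry cloud typically exposes. The sketch below shows a standard FHIR patient search; the server URL is a placeholder, and a real deployment would add OAuth2 authentication.

```python
# Minimal sketch: querying a FHIR R4 server, the industry-standard API for
# healthcare data exchange. The base URL is hypothetical; real deployments
# require authentication and a licensed endpoint.
import requests

FHIR_BASE = "https://fhir.example-health-cloud.com/R4"  # placeholder server

def find_patients(family_name: str) -> list[dict]:
    """Search for patients by family name using the standard FHIR search API."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"family": family_name},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR search results come back as a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for patient in find_patients("Smith"):
        print(patient["id"], patient.get("name"))
```

Because every FHIR-compliant vendor honors the same resource types and search parameters, code like this works unchanged across conforming platforms – which is exactly the integration benefit described above.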

What can organizations expect in the near future?

With the one-cloud-fits-all approach reaching maturity, the next decade will be marked by the depth of vertical expertise and customization capabilities that can complement existing applications and address customer pain points. Different kinds of vendors are developing industry cloud solutions, ranging from hyperscalers such as Azure and GCP to vertical-specific players such as Veeva. We will cover the industry cloud market in further detail in parts 2 and 3 of this blog series, in which we will answer the following questions:

  • What are the different kinds of industry cloud solution providers and their go-to-market strategies?
  • What should enterprises do, and how should they source industry cloud solutions that best suit their needs?

The battle for industry cloud is only going to get fiercer in the near future. Please follow this space for more blogs on the emerging war and warriors in the industry cloud market. Meanwhile, please feel free to reach out to [email protected] or [email protected] to share any questions and your experiences.

 

Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight?

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently do. I've covered the MAGs' (as they're collectively known) ugly fight in other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized that they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from the public cloud, which opened enterprise leaders' eyes to the importance of fit-for-purpose clouds in addition to generic ones. So, before they move more complex workloads to the cloud, they want to be absolutely sure they get their architectural choices and cloud partners right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it's hard to argue that there's public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strongarm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – on the condition that the enterprise goes all-in and architects its workloads for their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn't much different from any other enterprise technology war in which the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we're seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn't the end of it. Given enterprises' increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move into the public cloud space was obvious for IBM and Oracle, as they're already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they're here now. They're focused on "industrial grade" workloads that have strategic value for enterprises and on making open source core to their offerings to propagate multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca-Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction reaches a fever pitch with serverless, event-driven applications and functions-as-a-service, it becomes relatively easy to meet the lofty ideal of Service-Oriented Architecture – a fully abstracted underlying infrastructure – which is also what a true multi-cloud environment embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced by API calls that applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.
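
To see why this matters in practice, here is a minimal, illustrative sketch (not any vendor's actual SDK) of the abstraction idea: keep the business logic independent of the FaaS provider, so that moving between platforms only means swapping a thin adapter.

```python
# Illustrative sketch of provider-agnostic function design. The handlers are
# simplified; real event payloads vary by trigger and platform.
import json

def process_order(order_id: str, amount: float) -> dict:
    """Pure business logic -- knows nothing about the hosting cloud."""
    return {"order_id": order_id, "status": "accepted", "charged": amount}

# --- thin, provider-specific adapters ---

def aws_lambda_handler(event, context):
    # AWS Lambda behind an API Gateway proxy delivers the payload as a JSON string
    body = json.loads(event.get("body") or "{}")
    return process_order(body["order_id"], body["amount"])

def gcp_http_handler(request):
    # Google Cloud Functions (HTTP) passes a Flask-style request object
    body = request.get_json()
    return process_order(body["order_id"], body["amount"])
```

The high-value managed services mentioned above cut against this pattern: once `process_order` starts calling a proprietary database or ML API directly, the adapter is no longer the only thing that needs rewriting.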

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with the MAGs and AIO? Please share with me at [email protected].

Reflections on Cloudera Now and the Battle for Data Platform Supremacy

The enterprise data market is going through a pretty significant category revision, with native technology vendors – like Cloudera, Databricks, and Snowflake – evolving, cloud hyperscalers increasingly driving enterprises' digital transformation mandates, and incumbent vendors trying to remain relevant (e.g., the 2019 HPE-MapR deal). This revision has led to leadership changes, acquisitions, and interesting ecosystem partnerships. Is data warehousing the new enterprise data cloud category that will eventually be part of the cloud-first narrative?

Last month I attended Cloudera Now, Cloudera’s client and analyst event. Read on for my key takeaways from the event and let me know what you think.

  • Diversity and data literacy come to the forefront: Props to Cloudera for addressing key issues up front. In the first session, CEO Rob Bearden and activist and historian Dr. Mary Frances Berry had an honest dialogue about diversity and inclusion in tech. More often than not, tech vendors pay lip service to these issues of the zeitgeist, so it was a refreshing change to see the event kicking off with this important conversation. During the analyst breakout, Rob also took questions on data literacy and how crucial it is going to be as Cloudera aims to become more meaningful to enterprise business users against the backdrop of data democratization.
  • Cloudera seems to be turning around, slowly: After a tumultuous period following its merger with Hortonworks in early 2019, Cloudera has new, yet familiar, leaders in place, with Rob Bearden (previously CEO of Hortonworks) taking over the CEO reins in January 2020. The company reported its FYQ2 2021 results a few weeks before the event: revenue increased 9 percent over the previous quarter, subscription revenue was up 17 percent, and Annualized Recurring Revenue (ARR) grew 12 percent year-over-year. ARR will be key for Cloudera to showcase stickiness and client retention. While its losses narrowed in FYQ2 2021, it has more ground to cover on profitability.
  • Streaming and ML will be key bets: As the core data warehousing platform market faces more competition, it is important for Cloudera to de-risk its portfolio by expanding revenue from emerging, high-growth spend areas. It was good to see streaming and Machine Learning (ML) products growing faster than the company overall. In early October, it also announced its acquisition of Eventador, a provider of cloud-native services for enterprise-grade stream processing, to further augment and accelerate its own streaming platform, DataFlow. The aim is to bring this all together through Shared Data Experience (SDX), Cloudera's integrated offering for security and governance.
  • We are all living in the hyperscaler economy: Not surprisingly, there was a fair share of discussion around the increasing role of the cloud hyperscalers in the data ecosystem. The hyperscalers' appetite is voracious; while the likes of Cloudera will partner with these cloud vendors, competition will increase, especially on industry-specific use cases. Will one of the hyperscalers acquire a data warehousing vendor? One can only speculate.
  • Industry-specificity will drive the next wave of the platform growth story: I've been saying this for a while – clients don't buy tools, they buy solutions. Industry context is becoming increasingly important, especially in more regulated and complex industries. For example, after its recent Vlocity acquisition, Salesforce announced Salesforce Industries to expand its industry product portfolio, providing purpose-built apps with industry-specific data models and pre-built business processes. Similarly, Google Cloud has ramped up its industry solutions team by hiring a slew of senior leaders from SAP and the broader industry. For the data vendors, focusing on high-impact, industry-led use cases – on their own and with partners – will be key to unlocking value for clients and driving differentiation. Cloudera showcased some interesting use cases for healthcare and life sciences, financial services, and consumer goods. Building a long-term product roadmap here will be crucial.

By happenstance, the Cloudera event started the same day its primary competitor, cloud-based data warehousing vendor Snowflake, made its public market debut and more than doubled on day one, making it the largest-ever software IPO. Make of that what you will, but to me it is another sign of validation for the data and analytics ecosystem. Watch this space for more.

I’d enjoy hearing your thoughts on this space. Please email me at: [email protected].

Full disclosure: Cloudera sent a thoughtful package ahead of the event, which included a few fine specimens from the vineyards in La Rioja. I can confirm I wasn’t sampling them while writing this.

IBM Splits Into Two Companies

IBM announced this week that it is spinning off its legacy Managed Infrastructure business into a new public company, thus creating two independent companies. I highly endorse this move and, in fact, have advocated for it for years. IBM is a big, successful, proud organization. But it has been apparent for years that it faced significant challenges in trying to manage two very different businesses and operate within two very different operating models.

Read more in my blog on Forbes

Cloud Wars, Chapter 4: Own the "Low-Code," Own the Client

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. The platform takes care of the application lifecycle as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying code hidden. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – almost mirroring the good old What You See Is What You Get (WYSIWYG) editors of the HTML age.

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that drive better stickiness to their cloud platform. And as we all know, nothing drives stickiness better than an application built over a platform. This means cloud vendors must move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, significant financial investments required, and business demand changing rapidly, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of their clients' data, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness, because low-code platforms are notorious for non-interoperability – it's very difficult to migrate from one to another. But this isn't just about stickiness. It's one more initiative in the journey toward owning enterprises' technology spend by building a cohesive suite of services. GCP has realized that its API-centric assets offer a goldmine for stitching applications together. For example, Apigee helps expose APIs from different sources and platforms; GCP's AppSheet, which it acquired last year, can then use the data exposed by those APIs to build business applications. Microsoft, on the other hand, is focusing on its Power Platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there's no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.
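
To make the "stitching" idea concrete, here is a minimal, hypothetical sketch of the middle layer being described: a small REST API that a gateway such as Apigee could expose and that a low-code builder such as AppSheet could then consume as a data source. The endpoint and data are illustrative only.

```python
# Minimal sketch of a "middle layer" API. An API gateway would sit in front of
# this service, and a low-code platform would bind to the exposed endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

INVENTORY = [  # stand-in for an enterprise system of record
    {"sku": "A-100", "stock": 42},
    {"sku": "B-200", "stock": 7},
]

@app.route("/v1/inventory", methods=["GET"])
def list_inventory():
    """Expose inventory as JSON so low-code apps can bind to it."""
    return jsonify(INVENTORY)

if __name__ == "__main__":
    app.run(port=8080)
```

The low-code tool never sees this code; it only sees a clean, documented API – which is precisely what makes the combination so sticky.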

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on the cloud hyperscalers, and now – with a competing offering in the mix – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up given the mega cloud vendors' increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys, and this additional complexity further exacerbates their problems. Many enterprises make their infrastructure teams lead the cloud journey, but these teams don't necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling to track their cloud spend and make sense of it, and cloud vendors introducing low-code platform offerings into their cloud service mix is going to make that task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Cloud Native is Not Enough; Enterprises Need to Think SDN Native

Over the past few years, cloud-native applications have gained significant traction within organizations. These applications are built to work best in a cloud environment using microservices architecture principles. Everest Group research suggests that 59 percent of enterprises have already adopted cloud-native concepts in their production set-ups. However, most enterprises continue to operate traditional networks that are slow and stressed by data proliferation and the rise of cloud-based technologies. Much like traditional datacenters, these legacy networks limit the benefits of cloud-native applications.

SDN is the network counterpart to cloud

The network counterpart to a cloud architecture is a Software Defined Network (SDN). An SDN architecture decouples the network control plane from the forwarding plane to enable policy-based, centralized management of the network. In simpler terms, an SDN architecture enables an enterprise to abstract away the physical hardware and control the entire network through software.
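
A short, hypothetical sketch may make the decoupling tangible: the application expresses intent once to the controller's northbound REST API, and the controller – not the administrator – programs every switch in the forwarding plane. The URL and policy schema below are invented for illustration and do not reflect any specific controller's API.

```python
# Conceptual sketch of SDN's control/forwarding-plane split. The endpoint and
# JSON schema are hypothetical.
import requests

CONTROLLER = "https://sdn-controller.example.com/api"  # placeholder

def block_traffic(src: str, dst: str) -> None:
    """Declare intent once, centrally; the controller updates the switches."""
    policy = {
        "match": {"src": src, "dst": dst},
        "action": "drop",
        "priority": 100,
    }
    resp = requests.post(f"{CONTROLLER}/policies", json=policy, timeout=10)
    resp.raise_for_status()

# One call replaces box-by-box CLI changes across the estate
block_traffic("10.0.1.0/24", "10.0.2.15")
```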

Current SDN performance is sub-optimal

Most current SDN adoption is an afterthought, offering limited benefits – much like the lift-and-shift of applications to the cloud. Challenges with current SDN adoptions include:

  • Limited interoperability – Given their large legacy estates, enterprises find themselves running hybrid legacy-SDN infrastructures that are very difficult to manage.
  • Limited scalability – As the environment is not designed to be SDN native, applications end up placing a high volume of networking requests on the SDN controller, limiting data flow.
  • Latency issues – Separate control and data planes can introduce latency in the network, especially in very large networks. Organizations need to carry out significant testing activities before any SDN implementation.
  • Security issues – The ad hoc nature of current SDN adoption means that the entire network is more vulnerable to security breaches due to the creation of multiple network segments.

SDN native is not about applications but about the infrastructure

Unlike cloud native, which is more about how applications are architected, being SDN native is about architecting the enterprise network infrastructure to optimize the performance of modern applications running on it. While sporadic adoption of SDN might also deliver certain advantages, an SDN-native deployment requires organizations to overhaul their legacy infrastructure and adopt the SDN-native principles outlined below.

Principles of an SDN-native infrastructure

  • Ubiquity – An SDN-native infrastructure needs to ensure that there is a single network across the enterprise that can connect any resource anywhere. The network should be accessible from multiple edge locations supporting physical, cloud, and mobile resources.
  • Intelligence – An SDN-native infrastructure needs to leverage AI and ML technologies to act as a smart software overlay that can monitor the underlay networks and select the optimum path for each data packet.
  • Speed – To reduce time-to-market for new applications, an SDN-native infrastructure should have the capability to innovate and make new infrastructure capabilities instantly available. Benefits should be universally spread across regions, not limited to specific locations.
  • Multi-tenancy – An SDN-native infrastructure should not be limited by the underlay network providers. Applications should be able to run at the same performance levels regardless of any changes in the underlay network.

Recommendations on how you can become an SDN-native enterprise

Similar to the concept of cloud native, the benefits of an SDN-native infrastructure cannot be gained by bolting software onto a legacy network and somehow trying to integrate it with the cloud. You need to build a network-native architecture with all of these principles ingrained in its DNA from the very beginning. However, most enterprises already carry the burden of legacy networks and cannot overhaul them in a day.

So, we recommend the following approach:

  • Start small but think big – Most enterprises start their network transformation journeys by adopting SD-WAN in pockets. This approach is fine to begin with, but to become SDN native, you need to plan ahead with the eventual aim of making everything software-defined in the minimum possible time.
  • Time your transformation right – Your network is a tricky IT infrastructure component to disturb when everything is working well. However, every three to four years, you need to refresh the network’s hardware components. You should plan to use this time to adopt as much SDN as possible while ensuring that you follow SDN-native principles.
  • Leverage next-gen technologies – To follow the principles of SDN-native infrastructure, you need to make use of multiple next-generation technologies. For example, edge computing is essential to ensure ubiquity, AI/ML for intelligence, NetOps tools for speed, and management platforms for multi-tenancy.
  • Focus on business outcomes – The eventual objective of an SDN-native infrastructure is better business outcomes; this objective should not get lost in the technology upgrades. The SDN-native infrastructure should become an enabler of cloud-native application implementation within your enterprise to drive business benefits.

What has your experience been with adoption of an SDN-native infrastructure? Please share your thoughts with me at [email protected].

For more on our thinking on SDN, see our report, Network Transformation and Managed Services PEAK Matrix™ Assessment 2020: Transform your Network or Lie on the Legacy Deathbed

How Companies Unleash Value Creation Through Collaborative Networks

Ecosystems have always existed in business, but the importance of collaboration across ecosystems is increasing dramatically today in the effort to create new value. However, the underlying software architecture that supports collaboration – traditional database technology – was built for a siloed, boundaried approach to business, not a networked approach spanning multiple enterprises or even multiple industries.

Read more on my blog

Choosing Your Best-fit Cloud Services Delivery Location

While enterprises around the globe began their steady march toward cloud services well before the outbreak of COVID-19, the pandemic has fueled cloud adoption like never before. Following the outbreak, organizations quickly went digital to enable remote working, maintain data security, and ensure operational efficiencies. Globally, first-quarter spend on cloud infrastructure services in 2020 increased 39% over the same period in 2019.

Given the new realities, as firms make long-term cloud investments, it is vital for them to understand the cloud landscape and how various regions and countries fare in comparison to each other as cloud destinations. In this blog, we evaluate and compare the capabilities of different geographies in delivering cloud services.

The Americas

North America is among the most mature geographies for cloud services delivery. The US and Canada offer excellent infrastructure, a mature cloud ecosystem, high innovation potential, a favorable business environment, and business-friendly rules and regulations. The US is the most mature location in North America, offering a large talent pool and high collaboration prospects due to the presence of multiple technology start-ups, global business services centers, and service providers. However, its cost of operations is significantly higher than in most other locations, driven primarily by high labor and real estate costs.

In contrast, most locations in Latin America (LATAM) have less mature cloud markets and ecosystems. While they provide proximity to key source markets in the US and considerable cost savings (60-80%) compared with established markets, they offer low innovation potential, a relatively small talent pool, few government policies to promote cloud computing, and limited breadth and depth of cloud delivery. Mexico is a standout location in LATAM, scoring better than others on parameters such as quality of cloud infrastructure, size of talent pool, and business environment.

Europe

Europe provides a good mix of established and emerging locations for cloud services. Countries in Western Europe have a fairly robust infrastructure to support cloud services, with high cybersecurity readiness, sizable talent pools, high complexity of services, and robust digital agendas and cloud policies. England and Germany are the most favorable locations in the region, driven by a comparatively large talent pool accompanied by high innovation potential, excellent cloud and general infrastructure, and high collaboration prospects due to numerous technology start-ups and enterprises. However, high cloud-adoption maturity has markedly driven up operating costs and intensified competition in these markets.

Countries in Central and Eastern Europe (CEE) offer moderate cost savings (in the 50-70% range) over leading source locations in Western Europe. While they offer a favorable cloud ecosystem, talent availability, greater proximity to key source markets, and lower competitive intensity, they score lower on innovation potential, complexity of services offered, and concentration of technology start-ups and players. The Czech Republic is a prominent location for cloud services in the CEE, while Poland and Romania are emerging destinations.

Asia Pacific (APAC)

Most locations in APAC have high to moderate maturity for cloud services delivery due to the size of the talent pool and significant cost savings (as high as 70-80%) over source markets such as the US. For example, India offers low operating costs, coupled with a large talent pool adept in cloud skills and a significant service provider and enterprise presence. However, it scores lower on aspects such as innovation potential, infrastructure, and quality of business environment. Singapore is an established location that offers well-developed infrastructure and high innovation potential but also involves steep operating costs (40-45% cost arbitrage with the US). The Philippines, a popular outsourcing destination, has lower cloud delivery maturity given its low innovation potential and talent availability for cloud services.

Middle East and Africa (MEA)

Israel is an emerging cloud location in the MEA that has achieved high cloud services maturity, but that benefit is accompanied by high operating costs and low cost-savings opportunity (about 10-15%). Other locations in the region have moderate to low opportunity due to small talent pools and lower maturity in terms of cloud services delivery.

Choosing your best-fit cloud services delivery location

Our analysis of locations globally reveals that, while different locations can cater to the increasing cloud demand, there is no single one-size-fits-all destination. Instead, the right choice depends on several considerations and priorities:

  • If operating cost is not a constraint and the key requirements are proximity to key source markets and a favorable ecosystem, the US, Canada, Germany, England, Singapore, and Israel are suitable locations, depending on the demand geography
  • If you are looking for moderate cost savings, proximity to source markets, and a favorable ecosystem, with the acceptable trade-off of operating in a relatively low-maturity market, countries such as Mexico, the Czech Republic, Hungary, Poland, Ireland, Romania, and Spain are attractive targets
  • However, if cost is driving your decision and proximity to demand geographies is not a priority, India, Malaysia, and China emerge as clear winners

The exhibit below helps clarify and streamline location-related decisions, placing an organization’s key considerations up front and identifying acceptable trade-offs to arrive at the best-fit locations shortlist.

Key considerations for choosing your cloud services delivery location


To learn more about the relative attractiveness of key global locations to support cloud skills, see our recently published Cloud Talent Handbook – Guide to Cloud Skills Across the Globe. The report assesses multiple locations against 15 parameters using our proprietary Enabler-Talent Pulse Framework to determine the attractiveness of locations for cloud delivery. If you have any questions or comments, please reach out to Hrishi Raj Agarwalla, Bhavyaa Kukreti, or Kunal Anand.

Politics and Technology in the Post-Pandemic World Will Drive Sovereign Cloud Adoption

Discussions around data and cloud sovereignty have been gaining momentum over the past decade for multiple reasons, including unsecured digital documents, varied data-related laws in different countries, scrutiny and liability issues, the dominance of US hyperscalers, and low visibility into and control of data. However, the biggest trigger for increased focus on sovereign cloud is the US CLOUD Act of 2018, which gives US law enforcement agencies the power to unilaterally demand access to data from companies regardless of the geography in which data is stored.

What is a sovereign cloud?

A sovereign cloud essentially aims to maintain the sovereignty of data in all possible ways for any entity (country, region, enterprise, government, etc.). Thus, it demands that all data reside locally, that the cloud be managed and governed locally, that all data processing – including API calls – happen within the country or geography, that the data be accessible only to residents of that country, and that the data not be accessible under foreign laws or from any outside geography.
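
One way to picture these demands is as policy-as-code. The sketch below checks a single sovereignty requirement – data residency – before anything is deployed; the region names and resource structure are hypothetical, and real enforcement would hook into an infrastructure-as-code pipeline or a cloud policy engine.

```python
# Illustrative data-residency check, one of several sovereign cloud
# requirements. Regions and resources are invented for this example.
ALLOWED_REGIONS = {"eu-central-1", "eu-west-3"}  # e.g., Frankfurt, Paris

def residency_violations(resources: list[dict]) -> list[str]:
    """Return names of resources whose data would leave the permitted geography."""
    return [r["name"] for r in resources if r["region"] not in ALLOWED_REGIONS]

planned = [
    {"name": "customer-db", "region": "eu-central-1"},
    {"name": "analytics-bucket", "region": "us-east-1"},  # violates residency
]
print(residency_violations(planned))  # -> ['analytics-bucket']
```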

Multiple sovereign cloud initiatives have been undertaken in the past

In 2012, France launched a national cloud called Andromede. It failed to gain traction and was split into two clouds – Cloudwatt and Numergy. After struggling, Numergy was acquired by a telecommunications company in 2019, and Cloudwatt was shut down in 2020. In another such initiative, Microsoft set up a dedicated German cloud in 2015 but decommissioned it in 2018 due to limited traction.

In 2019, in response to the US CLOUD Act, Germany, in partnership with France, launched Project GAIA-X, with the aim of creating a high-performance, competitive, secure, and trustworthy infrastructure for Europe. The two governments invited many companies, including Bosch, Deutsche Telekom, Festo, Orange Business Services, and SAP, to collaborate on the project. While the original aspiration was to create a completely independent hyperscale cloud, everyone quickly realized that this goal was illusory, and the project evolved into building something that brings together the best of all worlds with "European DNA" at its center.

A novel mix of politics and technology makes GAIA-X interesting

While previous sovereign cloud initiatives have not been very successful, project GAIA-X promises to be different. When the project was first launched in 2019, it was largely seen as a political gimmick aimed at driving nationalistic sentiments in Europe. Many enterprises believed that the project could be completely scrapped if the government changed. Most were unconvinced of its potential success given their previous experiences with national clouds. The hyperscalers, AWS and Azure, dismissed GAIA-X’s potential to scale and become competitive, and called out that a national cloud was interesting only in theory.

However, the story that has unfolded and continues to evolve is completely different. While some of the intent may be political, various governments across Europe are strongly backing this initiative and coming together to create Europe's own digital ecosystem. By June 2020, the GAIA-X Foundation had expanded to 22 organizations – including digital leaders, industrials, academia, and associations – and the project is being supported by more than 300 businesses.

On the technology front, GAIA-X is now much clearer than when it began. It released a detailed technical architecture in June 2020 and primarily focuses on building common standards that enable transparency and interoperability by aligning network and interconnection providers, Cloud Solution Providers (CSPs), High Performance Computing (HPC) providers, and sector-specific clouds and edge systems. It has built use cases across industries including finance, health, the public sector, agriculture, and energy, as well as horizontal use cases spanning Industry 4.0, smart living, and mobility.

There are still a few issues – including an unclear roadmap, unrealistically aggressive timelines, a lack of detail on hybrid or multi-cloud architectures, lower performance and higher costs, and a gap in the requisite skills – that need to be sorted for GAIA-X to succeed.

Nationalism and the coronavirus will push sovereign clouds beyond theory

Growing nationalistic sentiment sweeping geographies such as the US, Europe, and India will add fuel to the sovereign cloud discussion. The Huawei ban in the US, the ban on Chinese apps in India, and efforts to reduce dependencies on China across sectors are all indicators of this sentiment. At the same time, the COVID-19 pandemic has forced enterprises to reevaluate their external exposure. The fact that cloud is the core pillar of enterprises' data strategies places it right at the center of this entire discussion, and the idea of a national or sovereign cloud is destined to become more than mere theory.

Recommendations on how you can begin to incorporate elements of sovereign cloud in your cloud strategy

Enterprises around the globe need to pay immediate attention to this sovereign cloud trend. Here are our recommendations on how your organization can stay ahead of the curve.

  • Be aware of the GAIA-X trends in Europe: While the project is still in its initial stages, it is evolving very quickly and gaining significant traction. You should proactively track and understand GAIA-X adoption as it will be an excellent preview of what could happen in the rest of the world.
  • Factor the political angle into your future cloud strategy: Europe is an excellent example of the political influence in enterprise cloud decisions. You need to be aware of the political environment in your country and whether or not it could pose risks to your existing cloud strategy. The idea of a nationalistic or sovereign cloud will be driven by the government, which might pass laws enforcing some kind of implementation. You should be prepared for this potential eventuality.
  • Ensure technological flexibility in your cloud architecture: Cloud architecture flexibility will be key to integrating sovereign cloud with the rest of your environment. Hyperscalers like AWS and Azure are increasingly leveraging techniques such as increased licensing prices, full stack services, contractual terms, data transfer charges, and limited interoperability to lock in enterprises. You need to be wary of these techniques and adopt an agnostic, interoperable, multi-cloud strategy to ensure easy migration to sovereign cloud in the future.

How do you think sovereign cloud will evolve in Europe and other geographies? Please share your thoughts with me at [email protected].
