Category: Cloud Infrastructure

Hyperscale Cloud Providers Shaping The Platform Marketplace | Blog

Today, nearly all companies invest in assembling digital platforms as a source of significant efficiencies and competitive advantage. Platforms enable a data-driven world and allow companies to create new business value by improving experiences for customers, employees, and partners. The platform a company assembles usually comprises multiple platforms and other software components. For example, a consistent component of almost all platforms is heavy use of the cloud and the rich set of capabilities available from the hyperscale providers. But companies need to understand the consequences of this component's presence in the platform they build.

Read more in my blog on Forbes

How Cloud Operations Are Changing In 2020 | Blog

Fundamental changes are happening to the core set of assumptions that underpin how cloud ecosystems have been operating. Some of the traditional assumptions are no longer true today, or soon won't be. The changes are for the worse – they raise prices and introduce significant additional complexity for companies that operate in a hybrid and multi-cloud environment.

Read more in my blog on Forbes

Koch Industries’ Takeover of Infor Signals Key Bet on Cloud ERP Market | Blog

Infor – a global leader in business cloud software specialized by industry – announced on February 4, 2020, that Koch Equity Development (KED) LLC, the private investment arm of Koch Industries, Inc., has entered into a definitive agreement to acquire Golden Gate Capital’s equity stake to take 100 percent ownership of Infor. Before the agreement, Koch Industries owned about 70 percent of Infor. While the official figures are not out, public sources peg the deal at close to US$13 billion, including preferred shares.

Why did Koch do this? Here’s our analysis of the key reasons.

  1. Riding the ERP demand bandwagon: Our recent analysis indicates that ERP-focused process transformation and modernization drove over 30 percent of all digital transformation initiatives in 2019. While Oracle and SAP are the largest players in this space, more than 35 percent of the market still consists of a long tail of bespoke ERP, where there is likely to be huge churn and consolidation. Infor's promoters initially wanted to ride this growth opportunity through an IPO.
  2. SAP/Oracle in the equation: SAP is the largest player in the ERP market, and support for its current-generation ERP is scheduled to end in 2025. Also, it's well known that SAP is currently offering significant incentives to nudge enterprises to accelerate their move to S/4HANA, especially its cloud version. Oracle is using a similar incentive-oriented approach for its cloud-based applications. Infor's promoters probably saw how this competitive dynamic would play out.
  3. Taking the private route instead of an IPO: In a market driven by incentives, a privately owned organization backed by a diversified, cash-rich promoter probably gives Infor a better shot at competing with its much larger rivals. For a listed firm, navigating a growth-oriented strategy (by de-emphasizing margins) would have been a tough nut to crack. Plus, competing with larger peers will require significant investment in product modernization.
  4. The Koch portfolio companies: The jury is still out on whether Infor can credibly compete with SAP and Oracle in the broader ERP market. However, as the second-largest privately owned conglomerate in America (Cargill is the largest), parent Koch Industries can provide a captive market for Infor to start with.

Deal implications

For Infor – potential growth through synergies: As we’ve already noted, this acquisition may give Infor access to a captive customer base across Koch Industries’ network of subsidiaries and partners. Given Koch’s presence in more than 60 countries, it may also allow Infor to expand the geographic footprint of its client base, especially in markets outside North America where it has limited presence. This comes at a time when enterprises in Europe and APAC are beginning to embrace SaaS offerings.

For Koch – potential RoI: We see this takeover as a typical private equity play to improve the value of an existing asset by riding the ERP demand wave. While Koch Industries has been investing in technology across its portfolio, we do not see this change in ownership as a sign of a shift in Koch’s portfolio mix. Given that a large chunk of Infor’s client base is still struggling with aging on-premises applications, Infor will need strong investment backing to convince its existing user base of its long-term cloud ERP vision.

For systems integrators – potential opportunities: Koch Industries generated over US$100 billion in annual revenues in FY19. While we do not have estimates of the ERP transformation opportunity within Koch portfolio companies, it is likely to be a significant opportunity for systems integrators to pursue, using an Infor playbook.

For enterprises – better incentives, more supply-side investments: If Koch backs its investment with a large innovation fund, enterprises may benefit in the following ways:

  • Better incentives: Due to intensifying competition, enterprises may see more creative financial solutions and incentives around cloud-based ERP.
  • Verticalized product offerings: Industry focus and verticalization are gaining traction in the ERP space. Koch’s expertise in industries including manufacturing, chemicals, energy, petroleum, finance, and commodities may enable Infor to roll out micro-vertical solutions faster than its competition.

The path forward

Infor has seen almost flat growth of around 3 percent over the past five years, due primarily to its long-term focus on SaaS revenues, which directly cannibalized its existing license revenues from on-premises offerings. In FY19, Infor’s SaaS revenue – about 20 percent of its overall revenue base of US$3.2 billion – grew at approximately 21 percent, while its licensing fees declined by about 12 percent. Given this strong focus on SaaS, Infor is well positioned in manufacturing and allied verticals to overcome some of the critical cloud migration challenges and cater to some industries’ process-specific demands.

However, over the past year, there have been multiple big-ticket acquisitions in the enterprise platform market, geared toward improving product capabilities – especially in areas related to cloud and analytics. In this hyper-competitive space, it will be challenging for Infor to compete credibly at scale on promoter-backed cash flow alone. Watch this space for more on how this move pans out.

Ethical Cloud: It’s About More Than Data | Blog

At their very core, cloud vendors are responsible for many critical aspects of enterprises’ data, including how it’s treated, its portability, workload interoperability, and compliance. Going forward, clients’ and governments’ expectations of cloud vendors will extend far beyond data…they’ll be expecting them to deliver “ethical clouds.”

I believe ethical clouds need to have four key dimensions, in addition to data security.

Sustainable

Multiple studies over the years have looked at data centers’ massive power consumption. While Power Usage Effectiveness (PUE) has improved and power consumption growth has decelerated due to virtualization and cloud services, there is still a long way to go. Clients will hold their cloud vendors accountable not only for power consumption but also for the source of their power and their overall emissions. Of course, cloud vendors’ sustainability strategies should extend into other areas including real estate and water. But power is the right place to start. Some vendors, such as AWS, have already taken initial steps in this direction.

Social

Beyond classic corporate social responsibility, cloud vendors will need to demonstrate how they care for society and the ecosystem in which they operate. This means they will need to clearly express their hiring practices, how they treat tax liabilities across regions, their litigation history, how they manage IP, their leadership views on important global events, etc. In short, they need a very strong social index or social perception.

Steadfast

Although their clients’ interests will always be the top priority for cloud vendors, there will be instances where those interests conflict with the government’s. Cloud vendors will need a clear policy and philosophy for handling such matters. They will need to assist governments in order to honor their duty to the nations where they operate, but within a defined construct that keeps them from violating their commitments to their clients. For example, cloud vendors may soon be put under the same type of scrutiny that Apple was when it declined to create a “back door” into the iPhone.

Sensitive

At its heart, this is about the Golden Rule principle of treating others as you want to be treated. For cloud vendors, this means being sensitive to the law and order of the land in which they operate, avoiding behavior or actions that generate negative press or sentiment, and conscientiously addressing regulations, job creation, diversity, data sanctity, partner behavior, etc.

Cloud vendors also need to be sensitive about their internal structures. Their structures must ensure alignment, fairness, and transparency for all members of their ecosystem, including employees and partners such as systems integrators. Embracing and engendering this type of environment will be just as important as what they deliver to their clients. Moreover, vendors need to ensure the right behaviors from their employees and partners, and any breaches should trigger investigation and suitable action. At the same time, employees and partners should be encouraged and empowered to raise concerns if they believe the vendor is drifting away from being an “ethical cloud” provider.

Over the years, in virtually all industries, organizations have faced significant repercussions for unacceptable financial, labor, and sustainability behaviors. And cloud vendors will soon have to operate under the same microscope.

Going forward, I believe cloud vendors will quickly find that success isn’t just about creating the best legally compliant technology solution; it’s also about living, and demonstrating, the spirit of being ethical.

What do you think about ethical cloud? Please share your thoughts with me at [email protected].

Key Issues For Enterprise IT Spend Decisions In 2020 | Blog

When considering your company’s IT spend decisions for 2020, it’s helpful to know what your peers and competitors expect for IT spend this year. What are their top investment priorities? Their biggest challenges? Is their focus different for 2020 than it was in 2019? How will their plans change if the economy strengthens or if it weakens?

Read my blog on Forbes

Cloud IaaS Versus SaaS: The Fight for Industry Cloud | Blog

A blog I wrote last year discussed the ugly market share war among the three top cloud infrastructure providers – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). Now we need to talk about how Independent Software Vendors (ISVs) like Oracle, Salesforce, and SAP are changing the battle with their industry-specific clouds.

Cloud IaaS vendors don’t have an industry cloud

The fact is that AWS, Azure, and GCP don’t really have industry clouds. These cloud IaaS vendors enable clients to run business applications and services on top of their cloud platforms, but they haven’t built industry-specific application or process capabilities. They acknowledge that their clients want to focus more on building applications than infrastructure, which undercuts their positioning in the industry cloud market. The core of what they offer is compute, data, ML/AI, business continuity, and security, and they rely on technology and service partners to build industry-relevant solutions. For example, GCP partnered with Deloitte for cloud-based retail forecasting, and AWS joined with Merck and Accenture for a medicine platform. They are also partnering with core business application vendors such as Cerner and Temenos.

Cloud SaaS providers have an edge

ISVs have continued to expand their industry cloud offerings over the past few years. For example, Oracle acquired Textura, a leading provider of construction contracts and payment management cloud services, in 2016; SAP introduced its manufacturing cloud in 2018; and Salesforce launched its CPG and manufacturing clouds in 2019. Further, Oracle and SAP have built solutions for specific industries such as retail, healthcare, and banking by focusing on their core capabilities in ERP, supply chain management, data analytics, and customer experience. And while Salesforce is still largely an experience-centric firm, it is now building customer experience, marketing, and service offerings tailored to specific industries.

So, what will happen going forward?

  • Industry cloud will change: Today’s industry clouds are one more way of running a client’s business; however, they are still not business platforms. Going forward, industry clouds will become something like a big IT park, where clients, partners, and other third parties come to a common platform to serve customers. It will be as much about data exchange among ecosystem players as about walled-off operations. Enterprises in that industry can take advantage of the specific features they deem appropriate rather than building their own, and they will become “tenants” of the industry cloud vendor’s or ISV’s platform.
  • Cloud vendors will heavily push industry cloud: AWS, Azure, and GCP will push their versions of industry cloud in 2020 and beyond, with strong marketing and commercial campaigns. They’ll likely be tweaking their existing offerings and creating wrappers around their existing services to give them an industry flavor. But, of course, the leading ISVs have already launched their industry clouds and will expand them going forward.
  • Channel push will increase: Both the cloud infrastructure service providers and the ISVs will aggressively push their service partners – especially consulting firms like Accenture, Capgemini, Deloitte, and PwC. The cloud vendors will also push their technology partners to build solutions or “exclusively” migrate applications onto their clouds.
  • Mega acquisitions: Historically, there hasn’t been any major acquisition activity between infrastructure providers and large software companies. But one of the top infrastructure providers might acquire a “horizontal” ISV that’s making inroads into industry clouds, like Salesforce or Workday, rather than buying a vertical industry ISV. Disclaimer: I am not at all suggesting that any such acquisition is in the cards!

So, what should enterprises do?

  • Be flexible: Enterprises need to closely monitor this rapidly evolving market. Though the paths IaaS providers and ISVs take may not meaningfully conflict in the near future, there may be stranger partnerships on the horizon, and enterprises need to be flexible to take advantage of them.
  • Be cautious: Because the cloud vendors’ channel partners are being pushed to sell their industry cloud offerings, enterprises need to fully evaluate those offerings and their relevance to their businesses. The evaluation should cover not only business, technical, and functional fit, but also licensing rationalization, discount discussions, and talent availability for these platforms.
  • Be open: As the market disrupts and newer leaders and offerings emerge, enterprises need to be open to reevaluating their technology landscape to adopt the best-in-class solution for their businesses. This is as much about an open mindset as it is about internal processes around application development, delivery, and operations. Enterprise processes and people need to be open enough to incorporate newer industry solutions.

What do you think about industry clouds? Please share with me at [email protected].

Smartphones and 5G are the Keys to AR/VR Success | Blog

A Goldman Sachs Research report published in January 2016 stated that venture capitalists had pumped US$3.5 billion into the augmented reality (AR)/virtual reality (VR) industry in the previous two years and that AR and VR have the potential to become the next big computing platform.

But a recent PwC MoneyTree report stated that funding for augmented and virtual reality startups plunged by 46 percent to US$809.9 million in 2018, compared to 2017. Indeed, multiple startups in the space shut down in 2019 because they were unable to deliver on their claims or make the technology economically viable for the masses.

It’s not just startups that are throwing in the towel on their investments. For example, a dwindling user base drove Google to shut down its Jump VR platform in June 2019, and Facebook-owned Oculus is closing its Rooms and Spaces services at the end of this month.

[Graphic: AR/VR startups that shut down and the investments they had raised]

And nearly US$550 million in investments was sunk into the startups cited above before they shuttered their doors.

So, what’s going wrong?

Problems with present-day AR/VR

New technologies, particularly those for the consumer market, invariably need hype to succeed. But, despite all the buzz around how AR/VR can change the way consumers interact with commercial and non-commercial entities (like healthcare providers and educational institutions), multiple problems are getting in the way of mass adoption.

  1. Cumbersome hardware: Despite two to three generational improvements, the hardware for these technologies remains bulky and difficult to set up or use. More research is needed to bring the advanced optics and computation of head-mounted displays (HMDs) to a usable level.
  2. High cost: Nearly all standalone AR HMDs cost over US$1,000, and those for VR are over US$150. At these price points, the vast majority of purchasers are technology enthusiasts and novelty buyers.
  3. Poor content: While the premise of buying an HMD is to consume and interact with content in an engaging way, the flood of poorly designed experiences hardly makes the case for purchasing one, even for those who can afford it.
  4. Selling an idea instead of a product: This is perhaps the biggest reason for the slew of closures in recent months. While AR and VR both have compelling use cases, the entrepreneurs and enterprises providing the products and platforms promised the sky and underdelivered on expectations.

So, what should enterprises do to change the narrative around AR/VR and its fate?

Here are our recommendations.

Focus on developing smartphone-based AR

AR adoption is far outpacing VR adoption, not only because it adds to users’ reality rather than replacing it, but also because smartphones make its cost much lower for consumers. Indeed, smartphone-based AR has gone mainstream in the retail and gaming spaces; examples are IKEA, Nike, Nintendo, and Sephora, all of which have deployed applications for interactive experiences. The buzz will stay alive, and the uptake will continue to grow as an ever-increasing number of developers incorporate AR elements into their applications.

Embrace 5G with open arms

Fifth-generation (5G) wireless promises to bring high bandwidth and reliable low latency in data communications. Along with the proliferation of edge computing, 5G will help move processing-intensive tasks closer to the edge of the network and content closer to the user. In the near future, telecom operators could provide dedicated network slices for AR/VR applications, greatly reducing network latency. By enabling faster processing and increased proximity to content, 5G will boost the overall user experience. And this will lead to increased adoption.

But, before going all in, enterprises should partner with communication service providers to test 5G PoCs for AR/VR. Doing so will help them better prepare for scaled adoption as HMDs become less cumbersome.

By placing hype before substance, AR/VR providers created the current low-growth environment. We believe that focusing on smartphone- and 5G-based AR/VR will increase both investor confidence and customer adoption.

What is your view on the AR/VR space and the emergence of 5G as a savior? Please share with us at [email protected], [email protected], and [email protected].

What Enterprises Can Learn About Cloud Adoption from Project JEDI | Blog

Enterprises must consider many different factors when building their cloud adoption strategy. It’s not easy, but the decisions are critically important. Learning the smart – and not so smart – choices other organizations have made can be enormously valuable. Project JEDI is an example of what not to do.

The background

In 2018, the U.S. Department of Defense (DoD) launched the Joint Enterprise Defense Infrastructure (JEDI) project to accelerate its adoption of cloud architecture and services. The contract was written to award the US$10 billion deal to a single commercial provider to build a cloud computing platform that supports weapons systems and classified data storage. With this ambitious project, the Pentagon intends to drive full-scale implementations and better return on investment on next-generation technologies including AI, IoT, and machine learning.

Here are key learnings from Project JEDI that enterprises should take very seriously.

1. Not to do: Pick a single cloud provider without evaluating others
To do: Explore a multi-hybrid cloud model

The JEDI contract’s fundamental issue arose from its reliance on a single cloud provider to chart out the DoD’s entire cloud strategy. This caused great stakeholder dissonance, in large part because alignment with just one provider could mean losing out on ongoing innovation from other providers.

Because a single cloud strategy offers advantages including lower upfront cost and streamlined systems, enterprises often adopt this approach. But a multi-hybrid architecture allows them to tap into the best of multiple providers’ capabilities and stay ahead on the technology curve.

A well-planned multi-hybrid cloud strategy offers the following benefits:

[Graphic: Benefits of a well-planned multi-hybrid cloud strategy]

2. Not to do: Ignore open cloud options
To do: Consider application interoperability and portability in cloud design

Project JEDI is completely dependent on a single cloud model, which exposes it to significant lock-in risks, such as barriers to moving data, applications, or infrastructure. All of these can have a lasting negative impact on business continuity.

An open cloud model allows application interoperability and portability, and saves enterprises from vendor lock-in. Enterprises should be open to exploring open source or open design technologies for cloud. They should also consider implementing DevOps tools, container technology, and configuration management tools, as they will allow them to deploy their applications to diverse IT environments. All these options reduce the lock-in risks that stem from proprietary configurations, and enable organizations to easily, and with minimal technical costs, switch between providers based on business objectives.
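
To make this concrete, here is a minimal, hypothetical sketch in Python (the class and function names are illustrative, not from any vendor SDK) of the kind of abstraction that reduces lock-in: the application codes against a provider-neutral storage interface, and only a thin adapter changes when the workload moves to a different cloud.

```python
import os
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral storage interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalFileStore(ObjectStore):
    """Reference implementation backed by the local filesystem.
    A provider-specific adapter (wrapping a cloud vendor's object-storage
    SDK) would implement the same two methods."""

    def __init__(self, root: str = "/tmp/objstore"):
        self._root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = os.path.join(self._root, key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self._root, key), "rb") as f:
            return f.read()


def archive_invoice(store: ObjectStore, invoice_id: str, payload: bytes) -> None:
    # Business logic depends only on the interface, never on a specific cloud.
    store.put(f"invoices/{invoice_id}.json", payload)


if __name__ == "__main__":
    store = LocalFileStore()
    archive_invoice(store, "INV-1001", b'{"total": 42}')
    print(store.get("invoices/INV-1001.json"))
```

Packaging the application and an adapter like this in a container keeps the deployable unit portable; switching providers then means swapping the adapter, not rewriting the workload.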

3. Not to do: Be biased towards an incumbent
To do: Evaluate cloud vendors by aligning the service portfolio to workload requirements

The Pentagon recently awarded the JEDI contract to Microsoft. However, in its initial stages, the project garnered massive attention for its alleged preference for Amazon Web Services (AWS), which has been involved in multiple government contracts for providing cloud support. Critics of the contract argued that an allegedly improper relationship between DoD employees and AWS would lead to inherent bias and rigged competitive bidding.

While existing relationships are important and can deliver strong value, enterprises should carefully evaluate their vendor portfolio against their workload needs. In most cases, a combination of different vendors will provide the optimal solution. For example, Java workloads are known to work best with AWS, .NET workloads work best with Microsoft Azure, and Google Cloud Platform (GCP) is most suited for analytics workloads.

4. Not to do: Lose sight of stakeholders while moving ahead
To do: Have all stakeholders on board

A wide range of stakeholders is involved in selecting a cloud provider, and each of their needs, interests, and pre-existing biases must be addressed to avoid project derailment. For example, President Trump’s distaste for Amazon is cited as the prime reason the JEDI project was awarded to Microsoft rather than AWS.

One of cloud project leaders’ most critical responsibilities is identifying and understanding stakeholders’ vendor biases and bringing them into alignment with business objectives if those biases are based on something other than facts.

The DoD’s approach to Project JEDI led to a prolonged delay in its aspirations of adopting cloud architecture and services and developing leading cloud-based AI capabilities. Avoiding the DoD’s missteps will help enterprises more quickly shape and secure a cloud contract that satisfies all stakeholders and supports their organizations’ business agenda.

What are your thoughts around the JEDI case and what enterprises can learn not to do, and to do, from it? Please share your thoughts with us at [email protected] and [email protected].


You Need to Rethink Your Mainframe Strategy in Today’s Digital World | Blog

The demise of the mainframe has been predicted every year over the past decade. With digital and cloud transformation becoming the enterprise norm, the death-knell has been getting louder. But, for multiple reasons, mainframes aren’t going anywhere anytime soon.

For example, they are designed for efficiency and allow enterprises to run complex computations in a compact infrastructure with high utilization levels. They receive regular updates that can be applied without any business disruption, making them easily expandable and upgradable. The latest mainframes work well with mobile applications, which are becoming the norm across industries. And the fact that mainframes host some of enterprises’ most critical production data has created somewhat of a lock-in situation.

Despite mainframes’ staying power, a variety of factors – including 1) difficulty integrating mainframe-housed data with data across the rest of the enterprise; 2) the shrinking number of IT professionals who understand mainframes’ architectural complexities; and 3) mainframes’ lack of agility – can prevent enterprises from excelling in today’s digital environment.

Levers Enterprises Can Pull to Maximize Their Mainframe Value

With these issues in mind, some enterprises think they should eliminate mainframes completely from their technology environment. But that’s not the best route to take in the short to medium term. Rather, by embracing the best of mainframes and digital technologies, they can reduce operational costs and capital invested and realize business flexibility and agility without losing continuity or mainframes’ high efficiency levels.

Our Recommendations on How You Can Achieve Maximum Value from Your Mainframes

Mainframe upgrades – The latest mainframe releases mimic the benefits offered by the cloud. If you haven’t upgraded to the newest release, you should consider doing so now.

Phased retiring of applications – For applications that can work as effectively on the cloud as on mainframes, you should develop new ones on the cloud and slowly phase out the old ones from your mainframe. This approach will avoid business disruptions and help you quickly build new services while still being able to access real-time mainframe data.
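
As a simple illustration of this phased approach (the endpoints and feature names below are hypothetical), a thin routing façade can send requests for capabilities that have already been rebuilt on the cloud to the new services, while everything else continues to hit the existing mainframe-backed gateway:

```python
# Minimal "strangler-style" routing for phased retirement of mainframe
# applications. All names and URLs are placeholders, not real systems.

MIGRATED_TO_CLOUD = {"billing", "notifications"}  # features already rebuilt on cloud

CLOUD_BASE = "https://new-cloud-service.example.com"
MAINFRAME_BASE = "https://legacy-mainframe-gateway.example.com"


def resolve_backend(feature: str) -> str:
    """Return the base URL that should serve a given feature."""
    if feature in MIGRATED_TO_CLOUD:
        return CLOUD_BASE
    return MAINFRAME_BASE


if __name__ == "__main__":
    for feature in ("billing", "order-history"):
        print(feature, "->", resolve_backend(feature))
```

As each application is rebuilt, its feature simply moves into the migrated set, so traffic shifts to the cloud without a disruptive cutover.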

Mainframe-as-a-Service (MaaS) – If you’re looking to go asset light, you can adopt MaaS, wherein your existing mainframe assets are transferred to a service provider. In these arrangements, you’ll be charged on an actual-consumption basis, subject to a minimum volume commitment. You’ll gain the most value from MaaS when you use it in conjunction with phased retiring of applications, because it will allow you to gain the benefits of a consumption model while preparing your cloud environment in parallel.

Automated migration to modern tech stacks – Multiple tools and services are available to migrate a legacy stack (such as COBOL-based) to a newer stack (such as Java- or .NET-based) in an automated fashion. Given the variety of mainframe languages, databases, and infrastructure technologies involved in a migration, you should always adopt a custom migration approach.

Wrapper approaches – In the short-term, instead of migrating away from your mainframe, you can augment it with agile data services that enable data interoperability with the rest of your infrastructure. You can also run emulators on the cloud and host legacy application code with minimum changes.
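
As a rough sketch of the wrapper idea (the service layer, endpoint names, and stubbed lookup function below are illustrative; a real implementation would reach the mainframe through an integration layer such as messaging middleware or a database gateway), a lightweight data service can expose mainframe-resident records over a modern JSON API:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json


def fetch_account_from_mainframe(account_id: str) -> dict:
    """Stub standing in for a real mainframe data-access call; it returns
    canned data here so the sketch runs on its own."""
    return {"accountId": account_id, "status": "ACTIVE", "balance": "1024.50"}


class AccountHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /accounts/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "accounts":
            body = json.dumps(fetch_account_from_mainframe(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Other systems consume mainframe data through this API instead of
    # connecting to the mainframe directly.
    HTTPServer(("0.0.0.0", 8080), AccountHandler).serve_forever()
```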

Mainframes are far from dead and will continue to form the backbone of many large enterprises in the near future. However, to excel in today’s digital world, you need to reconsider your mainframe strategy to get the best of all emerging digital technologies. Of course, there is no one-size-fits-all solution, so you’ll need to take a customized approach, combining the transformation levers that are most applicable to your enterprise’s unique situation.

How do you think mainframes will fare in the digital world? Please share your thoughts with me at [email protected].

Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud | Blog

Our research suggests that more than 90 percent of enterprises around the world have adopted cloud services in some shape or form. Additionally, 46 percent of them run a multi-cloud environment, and 59 percent have adopted more advanced concepts like containers and microservices in their production setups. As they go deeper into the cloud ecosystem to realize even more value, they need to be careful of two seemingly similar but vastly different concepts: cloud native and native cloud.

What are Cloud Native and Native Cloud?

Cloud native used to refer primarily to container-centric workloads. However, at Everest Group, we define cloud native as the building blocks of digital workloads that are scalable, flexible, resilient, and responsive to business demands for agility. These workloads are developer-centric and operationally better than their “non-cloud” brethren.

Earlier, native cloud meant any workload using cloud services. Now – just as apps in the mobile world are “native Android or iOS,” meaning built specifically for those operating systems – native cloud refers to leveraging the capabilities of a specific cloud vendor to build a workload that is not available “like-for-like” on other platforms. These are innovative, disruptive offerings such as cloud-based relational database services, serverless instances, developer platforms, AI capabilities, workload monitoring, and cost management. They are not portable to other cloud platforms without a huge amount of rework.

With these evolutions, we recommend that enterprises…

Embrace Cloud Native

Cloud native workloads provide the fundamental flexibility and business agility enterprises are looking for. They thrive on cloud platforms’ core capabilities without getting tied to them. So, if need be, the workloads can easily be ported to other cloud platforms without any meaningful rework or drop in performance. Cloud native workloads also allow enterprises to build hybrid cloud environments.

And Be Cautious of Native Cloud

Most cloud vendors have built meaningfully advanced capabilities into their platforms. Because these capabilities are largely native to their own cloud stack, they are difficult to port across to other environments without considerable investment. And if – more likely, when – the cloud vendor makes changes to its functional, technical, or commercial model, the enterprise finds it tough to move away from the platform…the workloads essentially become prisoners of that platform.

At the same time, native cloud capabilities are fundamentally disruptive and very useful for enterprise workloads. However, to adopt such advanced features in the right manner and still be able to build a multi-cloud strategy, enterprises need the necessary architectural, technical, deployment, and support capabilities. For example, in a serverless application, the architect can put the business logic in a container and the event trigger in serverless code. With that approach, when porting to another platform, the container can be ported directly and only the event trigger needs to change.
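
A minimal sketch of that separation, assuming a generic function-as-a-service handler signature (the event shape and names are illustrative, not tied to any one vendor): the business logic lives in provider-neutral code that can be packaged in a container, and the thin handler only translates the provider-specific event.

```python
# Provider-neutral business logic: packaged in a container, it can move
# between platforms unchanged.
def process_order(order_id: str, amount: float) -> dict:
    # Real validation, pricing, and persistence would go here.
    return {"orderId": order_id, "amount": amount, "status": "ACCEPTED"}


# Thin, provider-specific trigger: the only piece rewritten when the
# workload moves to another cloud's serverless runtime.
def handler(event: dict, context=None) -> dict:
    order_id = event["detail"]["orderId"]      # event shape is illustrative
    amount = float(event["detail"]["amount"])
    return process_order(order_id, amount)
```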

Overall, enterprise architects need to be cautious of how deep they are going into a cloud stack.

Going Forward

Given that feature and functionality parity across cloud platforms is becoming common, cloud vendors will increasingly push enterprises to go deeper into their stacks. The on-premises cloud stacks offered by all three large hyperscalers – Amazon Web Services, Azure, and Google Cloud Platform – are an extension of this strategy. Enterprise architects need a well-thought-out plan if they want to go deeper into a cloud stack. They must evaluate the interoperability of their workloads and run simulations at every critical juncture. Although it is unlikely that an enterprise would completely move off a particular cloud platform, enterprise architects should make sure they can compartmentalize workloads to work in unison and interoperate across a multi-cloud environment.

Please share your cloud native and native cloud experiences with me at [email protected].
