Author: Yugal Joshi

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. The platform takes care of the application lifecycle as well as other concerns such as security, monitoring, and reliability. A low-code platform, as the name suggests, simplifies all of this so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, while the underlying plumbing stays hidden. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – much like the good old What You See Is What You Get (WYSIWYG) editors of the HTML age.
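To make the abstraction concrete, here is a minimal sketch – in Python, and purely illustrative rather than what any particular platform emits – of the kind of plumbing a low-code platform generates and hides when a user drags an expense form onto a canvas: a data store, a REST endpoint, and the field validation a designer would otherwise configure visually.

```python
# Illustrative sketch only: roughly the plumbing a low-code platform
# generates and hides when a user drags an "expense form" onto a canvas.
from flask import Flask, jsonify, request

app = Flask(__name__)
expenses = []  # a real platform would wire this to a managed datastore

@app.route("/expenses", methods=["POST"])
def create_expense():
    record = request.get_json(silent=True) or {}
    amount = record.get("amount")
    # Validation rules a designer would configure visually, not in code
    if not isinstance(amount, (int, float)) or amount <= 0:
        return jsonify({"error": "amount must be a positive number"}), 400
    record["id"] = len(expenses) + 1
    expenses.append(record)
    return jsonify(record), 201

@app.route("/expenses", methods=["GET"])
def list_expenses():
    return jsonify(expenses)

if __name__ == "__main__":
    app.run()
```

A citizen developer never sees any of this; the platform renders the form, generates the equivalent endpoint and validation, and operates it on the vendor's cloud.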

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that drive better stickiness to their cloud platforms. And as we all know, nothing drives stickiness better than an application built on a platform. This understanding implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, significant financial investment required, and business demand changing rapidly, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of their clients' data, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness, because low-code platforms are notorious for non-interoperability – it's very difficult to migrate from one to another. But this isn't just about stickiness. It's one more initiative in the journey toward owning enterprises' technology spend by building a cohesive suite of services. In GCP's case, that means recognizing that its API-centric assets offer a goldmine for stitching applications together. For example, Apigee helps expose APIs from different sources and platforms. Then GCP's AppSheet, which it acquired last year, can use the data exposed by those APIs to build business applications. Microsoft, on the other hand, is focusing on its Power Platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there's no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most of these vendors support deployment on the cloud hyperscalers, and now that those hyperscalers have competing offerings, they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up with the mega cloud vendors' increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys. This additional complexity further exacerbates their problems. Many enterprises have their infrastructure teams lead the cloud journey. But these teams don't necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling to track and make sense of their cloud spend. Cloud vendors introducing low-code platform offerings into their cloud service mix will make that task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Enterprise Capabilities: Own, Outsource? Why Not Orchestrate? | Blog

Traditionally, enterprises have seen ownership and outsourcing as mutually exclusive, binary models for building capabilities and running their businesses. The typical framework for deciding when to do which generally pivoted on issues such as strategic importance, internal capabilities, time to market, the cost of doing it in-house, and the risk of outsourcing.

However, as technology disrupts businesses, enterprises are realizing that building capabilities is not a binary option. Innovation can come from any part of the ecosystem, some of which the enterprise may not even be aware of. This is where the concept of orchestration comes into the picture.

Own, Outsource, Orchestrate

For definition's sake, ownership is about building most of the capabilities on your own, rather than relying on other partners. Outsourcing implies letting partners supply the capabilities as discrete services while you program-manage the transactions. Orchestration is bringing on board capabilities from different partners external to your organization and having them work in unison with your internal enterprise capabilities for collective benefit.

Advocates of ownership cite organizations such as Apple, which now wants to sell devices with its own processors. Supporters of outsourcing, meanwhile, cite examples of "next generation" companies such as Tesla, which is now manufacturing in China. Most enterprises realize they aren't Apple or Tesla, so they need multiple levers working together to build their capabilities. The COVID-19 pandemic has made them question their overreliance on outsourcing, but their investment-constrained environment won't allow them to own everything.

Many critics argue that orchestration harkens back to Service Integration and Management (SIAM), where enterprises used to orchestrate multiple IT vendors to drive outcomes in an outsourced environment. However, orchestration isn’t just about outsourcing. In addition to internal capabilities, it encompasses the broader ecosystem, including start-ups, academia, in-house incubation centers, technology vendors, crowd-sourced talent, and even peers within and outside the industry. Moreover, in this model, enterprises get strategically involved instead of only governing third-party providers. Whereas SIAM focused on making existing outsourcing work in a multi-vendor environment, orchestration drives business transformation where each entity plays its role to drive gains such as IP, access to newer markets, and different business models.

The need for orchestration

With the rapid pace of technological disruption, enterprises are realizing they can neither build everything in-house nor leave everything to their service providers. As different ecosystem entities deliver specific capabilities, enterprises want to plug-and-play them to drive business outcomes. Unlike other capability models, which either become black boxes or grow too difficult to navigate, orchestration allows flexibility on multiple dimensions. Enterprises aren't wedded to a specific idea or partner; they stay open and flexible, adopting options based on business requirements.

What should enterprises do?

It's quite clear that, despite the urge, a single capability model isn't going to work for enterprises. They'll have to mix and match these three models to varying degrees to drive business transformation. Even if they own the building of capabilities, they'll have to rely on the broader ecosystem. Even if they outsource, their service provider has to rely on other partners to orchestrate the outcome. Therefore, orchestration will become increasingly important in different shapes and forms based on the intended objectives. And a word to the wise: enterprises shouldn't get fixated on one model of capability building by getting irrationally inspired by deep-pocketed technology vendors, as that will be counterproductive.

Finally, although Enterprise Resource Planning (ERP) used to suffice, enterprises will increasingly need to rely on Network Resource Planning (NRP), wherein orchestrated networks of resources enter the picture.

Our Digital Services research team recently released a report on Network Resource Planning platforms. These platforms, which are needed to orchestrate capabilities across the value stream, are not limited to technology, but span every aspect of an enterprise.

Please reach out to me at [email protected] to share how you are building capabilities in your organization.

AWS Outpost, Azure Stack, Google Anthos, and IBM Satellite: The Race to Edge-to-Cloud Architecture Utopia | Blog

In earlier blog posts, I discussed chapters 1 and 2 of the cloud wars. Now, let's look at chapter 3 of this saga, in which cloud vendors want to be present across the edge-to-cloud continuum.

Since the days of mainframes and Java virtual machines, enterprise technology leaders have been yearning for one layer of architecture to build once, deploy anywhere, and bind everything. However, Everest Group research indicates that 85 percent of enterprises believe their technology environment has become more complex and challenging to maintain and modernize.

As the cloud ecosystem becomes increasingly central to enterprise technology, providers like Amazon Web Services, Google Cloud, IBM, and Microsoft Azure are building what they call “Edge-to-Cloud” architectural platforms. The aspiration of these providers’ offerings is to have a consistent architectural layer underpinning different workloads. And the aim is to run these workloads anywhere enterprises desire, without meaningful changes.

Will this approach satisfy enterprises’ needs? The hopes are definitely high as there are some key enablers that weren’t there in earlier attempts.

The architecture is sentient

This is a topic we discussed a few years back in an application services research report. Although evolutionary architecture has been around for some time, architectural practices continue to be rigid. However, the rapid shortening of application delivery cycles does not provide architects with the traditional luxury of months to arrive at the right architecture. Rather, as incremental delivery happens, they are building intelligent and flexible architectures that can sense changing business requirements and respond accordingly.

Open source is the default

As containers and Kubernetes orchestration become the default ways to modernize and build applications, the portability challenge is taken care of, at least to some extent. Applications can be built once and ported, as long as the host runs a compatible OS kernel version.

Multi-cloud is important

We discussed this in earlier research. Regardless of what individual cloud providers believe or push their clients toward, enterprises require multi-cloud solutions for their critical workloads. This strategy requires them to build workloads that are portable across architectures.

The workload is abstracted

The stack components for workloads are being decoupled. This decoupling is not only about containerizing the workload, but about building service components that can run on their own stacks. This capability makes it possible to change parts of a workload when needed and to deploy different components across distributed systems at scale.

With all this, the probability of achieving architectural portability may indeed be different this time. However, enterprise technology leaders need to be aware of and plan for several things:

  • Evaluate the need for one architecture: In the pursuit of operational consistency, organizations should not push their developers and enterprise architects toward a single common underlying architecture. Different workloads need different architectures. The focus should be on seamless orchestration of different architectural choices.
  • Focus on true architectural consistency versus "provider stack" consistency: This consistency issue has been the bane of enterprise technology. Workloads work well as long as the underlying platforms belong to one provider. That is the reason most of the large technology providers are building their own hybrid offerings for Edge-to-Cloud. Although many are truly built on open source technologies, experienced architects know very well that porting workloads across platforms always requires some changes.
  • Manage overwhelming choices: Enterprise architects are struggling with the number of choices they now have across different clouds, terminologies, designs, and infrastructures, which makes their job of building a unified architecture extremely difficult. They need automated solutions that can suggest architectural patterns and choices to drive consistency and reliability.

So, what should enterprise architects now do?

Enterprise architects have long been regarded as the guardians of systems, as they set the broad-based strategic underpinning for technology systems. Going forward, they will be bombarded with more choices and provider-influenced material for selecting the underlying platforms on which to build workloads. All of the platforms will be built on a strong open source foundation and will claim interoperability and portability. Architects will need to be discerning enough to understand the nuances and the trade-offs they have to make. At the same time, they should be open to, rather than suspicious of, new technology choices. They must have transparent discussions with technology providers, evaluate the offerings against their business needs, and assess the drivers for a unified architecture.

What is your take on unified architecture from Edge-to-Cloud? Please share your thoughts with me at [email protected].

Ethical Cloud: It’s About More Than Data | Blog

At their very core, cloud vendors are responsible for many critical aspects of enterprises’ data, including how it’s treated, its portability, workload interoperability, and compliance. Going forward, clients’ and governments’ expectations of cloud vendors will extend far beyond data…they’ll be expecting them to deliver “ethical clouds.”

I believe ethical clouds need to have four key dimensions, in addition to data security.

Sustainable

Multiple studies over the years have looked at data centers’ massive power consumption. While Power Usage Effectiveness (PUE) has improved and power consumption growth has decelerated due to virtualization and cloud services, there is still a long way to go. Clients will hold their cloud vendors accountable not only for power consumption but also for the source of their power and their overall emissions. Of course, cloud vendors’ sustainability strategies should extend into other areas including real estate and water. But power is the right place to start. Some vendors, such as AWS, have already taken initial steps in this direction.

Social

Beyond classic corporate social responsibility, cloud vendors will need to demonstrate how they care for society and the ecosystem in which they operate. This means they will need to clearly express their hiring practices, how they treat tax liabilities across regions, their litigation history, how they manage IP, their leadership views on important global events, etc. In short, they need a very strong social index or social perception.

Steadfast

Although their clients' interests will always be the top priority for cloud vendors, there will be instances where those interests conflict with a government's. Cloud vendors will need a clear policy and philosophy for handling such matters. They will need to assist governments in order to honor their duty to the nations where they operate, but within a defined construct that keeps them from violating their commitments to their clients. For example, cloud vendors may soon be put under the same type of scrutiny that Apple was when it declined to create a "back door" into the iPhone.

Sensitive

At its heart, this is about the Golden Rule principle of treating others as you want to be treated. For cloud vendors, this means being sensitive to the law and order of the land in which they operate, avoiding behavior or actions that generate negative press or sentiment, and conscientiously addressing regulations, job creation, diversity, data sanctity, partner behavior, etc.

Cloud vendors also need to be sensitive about their internal structures. Their structures must ensure alignment, fairness, and transparency for all members of their ecosystem, including employees and partners, for example, system integrators. Embracing and engendering this type of environment will be just as important as what they deliver to their clients. Moreover, the vendors need to ensure the right behaviors from their employees and partners. And all discrepancies should result in investigation and suitable action. At the same time, employees and partners should be encouraged and empowered to register protest if they believe the vendor is shifting away from being an “ethical cloud” provider.

Over the years, in virtually all industries, organizations have faced significant repercussions for unacceptable financial, labor, and sustainability behaviors. And cloud vendors will soon have to operate under the same microscope.

Going forward, I believe cloud vendors will quickly find that success isn’t just about creating the best legally compliant technology solution; it’s also about living, and demonstrating, the spirit of being ethical.

What do you think about ethical cloud? Please share your thoughts with me at [email protected].

Cloud IaaS Versus SaaS: The Fight for Industry Cloud | Blog

A blog I wrote last year discussed the ugly market share war among the three top cloud infrastructure providers – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). Now we need to talk about how Independent Software Vendors (ISVs) like Oracle, Salesforce, and SAP are changing the battle with their industry-specific clouds.

Cloud IaaS vendors don’t have an industry cloud

The fact is that AWS, Azure, and GCP don't really have industry clouds. These cloud IaaS vendors enable clients to run business applications and services on top of their cloud platforms, but they haven't built industry-specific application or process capabilities. They acknowledge that their clients want to focus more on building applications than on infrastructure, which undercuts their positioning in the industry cloud market. The core of what they offer is compute, data, ML/AI, business continuity, and security, and they rely on technology and service partners to build industry-relevant solutions. For example, GCP partnered with Deloitte for cloud-based retail forecasting, and AWS joined with Merck and Accenture for a medicine platform. They are also partnering with core business application vendors such as Cerner and Temenos.

Cloud SaaS providers have an edge

ISVs have continued to expand their industry cloud offerings over the past few years. For example, in 2016 Oracle acquired Textura, a leading provider of construction contracts and payment management cloud services; SAP introduced its manufacturing cloud in 2018; and in 2019 Salesforce launched its CPG and manufacturing clouds. Further, Oracle and SAP have built solutions for specific industries such as retail, healthcare, and banking by focusing on their core capabilities of ERP, supply chain management, data analytics, and customer experience. And while Salesforce (SFDC) is still largely an experience-centric firm, it is now building customer experience, marketing, and services offerings tailored to specific industries.

So, what will happen going forward?

  • Industry cloud will change: Today's industry clouds are one more way of running a client's business; however, they are still not business platforms. Going forward, industry clouds will become something like a big IT park, where clients, partners, and other third parties come to a common platform to serve customers. It will be as much about data exchange among ecosystem players as about closed-wall operations. Enterprises in that industry can take advantage of the specific features they deem appropriate rather than building their own, and they will become "tenants" of the industry cloud vendor's or ISV's platform.
  • Cloud vendors will heavily push industry cloud: AWS, Azure, and GCP will push their versions of industry cloud in 2020 and beyond, with strong marketing and commercial campaigns. They’ll likely be tweaking their existing offerings and creating wrappers around their existing services to give them an industry flavor. But, of course, the leading ISVs have already launched their industry clouds and will expand them going forward.
  • Channel push will increase: Both the cloud infrastructure service providers and the ISVs will aggressively push their service partners – especially consulting firms like Accenture, Capgemini, Deloitte, and PwC. The cloud vendors will also push their technology partners to build solutions or “exclusively” migrate applications onto their clouds.
  • Mega acquisitions: Historically, there hasn't been any major acquisition activity between infrastructure providers and large software companies. But one of the top infrastructure providers might acquire a "horizontal" ISV that's making inroads into industry clouds, like Salesforce or Workday, rather than buying a vertical industry ISV. Disclaimer: I am not at all suggesting that any such acquisition is in the cards!

So, what should enterprises do?

  • Be flexible: Enterprises need to closely monitor this rapidly evolving market. Though the paths IaaS providers and ISVs take may not meaningfully conflict in the near future, there may be stranger partnerships on the horizon, and enterprises need to be flexible to take advantage of them.
  • Be cautious: Because the cloud vendors' channel partners are being pushed to sell their industry cloud offerings, enterprises need to fully evaluate them and their relevance to their businesses. Their evaluation should include not only business, technical, and functional fit, but also licensing rationalization, discount discussions, and talent availability for these platforms.
  • Be open: As the market disrupts and newer leaders and offerings emerge, enterprises need to be open to reevaluating their technology landscape to adopt the best-in-class solution for their businesses. This is as much about an open mindset as it is about internal processes around application development, delivery, and operations. Enterprise processes and people need to be open enough to incorporate newer industry solutions.

What do you think about industry clouds? Please share with me at [email protected].

Smartphones and 5G are the Keys to AR/VR Success | Blog

A Goldman Sachs Research report published in January 2016 stated that venture capitalists had pumped US$3.5 billion into the augmented reality (AR)/virtual reality (VR) industry in the previous two years and that AR and VR have the potential to become the next big computing platform.

But a recent PwC MoneyTree report stated that funding for augmented and virtual reality startups plunged by 46 percent to US$809.9 million in 2018, as compared to 2017. Indeed, multiple startups in the space shut down in 2019 because they were unable to deliver on their claims and make the technology economically viable for the masses.

It’s not just startups that are throwing in the towel on their investments. For example, a dwindling user base drove Google to shut down its Jump VR platform in June 2019, and Facebook-owned Oculus is closing its Rooms and Spaces services at the end of this month.

[Image: AR/VR blog graphic]

And the startups cited in the graphic above had sunk nearly $550 million in investments by the time they shuttered their doors.

So, what’s going wrong?

Problems with present-day AR/VR

New technologies, particularly those for the consumer market, invariably need hype to succeed. But, despite all the buzz around how AR/VR can change the way consumers interact with commercial and non-commercial entities (like healthcare providers and educational institutions), multiple problems are getting in the way of mass adoption.

  1. Cumbersome hardware: Despite two to three generations of improvement, the hardware for these technologies remains bulky and difficult to set up or use. More research is needed to bring the advanced optics and computation of head-mounted displays (HMDs) to a usable level.
  2. High cost: Nearly all standalone AR HMDs cost over $1,000, and those for VR are over $150. At these price points, the vast majority of purchasers are technology enthusiasts and novelty buyers.
  3. Poor content: While the premise of buying an HMD is to consume and interact with content in an engaging way, the flood of poorly designed experiences hardly makes the case for purchasing one, even for those who can afford it.
  4. Selling an idea instead of a product: This is perhaps the biggest reason for the slew of closures in recent months. While AR and VR both have compelling use cases, the entrepreneurs and enterprises providing the products and platforms promised the sky and underdelivered on expectations.

So, what should enterprises do to change the narrative behind and fate of AR/VR?

Here are our recommendations.

Focus on developing smartphone-based AR

AR adoption is far outpacing VR adoption, not only because it adds to users’ reality rather than replacing it, but also because smartphones make its cost much lower for consumers. Indeed, smartphone-based AR has gone mainstream in the retail and gaming spaces; examples are IKEA, Nike, Nintendo, and Sephora, all of which have deployed applications for interactive experiences. The buzz will stay alive, and the uptake will continue to grow as an ever-increasing number of developers incorporate AR elements into their applications.

Embrace 5G with open arms

Fifth-generation (5G) wireless promises to bring high bandwidth and reliable low latency in data communications. Along with the proliferation of edge computing, 5G will help move processing-intensive tasks closer to the edge of the network and content closer to the user. In the near future, telecom operators could provide dedicated network slices for AR/VR applications, greatly reducing network latency. By enabling faster processing and increased proximity to content, 5G will boost the overall user experience. And this will lead to increased adoption.

But, before going all in, enterprises should partner with communication service providers to test 5G PoCs for AR/VR. Doing so will help them better prepare for scaled adoption as HMDs become less cumbersome.

By placing hype before substance, AR/VR providers created the current low-growth environment. We believe that focusing on smartphone- and 5G-based AR/VR will increase both investor confidence and customer adoption.

What is your view on the AR/VR space and the emergence of 5G as a savior? Please share with us at [email protected], [email protected], and [email protected].

Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud | Blog

Our research suggests that more than 90 percent of enterprises around the world have adopted cloud services in some shape or form. Additionally, 46 percent of them run a multi-cloud environment, and 59 percent have adopted more advanced concepts like containers and microservices in their production setup. As they go deeper into the cloud ecosystem to realize even more value, they need to be careful about two seemingly similar but vastly different concepts: cloud native and native cloud.

What are Cloud Native and Native Cloud?

Cloud native used to refer mainly to container-centric workloads. However, at Everest Group, we define cloud native as the building blocks of digital workloads that are scalable, flexible, resilient, and responsive to business demands for agility. These workloads are developer-centric and operationally better than their "non-cloud" brethren.

Earlier, native cloud meant any workload using cloud services. Now – just as in the mobile world, where apps are "native Android or iOS," meaning specifically built for those operating systems – native cloud refers to leveraging the capabilities of a specific cloud vendor to build a workload that is not available "like-to-like" on other platforms. These are innovative, disruptive offerings such as cloud-based relational database services, serverless instances, developer platforms, AI capabilities, workload monitoring, and cost management. They are not portable to other cloud platforms without a huge amount of rework.

With these evolutions, we recommend that enterprises…

Embrace Cloud Native

Cloud native workloads provide the fundamental flexibility and business agility enterprises are looking for. They thrive on cloud platforms’ core capabilities without getting tied to them. So, if need be, the workloads can easily be ported to other cloud platforms without any meaningful rework or drop in performance. Cloud native workloads also allow enterprises to build hybrid cloud environments.

And Be Cautious of Native Cloud

Most cloud vendors have built meaningfully advanced capabilities into their platforms. Because these capabilities are largely native to their own cloud stack, they are difficult to port across to other environments without considerable investment. And if – more likely, when – the cloud vendor makes changes to its functional, technical, or commercial model, the enterprise finds it tough to move away from the platform…the workloads essentially become prisoners of that platform.

At the same time, native cloud capabilities are fundamentally disruptive and very useful for enterprise workloads. However, to adopt such advanced features in the right manner and still be able to build a multi-cloud strategy, enterprises need the necessary architectural, technical, deployment, and support capabilities. For example, in a serverless application, the architect can put the business logic in a container and the event trigger in serverless code. With that approach, when porting to another platform, the container can be ported directly and only the event trigger needs to change.
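As a minimal sketch of that separation – assuming an AWS Lambda entry point, with the SERVICE_URL, event shape, and order service all being hypothetical illustrations – the serverless function below only adapts the vendor-specific event and forwards it to the containerized business logic:

```python
# Sketch: the serverless function is a thin, cloud-specific event adapter;
# the business logic lives in a portable container reachable over HTTP.
# SERVICE_URL and the event format are illustrative assumptions.
import json
import os
import urllib.request

SERVICE_URL = os.environ.get("SERVICE_URL", "http://orders.internal:8080/orders")

def handler(event, context):
    """AWS Lambda entry point: unpack the trigger, forward, return."""
    # The only vendor-specific code: unpacking this provider's event shape.
    payload = json.dumps(event.get("detail", {})).encode("utf-8")
    req = urllib.request.Request(
        SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The portable part: plain HTTP to the containerized order service.
    with urllib.request.urlopen(req, timeout=5) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode("utf-8")}
```

Porting to another cloud then means rewriting only the handler signature and event unpacking; the container, and the business logic inside it, moves unchanged.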

Overall, enterprise architects need to be cautious of how deep they are going into a cloud stack.

Going Forward

Given that feature and functionality parity across cloud platforms is becoming common, cloud vendors will increasingly push enterprises to go deeper into their stacks. The on-premises cloud stack offered by all three large hyperscalers – Amazon Web Services, Azure, and Google Cloud Platform – is an extension of this strategy. Enterprise architects need a well-thought-out plan if they want to go deeper into a cloud stack. They must evaluate the interoperability of their workloads and run simulations at every critical juncture. Although it is unlikely that an enterprise would completely move off a particular cloud platform, enterprise architects should make sure they have the ability to compartmentalize workloads so that they work in unison and interoperate across a multi-cloud environment.

Please share your cloud native and native cloud experiences with me at [email protected].

The Top Three Cloud Vendors Are Fighting An Ugly War, And It’s Only Going To Get Uglier | Blog

With the massive size of the public cloud market, it's reasonable to assume that there's plenty of pie for each of the top three vendors – Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure (Azure) – to get their fill.

But the truth is that they’re all battling to capture even larger slices. While this type of war has happened in other technology segments, this one is unique because the market is growing at 30-40 percent year-over-year.

Here are a few examples of the current ugly wars these vendors are waging against each other.

AWS is luring away Azure customers. Channel checks suggest that AWS is incentivizing clients to move their Windows workloads to Linux. The next step is to move their SQL Server workloads to other databases (e.g., PostgreSQL). Of course, it won't stop there; there will be an entire migration strategy in place. And there have even been a few instances in which AWS has funded clients' early PoCs for this migration along with the implementation partner.

Azure is pushing for AWS migration. It isn't uncommon for many mid-sized implementation partners to make their client pitch solely on the fact that they can migrate AWS virtual instances to Azure and achieve 20-30 percent, or more, in cost savings. It also isn't uncommon for Microsoft to bundle many of its offerings, e.g., Office 365, to create an attractive commercial package for its broader cloud portfolio against AWS, which lacks an enterprise applications play.

GCP is pushing Kubernetes cloud and Anthos. GCP's key argument against AWS and Azure is that they are both "legacy clouds." The Kubernetes cloud platform story is becoming very interesting and relevant for clients. Moreover, for newer workloads, such as AI, machine learning, and containers, GCP is pushing hard to take the lead.

Each of these vendors will continue to find new avenues to create trouble for each other. Given that Azure and GCP are starting from a low base, AWS has more to lose.

So, how will the cloud war play out? Three things will happen going forward.

Stack lock-in

The vendors have realized that clients can relatively easily move their IaaS, and even PaaS, workloads to another cloud. Therefore, they'll push their clients to adopt native platform offerings that cannot easily be ported to different clouds (e.g., serverless). While some of the workloads will be interoperable across other clouds, parts will run only on one cloud vendor's stack.
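One common hedge against this, sketched below in Python under the assumption of a simple object-storage workload: keep vendor SDK calls behind a thin internal interface, so only an adapter changes if a second cloud enters the picture. This works for commodity services; truly native offerings such as proprietary serverless runtimes resist this treatment, which is exactly the lock-in vendors are counting on.

```python
# Sketch: a thin storage interface keeps vendor SDKs at the edges of the
# codebase, so application code stays portable across clouds.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # AWS SDK
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage  # GCP SDK
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()

# Application code depends only on ObjectStore; swapping clouds becomes a
# one-line change where the store is constructed.
def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
    store.put(f"invoices/{invoice_id}.pdf", pdf)
```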

Preferred partnership for workloads

While the vendors will acknowledge that implementation partners will always have cloud alliances, they’ll push to have preferred partner status for specific workloads such as database lift and shift, IoT, and AI. For this, most cloud vendors will partner with strategy consulting firms and implementation partners to shape enterprises’ board room agenda.

Migration kits

In 2018, Google acquired cloud migration specialist Velostrata. This year, both AWS and Azure launched migration kits targeting each other’s clients. This battle will soon become even fiercer, and will encompass not only lift and shift VM migration, but also workloads such as database instances, DevOps pipelines, application run time, and even applications.

With the cloud giants at war, enterprises need to be cautious of where to place their bets. They need to realize that working with cloud vendors will become increasingly complex, because it’s not only about the offerings portfolio but also the engagement model.

Here are three things enterprises should focus on:

  • Ensure interoperability and migration: Enterprises need to make the cloud vendors demonstrate evidence of easy workload interoperability with, and migration to, other cloud platforms. They should also evaluate the target cloud vendor's own native migration toolkits and services, regardless of what the selected implementation partner may use.
  • Stress test the TCO model: Enterprises need to understand the total cost of ownership (TCO) of the new services offered by the cloud vendors. Most of our clients think the cloud vendors’ “new offerings” are expensive. They believe there’s a lack of translation between the offerings and the TCO model. Enterprises should also stress test the presented cost savings use cases, and ask for strong references.
  • Get the right implementation partner: For simpler engagements, the cloud vendors are increasingly preferring smaller implementation partners as they are more agile. Though the vendors claim their pricing model doesn’t change for different implementation partners, enterprises need to ensure they are getting the best commercial construct from both external parties. For complex transformations, enterprises must do their own evaluation, rather than rely on cloud vendor-attached partners. Doing so will become increasingly important given that most implementation partners work across all the cloud vendors.

The cloud wars have just begun, and will become uglier going forward. The cloud vendors’ deep pockets, technological capabilities, myriad offerings, and sway over the market are making their rivalries different than anything your business has experienced in the past. This time, you need to be better prepared.

What do you think about the cloud wars? Please write to me at [email protected].

Should You Scale Agile/DevOps? | Blog

Scaling in an application development environment can take many different shapes and forms. For the purposes of this blog, let’s agree that scaling implies:

  • From one team to a project
  • From one project to a program
  • From a program to multiple programs
  • From multiple programs to a portfolio
  • From a portfolio to a business
  • From a business to the entire enterprise.

Now that we’ve set the stage…our research suggests that over 90 percent of enterprises have adopted some form of Agile, and 63 percent believe DevOps is becoming a de facto delivery model. Having tasted initial success, most enterprises plan to scale their Agile/DevOps adoption.

The first thing we need to address here is a point of confusion: does increasing adoption imply scaling?

Purists may argue that scaling across different projects isn't really scaling unless they are part of the same program, because scaling by its very nature creates resource constraints, planning issues, increased overhead, and entropy. However, the resource constraints primarily relate to shared assets, not individual teams. So, if team A on one program and team B on another both adopt Agile/DevOps, neither team will be meaningfully impacted. Both can have their own tools, processes, talent, and governance models, which implies that this type of scaling isn't really challenging. But such a technical definition of scaling is of no value to enterprises. If different projects or programs within the organization adopt Agile and DevOps, they should just call it scaling. Doing so is easier and more straightforward.

The big question is, can – and should – Agile/DevOps be scaled?

Some people argue that scaling these delivery models negates the core reasons Agile was developed in the first place: that micro teams should thrive on the freedom to have their own rhythm and velocity, releasing code as fast as they can instead of getting bogged down in non-core tasks like documentation and meeting overload.

While this argument is solid in some respects, it doesn't consider the broader negative impacts on the enterprise. The increasingly complex nature of software requires multiple teams to collaborate. If they don't, the "Agile/DevOps islands" that work at their own pace, with their own methods and KPIs, cannot deliver against the required cost, quality, or consistent user experience. Talent fungibility, for example, is an immediate challenge. Enterprises end up buying many software licenses, using various open source tools, and building custom pipelines. But because each team defines its own customizations to tools and processes, it's difficult to hand work over to new employees when needed.

So, why is scaling important?

Scaling delivers higher efficiency and outcome predictability, especially when the software is complex. It also tells the enterprise whether or not it is doing Agile/DevOps right. Teams take pride in measuring themselves on the outcomes they deliver, but they are often poorly run and hide their inefficiencies through shortcuts. This ends up impacting employees' work-life balance, dents technical and managerial skill development, increases overall software delivery costs, and may cause regulatory and compliance issues.

What’s our verdict on scaling Agile/DevOps?

We think it makes sense most of the time. But most large enterprises should approach it in a methodical manner and follow a clear transition and measurement process. The caveat is that enterprise-wide scale may not always be appropriate or advantageous. Enterprises must consider the talent model, tools investments, service delivery methods, the existence of a platform that provides common services (e.g., authentication, APIs, provisioning, and templates), and flexibility for teams to leverage the tool sets they are comfortable with.
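To make the "common services" point concrete, here is a hypothetical sketch – every name in it, from PLATFORM_URL to the endpoints, is an illustrative assumption rather than any real product – of a thin shared client that gives every team the same authentication and provisioning path while leaving their delivery cadence alone:

```python
# Hypothetical sketch of a shared platform-services client that Agile/DevOps
# teams reuse instead of each rebuilding auth and provisioning on their own.
# All endpoints and names are illustrative assumptions.
import json
import os
import urllib.request

PLATFORM_URL = os.environ.get("PLATFORM_URL", "https://platform.internal")

def get_service_token(team: str) -> str:
    """Fetch a short-lived credential from the shared auth service."""
    with urllib.request.urlopen(f"{PLATFORM_URL}/auth/token?team={team}") as r:
        return json.load(r)["token"]

def provision_environment(team: str, template: str = "web-service") -> dict:
    """Request a standard environment from a curated, shared template."""
    body = json.dumps({"team": team, "template": template}).encode("utf-8")
    req = urllib.request.Request(
        f"{PLATFORM_URL}/provision",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {get_service_token(team)}",
        },
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```

Teams keep their own rhythm and backlogs; what they share is the paved road underneath.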

Scaling is not about driving standardization across Agile/DevOps. It's about building a broader framework to help Agile/DevOps teams drive consistency where and when possible. Our research on how to scale Agile/DevOps without overcomplicating it may help you drive the outcomes you expect.

What has been your experience scaling Agile/DevOps adoption? Please contact me to share your thoughts.

You are on AWS, Azure, or Google’s Cloud. But are you Transforming on the Cloud? | Blog

There is no questioning the ubiquity of cloud delivery models, independent of whether they’re private, public, or hybrid. It has become a crucial technology delivery model across enterprises, and you would be hard pressed to find an enterprise that has not adopted at least some sort of cloud service.

However, adopting the cloud and leveraging it to transform the business are very different things. In the Cloud 1.0 and Cloud 2.0 waves, most enterprises started their adoption journey with workload lift and shifts. They reduced their Capex and Opex spend by 30-40 percent over the years. Enamored with these savings and believing their job was done, many stopped there. True, the complexity of the lifted-and-shifted workloads increased in the move from Cloud 1.0 to Cloud 2.0 – e.g., from web portals to collaboration platforms to even ERP systems. But it was still lift and shift, with minor refactoring.

This fact demonstrates that most enterprises are, unfortunately, treating the cloud as just another hosting model, rather than a transformative platform.

Yet, a few forward-thinking enterprises are now challenging this status quo for the Cloud 3.0 wave. They plan to leverage the cloud as a transformative model in which native services are built in not only to modernize the existing technology landscape but also to support cloud-based analytics, IoT-centric solutions, advanced architectures, and very heavy workloads. The main difference with these workloads is that they won't just "reside" on the cloud; they will use the fundamental capabilities of the cloud model for perpetual transformation.

So, what does your enterprise need to do to follow their lead?

Of course, you need to start by building the business case for transformation. Once that is done, and you’ve taken care of the change management aspects, here are the three key technology-centric steps you need to follow:

Redo workloads on the cloud

Many monolithic applications, like data warehouses and sales applications, have already been ported to a cloud model. You need to break down the ones you use based on their importance and the extent of their technical debt to determine the transformation needed. Many components may be taken out of the existing cloud and ported in-house or to other cloud platforms based on the value they can deliver and their architectural complexity. Some components can leverage cloud-based functionalities (e.g., for data analytics) and drive further customer value. You need to think about extending the functionality of these existing workloads to leverage newer cloud platform features such as IoT-based data gathering and advanced authentication.

Revisit new builds on the cloud

Our research suggests that only 27 percent of today's enterprises are meaningfully building and deploying cloud-native workloads – workloads with self-scaling, tuning, replication, back-up, high availability, and cloud-based API integration. You must proactively assess whether your enterprise needs cloud-native architectures to build out newer solutions. Of course, cloud native does not mean every module should leverage the cloud platform. But a healthy share of the workload should incorporate some elements of cloud-native design.
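For a flavor of what those elements look like in code, here is a minimal, illustrative Python sketch of one of them: liveness and readiness endpoints (following the common Kubernetes /healthz and /readyz convention) that let the platform restart, replace, and scale instances of a workload without operator intervention.

```python
# Sketch: health endpoints are one small building block of a cloud-native
# workload, enabling self-healing and scaling by the platform.
from http.server import BaseHTTPRequestHandler, HTTPServer

dependencies_ready = True  # in real code: check the DB, queues, downstream APIs

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":    # liveness: the process is up
            self._reply(200, b"ok")
        elif self.path == "/readyz":   # readiness: safe to receive traffic
            if dependencies_ready:
                self._reply(200, b"ready")
            else:
                self._reply(503, b"not ready")
        else:
            self._reply(404, b"not found")

    def _reply(self, code: int, body: bytes) -> None:
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```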

Relook development and IT operations on the cloud

Many enterprises overlook this part, as they believe the cloud's inherent efficiency is enough to transform their operating model. Unfortunately, it does not work that way. For cloud-hosted or cloud-based development, you need to relook at your enterprise's code pipelines, integrations, security, and various other aspects of IT operations. The best practices of the on-premises era continue to be relevant, albeit in a different model (such as tweaks to the established ITSM model). Your developers need to get comfortable with leveraging abstract APIs, rather than worrying about what is under the hood.

The Cloud 3.0 wave needs to leverage the cloud as a transformation platform instead of just another hosting model. Many enterprises limit their cloud journey to migration and transition. This needs to change going forward. Enterprises will also have to decide whether they will ever be able to build so many native services in their private cloud. The answer is probably not. Therefore, the strategic decision of leveraging hybrid models will become even more important. The service partners will also need to enhance their offerings beyond migration, transformation during migration, and management. They need to drive continuous evolution of workloads once ported or built on the cloud.

Remember, the cloud itself is not magic. What makes it magical is the additional transformation you can derive beyond the cloud platform’s core capabilities.

What has been your experience in adopting cloud services? Please write to me at [email protected].
