
Author: Yugal Joshi

Cloud Wars Chapter 6: Destination Versus Architecture – Are the Cloud Vendors Fighting the Right Battle? | Blog

Most cloud vendors are obsessed with moving clients to their platforms. They understand that although their core services, such as compute, are no longer differentiated, there is still a lot of money to be made just by hosting their clients’ workloads. And they realize that this migration madness has sufficient legs to last for at least three to five years. No wonder migration spend constitutes 50-60 percent of services spend on cloud.

What about the workloads?

The cloud destination – meaning the platform a workload runs on – does not by itself make that workload run better. To be modernized, or built natively, a workload needs an open architecture underpinned by software that lets applications be built once and run on any platform. Kubernetes has become a standard way of building such applications, and today every major vendor has its own Kubernetes service, such as Amazon EKS, Azure Kubernetes Service, Google Kubernetes Engine, IBM Cloud Kubernetes Service, and Oracle Container Engine for Kubernetes.

However, Kubernetes alone does not create an open and portable application. It runs containerized workloads; if those workloads themselves are not portable, Kubernetes services defeat the purpose. Containerized workloads need to be architected so that user space and kernel space are well designed, which is relevant for both new and modernized workloads. For portability, the system architecture needs to be built on a layer that is open and portable. Without this, the entire migration madness will simply add one more layer to enterprise complexity. Interestingly, any cloud vendor that helps clients build such workloads will almost always become the preferred destination platform, as clients benefit from combining the architecture, build, and run environments.
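The portability principle above can be sketched in a few lines: keep every platform-specific detail out of the application code and inject it from the environment, so the same container image runs unchanged on EKS, AKS, GKE, or on-premises. This is a minimal illustration, not any vendor's prescribed pattern, and the setting names are made up:

```python
import os

def load_config() -> dict:
    """Read platform-specific settings from the environment so the
    application code itself contains no cloud-specific references.
    The same container image can then run on any Kubernetes service;
    only the injected environment differs (setting names are invented)."""
    return {
        "db_url": os.environ.get("DB_URL", "postgres://localhost:5432/app"),
        "queue_url": os.environ.get("QUEUE_URL", "amqp://localhost"),
        "region": os.environ.get("REGION", "local"),
    }

# The workload asks "what is my configuration?", never "which cloud am I on?"
config = load_config()
```

The same idea scales up: databases, queues, and identity providers are reached through injected endpoints and open protocols rather than hard-coded vendor SDK calls.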

Role of partners of cloud vendors

Cloud vendors need to work with partners to help clients build new generation workloads that are easily movable and platform independent. The partners include multiple entities such as ISVs and service providers.

ISVs need to embed such open and interoperable elements into their solution so that their software can run on any platform. Service providers need to engage with clients and cloud vendors to build and modernize workloads by using open, interoperable principles. As there is significant business in cloud migration, there is a risk that the service partners will get blinded and become more focused on the growth of their own cloud services business than on driving value for the client. This is a short-term strategy that can help service providers meet their targets. But it will not make the service provider a client’s strategic long-term partner.

Role of enterprises

There is significant pressure on enterprise technology teams to migrate workloads to the cloud. Many of the clients we work with take pride in telling us they will move more than 80 percent of their workloads to the cloud in the next two to three years. There is limited deliberation on which workloads need to be rebuilt on portable middleware, or on whether they need a runtime that can support an open and flexible architecture. Unfortunately, many enterprise executives have to show progress in cloud adoption. And though enterprise architects and engineering teams do come together to evaluate how a workload should be moved to the cloud, there is little discussion of building an open architecture for these workloads. A bright spot is that there do seem to be good architectural discussions around newer workloads.

Enterprises will soon realize they are hitting the wall of cloud value because they did not meaningfully invest in building a stronger architecture. Their focus in moving their workloads to their primary cloud vendor is overshadowing all the other initiatives they must undertake to make their cloud journey a success.

Do you focus on the architecture or the destination for your workloads? I’d enjoy hearing your experiences and thoughts at [email protected].

Pega Platform: Constrained Talent and Higher Implementation Spend Limits Adoption | Blog

A BigTech battle seems to be heating up in the business process management (BPM) and CRM space. So, after completing our PEAK Matrix Assessment on Pega Services, our Enterprise Platform Services team conducted a Voice of the Market study on the company’s platform. They interviewed the 16 global IT service providers covered in the PEAK Matrix and 35+ of Pega’s enterprise clients to gauge their reactions to the Pega platform and Pega’s main competitors. The individuals we interviewed ranked Appian, Bizagi, IBM, Pega, and Salesforce “above, on, or below average” in multiple areas and drilled down on those areas to explain their rankings.

Here’s a summary of how Pega fared from that study.

Strong technology sophistication

Pega’s depth of products and its ability to enable rapid process automation, leveraging low-code development and next-generation technology capabilities like artificial intelligence (AI), machine learning (ML), and robotic process automation (RPA), helped it receive the highest score among the providers. Enterprise clients rated its peers above average on their platform capabilities but cited dependencies on third-party capabilities as a key gap.

Constrained talent availability

The enterprise clients perceive a high demand-supply gap for Lead System Architects (LSAs), Customer Decision Hub (CDH) and marketing specialists, and business architects for the Pega platform, so it received a below-average score on this parameter. In contrast, its peers have built a sizable talent pool, earning them an above-average score.

Complex licensing construct

Pega’s clients said its commercial flexibility could be better and that it is often difficult to understand Pega’s licensing construct. In contrast, they believe Pega’s peers are effectively bundling their offerings and providing flexible licensing and contracting options.

Good customer experience

According to customers, Pega’s collaboration with systems integrators (SIs) in driving large engagements earned it an above-average score, but they believe its proactivity in responding to client questions could be better. Its peers also received an above-average score for their client-centric engagement.

What’s working well for Pega

  • Strong product portfolio: Pega’s well-knit product offerings across BPM and key CRM areas, along with domain-specific capabilities, help position it as a transformation partner of choice for enterprises in the BFSI, telecom, HLS, and public sector industries. Enterprises in these sectors have consistently rated Pega higher than average for its technology capabilities in the BPM domain.
  • Seamless integration and high customizability: Pega’s ability to easily integrate with all the major enterprise applications and its high level of customizability addressing complex use cases are viewed as its unique strengths. Service providers consider Pega a vendor-agnostic platform and cited multiple instances of adoption along with other key enterprise platforms to drive expansive digital transformation for their clients.
  • High enterprise mindshare: Pega’s transformational proof points with quantitative business impact in the above-mentioned industries across areas such as case management and BPM, low-code platform, RPA, customer service, and customer decision hub have instilled confidence among enterprise clients looking for end-to-end support.

What do customers expect from Pega in 2021?

  • Expanded partner ecosystem: Enterprises in emerging European markets, MEA, and Latin America cited that Pega needs to further its SI network investments across these regions to enhance delivery capabilities. They also believe Pega should enable visibility into its partners’ delivery capabilities by further structuring its partner program and upgrading its partner portal.
  • Ensure talent availability: As Pega’s product portfolio rapidly expands beyond its core focus, enterprises have cited difficulties in hiring, retaining, and training resources on newer modules. So, they believe Pega should better structure its certification programs and make larger investments in well-curated learning and training initiatives for enterprises and service providers.
  • Deliver out-of-the-box solutions: Enterprises perceive Pega to be a complex platform that increases time-to-market and implementation spend. They also believe Pega should further invest in enhancing mindshare and adoption of Pega Marketplace to facilitate the development of out-of-the-box solutions.

While Pega’s rapidly expanding product portfolio makes it one of the important vendors for digital process transformation initiatives, it needs to continuously evaluate its platform capabilities and make targeted investments to consistently drive higher value for its clients.

How has your experience been with Pega? Please write to us at [email protected] and [email protected].

The State of Cloud-Driven Transformation | In the News

From the realities of remote work to the increased focus on digital commerce, organizations are facing new challenges to their IT and security infrastructure—not to mention their general viability as a business. In response, many are rethinking their strategies and accelerating changes to their technology stacks to best confront the new future—primarily through the cloud.

While cost reduction or flexibility remains an important driver of cloud adoption, organizations are also seeking a number of other business benefits, from agility to innovation. “Whereas once organizations had to choose between quality, cost, and time when investing in new technology, the cloud operating model enables them to be more aspirational,” says Yugal Joshi, vice president of digital, cloud, and application services research for strategic IT consultancy and research firm Everest Group. “They can achieve the holy trinity of technology benefits. Organizations today want it all.”

Read the full Harvard Business Review report


Sustainable Business Needs Sustainable Technology: Can Cloud Modernization Help? | Blog

Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:

  • Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive back-up to counter failure events, essentially multiplying electricity consumption.
  • Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Developers, QA, security, and pre-production teams end up building their own environments. Yet, whenever a system is built, the entire infrastructure needs to be configured even though the team needs only 10-20% of it. Thus, most of the electricity consumption ends up powering capacity that isn’t needed at all.
  • Run: Operations teams have to make do with what the upstream teams have given them. They can’t take down systems to save power on their own, as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to keep systems “over-available” even when they aren’t being used. The run teams didn’t – and still don’t – have real-time insight into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power.
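To see why the build problem matters, a rough back-of-envelope sketch using the illustrative figures above (one production system, several always-on non-production copies, 10-20% utilization) shows how much power goes to idle capacity. All of the numbers are assumptions for illustration only:

```python
def wasted_power_kw(prod_kw: float, nonprod_copies: int, utilization: float) -> float:
    """Estimate power drawn by always-on capacity that is not actually used.
    Assumes, purely for illustration, that each non-production copy draws
    roughly the same power as production but is busy only `utilization`
    fraction of the time."""
    nonprod_kw = prod_kw * nonprod_copies
    return nonprod_kw * (1.0 - utilization)

# One 10 kW ERP production system, 8 non-production copies at 15% utilization:
idle_kw = wasted_power_kw(10.0, 8, 0.15)  # 68 kW powering idle capacity
```

Even with generous assumptions, the idle draw of non-production copies dwarfs the production system itself, which is exactly the over-provisioning that on-demand provisioning attacks.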

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, despite digital services from large data center and cloud vendors growing sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.

Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud’s runtime access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, because cloud instances can be provisioned quickly, they are fault tolerant by design and don’t rely on excessive back-up systems. They can be designed to go down, with their back-ups coming online immediately rather than running forever. The development, test, and operations teams can provision infrastructure as and when needed, and shut it down when their work is completed.

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads with failure as an essential element to be tackled through orchestration of runtime cloud resources, instead of relying on traditional failover methods that promote overconsumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model, in which the application code and infrastructure lie idle and come online only when needed. A while back I wrote a blog about how AI can be used to compose an application at runtime instead of keeping it always available.
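The event-driven model described above can be sketched as a function-as-a-service style handler: the code consumes no resources between invocations, and the platform provisions compute only when an event arrives. The event shape and field names here are invented for illustration:

```python
def handle_order_event(event: dict) -> dict:
    """A function-as-a-service style handler: it holds no state and uses no
    compute between invocations. The platform provisions resources only when
    an event arrives and releases them when the handler returns.
    The event structure is a made-up example, not any vendor's schema."""
    order_total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": order_total}

# Invoked by the platform on demand, not polled by an always-on process:
result = handle_order_event(
    {"order_id": "A-17", "items": [{"price": 4.0, "qty": 3}]}
)
```

Contrast this with a traditional service, where a process sits in a loop waiting for work and keeps its infrastructure powered whether or not any orders arrive.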

Server virtualization played an important role in reducing power consumption. However, now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions. Though cloud sprawl is stretching the operations teams, newer automated monitoring tools are becoming effective in providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances, which enables the operations team to focus on automating and optimizing, instead of managing systems that are always on.
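The zombie-killing logic mentioned above might look like the sketch below: flag instances whose utilization has stayed below a threshold for longer than a grace period so they can be stopped and their power reclaimed. Field names and thresholds are illustrative, not taken from any real monitoring tool:

```python
from datetime import datetime, timedelta

def find_zombies(instances, cpu_threshold=2.0, idle_for=timedelta(days=7), now=None):
    """Return the IDs of instances whose average CPU has stayed below
    `cpu_threshold` percent and that were last busy before `now - idle_for`.
    These are candidates to stop and reclaim power. The record fields and
    default thresholds are illustrative assumptions."""
    now = now or datetime.utcnow()
    cutoff = now - idle_for
    return [
        inst["id"]
        for inst in instances
        if inst["avg_cpu_pct"] < cpu_threshold and inst["last_busy"] < cutoff
    ]

# An instance untouched for a month versus a busy one:
fleet = [
    {"id": "i-zombie", "avg_cpu_pct": 0.4, "last_busy": datetime(2021, 5, 1)},
    {"id": "i-busy", "avg_cpu_pct": 35.0, "last_busy": datetime(2021, 5, 31)},
]
zombies = find_zombies(fleet, now=datetime(2021, 6, 1))  # ["i-zombie"]
```

In practice the instance records would come from a monitoring tool's API, and the stop action would be gated by tags or approval rather than applied blindly.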

Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on movement of workload from on-premises to the cloud. And it doesn’t usually consider architectural changes that can reduce power consumption, even if it’s a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. However, enterprises can only think of building a sustainable business through sustainable technology when they’ve adopted cloud modernization as a potent force to reduce power and carbon emission.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emission, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].

Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight? | Blog

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently do. I’ve covered the ugly fight among the MAGs (as they’re collectively known) in other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized that they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from the public cloud, which opened enterprise leaders’ eyes to the importance of fit-for-purpose in addition to a generic cloud. So, before they move more complex workloads to the cloud, they want to be sure they get their architectural choices and cloud partner absolutely right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it’s hard to argue that there’s public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strong-arm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – on the condition that the enterprise goes all-in and builds architecture for workloads on their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn’t much different from any other enterprise technology war where the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we’re seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn’t the end of it. Given enterprises’ increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move to the public cloud space was obvious for IBM and Oracle, as they’re already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they’re here now. They’re focused on “industrial grade” workloads that have strategic value for enterprises and on building open source as core to their offering to propagate multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction rises to a fever pitch with serverless, event-driven applications, and functions-as-a-service, it becomes easier to meet the lofty ideal of Service-Oriented Architecture – a fully abstracted underlying infrastructure – which is also what a true multi-cloud environment embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced by API calls applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.
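The abstraction argument above can be illustrated with a thin routing layer: application code invokes a named function, and a router decides which platform executes it, so switching clouds becomes a configuration change rather than a rewrite. The class and backend names below are hypothetical stand-ins for vendor SDK wrappers:

```python
from typing import Callable, Dict

class FunctionRouter:
    """A thin abstraction over function invocation: application code calls
    the router, and the router decides which platform backend executes the
    payload. Backends here are plain callables standing in for what would be
    wrappers around each vendor's SDK (all names are illustrative)."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[dict], dict]] = {}

    def register(self, platform: str, backend: Callable[[dict], dict]) -> None:
        self._backends[platform] = backend

    def invoke(self, platform: str, payload: dict) -> dict:
        # Switching clouds is a routing decision, not an application rewrite.
        return self._backends[platform](payload)
```

This is also why vendors push higher-value, proprietary services: a workload built against such a neutral seam is far easier to move than one wired directly to platform-specific APIs.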

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with MAGs and AIO? Please share with me [email protected].

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. It takes care of the application lifecycle as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying code hidden from the user. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – much like the good old What You See Is What You Get (WYSIWYG) editors of the HTML age.

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need to have other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that will drive better stickiness to their cloud platform. And as we all know, nothing drives stickiness better than an application built over a platform. This understanding implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, necessary financial investments, and rapid changes in business demand, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of data for their clients, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness because low-code platforms are notorious for “non interoperability” – it’s very difficult to migrate from one to another. But this isn’t just about stickiness. It’s one more initiative in the journey toward owning enterprises’ technology spend by building a cohesive suite of services. In GCP’s case, it’s realizing that its API-centric assets offer a goldmine by stitching together applications. For example, Apigee helps in exposing APIs from different sources and platforms. Then GCP’s AppSheet, which it acquired last year, can use the data exposed by the APIs to build business applications. Microsoft, on the other hand, is focusing on its Power platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there’s no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars – grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud – the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on cloud hyperscalers, and now – with a competing offering in play – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up given the mega cloud vendors’ increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys, and this additional complexity further exacerbates their problems. Many enterprises make their infrastructure teams lead the cloud journey. But these teams don’t necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling with the amount of their cloud spend and how to make sense of the spend. Cloud vendors introducing low-code platform offerings into their cloud service mix is going to make their task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Enterprise Capabilities: Own, Outsource? Why Not Orchestrate? | Blog

Traditionally, enterprises have seen ownership and outsourcing as mutually exclusive, binary models for building capabilities and running their businesses. The typical framework for deciding when to do which generally pivoted on issues such as strategic importance, internal capabilities, time to market, cost to do in-house, and risk to outsource.

However, as technology disrupts businesses, enterprises are realizing that building capabilities is not a binary option. Innovation can come from any part of the ecosystem, some of which the enterprise may not even be aware of. This is where the concept of orchestration comes into the picture.

Own, Outsource, Orchestrate

For definition’s sake, ownership is about building most capabilities on your own rather than relying on other partners. Outsourcing implies letting partners supply capabilities as discrete services while you program-manage the transactions. Orchestration is bringing on board capabilities from different partners external to your organization and having them work in unison with your internal enterprise capabilities for collective benefit.

Advocates of ownership cite organizations such as Apple, which now wants to sell devices with its own processors. Supporters of outsourcing, meanwhile, cite examples of “next generation” companies such as Tesla, which is now manufacturing in China. Most enterprises realize they aren’t Apple or Tesla, so they need multiple levers to build their capabilities. The COVID-19 pandemic has challenged their thinking on overreliance on outsourcing, but their investment-constrained environment won’t allow them to own everything.

Many critics argue that orchestration harkens back to Service Integration and Management (SIAM), where enterprises used to orchestrate multiple IT vendors to drive outcomes in an outsourced environment. However, orchestration isn’t just about outsourcing. In addition to internal capabilities, it encompasses the broader ecosystem, including start-ups, academia, in-house incubation centers, technology vendors, crowd-sourced talent, and even peers within and outside the industry. Moreover, in this model, enterprises get strategically involved instead of only governing third-party providers. Whereas SIAM focused on making existing outsourcing work in a multi-vendor environment, orchestration drives business transformation where each entity plays its role to drive gains such as IP, access to newer markets, and different business models.

The need for orchestration

With the rapid pace of technological disruption, enterprises are realizing they can neither build everything in-house nor leave everything to their service providers. As different ecosystem entities deliver specific capabilities, enterprises want to plug-and-play them to drive business outcomes. Unlike other capabilities models, which either become black boxes or too difficult to navigate, orchestration allows flexibility on multiple dimensions. Enterprises aren’t wed to a specific idea or partner, but rather become open and fungible to adopt options based on business requirements.

What should enterprises do?

It’s quite clear that, despite the urge, a single capability model isn’t going to work for enterprises. They’ll have to mix and match these three models to varying degrees to drive business transformation. Even if they own the building of capabilities, they’ll have to rely on the broader ecosystem. Even if they outsource, their service provider has to rely on other partners to orchestrate the outcome. Therefore, orchestration will become increasingly important in different shapes and forms based on the intended objectives. And a word to the wise: enterprises shouldn’t fixate on one model of capability building out of irrational inspiration from deep-pocketed technology vendors, as that will be counterproductive.

Finally, although Enterprise Resource Planning (ERP) used to suffice, enterprises will increasingly need to rely on Network Resource Planning (NRP), wherein orchestrated networks of resources enter the picture.

Our Digital Services research team recently released a report on Network Resource Planning platforms. These platforms, which are needed to orchestrate capabilities across the value stream, are not limited to technology, but span every aspect of an enterprise.

Please reach out to me at [email protected] to share how you are building capabilities in your organization.

AWS Outposts, Azure Stack, Google Anthos, and IBM Cloud Satellite: The Race to Edge-to-Cloud Architecture Utopia | Blog

In earlier blog posts, I discussed chapter 1 and chapter 2 of the cloud wars. Now, let’s look at chapter 3 of this saga, where cloud vendors want to be present everywhere from the edge to the cloud.

Since the days of mainframes and Java virtual machines, enterprise technology leaders have been yearning for one layer of architecture to build once, deploy anywhere, and bind everything. However, Everest Group research indicates that 85 percent of enterprises believe their technology environment has become more complex and challenging to maintain and modernize.

As the cloud ecosystem becomes increasingly central to enterprise technology, providers like Amazon Web Services, Google Cloud, IBM, and Microsoft Azure are building what they call “Edge-to-Cloud” architectural platforms. The aspiration of these providers’ offerings is to have a consistent architectural layer underpinning different workloads. And the aim is to run these workloads anywhere enterprises desire, without meaningful changes.

Will this approach satisfy enterprises’ needs? Hopes are definitely high, as several key enablers exist that were missing in earlier attempts.

The architecture is sentient

This is a topic we discussed a few years back in an application services research report. Although evolutionary architecture has been around for some time, architectural practices continue to be rigid. However, the rapid shortening of application delivery cycles does not provide architects with the traditional luxury of months to arrive at the right architecture. Rather, as incremental delivery happens, they are building intelligent and flexible architectures that can sense changing business requirements and respond accordingly.

Open source is the default

As containers and Kubernetes orchestration become the default ways to modernize and build applications, the portability challenge is taken care of, at least to some extent. Because containers share the host’s kernel, applications can be built once and ported, as long as the host supports a compatible OS kernel version.
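To make the portability point concrete, here is a minimal, illustrative sketch (all names are hypothetical, not drawn from any vendor’s API) of the twelve-factor style that keeps a containerized workload portable: every environment-specific detail comes from environment variables, so the same container image can run unchanged on EKS, AKS, GKE, or any other Kubernetes service.

```python
import os

def load_config(env=os.environ):
    """Read all deployment-specific settings from the environment.

    Nothing in the workload hard-codes a particular cloud; the platform
    (or a Kubernetes manifest) injects the right values at deploy time.
    The variable names and defaults below are illustrative.
    """
    return {
        "db_url": env.get("DATABASE_URL", "sqlite:///local.db"),
        "cache_host": env.get("CACHE_HOST", "localhost"),
        "port": int(env.get("PORT", "8080")),
    }

if __name__ == "__main__":
    cfg = load_config()
    print(f"Starting service on port {cfg['port']}")
```

The same pattern applies in any language; the point is that the build artifact stays identical across platforms, and only the injected configuration changes.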

Multi-cloud is important

We discussed this in earlier research. Regardless of what individual cloud providers believe or push their clients toward, enterprises require multi-cloud solutions for their critical workloads. This strategy requires them to build workloads that are portable across architectures.

The workload is abstracted

The stack components for workloads are being decoupled. This decoupling is not only about containerizing the workload, but about building service components that can run on their own stacks. This capability makes it possible to change parts of a workload when needed, and different components can be deployed across distributed systems at scale.
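As a rough illustration of this kind of decoupling (a sketch under assumed names, not a prescribed design), the business component below depends only on an abstract storage interface, so the backing component can be swapped per platform, or scaled and redeployed independently, without touching the workload’s logic.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Abstract storage interface the workload codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a cloud-specific adapter would replace this."""

    def __init__(self):
        self._data = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

class InvoiceService:
    """Business component: unaware of which backend it runs against."""

    def __init__(self, store: ObjectStore):
        self.store = store

    def save_invoice(self, invoice_id: str, payload: bytes) -> None:
        self.store.put(f"invoices/{invoice_id}", payload)

    def load_invoice(self, invoice_id: str) -> bytes:
        return self.store.get(f"invoices/{invoice_id}")
```

Swapping `InMemoryStore` for an adapter over a managed object store changes the deployment, not the component, which is what lets parts of a workload move or scale independently.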

With all this, the probability of achieving architectural portability may indeed be different this time. However, enterprise technology leaders need to be aware of and plan for several things:

  • Evaluate the need for one architecture: In the pursuit of operational consistency, organizations should not push their developers and enterprise architects toward a single common underlying architecture. Different workloads need different architectures. The focus should be on seamless orchestration of different architectural choices.
  • Focus on true architectural consistency versus “provider stack” consistency: This consistency issue has been the bane of enterprise technology. Workloads work well as long as the underlying platforms belong to one provider. That is the reason most of the large technology providers are building their own hybrid offerings for Edge-to-Cloud. Although many are truly built on open source technologies, experienced architects know very well that porting workloads across platforms always requires some changes.
  • Manage overwhelming choices: Enterprise architects are struggling with the number of choices they now have across different clouds, terminologies, designs, and infrastructures, which makes their job of building a unified architecture extremely difficult. They need automated solutions that can suggest architectural patterns and choices to drive consistency and reliability.

So, what should enterprise architects now do?

Enterprise architects have always been envied as the guardians of systems, as they set the broad-based strategic underpinning for technology systems. Going forward, they will be bombarded with more choices and provider-influenced material for selecting the underlying platforms on which to build workloads. All of the platforms will be built on a strong open source foundation and will claim interoperability and portability. Architects will need to be discerning enough to understand the nuances and the trade-offs they have to make. At the same time, they should remain open-minded rather than suspicious of technology choices. They must have transparent discussions with technology providers, evaluate the offerings against their business needs, and assess the drivers for a unified architecture.

What is your take on unified architecture from Edge-to-Cloud? Please share your thoughts with me at [email protected].

Ethical Cloud: It’s About More Than Data | Blog

At their very core, cloud vendors are responsible for many critical aspects of enterprises’ data, including how it’s treated, its portability, workload interoperability, and compliance. Going forward, clients’ and governments’ expectations of cloud vendors will extend far beyond data…they’ll be expecting them to deliver “ethical clouds.”

I believe ethical clouds need to have four key dimensions, in addition to data security.


Sustainability

Multiple studies over the years have looked at data centers’ massive power consumption. While Power Usage Effectiveness (PUE) has improved and power consumption growth has decelerated due to virtualization and cloud services, there is still a long way to go. Clients will hold their cloud vendors accountable not only for power consumption but also for the source of their power and their overall emissions. Of course, cloud vendors’ sustainability strategies should extend into other areas, including real estate and water. But power is the right place to start. Some vendors, such as AWS, have already taken initial steps in this direction.


Social responsibility

Beyond classic corporate social responsibility, cloud vendors will need to demonstrate how they care for society and the ecosystem in which they operate. This means they will need to clearly express their hiring practices, how they treat tax liabilities across regions, their litigation history, how they manage IP, their leadership views on important global events, and so on. In short, they need a very strong social index or social perception.


Government relations

Although their clients’ interests will always be the top priority for cloud vendors, there will be instances where those interests conflict with a government’s. Cloud vendors will need a clear policy and philosophy for handling such matters. They will need to assist governments in honoring their duty to the nation, but within a defined construct that keeps them from violating their commitments to clients. For example, cloud vendors may soon be put under the same type of scrutiny that Apple was when it declined to create a “back door” into the iPhone.


The golden rule

At its heart, this is about the Golden Rule principle of treating others as you want to be treated. For cloud vendors, this means being sensitive to the law and order of the land in which they operate, avoiding behavior or actions that generate negative press or sentiment, and conscientiously addressing regulations, job creation, diversity, data sanctity, partner behavior, and more.

Cloud vendors also need to be sensitive about their internal structures. Those structures must ensure alignment, fairness, and transparency for all members of their ecosystem, including employees and partners such as system integrators. Embracing and engendering this type of environment will be just as important as what they deliver to their clients. Moreover, vendors need to ensure the right behaviors from their employees and partners, and any discrepancies should be investigated and acted on. At the same time, employees and partners should be encouraged and empowered to raise concerns if they believe the vendor is drifting away from being an “ethical cloud” provider.

Over the years, in virtually all industries, organizations have faced significant repercussions for unacceptable financial, labor, and sustainability behaviors. And cloud vendors will soon have to operate under the same microscope.

Going forward, I believe cloud vendors will quickly find that success isn’t just about creating the best legally compliant technology solution; it’s also about living, and demonstrating, the spirit of being ethical.

What do you think about ethical cloud? Please share your thoughts with me at [email protected].

Cloud IaaS Versus SaaS: The Fight for Industry Cloud | Blog

A blog I wrote last year discussed the ugly market share war among the three top cloud infrastructure providers – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). Now we need to talk about how Independent Software Vendors (ISVs) like Oracle, Salesforce, and SAP are changing the battle with their industry-specific clouds.

Cloud IaaS vendors don’t have an industry cloud

The fact is that AWS, Azure, and GCP don’t really have industry clouds. These cloud IaaS vendors enable clients to run business applications and services on top of their cloud platforms, but haven’t built industry-specific application or process capabilities. They acknowledge that their clients want to focus more on building applications than on infrastructure, which undercuts their positioning in the industry cloud market. The core of what they offer is compute, data, ML/AI, business continuity, and security, and they rely on technology and service partners to build industry-relevant solutions. For example, GCP partnered with Deloitte for cloud-based retail forecasting, and AWS joined with Merck and Accenture for a medicine platform. They are also partnering with core business application vendors such as Cerner and Temenos.

Cloud SaaS providers have an edge

ISVs have continued to expand their industry cloud offerings over the past few years. For example, in 2016 Oracle acquired Textura, a leading provider of construction contracts and payment management cloud services, SAP introduced its manufacturing cloud in 2018, and in 2019 Salesforce launched its CPG and manufacturing clouds. Further, Oracle and SAP have built solutions for specific industries such as retail, healthcare, and banking by focusing on their core capabilities of ERP, supply chain management, data analytics, and customer experience. And while Salesforce is still largely an experience-centric firm, it is now building customer experience, marketing, and services offerings tailored to specific industries.

So, what will happen going forward?

  • Industry cloud will change: Today’s industry clouds are one more way of running a client’s business; however, they are still not business platforms. Going forward, industry clouds will become something like a big IT park where clients, partners, and other third parties come to a common platform to serve customers. It will be as much about data exchange among ecosystem players as it is about closed-wall operations. Enterprises in that industry can take advantage of the specific features they deem appropriate rather than building their own. And they will become “tenants” of the industry cloud vendor’s or ISV’s platform.
  • Cloud vendors will heavily push industry cloud: AWS, Azure, and GCP will push their versions of industry cloud in 2020 and beyond, with strong marketing and commercial campaigns. They’ll likely be tweaking their existing offerings and creating wrappers around their existing services to give them an industry flavor. But, of course, the leading ISVs have already launched their industry clouds and will expand them going forward.
  • Channel push will increase: Both the cloud infrastructure service providers and the ISVs will aggressively push their service partners – especially consulting firms like Accenture, Capgemini, Deloitte, and PwC. The cloud vendors will also push their technology partners to build solutions or “exclusively” migrate applications onto their clouds.
  • Mega acquisitions: Historically, there hasn’t been any major acquisition activity between infrastructure providers and large software companies. But one of the top infrastructure providers might acquire a “horizontal” ISV that’s making inroads into industry clouds, like Salesforce or Workday, rather than buying a vertical industry ISV. Disclaimer: I am not at all suggesting that any such acquisition is in the cards!

So, what should enterprises do?

  • Be flexible: Enterprises need to closely monitor this rapidly evolving market. Though the paths IaaS providers and ISVs take may not meaningfully conflict in the near future, there may be stranger partnerships on the horizon, and enterprises need to be flexible to take advantage of them.
  • Be cautious: Because the cloud vendors’ channel partners are being pushed to sell their industry cloud offerings, enterprises need to fully evaluate the offerings and their relevance to their businesses. The evaluation should cover not only business, technical, and functional fit, but also licensing rationalization, discount discussions, and talent availability for these platforms.
  • Be open: As the market disrupts and newer leaders and offerings emerge, enterprises need to be open to reevaluating their technology landscape to adopt the best-in-class solution for their businesses. This is as much about an open mindset as it is about internal processes around application development, delivery, and operations. Enterprise processes and people need to be open enough to incorporate newer industry solutions.

What do you think about industry clouds? Please share with me at [email protected].
