Author: Yugal Joshi

Cloud Transformation: How Much Is Enough? | Blog

With today’s business transformation led by cloud, the migration frenzy remains at a fever pitch. Even though most cloud vendors are now witnessing slower growth, it will still be years before this juggernaut halts. But can you have too much cloud? The question of how far enterprises should go in their cloud transformation journeys is rarely considered. Read on to learn when it may be time for your enterprise to stop and reexamine its cloud strategy.

Enterprises believe cloud will continue to be critical but only one part of their landscape, according to our recently published Cloud State of the Market 2021. Once enterprises commit to the cloud, the next question is: How far should they go? This question runs deeper and goes far beyond asking how much of their workload should run on cloud, when the opportune time is to repatriate workloads from cloud, and whether workloads should be moved between clouds.

Unfortunately, most enterprises are too busy with migration to consider it. Cloud vendors certainly aren’t bringing this question up because they are driving consumption to their platform. Service partners are not talking about this either, as they have plenty of revenue to make from cloud migration.

When should enterprises rethink the cloud transformation strategy?

Challenges in cloud transformation can manifest in multiple ways depending on the enterprise context. However, our work with enterprises points to three common obstacles. It’s time to take another look at your cloud journey if your enterprise experiences any of the following:

  • Cloud costs can’t be explained: Cloud cost has become a major issue as enterprises realize they did not plan their journeys well enough or account for the many unknowns at the start. After that ship has sailed, the focus shifts to micromanaging cloud costs and justifying the business case. It is not uncommon for enterprises to see total cost of ownership rise by 20% post cloud migration, and the rising costs are difficult for technology teams to defend
  • Cloud value is not being met: Our research indicates 67% of enterprises do not get value out of their cloud journeys. When this occurs, it is a good point to reexamine cloud. Often, the issue is a poor understanding of cloud at the outset and a poor choice of workloads. During the migration frenzy, shortcuts are often taken and technical debt gets created, diluting the impact cloud transformation can have for enterprises
  • Cloud makes your operations more complex: With the fundamental cloud journey and architectural input at the beginning more focused on finding the best technology fits, downstream operational issues are almost always ignored. Our research suggests 40-50% of cloud spend goes to operations, yet enterprises do not think this through upfront. With the inherent complexity of the cloud landscape, accountability can become a challenge. As teams collapse their operating structures, this problem is exacerbated
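The cost red flag above can be made concrete with a simple check. The sketch below is illustrative only: the figures are hypothetical, and the 20% threshold mirrors the total cost of ownership increase cited above, not a prescribed model:

```python
# Illustrative sketch: flag a cloud migration whose total cost of ownership
# (TCO) has drifted past a tolerance band. All figures are hypothetical.

def tco_drift(pre_migration_tco: float, post_migration_tco: float) -> float:
    """Return the fractional change in TCO after migration."""
    return (post_migration_tco - pre_migration_tco) / pre_migration_tco

def needs_review(pre: float, post: float, threshold: float = 0.20) -> bool:
    """True when post-migration TCO exceeds the pre-migration baseline
    by more than the threshold (20% mirrors the figure cited above)."""
    return tco_drift(pre, post) > threshold

# Example: a $10M baseline that now costs $12.5M on cloud (a 25% increase)
print(needs_review(10_000_000, 12_500_000))  # → True
```

A check like this only works if the pre-migration baseline was captured before the journey began, which is exactly the planning step many enterprises skip.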

What should enterprises do when they’ve gone too far in the cloud?

This question may appear strange given enterprises are still scaling their cloud initiatives. However, some mature enterprises are struggling to decide the next steps in their cloud journeys. Each enterprise, and each business unit within it, should evaluate the extent of its cloud journey. If any of the points mentioned above are becoming red flags, they must act immediately.

Operating models should also be examined. Cloud value depends on an enterprise’s way of working and internal structure. Centralization, federation, autonomy, talent, and sourcing models can all influence cloud value. However, changing operating models in pursuit of cloud value should not become a case of putting the cart before the horse.

Enterprises always struggle with the question of where to stop. This challenge is only made worse by the rapid pace of change in cloud. As enterprises go deeper into cloud stacks of different vendors, it will become increasingly difficult to tweak the cloud transformation journey.

Despite these pressures, enterprises should periodically evaluate their cloud journeys. Cloud vendors, system integrators, and other partners will keep pushing more cloud at enterprises. Strong enterprise leadership that can ask and understand the larger question from a commercial, technical, and strategic viewpoint is needed to determine when enough cloud is enough. Therefore, from journey to the cloud, to journey in the cloud, enterprises should now also focus on the journey’s relevance and value.

If you would like to talk about your cloud journey, please reach out to Yugal Joshi at [email protected].

For more insights, visit our Market Insights™ exploring the cloud infrastructure model.

Multi-cloud: Strategic Choice or Confusion? | Blog

The multi-cloud environment is not going away, with most large enterprises favoring this approach. Multi-cloud allows enterprises to select cloud services from multiple providers because, among other factors, some are better suited than others to certain tasks. While there are valid points to be made both for and against multi-cloud in this ongoing debate, the question remains: Are enterprises making this choice based on strategy or confusion? Let’s look at this issue more closely.

The technology industry has never solved the question of best-of-breed versus bundled/all-in consumption. Many enterprises prefer to use technologies consumed from different vendors, while others prefer to have primary providers with additional supplier support. Our research suggests 90% of large enterprises have adopted a multi-cloud strategy.

The definition of multi-cloud has changed over the years. In the SaaS landscape, enterprise IT has always been multi-cloud, as it needed Salesforce.com to run customer experience, Workday to run Human Resources, SAP to run finance, Oracle to run supply chain, and ServiceNow to run service delivery. The advent of infrastructure platform players such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) has reinvigorated this best-of-breed versus all-in cloud debate, which results in multi-cloud or single-cloud adoption.

In a true multi-cloud world, parts of workloads were expected to run on different clouds seamlessly. But increasingly, interoperability is becoming the core discussion in multi-cloud. Therefore, it is not about splitting workloads across clouds, but about ensuring a workload on one cloud can be ported to another. While debating a pedantic definition of multi-cloud is moot, it is important to acknowledge multi-cloud as the way forward.

Most cloud vendors now realize multi-cloud is here to stay. However, behind closed doors, the push to go all-in is very apparent across the three large vendors. Let’s examine the following pro and anti-multi-cloud arguments:

[Image: Pro and anti-multi-cloud arguments]

Both pro and anti-multi-cloud proponents have strong arguments, and in addition to the above points, there are many others on each side. But the truth is that increasing numbers of enterprises are adopting multi-cloud. So, when an enterprise proactively adopts a multi-cloud strategy, is that a strategic choice, or does it reflect strategic confusion about cloud, its role, and the other factors outlined above?

This is a hard question to answer, and each enterprise will have to carve out its own cloud strategy. However, enterprises should realize this strategy will change in the future. No enterprise will be “forever single cloud,” but most will be “forever multi-cloud.” Therefore, once they embark on a multi-cloud strategy, it will be extremely rare for enterprises to go back, but they can change their single-cloud strategy more easily.

In enterprises with significant regional or business autonomy, multi-cloud adoption will grow. Enterprises may adopt various cloud vendors for different regions due to their requirements for workloads, regulations, vendor relationships, etc. Instances will continue to exist where some senior leaders support certain cloud vendors, and, as a result, this preference may also lead to multi-cloud adoption.

On many occasions, enterprises may adopt multi-cloud for specific workloads rather than as part of their strategy. They may want data-centric workloads to run on a cloud but may not want to leverage the cloud for other capabilities. Many cloud vendors may play “loss leaders” to get strategic enterprise workloads (e.g., SAP, mainframe) onto their platform to create sticky relationships with clients.

Many software vendors are launching newer offerings proclaiming they work best with clients’ multi-cloud environments. As an ecosystem is built around multi-cloud, it will be hard to change. In addition to AWS, GCP, and Microsoft Azure, other cloud vendors are upping their offerings, as we covered earlier in Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight?.

Given multi-cloud drives “commoditization” of underlying cloud platforms, large cloud vendors are skeptical of it. Integration layers that shift value accretion to abstraction platforms rather than core cloud services are an additional vendor concern. However, eventually, a layer on top of these cloud vendor platforms will enable different cloud offerings to work together seamlessly. It will be interesting to see whether cloud platform providers or other vendors end up building such a layer.

We believe system integrators have a good opportunity to own this “meta integration” of multi-cloud and create seamless platforms. However, most system integrators are afraid of upsetting the large cloud vendors by even proactively raising this with them, let alone creating such a service. This reluctance may harm the cloud industry in the long run.

What are your thoughts about multi-cloud as a strategy or a confusion? Please write to me at [email protected].

LCLC Not SDLC: Low-code Life Cycle Needs a Different Operating Model | Blog

Low-code platforms are here to stay because of the rapid application development and speed to market they enable. But why is no one taking the same “life cycle” view of low-code applications and workflows as of typical software development? A new model, the Low-code Development Life Cycle (LCLC or LC2), is needed for enterprises to realize the potential benefits and manage the risks. Read on for a deep dive into these issues in this latest blog continuing our coverage of low-code.

Our market interactions suggest enterprises adopting low-code platforms to build simpler workflows or enterprise-grade applications are not thinking about life cycle principles. Though enterprises have long adopted the Software Development Life Cycle (SDLC) to build applications, it is surprising that no such discipline exists for low-code applications.

As we previously discussed, low-code platforms, requiring little or no programming to build, are surging in adoption. We covered the key applications and workflows enterprises are focusing on in an earlier blog, The Future of Digital Transformation May Hinge on a Simpler Development Approach: Low Code.

Given its staying power in the market, it’s time to consider Low-code Development Life Cycle (LCLC or LC2).

Here are some recommendations on how LCLC can be structured and managed:

Rethink low-code engineering principles: Enterprises that have long relied on SDLC concepts will need to build newer engineering and operations principles for low-code applications. Enterprises generally take long-term bets on their architecture preferences, Agile methodologies, developer collaboration platforms, DevOps pipelines, release management, and quality engineering.

Introducing a low-code platform changes most of this, and some of the typical SDLC may not be needed. For example, these platforms do not generally provide an Integrated Development Environment (IDE) and rely on “designing” rather than “building” applications. In SDLC, different developers can build their own code using their IDE, programming language, databases, and infrastructure of choice. They can check in their code, run smoke tests, integrate, and push to their Continuous Integration/Continuous Delivery pipeline.

However, for most low-code platforms, the entire process has to run on a single platform, making it nearly impossible to collaborate across two low-code platforms. Moreover, enterprises might be exposed to performance, compliance, and risk issues if these applications and workflows are built by citizen developers who are unaware of enterprise standards of coding. This also might increase the costs for quality assurance beyond budgeted amounts.

Even professional developers, who are well aware of enterprise standards when building code in the existing manner, may not know how to manage their LCLC. Many low-code platforms include SDLC steps, such as requirement management, within the platform itself. Therefore, all collaboration has to happen on the low-code platform. This creates a challenging situation: enterprises need separate collaboration platforms for low-code applications, distinct from the standard tooling they have already invested in (such as Teams, Slack, and other agile planning tools), unless the two are integrated through APIs, which adds overhead and cost.

Also complicating matters is the desire of some developers to have the developer portal of these low-code platforms extend to their IDE. Most platforms prefer their own CI/CD pipelines, although they can also integrate with third-party tools enterprises have invested in. A different mindset is needed to manage this increased technological complexity. Because low-code applications are difficult to scale for large data sets, some of the scaling imperatives enterprises have built over the years will need to be rethought.

Manage lock-in: Most low-code platform vendors have a specific scripting language that generates the application and the workflow. Developers who are trained in Java, .NET, Python, and similar languages do not plan to reskill to learn proprietary languages for so many different platforms. While enterprises are accustomed to having multiple programming languages in their environment, they normally select a few primary languages. Though low-code platforms do not extensively rely on developers coding applications, enterprises generally want to know the “under the hood” aspects of the architecture, data models, integration layer, and other system elements.

Build governance: We previously covered how low-code platform proliferation will choke organizations that are blindly prioritizing the speed of software delivery. Therefore, governance is needed not only in the development life cycle but also to manage the proliferation of platforms within enterprises. Enterprises will need to closely watch the low-code spend from subscription and software perspectives. As low-code platforms support native API-based access to external platforms, enterprises will need to govern that spend, risk, and compliance (for example, looking at such issues as whether some third-party platforms are on the blacklist).
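As a minimal illustration of the API-governance point above, the sketch below screens a low-code application's outbound integrations against an enterprise deny list. The endpoint names, function, and data structure are hypothetical, not part of any real low-code platform's API:

```python
# Hypothetical governance check: screen the third-party endpoints a low-code
# application calls against an enterprise deny list. Names are illustrative.

DENY_LIST = {"unvetted-analytics.example.com", "legacy-ftp.example.com"}

def compliance_violations(app_name: str, endpoints: list[str]) -> list[str]:
    """Return the endpoints declared by an app that appear on the deny list."""
    return [e for e in endpoints if e in DENY_LIST]

app_endpoints = [
    "crm.example.com",
    "unvetted-analytics.example.com",
]
violations = compliance_violations("expense-approval-flow", app_endpoints)
print(violations)  # → ['unvetted-analytics.example.com']
```

In practice this kind of check would run as part of the platform inventory and discovery process, so that citizen-developer applications are screened before they reach production.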

What should enterprises know?

Low-code platforms can provide enterprises with a potent platform. But, if not managed well, it can be risky. To manage the potential risks, enterprises need to be aware of these three considerations:

  • Understand vendor solutions and their history: Different vendors can have different views and visions around low-code based on their histories in API management, Business Process Management (BPM), BigTech platforms, or process automation. Most will need their own runtime engine/platform to be deployed to execute the low-code application; others may allow code to run outside their platform. Moreover, their capabilities in supporting aspects such as forms, process models, simple-data integration, application templates, and library components can vary significantly. CIOs need to understand these nuances
  • Require business and CIO collaboration: Businesses love low-code platforms as they allow rapid application development and shorten time to market. However, as adoption scales, businesses will realize they cannot manage this low-code ecosystem on their own. Whether CIOs like it or not, the businesses will punt their responsibility to the CIO organization. Therefore, CIOs need to proactively address this requirement. They will need a strong discovery model to take inventory of the low-code adoption, workflows, and applications they are supporting
  • Assess the applications and workflows the low-code platform can support: Vendors normally claim they can build “complex” applications through their low-code platforms. However, this definition is not consistent, and the applications may not be as complex as vendors say. Enterprise-class applications need code standardization, libraries, documentation, security, recovery, and audit trails. Most of these platforms provide out-of-the-box or custom integration with other enterprise applications, project management, and other SDLC tools. CIOs need to evaluate the cost, performance, maintainability, and security aspects of these multi-point integrations

Expect M&A activity

Enterprises’ desires to drive digital transformation will make low-code proliferation a reality. Currently, most low-code vendors derive relatively small revenue of US$100-500K per client, indicating the focus is mostly on the Small and Medium Business (SMB) segment or small line-of-business buying. As a result, we expect consolidation in this market, with large vendors such as Salesforce, ServiceNow, and Microsoft further eating into smaller vendors’ share. Enterprises should keep a close watch on this M&A activity, as it can completely change their low-code strategy, processes, and the business value they derive from strategic investment in a low-code platform.

What has your low-code journey been like, and how are you using life cycle concepts? Please reach out to share your story with me at [email protected].

On-screen versus Being Green: Can Virtual Conferencing Be Sustainable? | Blog

Reluctant to be seen on camera during conference calls? Keeping the video off can have the added benefit of helping to reduce your carbon footprint. The skyrocketing use of video conferencing during the pandemic has come with a hidden environmental cost of increased emissions. Learn more about what organizations can do to reverse the negative impact our digital lifestyles and increased work from home are having.

A recent study by a team of researchers from MIT, Purdue University, and Yale University on the impact of internet usage and video conferencing found that an hour of conference calls can emit up to 1 kilogram of carbon dioxide, which can be sharply reduced by switching off your video.

With COVID, collaboration using platforms such as Zoom, Microsoft Teams, and Slack has soared. Teams, for example, has seen daily users increase from 32 million pre-pandemic to 145 million in recent months. As these times made conference calls and virtual events a regular part of our lives, “seeing” others on video has become a norm.

While virtual meetings have reduced emissions from air and road commuting as well as energy usage in offices, they require large amounts of data, and with that data comes greater power consumption, putting huge energy demands on the data centers that power the internet.

All of this raises new questions for organizations to grapple with. Organizations want to demonstrate their commitment to sustainability but also desire to engage with each other, clients, and other stakeholders. How can people feel like they are part of a team and build relationships with their videos off?

How to reduce your footprint and stay connected

As we return to a new normal, video conference calls will continue as they have given us flexibility, time savings and kept people connected through extraordinary circumstances. Here are some recommendations on how organizations can reduce the environmental impact while continuing to use today’s popular tools for staying close:

  • Invest in real-time dashboards on internet carbon emissions that can help organizations drill down on how many calls are made and govern whether all of these calls are needed
  • Optimize the time spent on video calls and consider shortening calls based on analytics from the call-related carbon emission dashboard about the average time spent on video conferences
  • Train your workforce about who to invite to video conferences and reduce the number of large online meetings open to all employees who might not need to be involved
  • Resist falling into the trap of handling video usage through strict policy initiatives. While organizations may think they should build stronger policies to outrightly stop lengthy calls or ration the number of calls, this rarely works and may lead to higher employee attrition
  • Engage employees continuously to evaluate their work experience, physical health, mental condition, and other similar aspects. The pandemic already has impacted the employee experience of being part of a social group and forming personal bonds through face-to-face interactions. If video calls are reduced, employees may feel isolated in their work from home environments
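The dashboard and call-optimization recommendations above rest on simple arithmetic. The sketch below estimates per-call emissions using the range reported in the study cited earlier (up to roughly 1 kg of CO2 per hour with video, cut by about 96% with video off); the exact factors will vary by network and data center, so treat these constants as rough assumptions:

```python
# Rough sketch of a per-call carbon estimate for an emissions dashboard.
# Emission factors approximate the study's reported upper bound (~1 kg CO2
# per hour with video on; video off cuts that by roughly 96%). Illustrative.

CO2_KG_PER_HOUR_VIDEO_ON = 1.0
VIDEO_OFF_REDUCTION = 0.96

def call_emissions_kg(hours: float, video_on: bool) -> float:
    """Estimate CO2 in kilograms for one conference call."""
    factor = CO2_KG_PER_HOUR_VIDEO_ON
    if not video_on:
        factor *= 1 - VIDEO_OFF_REDUCTION
    return hours * factor

# A one-hour call with video versus without:
print(call_emissions_kg(1.0, video_on=True))            # → 1.0
print(round(call_emissions_kg(1.0, video_on=False), 2))  # → 0.04
```

Summing these estimates per employee over a month is what would let the dashboard compare individual emissions against everyday equivalents such as fossil fuel consumed.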

Create a shared vision of sustainability

Organizations need to create a shared vision of their sustainability goals and make employees part of these efforts. The carbon emission dashboard should let each employee know their individual emissions during conference calls and compare them with the amount of fossil fuel consumed. This will give employees a clearer perspective on the enormity of the impact and show what they can achieve by optimizing their virtual interactions.

There are no easy solutions to this new and growing problem. Our market interactions show few companies are being proactive in their approaches to video conferencing. Now is the time for organizations to think before the next time they turn on their videos. Organizations should also focus on the big picture of sustainability and carefully evaluate the environmental impact of virtual conferences. If they determine it is less significant than thought, they should instead focus on other more meaningful initiatives. While many enterprises may eventually have to institutionalize policies around virtual conferences, these approaches should be well contemplated and fully communicated.

How is your organization handling video conferencing? Please write to me at [email protected] to share your experiences.

Cloud War Chapter 6: Destination versus Architecture – Are the Cloud Vendors Fighting the Right Battle? | Blog

Most cloud vendors are obsessed with moving clients to their platforms. They understand that although their core services, such as compute, are no longer differentiated, there is still a lot of money to be made just by hosting their clients’ workloads. And they realize that this migration madness has sufficient legs to last for at least three to five years. No wonder migration spend constitutes 50-60 percent of services spend on cloud.

What about the workloads?

The cloud destination – meaning the platform a workload operates on – does not help the workload run better. To be modernized, or even natively built, a workload needs open architecture underpinned by software that lets applications be built once and run on any platform. Kubernetes has become a standard way of building such applications. Today, every major vendor has its own Kubernetes service, such as Amazon EKS, Azure Kubernetes Service, Google Kubernetes Engine, IBM Cloud Kubernetes Service, and Oracle Container Engine for Kubernetes.

However, Kubernetes services alone do not create an open and portable application. They help run containerized workloads, but if those workloads are not portable, Kubernetes services defeat the purpose. Containerized workloads need to be architected to ensure user and kernel space are well designed. This is relevant for both new and modernized workloads. For portability, the system architecture needs to be built on a layer that is open and portable. Without this, the entire migration madness will simply add one more layer to enterprise complexity. Interestingly, any cloud vendor that helps clients build such workloads will almost always be the preferred destination platform, as clients benefit from combining the architecture, build, and run environment.
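One way to operationalize this portability argument is to lint workload definitions for vendor-proprietary dependencies before migration. The sketch below is entirely hypothetical: the dependency labels and deny set are illustrative, not a real tool or a complete test of portability:

```python
# Hypothetical portability lint: flag cloud-vendor-proprietary services in a
# workload's declared dependencies. Service labels are illustrative only.

PROPRIETARY_SERVICES = {
    "aws:dynamodb", "azure:cosmosdb", "gcp:spanner",
}

def portability_issues(dependencies: list[str]) -> list[str]:
    """Return declared dependencies that tie the workload to one vendor."""
    return [d for d in dependencies if d in PROPRIETARY_SERVICES]

# A workload built on open components plus one vendor-specific database:
workload = ["postgres", "redis", "aws:dynamodb"]
print(portability_issues(workload))  # → ['aws:dynamodb']
```

A real assessment would go well beyond dependency names, such as data gravity, managed-service behavior, and kernel/user-space assumptions of the container, but even a crude check like this forces the architectural conversation before migration rather than after.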

Role of partners of cloud vendors

Cloud vendors need to work with partners to help clients build new generation workloads that are easily movable and platform independent. The partners include multiple entities such as ISVs and service providers.

ISVs need to embed such open and interoperable elements into their solution so that their software can run on any platform. Service providers need to engage with clients and cloud vendors to build and modernize workloads by using open, interoperable principles. As there is significant business in cloud migration, there is a risk that the service partners will get blinded and become more focused on the growth of their own cloud services business than on driving value for the client. This is a short-term strategy that can help service providers meet their targets. But it will not make the service provider a client’s strategic long-term partner.

Role of enterprises

There is significant pressure on enterprise technology teams to migrate workloads to the cloud. Many of the clients we work with take pride in telling us they will move more than 80 percent of their workloads to the cloud in the next two to three years. There is limited deliberation on which workloads need to be newly built on portable middleware, or even whether they need a runtime that can support an open and flexible architecture. Unfortunately, many enterprise executives have to show progress in cloud adoption. And though enterprise architects and engineering teams do come together to evaluate how a workload should be moved to the cloud, there is little discussion on building an open architecture for these workloads. A bright spot is that there seem to be good architectural discussions around newer workloads.

Enterprises will soon realize they are hitting the wall of cloud value because they did not meaningfully invest in building a stronger architecture. Their focus in moving their workloads to their primary cloud vendor is overshadowing all the other initiatives they must undertake to make their cloud journey a success.

Do you focus on the architecture or the destination for your workloads? I’d enjoy hearing your experiences and thoughts at [email protected].

Pega Platform: Constrained Talent and Higher Implementation Spend Limits Adoption | Blog

A BigTech battle seems to be heating up in the business process management (BPM) and CRM space. So, after completing our PEAK Matrix Assessment on Pega Services, our Enterprise Platform Services team conducted a Voice of the Market study on the company’s platform. They interviewed the 16 global IT service providers covered in the PEAK Matrix and 35+ of Pega’s enterprise clients to gauge their reactions to the Pega platform and Pega’s main competitors. The individuals we interviewed ranked Appian, Bizagi, IBM, Pega, and Salesforce “above, on, or below average” in multiple areas and drilled down on those areas to explain their rankings.

Here’s a summary of how Pega fared from that study.

Strong technology sophistication

Pega’s depth of products and its ability to enable rapid process automation leveraging low-code development and next-generation technology capabilities like artificial intelligence (AI), machine learning (ML), and robotic process automation (RPA) helped it receive the highest score among the providers. Enterprise clients also rated its peers above average on platform capabilities but cited dependencies on third-party capabilities as a key gap.

Constrained talent availability

Enterprise clients perceive a high demand-supply gap for Lead System Architects (LSAs), Customer Decision Hub (CDH) and marketing specialists, and business architects for the Pega platform, so it received a below-average score on this parameter. In contrast, its peers have built sizable talent pools, earning them above-average scores.

Complex licensing construct

Pega’s clients cited that its commercial flexibility could be better and that it’s often difficult to understand Pega’s licensing construct. On the other hand, they believe Pega’s peers are effectively bundling their offerings and provide flexible licensing and contracting options.

Good customer experience

According to customers, Pega’s collaboration with systems integrators (SI) in driving large engagements earned it an above average score, but they believe its proactivity in responding to client questions could be better. Its peers also received an above average score for their client-centric engagement.

What’s working well for Pega

  • Strong product portfolio: Pega’s well-knit product offerings across BPM and key CRM areas and its domain-specific capabilities help position it as a transformation partner of choice for enterprises in the BFSI, telecom, HLS, and public sector industries. Enterprises in these sectors have consistently rated Pega higher than average for its technology capabilities in the BPM domain.
  • Seamless integration and high customizability: Pega’s ability to easily integrate with all the major enterprise applications and its high level of customizability addressing complex use cases are viewed as its unique strengths. Service providers consider Pega a vendor-agnostic platform and cited multiple instances of adoption along with other key enterprise platforms to drive expansive digital transformation for their clients.
  • High enterprise mindshare: Pega’s transformational proof points with quantitative business impact in the above-mentioned industries across areas such as case management and BPM, low-code platform, RPA, customer service, and customer decision hub have instilled confidence among enterprise clients looking for end-to-end support.

What do customers expect from Pega in 2021?

  • Expanded partner ecosystem: Enterprises in emerging European markets, MEA, and Latin America cited that Pega needs to further its SI network investments across these regions to enhance delivery capabilities. They also believe Pega should enable visibility into its partners’ delivery capabilities by further structuring its partner program and upgrading its partner portal.
  • Ensure talent availability: As Pega’s product portfolio rapidly expands beyond its core focus, enterprises have cited difficulties in hiring, retaining, and training resources on newer modules. So, they believe Pega should better structure its certification programs and make larger investments in well-curated learning and training initiatives for enterprises and service providers.
  • Deliver out-of-the-box solutions: Enterprises perceive Pega to be a complex platform that increases time-to-market and implementation spend. They also believe Pega should further invest in enhancing mindshare and adoption of Pega Marketplace to facilitate the development of out-of-the-box solutions.

While Pega’s rapidly expanding product portfolio makes it one of the important vendors for digital process transformation initiatives, it needs to continuously evaluate its platform capabilities and make targeted investments to consistently drive higher value for its clients.

How has your experience been with Pega? Please write to us at [email protected] and [email protected].

The State of Cloud-Driven Transformation | In the News

From the realities of remote work to the increased focus on digital commerce, organizations are facing new challenges to their IT and security infrastructure—not to mention their general viability as a business. In response, many are rethinking their strategies and accelerating changes to their technology stacks to best confront the new future—primarily through the cloud.

While cost reduction or flexibility remains an important driver of cloud adoption, organizations are also seeking a number of other business benefits, from agility to innovation. “Whereas once organizations had to choose between quality, cost, and time when investing in new technology, the cloud operating model enables them to be more aspirational,” says Yugal Joshi, vice president of digital, cloud, and application services research for strategic IT consultancy and research firm Everest Group. “They can achieve the holy trinity of technology benefits. Organizations today want it all.”

Read the full Harvard Business Review report


Sustainable Business Needs Sustainable Technology: Can Cloud Modernization Help? | Blog

Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:

  • Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive back-up to counter failure events, essentially multiplying electricity consumption.
  • Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Development, QA, security, and pre-production teams each ended up building their own environments. Yet whenever a system was built, the entire infrastructure had to be configured even though the team needed only 10-20% of it. Thus, most of the electricity consumption ended up powering capacity that wasn’t needed at all.
  • Run: Operations teams have to make do with what the upstream teams give them. They can’t take down systems to save power on their own, as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to make systems “over-available” even when they aren’t being used. The run teams didn’t – and still don’t – have real-time insight into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power.
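To put the build-stage waste in rough numbers, here is a minimal back-of-envelope sketch. All figures are illustrative assumptions taken from the ranges above (a nominal production power draw, the midpoint of 6-10 non-production copies, and the midpoint of 10-20% needed capacity), not measured data:

```python
# Back-of-envelope estimate of non-production power waste.
# Assumptions (illustrative):
#   - 1 production ERP system drawing a nominal 100 kW
#   - 8 non-production copies (midpoint of the 6-10 range)
#   - each copy is fully provisioned, but only ~15% of its
#     capacity is actually needed (midpoint of 10-20%)

prod_kw = 100.0
nonprod_count = 8
needed_fraction = 0.15

provisioned_kw = nonprod_count * prod_kw               # power drawn today
needed_kw = provisioned_kw * needed_fraction           # power actually required
wasted_kw = provisioned_kw - needed_kw

print(f"Non-prod power provisioned: {provisioned_kw:.0f} kW")
print(f"Non-prod power needed:      {needed_kw:.0f} kW")
print(f"Waste: {wasted_kw:.0f} kW ({wasted_kw / provisioned_kw:.0%} of non-prod draw)")
```

Under these assumptions, roughly 85% of the non-production power draw feeds capacity nobody uses, which is the waste the rest of this piece argues cloud modernization can recover.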

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, despite digital services from large data center and cloud vendors growing sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.

Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud’s run-time access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, as cloud instances can be provisioned quickly, they are – by design – fault tolerant, so they don’t rely on excessive back-up systems. They can be designed to fail, with a back-up coming online immediately rather than running at all times. The development, test, and operations teams can provision infrastructure as and when needed, and they can shut it down when their work is completed.
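The on/off pattern described above can be sketched as a simple scheduling policy. This is a hypothetical illustration, not any vendor’s feature: the environment labels and business hours are assumptions, and in practice a cloud SDK (such as AWS’s boto3) or the provider’s own scheduler would issue the actual stop/start calls, which are omitted here:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    env: str       # "prod" or "nonprod" (illustrative labels)
    hour: int      # current local hour, 0-23

def desired_state(inst: Instance, workday_start: int = 8, workday_end: int = 19) -> str:
    """Return 'running' or 'stopped' under a simple business-hours
    policy: production stays up; non-production runs only during
    the working day."""
    if inst.env == "prod":
        return "running"
    if workday_start <= inst.hour < workday_end:
        return "running"
    return "stopped"

# A real implementation would diff desired_state against the cloud
# API's reported state and issue stop/start calls for any mismatch.
print(desired_state(Instance("dev-erp", "nonprod", hour=3)))   # → stopped
```

Even this crude policy cuts a non-production instance’s uptime to roughly half the week, with a corresponding cut in its power draw.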

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads with failure as an essential element to be tackled through orchestration of run-time cloud resources, instead of relying on traditional failover methods that promote over-consumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model, in which the application code and infrastructure lie idle and come online only when needed. A while back, I wrote a blog about how AI can be used to compose an application at run time instead of having it always available.

Server virtualization played an important role in reducing power consumption. However, now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions. Though cloud sprawl is stretching the operations teams, newer automated monitoring tools are becoming effective in providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances, which enables the operations team to focus on automating and optimizing, instead of managing systems that are always on.
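The zombie-instance cleanup mentioned above reduces to a simple idle-detection rule. This is a minimal sketch under stated assumptions: the thresholds and metric fields are illustrative, and a real system would pull these figures from the monitoring tools (for example, CPU utilization and request counts) rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class InstanceMetrics:
    name: str
    avg_cpu_pct: float       # average CPU over the lookback window
    requests_last_7d: int    # traffic seen in the last 7 days

def find_zombies(fleet, cpu_threshold=2.0, min_requests=1):
    """Flag instances that are powered on but doing no useful work:
    near-zero CPU and no traffic in the lookback window."""
    return [
        m.name
        for m in fleet
        if m.avg_cpu_pct < cpu_threshold and m.requests_last_7d < min_requests
    ]

fleet = [
    InstanceMetrics("web-1", avg_cpu_pct=35.0, requests_last_7d=120_000),
    InstanceMetrics("old-poc", avg_cpu_pct=0.4, requests_last_7d=0),
    InstanceMetrics("batch-2", avg_cpu_pct=1.1, requests_last_7d=900),
]
print(find_zombies(fleet))  # → ['old-poc']
```

Note the two-condition test: low CPU alone would wrongly flag bursty batch workloads, so traffic is checked as well before an instance is marked for shutdown.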

Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand, cloud-led model if they are serious about their sustainability initiatives. Where the business case for legacy modernization considers power consumption at all, it mostly focuses on moving workloads from on-premises to the cloud, and it doesn’t usually consider architectural changes that can reduce power consumption, even on a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys as going beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. Enterprises can only build a sustainable business through sustainable technology once they’ve embraced cloud modernization as a potent force for reducing power consumption and carbon emissions.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emission, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].

Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight? | Blog

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently are. I’ve covered the MAGs’ (as they’re collectively known) ugly fight across other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized that they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from public cloud, which opened enterprise leaders’ eyes to the importance of fit-for-purpose in addition to a generic cloud. So, before they move more complex workloads to the cloud, they want to be absolutely sure they get their architectural choices and their cloud partner right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it’s hard to argue that there’s public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strong-arm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – on the condition that the enterprise goes all-in and builds architecture for workloads on their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn’t much different from any other enterprise technology war in which the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we’re seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn’t the end of it. Given enterprises’ increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move to the public cloud space was obvious for IBM and Oracle, as they’re already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they’re here now. They’re focused on “industrial grade” workloads that have strategic value for enterprises and on making open source core to their offerings to propagate multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca-Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction rises at a fever pitch with serverless, event-driven applications, and functions-as-a-service, it becomes relatively easier to meet the lofty ideal of Service-Oriented Architecture – a fully abstracted underlying infrastructure – which is also what a true multi-cloud environment embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced with API calls applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.
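The switching risk the vendors perceive can be illustrated with a minimal abstraction-layer sketch. The interface and the in-memory stand-in below are hypothetical; in a real system each concrete class would wrap a specific vendor’s SDK (S3, Cloud Storage, and so on) behind the same two methods, so replacing a cloud becomes a matter of swapping one class:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral storage interface; application code depends
    only on this, never on a vendor SDK directly."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in for an S3- or GCS-backed implementation, which would
    # translate these calls into the respective vendor's API.
    def __init__(self):
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

store: ObjectStore = InMemoryStore()
store.put("report.txt", b"q3 numbers")
print(store.get("report.txt"))  # → b'q3 numbers'
```

This is exactly why vendors push higher-value, proprietary services: commodity calls like these are trivially portable, while a workload wired into a vendor-specific data or AI service is not.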

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with MAGs and AIO? Please share with me [email protected].

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the run time environment for applications. The platform takes care of applications’ lifecycles as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying code hidden from the user. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – almost mirroring the good old What You See Is What You Get (WYSIWYG) tools of the HTML age.

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need to have other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that will drive better stickiness to their cloud platform. And as we all know, nothing drives stickiness better than an application built over a platform. This understanding implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, necessary financial investments, and rapid changes in business demand, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of data for their clients, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness, because low-code platforms are notorious for non-interoperability – it’s very difficult to migrate from one to another. But this isn’t just about stickiness. It’s one more initiative in the journey toward owning enterprises’ technology spend by building a cohesive suite of services. GCP, for its part, has realized that its API-centric assets offer a goldmine for stitching applications together. For example, Apigee helps expose APIs from different sources and platforms. Then GCP’s AppSheet, which it acquired last year, can use the data exposed by those APIs to build business applications. Microsoft, on the other hand, is focusing on its Power Platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there’s no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on cloud hyperscalers and now – with a competing offering – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up with the mega cloud vendors’ increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys. This additional complexity further exacerbates their problems. Many enterprises make their infrastructure teams lead the cloud journey. But these teams don’t necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling with the amount of their cloud spend and how to make sense of the spend. Cloud vendors introducing low-code platform offerings into their cloud service mix is going to make their task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].
