Author: Yugal Joshi

Metaverse: Opportunities and Key Success Factors for Technology Services Providers | Blog

While the metaverse may seem way out there, the opportunities for technology service providers in this next evolution are very real. Sci-fi movies such as Ready Player One introduced the concept of an interactive virtual reality (VR) world, and leading technology giants including Facebook, Nvidia, and Microsoft are now investing in this future. What will it take for tech service companies to seize a stake in this alternative universe that could be coming very soon? To learn more about the five factors providers will need to succeed in the metaverse, read on.

With digital technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and the cloud, buildings and other physical locations have become “smart spaces,” as we recently wrote about in this Viewpoint. The metaverse – a confluence where people live a seamless life across the real and virtual universes – can be thought of as the “mega smart space.” The Google Trends analysis of the word “metaverse” below suggests growing interest in it.

[Figure: Google Trends interest over time for the search term “metaverse”]

As the underlying powerhouse running the metaverse, the internet is expected to evolve to this next-generation model. Given the growing acceptance of virtual models as a standard way of living during the pandemic, many evangelists believe the metaverse may become a reality sooner than expected.

News such as a Gucci virtual bag selling for more than its physical value is grabbing attention. Virtual avatars are already attending corporate meetings and large audience forums with real people. The physical motion of body parts is being replicated in the digital world and vice versa, as witnessed at the recent SIGGRAPH 2021 conference. Even if we discount the hyperbole of vendors, there is merit in evaluating what this means for the technology services industry.

Opportunities to build a new world

Interestingly, the metaverse has no standard building blocks. Since it’s a parallel universe, things that exist in the real world are imitated. Therefore, blockchain-driven non-fungible tokens (NFTs) and payments, computing power to run the universe, connectivity through 5G and edge, cyber security, interactive applications, Augmented Reality (AR) and VR, digital twins, and 3D/4D models of the real world all become important. Of course, there will also be demand to integrate all of these seamlessly with enterprise technology.

The entire metaverse is based on technology. And with more technology spend comes more technology services spend. Although some of these enabling technologies, such as AR/VR, are still in their infancy, technology vendors are accelerating their development, which will only help technology service providers.

Five factors needed for tech service providers to succeed in the metaverse

  1. Innovative client engagement: Gaming companies may end up taking a lead in this area given their inherent capabilities to build engaging, life-like content. Unfortunately, few technology service providers work meaningfully with gaming companies. Vendors who can build product development competence for this set of clients will benefit from the metaverse. Service providers also will need to scale their existing engagements with BigTech and other technology vendors. The current work focused on maintaining their products or providing end-of-life support must change. Service providers will need to engage technology vendors upstream in ideating and designing products, not only in developing and supporting them. The traditional client base in segments such as Banking, Financial Services, and Insurance (BFSI), retail, manufacturing, and travel will continue to be important. These industries will build their own versions of the metaverse for consumers for specific business use cases or participate in/rent out others. Technology service providers will need access to business owner spend in these organizations. Other industries such as education, which do not currently provide large technology service opportunities, may also take the lead in metaverse adoption. The takeaway is service providers will need to expand their client coverage and rely less on their traditional client base
  2. Capabilities to work with “unknown” partners: Most service providers have a very long list of 200-300 technology partners they work with. However, they usually prioritize five or six as strategic partners who influence 70-80% of their channel revenue. This will need to change for the metaverse. With its complexity, the metaverse will require service providers to work not only with their peers but also with innumerable smaller companies. Niche partners could be manufacturers of smart glasses, tracking technologies, or virtual interfaces. Building viable Go-to-Market (GTM) and technical capabilities will be critical
  3. Product envisioning and user experience capabilities: While many service providers now have interactive businesses, their predominant revenue comes from building mobile apps, next-gen websites, or commerce platforms. Most have very limited true interactive or product envisioning capabilities. The metaverse will reduce the inherent need for “screens,” and the experience will be seamless. Most enterprises rely on specialist providers to brainstorm with and push their thinking to envision newer products. Other service providers are still catching up and are bucketed as “technical partners.” Envisioning capabilities will become critical. Therefore, service providers who have yet to win even product design opportunities have a long road to traverse. Although these technology service providers can continue to focus on the downstream work of core technology, they risk being sidelined and becoming irrelevant
  4. Infinite platform competence: The metaverse will need service providers to work closely with cloud, edge, 5G, carrier, and other vendors. However, the boundless infrastructure and platform capabilities needed will be of a different order. Service providers have already tasted success in cloud. However, the metaverse infrastructure will stress their capabilities to envision, design, and operate limitless infrastructure platforms. Their tools, operating processes, partners, and talent model will completely transform
  5. Monetization model: Service providers will need to bring and build innovative commercial models for their clients to monetize the metaverse. Much like the internet, no one will own the metaverse. However, every company will try to be its guardian to maximize their business. Service providers will need to understand the deep workings of the metaverse and advise clients on potential monetization. To do this, they will need not only traditional capabilities such as consulting and industry knowledge but also breakthrough thinking around potential revenue streams. For example, a bank or telecom company will want its metaverse to influence growth and not just become one more channel of customer experience

Who will take the lead?

Without adding to the ongoing debate on the metaverse and its social impact, it is safe to assume that it can create significant opportunities for technology service providers that will continue to grow as this nascent concept evolves further. These service providers already have many technical building blocks that will be needed to succeed.

However, given that metaverse conversations have not even reached infancy in their client landscape, service providers are not proactively thinking along this dimension. Since the metaverse will initially be dominated by technology vendors, who outsource a lot less than their enterprise counterparts, service providers will struggle unless they proactively strategize. Their traditional client base will also need a significant push to think along these lines and create opportunities.

Currently, this all may appear too farfetched or futuristic. Indeed, there are too many “unknown unknowns.” Unlike technology vendors, technology service providers do not proactively invest until they size up the market opportunity. However, as enterprise-class technology vendors such as Microsoft launch offerings like Mesh, it is quite apparent that the metaverse, in some shape or form, will become enterprise-ready sooner than we expect.

What has your experience been with metaverse-related opportunities? Please share your thoughts with me at [email protected].

Cloud Transformation: How Much Is Enough? | Blog

With today’s business transformation led by cloud, migration frenzy remains at a fever pitch. Even though most cloud vendors are now witnessing slower growth, it will still be years before this juggernaut halts. But can you have too much cloud? The question of how far enterprises should go in their cloud transformation journey is rarely thought of. Read on to learn when it may be time for your enterprise to stop and reexamine its cloud strategy.  

Enterprises believe cloud will continue to be critical but only one part of their landscape, according to our recently published Cloud State of the Market 2021. Once enterprises commit to the cloud, the next question is: How far should they go? This question runs deeper than asking how much of their workloads should run on cloud, when the opportune time is to repatriate workloads from cloud, and whether workloads should be moved between clouds.

Unfortunately, most enterprises are too busy with migration to consider it. Cloud vendors certainly aren’t bringing this question up because they are driving consumption to their platform. Service partners are not talking about this either, as they have plenty of revenue to make from cloud migration.

When should enterprises rethink the cloud transformation strategy?

The challenge in cloud transformation can manifest in multiple ways depending on the enterprise context. However, our work with enterprises indicates three major common obstacles. It’s time to relook at your cloud journey if your enterprise experiences any of the following:

  • Cloud costs can’t be explained: Cloud cost has become a major issue as enterprises realize they did not plan their journeys well enough or account for the many unknowns at the start. After that ship has sailed, the focus changes to micromanaging cloud costs and justifying the business case. It is not uncommon for enterprises to see the total cost of ownership go up by 20% post cloud migration, and the rising costs are difficult for technology teams to defend (see the cost-attribution sketch after this list)
  • Cloud value is not being met: Our research indicates 67% of enterprises do not get value out of their cloud journey. When this occurs, it is a good point to reexamine cloud. Many times, the issue is poor understanding of cloud at the outset and the choice of workloads. During migration frenzy, shortcuts are often taken and technical debt gets created, diluting the impact cloud transformation can have for enterprises
  • Cloud makes your operations more complex: With the fundamental cloud journey and architectural input at the beginning more focused on finding the best technology fit, downstream operational issues are almost always ignored. Our research suggests 40-50% of cloud spend is on operations, and yet enterprises do not think this through upfront. With the inherent complexity of the cloud landscape, accountability may become a challenge. As teams collapse their operating structure, this problem is exacerbated
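
To make the cost-attribution point concrete, here is a minimal sketch, assuming an AWS estate where resources carry a `cost-center` tag (the tag key is illustrative). It uses the Cost Explorer API to break one month's spend down by that tag; spend that lands in the untagged bucket is usually where the unexplainable costs live:

```python
import boto3

# Cost Explorer client; the caller needs the ce:GetCostAndUsage permission
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-08-01", "End": "2021-09-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # Group spend by a resource tag; "cost-center" is an illustrative key
    GroupBy=[{"Type": "TAG", "Key": "cost-center"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    key = group["Keys"][0]                      # e.g., "cost-center$payments"
    owner = key.split("$", 1)[1] or "UNTAGGED"  # empty value means untagged spend
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{owner:<20} ${amount:,.2f}")
```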

What should enterprises do when they’ve gone too far in the cloud?

This question may appear strange given enterprises are still scaling their cloud initiatives. However, some mature enterprises are also struggling with deciding the next steps in their cloud journey. Each enterprise and business unit within them should evaluate the extent of their cloud journey. If any of the points mentioned above are becoming red flags, they must act immediately.

Operating models also should be examined. Cloud value depends on the way of working and the internal structure of an enterprise. Centralization, federation, autonomy, talent, and sourcing models can influence cloud value. However, changing operating models in pursuit of cloud value should not become putting the cart before the horse.

Enterprises always struggle with the question of where to stop. This challenge is only made worse by the rapid pace of change in cloud. As enterprises go deeper into cloud stacks of different vendors, it will become increasingly difficult to tweak the cloud transformation journey.

Despite these pressures, enterprises should periodically evaluate their cloud journeys. Cloud vendors, system integrators, and other partners will keep pushing more cloud at enterprises. Strong enterprise leadership that can ask and understand the larger question from a commercial, technical, and strategic viewpoint is needed to determine when enough cloud is enough. Therefore, beyond the journey to the cloud and the journey in the cloud, enterprises should now also focus on the journey’s relevance and value.

If you would like to talk about your cloud journey, please reach out to Yugal Joshi at [email protected].

For more insights, visit our Market Insights™ exploring the cloud infrastructure model.

Multi-cloud: Strategic Choice or Confusion? | Blog

The multi-cloud environment is not going away, with most large enterprises favoring this approach. Multi-cloud allows enterprises to select different cloud services from multiple providers because some are better for certain tasks than others, along with other factors. While there are valid points to be made both for and against multi-cloud in this ongoing debate, the question remains: Are enterprises making this choice based on strategy or confusion? Let’s look at this issue closer.

The technology industry has never solved the question of best-of-breed versus bundled/all-in consumption. Many enterprises prefer to use technologies consumed from different vendors, while others prefer to have primary providers with additional supplier support. Our research suggests 90% of large enterprises have adopted a multi-cloud strategy.

The definition of multi-cloud has changed over the years. In the SaaS landscape, enterprise IT has always been multi-cloud, as it needed Salesforce.com to run customer experience, Workday to run Human Resources, SAP to run finance, Oracle to run supply chain, and ServiceNow to run service delivery. The advent of infrastructure platform players such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) has reinvigorated this best-of-breed versus all-in cloud debate that results in multi-cloud or single-cloud adoption.

In a true multi-cloud world, parts of workloads were expected to run on different clouds seamlessly. But increasingly, interoperability is becoming the core discussion in multi-cloud. Therefore, it is not about splitting workloads across clouds, but about ensuring a workload on one cloud can be ported to another. While debating a pedantic definition of multi-cloud is moot, it is important to acknowledge it as the way forward.
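
To illustrate what portability takes in practice, here is a minimal sketch of an abstraction layer: the application codes against a cloud-neutral interface, and only a thin adapter knows which platform it runs on. Object storage is just an example service, and the class and bucket names are assumptions:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Cloud-neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class S3Store(ObjectStore):
    """Adapter for AWS S3."""
    def __init__(self, bucket: str):
        import boto3
        self._client, self._bucket = boto3.client("s3"), bucket
    def put(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

class GCSStore(ObjectStore):
    """Adapter for Google Cloud Storage."""
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)
    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

def archive_report(store: ObjectStore, report_id: str, pdf: bytes) -> None:
    # Business logic never touches a cloud SDK directly, so porting the
    # workload means swapping the adapter, not rewriting the application
    store.put(f"reports/{report_id}.pdf", pdf)
```

Porting the workload then means instantiating `GCSStore` instead of `S3Store`; the business logic does not change.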

Most cloud vendors now realize multi-cloud is here to stay. However, behind closed doors, the push to go all-in is very apparent across the three large vendors. Let’s examine the following pro and anti-multi-cloud arguments:

[Figure: key arguments for and against multi-cloud]

Both the pro and anti-multi-cloud proponents have strong arguments, and in addition to the above points, there are many others on each side. But the truth is increasing numbers of enterprises are adopting multi-cloud. So, when an enterprise proactively adopts a multi-cloud strategy, does that mean it’s a strategic choice or strategic confusion about cloud and its role as well as the other factors outlined above?

This is a hard question to answer, and each enterprise will have to carve out its own cloud strategy. However, enterprises should realize this strategy will change in the future. No enterprise will be “forever single cloud,” but most will be “forever multi-cloud.” Therefore, once they embark on a multi-cloud strategy, it will be extremely rare for enterprises to go back, but they can change their single cloud strategy more easily.

In enterprises with significant regional or business autonomy, multi-cloud adoption will grow. Enterprises may adopt various cloud vendors for different regions due to their requirements for workloads, regulations, vendor relationships, etc. Instances will continue to exist where some senior leaders support certain cloud vendors, and, as a result, this preference may also lead to multi-cloud adoption.

On many occasions, enterprises may adopt multi-cloud for specific workloads rather than as part of their strategy. They may want data-centric workloads to run on a cloud but may not want to leverage the cloud for other capabilities. Many cloud vendors may play “loss leaders” to get strategic enterprise workloads (e.g., SAP, mainframe) onto their platform to create sticky relationships with clients.

Many software vendors are launching newer offerings proclaiming they work best with clients’ multi-cloud environments. As an ecosystem is built around multi-cloud, it will be hard to change. In addition to AWS, GCP, and MS Azure, other cloud vendors are upping their offerings, as we covered earlier in Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight?.

Given multi-cloud drives “commoditization” of underlying cloud platforms, large cloud vendors are skeptical of it. Integration layers that provide value accretion on abstract platforms rather than core cloud services are an additional vendor concern. However, eventually, a layer on top of these cloud vendor platforms will enable different cloud offerings to work together seamlessly. It will be interesting to see whether cloud platform providers or other vendors end up building such a layer.

We believe system integrators have a good opportunity to own this “meta integration” of multi-cloud and create seamless platforms. However, most of these system integrators are afraid of upsetting the large cloud vendors by even proactively bringing this up with them, let alone creating such a service. This reluctance may harm the cloud industry in the long run.

What are your thoughts about multi-cloud as a strategy or a confusion? Please write to me at [email protected].

LCLC Not SDLC: Low-code Life Cycle Needs a Different Operating Model | Blog

Low-code platforms are here to stay because of the rapid application development and speed to market they enable. But why is no one taking the same “life cycle” view for low-code applications and workflows as for typical software development? A new model of Low-code Development Life Cycle (LCLC or LC2) is needed for enterprises to realize the potential benefits and manage risks. Read on for a deep dive into these issues in this latest blog continuing our coverage of low-code.

Our market interactions suggest enterprises adopting low-code platforms to build simpler workflows or enterprise-grade applications are not thinking about life cycle principles. Though enterprises have followed the Software Development Life Cycle (SDLC) for ages to build applications, it is surprising that no such initiatives exist for low-code applications.

As we previously discussed, low-code platforms, which require little or no programming to build applications, are surging in adoption. We covered the key applications and workflows enterprises are focusing on in an earlier blog, The Future of Digital Transformation May Hinge on a Simpler Development Approach: Low Code.

Given its staying power in the market, it’s time to consider Low-code Development Life Cycle (LCLC or LC2).

Here are some recommendations on how LCLC can be structured and managed:

Rethink low-code engineering principles: Enterprises that have long relied on SDLC concepts will need to build newer engineering and operations principles for low-code applications. Enterprises generally take long-term bets on their architecture preferences, Agile methodologies, developer collaboration platforms, DevOps pipelines, release management, and quality engineering.

Introducing a low-code platform changes most of this, and some typical SDLC steps may not be needed. For example, these platforms do not generally provide an Integrated Development Environment (IDE) and rely on “designing” rather than “building” applications. In SDLC, different developers can build their own code using the IDE, programming language, databases, and infrastructure of their choice. They can check in their code, run smoke tests, integrate, and push to their Continuous Integration/Continuous Delivery (CI/CD) pipeline.

However, for most low-code platforms, the entire process has to run on a single platform, making it nearly impossible to collaborate across two low-code platforms. Moreover, enterprises might be exposed to performance, compliance, and risk issues if these applications and workflows are built by citizen developers who are unaware of enterprise coding standards. This can also push quality assurance costs beyond budgeted amounts.

Even professional developers, who are well aware of enterprise standards when building code the traditional way, may not know how to manage their LCLC. Many low-code platforms support SDLC steps, such as requirements management, within their platform. Therefore, all the collaboration has to happen on the low-code platform. This creates a challenging situation requiring enterprises to maintain collaboration platforms for low-code applications separate from the other standard tooling they have invested in (such as Teams, Slack, and other agile planning tools) – unless they are integrated through APIs, adding overhead and cost.
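
As a hypothetical sketch of such an API integration, suppose the low-code vendor exposes a REST endpoint listing recently published applications (the endpoint and payload shape are invented for illustration), and release events are relayed into a standard Slack incoming webhook so the rest of the SDLC tooling stays in the loop:

```python
import requests

# Hypothetical vendor endpoint; real low-code platforms will differ
LOWCODE_API = "https://lowcode-vendor.example.com/api/v1/apps"
# Placeholder Slack incoming-webhook URL
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def relay_releases(api_token: str) -> None:
    # Pull recently published low-code apps from the platform's API
    resp = requests.get(
        LOWCODE_API,
        headers={"Authorization": f"Bearer {api_token}"},
        params={"status": "published"},
        timeout=10,
    )
    resp.raise_for_status()
    # Push each release event into the collaboration tool the team already uses
    for app in resp.json().get("apps", []):
        requests.post(
            SLACK_WEBHOOK,
            json={"text": f"Low-code app '{app['name']}' v{app['version']} published"},
            timeout=10,
        )
```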

Also complicating issues is the desire by some developers to have the developer portal of these low-code platforms extend to their IDE. Most platforms prefer their own CI/CD pipelines, although they can also integrate with third-party tools enterprises have invested in.  A different mindset is needed to manage this increased technological complexity. Because low-code applications are difficult to scale for large data sets, some of the scaling imperatives enterprises have built for years will need to be rethought.

Manage lock-in: Most low-code platform vendors have a specific scripting language that generates the application and the workflow. Developers who are trained on Java, .NET, Python, and similar languages do not plan to reskill to learn proprietary languages for so many different platforms. While enterprises are accustomed to multiple programming languages in their environment, they normally have selected a few primary languages. Though low-code platforms do not extensively rely on developers coding applications, enterprises generally will want to know the “under the hood” aspects around architecture, data models, the integration layer, and other system elements.

Build governance: We previously covered how low-code platform proliferation will choke organizations that are blindly prioritizing the speed of software delivery. Therefore, governance is needed not only in the development life cycle but also to manage the proliferation of platforms within enterprises. Enterprises will need to closely watch the low-code spend from subscription and software perspectives. As low-code platforms support native API-based access to external platforms, enterprises will need to govern that spend, risk, and compliance (for example, looking at such issues as whether some third-party platforms are on the blacklist).
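
A minimal sketch of that governance check, assuming the enterprise can export an inventory of its low-code apps and the outbound endpoints they call (the inventory format and deny-list entries are illustrative):

```python
from urllib.parse import urlparse

# Illustrative deny list of third-party hosts the enterprise has blocked
DENY_LIST = {"api.unvetted-analytics.example", "free-ocr.example.net"}

# Illustrative inventory: low-code app -> outbound API endpoints it calls
lowcode_apps = {
    "expense-approval": ["https://erp.internal.example.com/api"],
    "lead-capture": ["https://api.unvetted-analytics.example/track"],
}

def audit(apps):
    """Yield (app, host) pairs where an app calls a blocked third party."""
    for app, endpoints in apps.items():
        for url in endpoints:
            host = urlparse(url).hostname or ""
            if host in DENY_LIST:
                yield app, host

for app, host in audit(lowcode_apps):
    print(f"COMPLIANCE FLAG: '{app}' calls blocked host {host}")
```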

What should enterprises know?

Low-code platforms can be a potent tool for enterprises. But, if not managed well, they can be risky. To manage the potential risks, enterprises need to be aware of these three considerations:

  • Understand vendor solutions and their history: Different vendors can have different views and visions around low-code based on their heritage in APIs, Business Process Management (BPM), BigTech platforms, or process automation. Most will need their own run time engine/platform to be deployed to execute the application/low-code. Others may allow code to run outside of their platform. Moreover, their capabilities in supporting aspects such as forms, process models, simple data integration, application templates, and library components can vary significantly. CIOs need to understand these nuances
  • Require business and CIO collaboration: Businesses love low-code platforms because they allow rapid application development and shorten time to market. However, as adoption scales, businesses will realize they cannot manage this low-code ecosystem on their own. Whether CIOs like it or not, businesses will punt their responsibility over to the CIO organization. Therefore, CIOs need to proactively address this requirement. They will need a strong discovery model to take inventory of the low-code adoption, workflows, and applications they are supporting
  • Assess the applications and workflows the low-code platform can support: Vendors normally claim they can build “complex” applications through their low-code platforms. However, this definition is not consistent, and the applications may not be as complex as vendors say. Enterprise-class applications need code standardization, libraries, documentation, security, recovery, and audit trails. Most of these platforms provide out-of-the-box or custom integration with other enterprise applications, project management, and other SDLC tools. CIOs need to evaluate the cost, performance, maintainability, and security aspects of these multi-point integrations

Expect M&A activity

Enterprises’ desire to drive digital transformation will make low-code proliferation a reality. Currently, most low-code vendors derive a modest US$100-500K in revenue per client, indicating the focus is mostly on the Small and Medium Business (SMB) segment or small line-of-business buying. As a result, we expect consolidation in this market, with large vendors such as Salesforce, ServiceNow, and Microsoft further eating into smaller vendors’ share. Enterprises should keep a close watch on this M&A activity as it can completely change their low-code strategy, processes, and the business value they derive from strategic investment in a low-code platform.

What has your low-code journey been like, and how are you using life cycle concepts? Please reach out to share your story with me at [email protected].

On-screen versus Being Green: Can Virtual Conferencing Be Sustainable? | Blog

Reluctant to be seen on camera during conference calls? Keeping the video off can have the added benefit of helping to reduce your carbon footprint. The skyrocketing use of video conferencing during the pandemic has come with a hidden cost to the environment of increased emissions. Learn more about what organizations can do to reverse the negative impact our digital lifestyles and increased work from home are having.

A recent study by a team of researchers from MIT, Purdue University, and Yale University on the impact of internet usage and video conferencing found that an hour of conference calls can add up to 1 kilogram of carbon emissions, which can be sharply reduced by switching off your video.
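
The arithmetic is worth making explicit. A back-of-the-envelope sketch using the ranges reported around that study – roughly 150-1,000 grams of CO2 per hour of video, with turning the camera off cutting the footprint by about 96% – where all figures should be treated as illustrative estimates:

```python
# Illustrative figures from the reported study range
GRAMS_CO2_PER_VIDEO_HOUR = 1000   # worst case: ~1 kg per hour with video on
CAMERA_OFF_REDUCTION = 0.96       # reported saving from switching video off

hours_per_week = 15               # assumed meeting load for one employee

video_on = hours_per_week * GRAMS_CO2_PER_VIDEO_HOUR / 1000   # kg CO2e/week
video_off = video_on * (1 - CAMERA_OFF_REDUCTION)

print(f"Camera on : {video_on:4.1f} kg CO2e per week")   # 15.0 kg
print(f"Camera off: {video_off:4.1f} kg CO2e per week")  # 0.6 kg
```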

With COVID, collaboration on platforms such as Zoom, Microsoft Teams, and Slack has soared. Teams, for example, has seen daily users increase from 32 million pre-pandemic to 145 million in recent months. As these times made conference calls and virtual events a regular part of our lives, “seeing” others on video has become the norm.

While virtual meetings have reduced emissions from air and road commuting as well as energy usage in offices, they require large amounts of data transfer, placing huge energy demands on the data centers that power the internet.

All of this raises new questions for organizations to grapple with. Organizations want to demonstrate their commitment to sustainability but also desire to engage with each other, clients, and other stakeholders. How can people feel like they are part of a team and build relationships with their videos off?

How to reduce your footprint and stay connected

As we return to a new normal, video conference calls will continue, as they have given us flexibility and time savings and kept people connected through extraordinary circumstances. Here are some recommendations on how organizations can reduce the environmental impact while continuing to use today’s popular tools for staying close:

  • Invest in real-time dashboards on internet carbon emissions that can help organizations drill down into how many calls are made and govern whether all of these calls are needed
  • Optimize the time spent on video calls and consider shortening calls based on analytics from the call-related carbon emission dashboard about the average time spent on video conferences
  • Train your workforce about who to invite to video conferences and reduce the number of large online meetings open to all employees who might not need to be involved
  • Resist falling into the trap of handling video usage through strict policy initiatives. While organizations may think they should build stronger policies to outright stop lengthy calls or ration the number of calls, this rarely works and may lead to higher employee attrition
  • Engage employees continuously to evaluate their work experience, physical health, mental condition, and other similar aspects. The pandemic already has impacted the employee experience of being part of a social group and forming personal bonds through face-to-face interactions. If video calls are reduced, employees may feel isolated in their work from home environments

Create a shared vision of sustainability

Organizations need to create a shared vision of their sustainability goals and make employees part of these efforts. The carbon emission dashboard should let each employee know their individual emissions from conference calls and compare them with the amount of fossil fuel consumed. This will give employees a clearer perspective on the enormity of the impact and show what they can achieve by optimizing their virtual interactions.
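
A minimal sketch of the aggregation behind such a dashboard, assuming call logs can be exported as (employee, video minutes) records; the emission factor is the same illustrative worst-case figure as above, and petrol's roughly 2.3 kg of CO2 per litre burned provides the fossil-fuel comparison:

```python
from collections import defaultdict

KG_CO2_PER_VIDEO_MINUTE = 1.0 / 60   # illustrative: ~1 kg per hour of video
KG_CO2_PER_LITRE_PETROL = 2.3        # approximate combustion figure

# Illustrative export of last week's call logs: (employee, video minutes)
call_log = [("asha", 240), ("ben", 600), ("asha", 90), ("carol", 45)]

per_employee = defaultdict(float)
for employee, minutes in call_log:
    per_employee[employee] += minutes * KG_CO2_PER_VIDEO_MINUTE

for employee, kg in sorted(per_employee.items(), key=lambda kv: -kv[1]):
    litres = kg / KG_CO2_PER_LITRE_PETROL
    print(f"{employee:<8} {kg:5.1f} kg CO2e  (~{litres:.1f} L of petrol)")
```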

There are no easy solutions to this new and growing problem. Our market interactions show few companies are being proactive in their approaches to video conferencing. Now is the time for organizations to think before the next time they turn on their videos. Organizations should also focus on the big picture of sustainability and carefully evaluate the environmental impact of virtual conferences. If they determine it is less significant than thought, they should instead focus on other more meaningful initiatives. While many enterprises may eventually have to institutionalize policies around virtual conferences, these approaches should be well contemplated and fully communicated.

How is your organization handling video conferencing? Please write to me at [email protected] to share your experiences.

Cloud War Chapter 6: Destination versus Architecture – Are the Cloud Vendors Fighting the Right Battle? | Blog

Most cloud vendors are obsessed with moving clients to their platforms. They understand that although their core services, such as compute, are no longer differentiated, there is still a lot of money to be made just by hosting their clients’ workloads. And they realize that this migration madness has sufficient legs to last for at least three to five years. No wonder migration spend constitutes 50-60 percent of services spend on cloud.

What about the workloads?

The cloud destination – meaning the platform it operates on – does not help a workload run better. To get modernized, or even natively built, a workload needs an open architecture underpinned by software that helps applications be built just once and run on any platform. Kubernetes has become a standard way of building such applications. Today, every major vendor has its own Kubernetes service, such as Amazon EKS, Azure Kubernetes Service, Google Kubernetes Engine, IBM Cloud Kubernetes Service, and Oracle Container Engine for Kubernetes.

However, Kubernetes alone does not create an open and portable application; it helps run containerized workloads. But if those workloads are not portable, Kubernetes services defeat the purpose. Containerized workloads need to be architected to ensure user and kernel space are well designed. This is relevant for both new and modernized workloads. For portability, the system architecture needs to be built on a layer that is open and portable. Without this, the entire migration madness will simply add one more layer to enterprise complexity. Interestingly, any cloud vendor that helps clients build such workloads will almost always be the preferred destination platform, as clients benefit from combining the architecture, build, and run environment.
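
At the code level, "architected for portability" largely means the twelve-factor discipline: every platform-specific dependency is injected through the environment, so the same container image can be scheduled on Amazon EKS, Azure Kubernetes Service, Google Kubernetes Engine, or any other Kubernetes service. A minimal sketch, with variable names assumed for illustration:

```python
import os

class Config:
    """All platform specifics arrive via the environment, not the code.

    The same container image then runs unchanged on any Kubernetes
    service; only the per-cloud deployment manifest differs.
    """
    def __init__(self, env=os.environ):
        # Backing services are addressed by URL over standard protocols,
        # never through cloud-specific SDK calls baked into the image
        self.database_url = env["DATABASE_URL"]      # e.g., postgres://...
        self.queue_url = env["QUEUE_URL"]            # e.g., amqp://...
        self.object_store = env["OBJECT_STORE_URL"]  # S3-compatible endpoint
        self.port = int(env.get("PORT", "8080"))

if __name__ == "__main__":
    cfg = Config()
    print(f"Serving on :{cfg.port}, data at {cfg.database_url}")
```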

Role of partners of cloud vendors

Cloud vendors need to work with partners to help clients build new generation workloads that are easily movable and platform independent. The partners include multiple entities such as ISVs and service providers.

ISVs need to embed such open and interoperable elements into their solution so that their software can run on any platform. Service providers need to engage with clients and cloud vendors to build and modernize workloads by using open, interoperable principles. As there is significant business in cloud migration, there is a risk that the service partners will get blinded and become more focused on the growth of their own cloud services business than on driving value for the client. This is a short-term strategy that can help service providers meet their targets. But it will not make the service provider a client’s strategic long-term partner.

Role of enterprises

There is significant pressure on enterprise technology teams to migrate workloads to the cloud. Many of the clients we work with take pride in telling us they will move more than 80 percent of their workloads to the cloud in the next two to three years. There is limited deliberation on which workloads need a newer build on portable middleware, or even whether they need a runtime that can support an open and flexible architecture. Unfortunately, many enterprise executives have to show progress in cloud adoption. And though enterprise architects and engineering teams do come together to evaluate how a workload needs to be moved to the cloud, there is little discussion on building an open architecture for these workloads. A bright spot is that there seem to be good architectural discussions around newer workloads.

Enterprises will soon realize they are hitting the wall of cloud value because they did not meaningfully invest in building a stronger architecture. Their focus in moving their workloads to their primary cloud vendor is overshadowing all the other initiatives they must undertake to make their cloud journey a success.

Do you focus on the architecture or the destination for your workloads? I’d enjoy hearing your experiences and thoughts at [email protected].

Pega Platform: Constrained Talent and Higher Implementation Spend Limits Adoption | Blog

A BigTech battle seems to be heating up in the business process management (BPM) and CRM space. So, after completing our PEAK Matrix Assessment on Pega Services, our Enterprise Platform Services team conducted a Voice of the Market study on the company’s platform. They interviewed the 16 global IT service providers covered in the PEAK Matrix and 35+ of Pega’s enterprise clients to gauge their reactions to the Pega platform and Pega’s main competitors. The individuals we interviewed ranked Appian, Bizagi, IBM, Pega, and Salesforce “above, on, or below average” in multiple areas and drilled down on those areas to explain their rankings.

Here’s a summary of how Pega fared from that study.

Strong technology sophistication

Pega’s depth of products and its ability to enable rapid process automation leveraging low-code development and next-generation technology capabilities like artificial intelligence (AI), machine learning (ML), and robotic process automation (RPA) helped it receive the highest score among the providers. Enterprise clients rated its peers above average on their platform capabilities but cited dependencies on third-party capabilities as a key gap.

Constrained talent availability

Enterprise clients perceive a high demand-supply gap for Lead System Architects (LSAs), Customer Decision Hub (CDH) and marketing specialists, and business architects for the Pega platform, so it received a below-average score on this parameter. In contrast, its peers have built sizable talent pools, earning them above-average scores.

Complex licensing construct

Pega’s clients cited that its commercial flexibility could be better and that it is often difficult to understand Pega’s licensing construct. On the other hand, they believe Pega’s peers are effectively bundling their offerings and providing flexible licensing and contracting options.

Good customer experience

According to customers, Pega’s collaboration with systems integrators (SIs) in driving large engagements earned it an above-average score, but they believe its proactivity in responding to client questions could be better. Its peers also received above-average scores for their client-centric engagement.

What’s working well for Pega

  • Strong product portfolio: Pega’s well-knit product offerings across BPM and key CRM areas and its domain-specific capabilities help position it as a transformation partner of choice for enterprises in the BFSI, Telecom, HLS, and public sector industries. Enterprises in these sectors have consistently rated Pega higher than average for its technology capabilities in the BPM domain.
  • Seamless integration and high customizability: Pega’s ability to easily integrate with all the major enterprise applications and its high level of customizability addressing complex use cases are viewed as its unique strengths. Service providers consider Pega a vendor-agnostic platform and cited multiple instances of adoption along with other key enterprise platforms to drive expansive digital transformation for their clients.
  • High enterprise mindshare: Pega’s transformational proof points with quantitative business impact in the above-mentioned industries across areas such as case management and BPM, low-code platform, RPA, customer service, and customer decision hub have instilled confidence among enterprise clients looking for end-to-end support.

What do customers expect from Pega in 2021?

  • Expanded partner ecosystem: Enterprises in emerging European markets, MEA, and Latin America cited that Pega needs to further its SI network investments across these regions to enhance delivery capabilities. They also believe Pega should enable visibility into its partners’ delivery capabilities by further structuring its partner program and upgrading its partner portal.
  • Ensure talent availability: As Pega’s product portfolio rapidly expands beyond its core focus, enterprises have cited difficulties in hiring, retaining, and training resources on newer modules. So, they believe Pega should better structure its certification programs and make larger investments in well-curated learning and training initiatives for enterprises and service providers.
  • Deliver out-of-the-box solutions: Enterprises perceive Pega to be a complex platform that increases time-to-market and implementation spend. They also believe Pega should further invest in enhancing mindshare and adoption of Pega Marketplace to facilitate the development of out-of-the-box solutions.

While Pega’s rapidly expanding product portfolio makes it one of the important vendors for digital process transformation initiatives, it needs to continuously evaluate its platform capabilities and make targeted investments to consistently drive higher value for its clients.

How has your experience been with Pega? Please write to us at [email protected] and [email protected].

The State of Cloud-Driven Transformation | In the News

From the realities of remote work to the increased focus on digital commerce, organizations are facing new challenges to their IT and security infrastructure—not to mention their general viability as a business. In response, many are rethinking their strategies and accelerating changes to their technology stacks to best confront the new future—primarily through the cloud.

While cost reduction or flexibility remains an important driver of cloud adoption, organizations are also seeking a number of other business benefits, from agility to innovation. “Whereas once organizations had to choose between quality, cost, and time when investing in new technology, the cloud operating model enables them to be more aspirational,” says Yugal Joshi, vice president of digital, cloud, and application services research for strategic IT consultancy and research firm Everest Group. “They can achieve the holy trinity of technology benefits. Organizations today want it all.”

Read the full Harvard Business Review report

 

Sustainable Business Needs Sustainable Technology: Can Cloud Modernization Help? | Blog

Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:

  • Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive backup to counter failure events, essentially multiplying electricity consumption.
  • Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Developer, QA, security, and pre-production teams end up building their own environments. Yet whenever a system is built, the entire infrastructure has to be configured even though the team needs only 10-20% of it. Thus, much of the electricity consumed ends up powering capacity that isn’t needed at all.
  • Run: Operations teams have to make do with what the upstream teams have given them. They can’t take down systems to save power on their own, as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to make systems “over available” even when they aren’t being used. The run teams didn’t – and still don’t – have real-time insights into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power.

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, despite digital services from large data center and cloud vendors growing sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.

Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud’s runtime access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, as cloud instances can be provisioned quickly, they are – by design – fault tolerant, so they don’t rely on excessive backup systems. They can be designed to go down, with their backups coming online immediately instead of being forever on. The development, test, and operations teams can provision infrastructure as and when needed. And they can shut it down when their work is completed.

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads keeping failure as an essential element to be tackled through orchestration of runtime cloud resources, instead of relying on traditional failover methods that promote over-consumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model, where the application code and infrastructure lie idle and come online only when needed. A while back I wrote a blog about how AI can be used to compose an application at run time instead of it always being available.
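
As a small illustration of the event-driven model, consider a function-as-a-service handler (the AWS Lambda signature and S3 event shape are used for illustration): no compute is provisioned or powered while the code is not executing, and capacity appears only for the moments the event takes to process:

```python
def handler(event, context):
    """Runs only when an object lands in storage; idle the rest of the time."""
    record = event["Records"][0]              # an S3 "object created" event
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # ... process the new object here; afterwards the capacity vanishes ...
    return {"processed": f"s3://{bucket}/{key}"}
```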

Server virtualization played an important role in reducing power consumption. However, now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions. Though cloud sprawl is stretching operations teams, newer automated monitoring tools are becoming effective at providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances, which enables the operations team to focus on automating and optimizing instead of managing systems that are always on.
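
A minimal sketch of that kind of automation, assuming an AWS landscape where non-production instances carry an `env: dev` tag (the tag and the CPU threshold are illustrative): it flags running instances whose CPU over the past day marks them as zombies, then powers them off:

```python
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def find_zombies(cpu_threshold=2.0):
    """Return running dev instances whose hourly average CPU stayed below
    cpu_threshold percent for the past 24 hours."""
    zombies = []
    pages = ec2.get_paginator("describe_instances").paginate(Filters=[
        {"Name": "tag:env", "Values": ["dev"]},            # illustrative tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                iid = instance["InstanceId"]
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2", MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": iid}],
                    StartTime=datetime.utcnow() - timedelta(days=1),
                    EndTime=datetime.utcnow(),
                    Period=3600, Statistics=["Average"],
                )
                points = stats["Datapoints"]
                if points and max(p["Average"] for p in points) < cpu_threshold:
                    zombies.append(iid)
    return zombies

idle = find_zombies()
if idle:
    ec2.stop_instances(InstanceIds=idle)  # powered off until someone needs them
```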

Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on the movement of workloads from on-premises to the cloud. And it doesn’t usually consider architectural changes that can reduce power consumption, even on a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. However, enterprises can only think of building a sustainable business through sustainable technology when they’ve adopted cloud modernization as a potent force to reduce power consumption and carbon emissions.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emission, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].

Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight? | Blog

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently do. I’ve covered the MAGs’ (as they’re collectively known) ugly fight across other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized that they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from public cloud, which opened enterprise leaders’ eyes to the importance of fit-for-purpose in addition to a generic cloud. So, before they move more complex workloads to the cloud, they want to be sure they get their architectural choices and cloud partner absolutely right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it’s hard to argue that there’s public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strongarm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – that is, on the condition that the enterprise goes all-in and builds architecture for workloads on their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn’t much different than any other enterprise technology war where the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we’re seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn’t the end of it. Given enterprises’ increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move to the public cloud space was obvious for IBM and Oracle, as they’re already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they’re here now. They’re focused on “industrial grade” workloads that have strategic value for enterprises and on building open source as core to their offering to propagate multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca-Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction rises at a fever pitch with serverless, event-driven applications and functions-as-a-service, it becomes relatively easier to meet the lofty ideals of Service Oriented Architecture to have a fully abstracted underlying infrastructure, which is what a true multi-cloud environment also embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced with API calls applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with MAGs and AIO? Please share with me at [email protected].
