All Posts By Yugal Joshi

Yugal Joshi is a member of the IT services team and assists clients on topics related to mobility, analytics, digital, cloud, and application services. Yugal’s responsibilities include leading Everest Group’s cloud and digital services research offerings. He also assists enterprises in adopting emerging technologies and methodologies such as Agile, DevOps, and software-defined infrastructure. To read more, please see Yugal’s bio.

The Top Three Cloud Vendors Are Fighting An Ugly War, And It’s Only Going To Get Uglier | Blog

By | Blog, Cloud & Infrastructure

With the massive size of the public cloud market, it’s reasonable to assume that there’s plenty of pie for each of the top three vendors – Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure (Azure) – to get their fill.

But the truth is that they’re all battling to capture even larger slices. While this type of war has happened in other technology segments, this one is unique because the market is growing at 30-40 percent year-over-year.

Here are a few examples of the current ugly wars these vendors are waging against each other.

AWS is luring away Azure customers. Channel checks suggest that AWS is incentivizing clients to move their Windows workloads to Linux. The next step is to move their SQL Server workloads to other databases (e.g., PostgreSQL). Of course, it won’t stop there; there will be an entire migration strategy in place. And there have even been a few instances in which AWS, along with the implementation partner, has funded clients’ early PoCs for this migration.

Azure is pushing for AWS migration. It isn’t uncommon for mid-sized implementation partners to make their client pitch solely on the fact that they can migrate AWS virtual instances to Azure and achieve 20-30 percent, or more, cost savings. It also isn’t uncommon for Microsoft to bundle many of its offerings, e.g., Office 365, to create an attractive commercial package for its broader cloud portfolio against AWS, which lacks an enterprise applications play.

GCP is pushing Kubernetes cloud and Anthos. GCP’s key argument against AWS and Azure is that they are both “legacy clouds.” The entire Kubernetes cloud platform story is becoming very interesting and relevant for clients. Moreover, GCP is pushing hard to take the lead for newer workloads such as AI, machine learning, and containers.

Each of these vendors will continue to find new avenues to create trouble for each other. Given that Azure and GCP are starting from a low base, AWS has more to lose.

So, how will the cloud war play out? Three things will happen going forward.

Stack lock-in

The vendors have realized that clients can relatively easily move their IaaS, and even PaaS, offerings to another cloud. Therefore, they’ll push their clients to adopt native platform offerings that cannot be easily ported to different clouds (e.g., serverless). While some of the workloads will be interoperable across other clouds, parts will run only on one cloud vendor’s stack.
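
To make the lock-in concrete, here is a minimal, hypothetical Python sketch of a serverless function written against AWS-native interfaces. The handler signature, the API Gateway event shape, and the boto3/DynamoDB calls are all AWS-specific; moving this function to Azure Functions or Google Cloud Functions would mean rewriting all three.

```python
# Minimal sketch: a serverless order-capture function (hypothetical table and
# payload). Each element marked below is AWS-native and non-portable.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical DynamoDB table


def handler(event, context):  # Lambda-specific signature
    order = json.loads(event["body"])  # API Gateway proxy event shape
    table.put_item(Item=order)  # vendor-native persistence call
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}
```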

Preferred partnership for workloads

While the vendors will acknowledge that implementation partners will always have cloud alliances, they’ll push to have preferred partner status for specific workloads such as database lift and shift, IoT, and AI. For this, most cloud vendors will partner with strategy consulting firms and implementation partners to shape enterprises’ board room agenda.

Migration kits

In 2018, Google acquired cloud migration specialist Velostrata. This year, both AWS and Azure launched migration kits targeting each other’s clients. This battle will soon become even fiercer, and will encompass not only lift and shift VM migration, but also workloads such as database instances, DevOps pipelines, application run time, and even applications.

With the cloud giants at war, enterprises need to be cautious of where to place their bets. They need to realize that working with cloud vendors will become increasingly complex, because it’s not only about the offerings portfolio but also the engagement model.

Here are three things enterprises should focus on:

  • Ensure interoperability and migration: Enterprises need to make the cloud vendors demonstrate evidence of easy workload interoperability with, and migration to, other cloud platforms. They should also evaluate the target cloud vendor’s own native migration toolkits and services, regardless of what the selected implementation partner may use.
  • Stress test the TCO model: Enterprises need to understand the total cost of ownership (TCO) of the new services offered by the cloud vendors. Most of our clients think the cloud vendors’ “new offerings” are expensive, and believe there’s a lack of translation between the offerings and the TCO model. Enterprises should also stress test the presented cost savings use cases and ask for strong references (a simplified sketch of such a stress test follows this list).
  • Get the right implementation partner: For simpler engagements, the cloud vendors increasingly prefer smaller implementation partners because they are more agile. Though the vendors claim their pricing model doesn’t change across implementation partners, enterprises need to ensure they are getting the best commercial construct from both external parties. For complex transformations, enterprises must do their own evaluation rather than rely on cloud vendor-attached partners. Doing so will become increasingly important given that most implementation partners work across all the cloud vendors.
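
Here is the deliberately simplified TCO stress-test sketch referenced above, in Python with made-up numbers; a real model would also cover egress fees, support tiers, licensing, migration labor, and discount programs.

```python
# Deliberately simplified three-year TCO comparison with illustrative numbers.
YEARS = 3


def on_prem_tco(hardware: int, annual_ops: int) -> int:
    # Up-front hardware plus recurring operations (power, space, staff).
    return hardware + annual_ops * YEARS


def cloud_tco(monthly_compute: int, monthly_storage: int, monthly_egress: int) -> int:
    # Pure pay-as-you-go run rate; no reserved-instance discounts modeled.
    return (monthly_compute + monthly_storage + monthly_egress) * 12 * YEARS


on_prem = on_prem_tco(hardware=500_000, annual_ops=200_000)
cloud = cloud_tco(monthly_compute=18_000, monthly_storage=4_000, monthly_egress=3_000)

print(f"on-prem: ${on_prem:,}  cloud: ${cloud:,}  saving: {1 - cloud / on_prem:.0%}")
# Stress test: double the egress or halve the ops estimate and check whether
# a claimed 20-30 percent savings figure still holds.
```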

The cloud wars have just begun, and will become uglier going forward. The cloud vendors’ deep pockets, technological capabilities, myriad offerings, and sway over the market are making their rivalries different than anything your business has experienced in the past. This time, you need to be better prepared.

What do you think about the cloud wars? Please write to me at [email protected].

Should You Scale Agile/DevOps? | Blog

By | Automation/RPA/AI, Blog

Scaling in an application development environment can take many different shapes and forms. For the purposes of this blog, let’s agree that scaling implies:

  • From one team to a project
  • From one project to a program
  • From a program to multiple programs
  • From multiple programs to a portfolio
  • From a portfolio to a business
  • From a business to the entire enterprise.

Now that we’ve set the stage, our research suggests that over 90 percent of enterprises have adopted some form of Agile, and 63 percent believe DevOps is becoming a de facto delivery model. Having tasted initial success, most enterprises plan to scale their Agile/DevOps adoption.

The first thing we need to address here is the confusion: does increasing adoption imply scaling?

Purists may argue that scaling across different projects isn’t really scaling unless they are part of the same program. This is because scaling by its very nature creates resource constraints, planning issues, increased overhead, and entropy. However, the resource constraints primarily relate to shared assets, not individual teams. So, if team A on one program and team B on another both adopt Agile/DevOps, neither team will be meaningfully impacted. Both can have their own tools, processes, talent, and governance models. This all implies that this type of scaling isn’t really challenging. But such a technical definition of scaling is of no value to enterprises. If different projects/programs within the organization adopt Agile and DevOps, they should just call it scaling. Doing so keeps the conversation easier and more straightforward.

The big question is, can – and should – Agile/DevOps be scaled?

Some people argue that scaling these delivery models negates the core reason Agile was developed in the first place: micro teams should thrive on the freedom to set their own rhythm and velocity and release code as fast as they can, instead of getting bogged down in non-core tasks like documentation and excessive meetings.

While this argument is solid in some respects, it doesn’t consider broader negative enterprise impacts. The increasingly complex nature of software requires multiple teams to collaborate. If they don’t, the “Agile/DevOps islands” that work at their own pace, with their own methods and KPIs, cannot deliver against the required cost, quality, or consistent user experience. Talent fungibility is the first challenge. Enterprises end up buying many software licenses, using various open source tools, and building custom pipelines. And because each team defines its own customizations to tools and processes, handing work over to new employees is difficult when needed.

So, why is scaling important?

Scaling delivers higher efficiency and outcome predictability, especially when the software is complex. It also tells the enterprise whether it is, or isn’t, doing Agile/DevOps right. The teams take pride in measuring themselves on the outcomes they deliver. But they are often poorly run and hide their inefficiencies through shortcuts. This ends up impacting employees’ work-life balance, denting technical and managerial skill development, increasing overall software delivery costs, and potentially causing regulatory and compliance issues.

What’s our verdict on scaling Agile/DevOps?

We think it makes sense most of the time. But most large enterprises should approach it in a methodical manner and follow a clear transitioning and measurement process. The caveat is that enterprise-wide scale may not always be appropriate or advantageous. Enterprises must consider the talent model, tools investments, service delivery methods, the existence of a platform that provides common services (e.g., authentication, APIs, provisioning, and templates), and flexibility for the teams to leverage tool sets they are comfortable with.

Scaling is not about driving standardization across Agile/DevOps. It’s about building a broader framework to help Agile/DevOps teams drive consistency where and when possible. Our research on how to scale Agile/DevOps without complicating it may help you drive the outcomes you expect.

What has been your experience scaling Agile/DevOps adoption? Please contact me to share your thoughts.

You are on AWS, Azure, or Google’s Cloud. But are you Transforming on the Cloud? | Blog

By | Blog, Cloud & Infrastructure

There is no questioning the ubiquity of cloud delivery models, whether private, public, or hybrid. The cloud has become a crucial technology delivery model across enterprises, and you would be hard pressed to find an enterprise that has not adopted at least some sort of cloud service.

However, adopting the cloud and leveraging it to transform the business are very different things. In the Cloud 1.0 and Cloud 2.0 waves, most enterprises started their adoption journey with workload lift and shifts. They reduced their Capex and Opex spend by 30-40 percent over the years. Enamored with these savings and believing their job was done, many stopped there. True, the complexity of the lifted-and-shifted workloads increased as enterprises moved from Cloud 1.0 to Cloud 2.0, e.g., from web portals to collaboration platforms to even ERP systems. But it was still lift and shift, with minor refactoring.

This fact demonstrates that most enterprises are, unfortunately, treating the cloud as just another hosting model, rather than a transformative platform.

Yet, a few forward-thinking enterprises are now challenging this status quo in the Cloud 3.0 wave. They plan to leverage the cloud as a transformative model where native services can be built in, not only to modernize the existing technology landscape but also to power cloud-based analytics, IoT-centric solutions, advanced architectures, and very heavy workloads. The main difference with these workloads is that they won’t just “reside” on the cloud; they will use the fundamental capabilities of the cloud model for perpetual transformation.

So, what does your enterprise need to do to follow their lead?

Of course, you need to start by building the business case for transformation. Once that is done, and you’ve taken care of the change management aspects, here are the three key technology-centric steps you need to follow:

Redo workloads on the cloud

Many monolith applications, like data warehouses and sales applications, have already been ported to a cloud model. You need to break down the ones you use based on their importance and the extent of technical debt relative to the transformation needed. Many components may be taken out of the existing cloud and ported in-house or to other cloud platforms based on the value they can deliver and their architectural complexity. Some components can leverage cloud-based functionalities (e.g., for data analytics) and drive further customer value. You need to think about extending the functionality of these existing workloads to leverage newer cloud platform features such as IoT-based data gathering and advanced authentication.

Revisit new builds on the cloud

Our research suggests that only 27 percent of today’s enterprises are meaningfully building and deploying cloud-native workloads, i.e., workloads with self-scaling, tuning, replication, back-up, high availability, and cloud-based API integration. You must proactively assess whether your enterprise needs cloud-native architectures to build out newer solutions. Of course, cloud native does not mean every module should leverage the cloud platform. But a healthy share of new workloads should have some elements of cloud adoption.

Relook at development and IT operations on the cloud

Many enterprises overlook this part, as they believe the cloud’s inherent efficiency is enough to transform their operating model. Unfortunately, it does not work that way. For cloud-hosted or cloud-based development, you need to relook at your enterprise’s code pipelines, integrations, security, and various other aspects of IT operations. The best practices of the on-premise era continue to be relevant, albeit in a different model (e.g., with tweaks to the established ITSM model). Your developers need to get comfortable with leveraging abstract APIs, rather than worrying about what is under the hood.
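
As a minimal illustration of that “abstract API” mindset, here is a hypothetical Python sketch using boto3 and Amazon S3 (the bucket and key names are made up): the developer calls a managed object-storage API and never thinks about disks, replication, or capacity planning.

```python
# Minimal sketch: publishing a report via a managed object-storage API.
# Durability, replication, and scaling are the platform's concern, not ours.
import boto3

s3 = boto3.client("s3")


def publish_report(report_bytes: bytes) -> None:
    # Hypothetical bucket/key; no servers or storage arrays to manage.
    s3.put_object(Bucket="example-reports", Key="daily/report.json", Body=report_bytes)


publish_report(b'{"status": "ok"}')
```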

The Cloud 3.0 wave needs to leverage the cloud as a transformation platform instead of just another hosting model. Many enterprises limit their cloud journey to migration and transition. This needs to change going forward. Enterprises will also have to decide whether they will ever be able to build so many native services in their private cloud. The answer is probably not. Therefore, the strategic decision of leveraging hybrid models will become even more important. The service partners will also need to enhance their offerings beyond migration, transformation during migration, and management. They need to drive continuous evolution of workloads once ported or built on the cloud.

Remember, the cloud itself is not magic. What makes it magical is the additional transformation you can derive beyond the cloud platform’s core capabilities.

What has been your experience in adopting cloud services? Please write to me at [email protected].

The Amazon Web Services Juggernaut: Observations from the AWS Summit India 2019 | Blog

By | Blog, Cloud & Infrastructure

Amazon Web Services’ (AWS) Summit in Mumbai last week made it clear that its juggernaut trifecta of customer centricity, long-term thinking, and innovation is giving the other public cloud vendors a run for their money.

Here are our key takeaways for AWS clients, partners, and the ecosystem.

Solid growth momentum

Sustaining a growth rate in the mid-teens is a herculean task for most multi-billion-dollar businesses. But AWS has an annual run rate of US$31 billion, and clocked in a 41 percent year-over-year growth rate, underpinned by millions of monthly active customers and tens of thousands of AWS Partner Network (APN) partners around the globe.

Deep focus on the ecosystem

Much of this momentum is due to AWS’ heavy focus on developing a global footprint of partners to help enterprises migrate and transform their workloads. Taking a cautious and guided approach to partner segmentation, it not only broke out its Consulting and Technology partners, but also segmented its Consulting Partners into five principal categories: Global SIs and Influencers, National SIs, Born-in-the-Cloud, Distributors, and Hosters. This is helping AWS establish specific innovation and support agendas for its partners to grow.

AWS growth momentum – underpinned by expansive global partner network

This partner ecosystem focus is increasingly enabling enterprises to achieve real business value through the cloud, including top-line/bottom-line growth, additional RoI, lower cost of operations, and higher application developer productivity. And AWS’ dedicated focus on articulating business benefits such as operational agility, operational resilience, and talent productivity, along with the underlying tenets of the cloud economy, has helped it onboard more enterprises.

Cloud convenience will need an accelerated Outposts push

Enterprises are looking for cloud convenience, which often manifests in location-agnostic (on-premise or on cloud) access to AWS cloud services. To bring native AWS services, infrastructure, and operating models to virtually any datacenter, co-location space, or on-premises facility, the company launched AWS Outposts at its 2018 re:Invent conference. Outposts is expected to go live for Indian customers by H2 2019. Despite this, AWS is trailing on this front, playing catch-up to Microsoft Azure, which launched Azure Stack almost a year ago (and previewed a version in 2015). At the same time, AWS will have to educate its enterprise clients and ease their apprehensions about vendor lock-in challenges while leveraging integrated hardware and software packages.

Helping clients avoid consumption fatigue

Shifting the focus toward AWS’ innovation agenda, the public cloud vendor launched over 1,800 services and features in 2018. As enterprises grapple with the rising number of tools and technologies at their disposal, they risk consumption fatigue, which can manifest in different ways:

  • Large enterprises will often depend on system integrators to help them unlock value from the latest technologies – AWS’ success in furthering the partner ecosystem will be crucial here
  • For SMBs, AWS will need to build out its touchpoints with the segment, something that Microsoft and Google already enjoy because of their respective enterprise productivity suites.

What’s next on AWS’ innovation front

There seemed to be a lack of development on the quantum or high-performance computing front. Client conversations suggested that enterprises are struggling to figure out the right use cases depending on whether they need more compute and/or data – something AWS can help educate them on.

Gazing into the enterprise cloud future

We do not believe enterprises will move their entire estates to the public cloud. Indeed, as they transition to the cloud, we expect the future to be decidedly hybrid, i.e., a mix of on-premise and public, as this approach will allow every organization to choose where each application should reside based on its unique needs.

To deliver on this hybrid need, product vendors are inking partnerships with virtualization software companies. And the services and product line-ups are piquing enterprises’ curiosity. To help stake its claim in this hybrid space, AWS Outposts does have a VMware Cloud option, which is AWS’ hardware with the same configurations but using VMware’s Software Defined Data Center (SDDC) stack running on EC2 bare-metal. But it will need to educate the marketplace to accelerate adoption.

The bottom line is that although AWS is facing some challenges on the competitor front – with Azure and a reinvigorated Google Cloud under Thomas Kurian – it is well positioned on account of a solid growth platform and ecosystem leverage, which it demonstrated at the 2019 India Summit.

Busting Four Edge Computing Myths | Blog

By | Blog, Cloud & Infrastructure

Interest in edge computing – which moves data storage, computing, and networking closer to the point of data generation/consumption – has grown significantly over the past several years (as highlighted in the Google Trends search interest chart below). This is because of its ability to reduce latency, lower the cost of data transmission, enhance data security, and reduce pressure on bandwidth.

[Google Trends chart: search interest in edge computing over time]

But, as discussions around edge computing have increased, so have misconceptions around the potential applications and benefits of this computing architecture. Here are a few myths that we’ve encountered during discussions with enterprises.

Myth 1: Edge computing is just an idea on the drawing board

Although some believe that edge computing is still in the experimental stages with no practical applications, many supply-side players have already made significant investments in bringing new solutions and offerings to the market. For example, Vapor IO is building a network of decentralized data centers to power edge computing use cases. Saguna is building capabilities in multi-access edge computing. Swim.ai allows developers to create streaming applications that process data from connected devices locally, in real time. Leading cloud computing players, including Amazon, Google, and Microsoft, all offer their own edge computing platforms. Dropbox formed its edge network to give its customers faster access to their files. And Facebook, Netflix, and Twitter use edge computing for content delivery.

With all these examples, it’s clear that edge computing has advanced well beyond the drawing board.

Myth 2: Edge computing supports only IoT use cases

Processing data on a connected device, such as a surveillance camera, to enable real-time decision making is one of the most common use cases of edge computing. This Internet of Things (IoT) context is what brought edge computing to the center stage, and understandably so. Indeed, our report on IoT Platforms highlights how edge analytics capabilities serve as a key differentiator for leading IoT platform vendors.

However, as detailed in our recently published Edge Computing white paper, the value and role of edge computing extends far beyond IoT.


For example, in online streaming, it makes HD content delivery and live streaming latency-free. Its real-time data transfer ability counters what’s often called “virtual reality sickness” in online AR/VR-based gaming. And its use of local infrastructure can help organizations optimize their websites; faster payments processing, for example, will directly increase an e-commerce company’s revenue.

Myth 3: Real-time decision making is the only driver for edge computing

There’s no question that one of edge computing’s key value propositions is its ability to enable real-time decisions. But there are many more use cases in which it adds value beyond reduced latency.

For example, its ability to enhance data security helps manufacturing firms protect sensitive and sometimes highly confidential information. Video surveillance, where cameras constantly capture images for analysis, can generate hundreds of petabytes of data every day. Edge computing eases bandwidth pressure and significantly reduces costs. And when connected devices operate in environments with intermittent to no connectivity, it can process data locally.

Myth 4: Edge spells doom for cloud computing

Much of the talk around edge computing suggests that the current cloud computing architecture is not suited to power new-age use cases and technologies. This has led to attention-grabbing headlines about edge spelling the doom of cloud computing, with developers moving all their applications to the edge. However, edge and cloud computing share a symbiotic relationship. Edge is best suited to run workloads that are less data intensive and require real-time analysis, such as streaming analytics and the inference phase of machine learning (ML) algorithms. Cloud, on the other hand, powers edge computing by running data-intensive workloads such as training the ML algorithms and maintaining databases related to end-user accounts. For example, in the case of autonomous cars, edge enables real-time decision making related to obstacle recognition, while cloud stores long-term data to train the car software to learn to identify and classify obstacles. Clearly, edge and cloud computing cannot be viewed in isolation from each other.
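
A minimal sketch of that division of labor, using scikit-learn and hypothetical stand-in data: the data-intensive training step runs in the cloud and exports a model artifact, while the edge device only loads the artifact and scores new readings locally.

```python
# Minimal cloud/edge split sketch with stand-in data and a toy model.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Cloud side: train on the accumulated dataset and export an artifact ---
X_train = np.random.rand(10_000, 4)          # stand-in for historical sensor data
y_train = (X_train[:, 0] > 0.5).astype(int)  # stand-in for labeled outcomes
model = LogisticRegression().fit(X_train, y_train)
joblib.dump(model, "obstacle_model.joblib")  # artifact shipped to edge devices

# --- Edge side: load once, then score each reading without a network round trip ---
edge_model = joblib.load("obstacle_model.joblib")
new_reading = np.array([[0.7, 0.1, 0.3, 0.9]])
print(edge_model.predict(new_reading))       # real-time, local decision
```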

To learn more about edge computing and to discover our decision-making framework for adopting edge computing, please read our Edge Computing white paper.

Application Modernization for Digital Transformation: The Rise of Good Technical Debt | Blog

By | Blog, Digital Transformation

Many organizations today treat technical debt like a pariah. They equate it with legacy systems, worry about how subsequent changes will be complex, time consuming, risky, and cost prohibitive, and consider it something that should be avoided in their journey to becoming a digital enterprise.

What they do not realize is that the debt is not bad in and of itself. Indeed, because speed-to-value is critically important in digital businesses, teams may intentionally take planned shortcuts in order to accomplish the task as quickly and responsibly as possible. As long as the teams understand what they are doing and compromising on, and have suitable plans to address it soon, assuming this debt can be a smart move.

Where enterprises err with technical debt is poorly managing it.

In order to manage it suitably and safely in a digital transformation environment, they should classify it into five major buckets.

The Rise of Good Technical Debt

Planned debt

This is when people knowingly become indebted. It is like buying a house on a bank loan. You know you must repay the loan, and you plan for it accordingly. The defining feature of this type of debt is that the team knows it has the capabilities and resources to “pay” it back. This is a good debt that helps you quickly achieve business objectives.

Blind debt

This is a dangerous debt where system teams do not even know they are building the debt themselves. This is generally the result of poor practices within the team, unplanned and haphazard development, and a fundamentally broken organizational culture. This often also happens during M&As when the acquirer does not know what kind of mess it is getting into.

Acquired debt

This type of debt is unavoidable in business environments. Many systems that were developed in the past with improper technology platforms, tools, coding practices, governance models, and frameworks accumulate technical debt over time. These legacy systems hold valuable information for enterprises aspiring to become digital businesses, and cannot simply be jettisoned. Instead, they need to be made “debt free” in a prioritized manner.

Dead debt

This is probably the worst of all kinds because, irrespective of corrective measures taken, the systems have degraded so far that they cannot support digital initiatives. Therefore, rip and replace becomes the only option. Enterprises need to be careful in identifying this debt, as they may confuse it with other types of debt that can be “repaid.” Otherwise, they may end up throwing good money after bad, with no way out.

Mirage debt

Not many enterprises think about this one. It appears during system analysis, when architects and others mistakenly believe they have technical debt, when in reality they do not. If there is any, it is in small components, not the system itself.

What should enterprises do to address technical debt?

They should start by understanding that modernization should be of system components, not the systems themselves. Then, they should look at each of their systems and identify the components that can meet future digital demand, and those that could potentially create problems. Once they have catalogued all the components, they need to invest in reducing each one’s technical debt in the most appropriate way. For example, we have seen enterprises successfully build component capabilities outside the main system and expose APIs for backward integration. This can work across core functionalities as well as user interfaces.
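
A minimal sketch of that pattern, assuming a hypothetical legacy billing system and using Flask purely for illustration: a thin facade service exposes one legacy capability as an API, so newer digital channels integrate with the facade rather than with the monolith itself.

```python
# Minimal facade sketch: expose a legacy capability behind a modern API.
from flask import Flask, jsonify

app = Flask(__name__)


def legacy_invoice_lookup(invoice_id: str) -> dict:
    # Stand-in for the real integration into the legacy system (a database
    # view, file export, or message queue, depending on the estate).
    return {"invoice_id": invoice_id, "amount": 120.50, "status": "paid"}


@app.get("/invoices/<invoice_id>")
def get_invoice(invoice_id: str):
    # New channels call this endpoint; the monolith stays untouched.
    return jsonify(legacy_invoice_lookup(invoice_id))


if __name__ == "__main__":
    app.run(port=8080)
```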

Our research with over 190 application leaders suggests that 75 percent plan to continue to invest and modernize their applications. There is no reason to fear technical debt as long as you understand what you are getting into. For digital businesses, taking on good technical debt can be a strategic choice. Though processes have their value, enterprises that are driven by processes rather than innovation, and are scared of risking short-term technical debt, will struggle in the digital world.

What has been your experience with application modernization? Please share with me at [email protected].

Digital Transformation: The Perils of the “Get Digital Done” Culture | Blog

By | Blog, Digital Transformation

The “Just-in-time” methodology focuses on achieving an outcome through defined, structured processes that also build organizational capabilities. “Somehow-in-time” focuses on somehow achieving an outcome, irrespective of the impact it has on the broader enterprise.

Most enterprises reward leaders who embrace “get it done” approaches. Unfortunately, the ideology is becoming part and parcel of more enterprises’ digital transformation initiatives. And while “get it done” may seem like a glamorous virtue, it is detrimental when it comes to digital.

Get Digital Done Doesn’t Build Organizational Capabilities

Everest Group research suggests that 69 percent of enterprises consider the operating model a huge hindrance to digital transformation. Leaders are in such a hurry to achieve the intended outcomes that they neglect building a solid operating model foundation that can enable those outcomes on a consistent basis across the enterprise. This leaves each digital initiative scampering to somehow find resources, somehow find budgets, and somehow find technologies to get it done. And because no new organizational capability – think digital vision, talent, or leadership – is developed, these initiatives do not help build sustainable businesses.

Get Digital Done Rewards the Wrong Behavior and People

Much like enterprises’ fascination with “outcome at all costs” creates poor leaders, digital transformation initiatives are plagued with the wrong incentives for the wrong people. Our research suggests that 73 percent of enterprises are failing to get the intended value from their digital initiatives. The key reason is that while leaders are expected to “somehow” complete these initiatives, there is no broader strategic agenda to scale them beyond their own fiefdoms. Our research also indicates that while enterprises want to drive digital transformation, 60 percent of them lack a meaningful digital vision. They’re obsessed with showing outcomes, and cut corners to achieve them. They take the easier way out to get quick ROI, instead of getting their hands dirty and addressing their big hairy problems.

Get Digital Done Does Not Align People Towards Common Goals

Obsession with outcomes makes leaders leverage their workforce as “tools” for a project rather than partners in success. Because the employees are not given a meaningful explanation of the agenda and the impact, they become execution hands rather than people who are aligned towards a common enterprise objective. This ultimately causes the initiative to fail. No wonder our research indicates that 87 percent of enterprises that fail to implement change management plans see their digital initiatives fail.

To succeed in their digital transformation journeys, enterprises must put their “get it done” obsession away in a locked drawer and focus on three critical areas:

  • Build a digital foundation: Although easier said than done, this requires a revamp of internal communication, people incentives, and a shared vision of intended goals. Each business unit should have a digital charter that aligns with the corporate mandate of leading in the tech-disrupted world. And it requires strategic, yet nimbler, choices on technology platforms, market channels, brand positioning, and digital vision.
  • Have realistic timelines: Expectation of quick ROI is understandable. However, a crunched timeline can backfire. Enterprises must work towards a pragmatic timeline, and incentivize their leaders to meet it without bypassing any fundamental processes.
  • Involve different stakeholders: Our research shows that a shocking 82 percent of enterprises believe they lack the culture of collaboration needed to drive digital transformation. That means the initiatives become the responsibility of just one leader or team. And that simply won’t work. Instead of driving everything independently, the leader or team should be an orchestrator of the organization’s capabilities. This is the key reason more enterprises are appointing a Chief Digital Officer, as one of that role’s key responsibilities is serving as the orchestrator. Additionally, the team needs to leverage the organization’s current capabilities, and enhance them for the future. It should build a charter for its digital transformation initiative that includes impact on fundamental organizational capabilities such as talent, business functions, compliance, branding, and people engagement.

In their race to “get it done” and appease their end customers, enterprises have forgotten the art of building organizational capabilities that will sustain them in the future and create meaningful competitive advantage. And they can’t succeed unless they change their approach and ideology.

Does your organization have a “get it done” culture, or has it built the right organizational capabilities to achieve true transformation with digital? Please share with me at [email protected].

Stop Trivializing AI: It Is Not Just Automation | Blog

By | Automation/RPA/AI, Blog

AI is certainly being used to attempt to solve many of the world’s big problems, such as healthcare, societal security, and the water shortage crisis. But Everest Group research suggests that 53 percent of enterprises do not – or are not able to – differentiate between AI and intelligent automation and what each can do to help them compete and grow. This trivialization of AI is both eye opening and frustrating.

While it’s true that automation of back-office services is one strong case for AI adoption, there are many more that can deliver considerable value to enterprises. Examples we’ve researched and written about in the past year include intelligent architecture, front-to-back office transformation, talent strategies, and AI in SDLC.

It’s been said that “audacious goals create progress.” So, how should enterprises think more creatively and aspirationally about leveraging artificial intelligence to extract real value? There are three ingredients to success.

Think Beyond Efficiency

Enterprises are experimenting with AI-driven IT infrastructure, applications, and business services to enhance the operational efficiency of their internal operations. We have written extensively about how AI-led automation can drive 10-20 percent more savings over traditional models. But enterprises have far more to gain by experimenting with AI to fundamentally transform the entire landscape, including product design, customer experience, employee engagement, and stakeholder management.

Think Beyond CX

Most enterprises are confusing putting bots in their contact center with AI adoption. We discussed in an earlier post that enterprises need to get over their CX fixation and drive an ecosystem experience with AI at the core. Our research suggests that while 63 percent of enterprises rank CX improvement as one of their top three expectations of artificial intelligence, only 43 percent put newer business models among their top three. We believe there are two factors behind this discouraging lack of aspiration: market hype-driven reality checks (which are largely untrue), and enterprises’ inability to truly grasp the power of AI.

Think Beyond Bots

While seemingly paradoxical, humans must be central to any AI adoption strategy. However, most enterprises believe bot adoption is core to their AI journey. Even within the “botsphere,” they narrow it down to Robotic Process Automation (RPA), which is just one small part of the broader ecosystem. At the same time, our research shows that 65 percent of enterprises believe that AI will not materially impact their employment numbers, and that bodes well for their realization of the importance of human involvement.

And, what do enterprises need to do?

Be Patient

Our research suggests that 84 percent of enterprises believe AI initiatives have a long gestation period, which undoubtedly leads to the business losing interest. However, given the nature of these technologies, enterprises need to become more patient in their ROI expectations for such initiatives. Though agility to drive quick business impact is welcome, a short-sighted approach may straitjacket initiatives into the lowest-hanging fruit, where immediate ROI outweighs longer-term business transformation.

Have Dedicated AI Teams

Enterprises need AI champions within each working unit, sized appropriately to that unit. These champions should be tech-savvy people who understand where the AI market is going, and are able to contextualize the impact to their business. This team needs to include evangelization experts who can talk the language of technology as well as business.

Hold Technology Partners Accountable

Our research suggests that ~80 percent of enterprises believe their service partners lack the capabilities to truly leverage artificial intelligence for transformation. Most of these companies complained about the disconnect between the rapid development of AI technologies and the slowness of their service partners in adopting them. Indeed, most of these partners sit on the fence waiting for the technologies to mature and become enterprise-grade. And by then, it is too late to help their clients gain first-mover advantage.

As AI technologies spread their wings across different facets of our lives, enterprises will have to become more aspirational and demanding. They need to ask their service partners tough questions about AI initiatives. These questions need to go far beyond leveraging AI for automating mundane human tasks, and should focus on fundamentally transforming the business and even creating newer business models.

Let’s create audacious goals for artificial intelligence in enterprises.

What has been your experience adopting AI beyond mundane automation? Please share with me at [email protected].

AI for Experience: From Customers to Stakeholders | Sherpas in Blue Shirts

By | Automation/RPA/AI, Blog, Customer Experience

Everest Group’s digital services research indicates that 89 percent of enterprises consider customer experience (CX) to be their prime digital adoption driver. But we believe the digital experience needs to address all stakeholders an enterprise touches, not just its customers. We touched on this topic in our Digital Services – Annual Report 2018, which focuses on digital operating models.

Indeed, SAP’s recent acquisition of Qualtrics and LinkedIn’s acquisition of Glint indicate the growing importance of managing not only CX, but also the digital experience of employees, partners, and society at large.

AI Will Usher in the New Era of the Digital Experience Economy

Given the deluge of data from all these stakeholders and the number of parameters that must be addressed to deliver a superior experience, AI will have to be the core engine powering this digital experience economy. It will allow enterprises to build engaging ecosystems that evolve, learn, implement continuous feedback, and make real time decisions.


AI’s Potential in Transforming CX is Vast

Today, most enterprises narrowly view the role of AI in CX as implementing chatbots for customer query resolution or building ML algorithms on top of existing applications to enable a basic level of intelligence. However, AI can be leveraged to deliver very powerful experiences, including predictive analytics to pre-empt behaviors; virtual agents that can respond to emotions; advanced conversational systems that drive human-like interactions with machines; and completely new experiences that combine AI with other technologies such as AR/VR and IoT.

Digital natives are already demonstrating these capabilities. Netflix delivers hyper-personalization, seemingly providing as many versions of its service as it has users. Amazon Go retail stores use AI, computer vision, and cameras to deliver a checkout-free experience. And the start-up ecosystem is rampant with examples of cutting-edge innovations. For instance, HyperSurfaces is designing next-gen user experiences by using AI to transform any object into a user interface.

But focusing just on the customer experience is missing the point, and the opportunity.

AI in the Employee Experience

AI can, and should, play a central role in reimagining the employee journey to promote engagement, productivity, and safety. For example, software company Workday analyzes 60 data points to predict attrition risk. Humanyze enables enterprises to ascertain if a particular office layout supports teamwork. If meticulously designed and tested, AI algorithms can assist in employee hiring and performance management. With video analytics and advanced algorithms, AI systems can ensure worker safety; combined with automation, they can free up humans to work on more strategic tasks.

AI in the Supplier and Partner Experience

Enterprises also need to include suppliers and other partners in their experience management strategy. Automating inventory replenishment with predictive analytics, gauging supplier performance, and building channels for two-way feedback are just a few examples. AI will play a key role in designing systems that not only pre-empt behaviors and performance but also ensure automated course correction.

AI in the Society Experience

Last but not least, enterprises cannot consider themselves islands in the environment in which they operate. They must realize that experience is as much about perception as about reality. Someone who has never engaged with an enterprise may still have an “experience” perception of that organization. Some organizations’ use of AI is clearly for “social good.” Think smart cities, health monitoring, and disaster management systems. But even organizations that don’t have products or services that are “good” for society must view the general public as an important stakeholder. For example, employees at Google vetoed the company’s decision to engage with the Pentagon on ML algorithms for military applications. Similarly, employees at Microsoft raised concerns over the company’s involvement with Immigration and Customs Enforcement in the U.S. AI can be leveraged to pre-empt such flashpoints by predicting the impact that a company’s initiatives might have on society at large.

Moving from Customer to Stakeholder Experience

As organizations make the transition to an AI-enabled stakeholder experience, they must bear in mind that a piecemeal approach will not work. This futuristic vision will have to be supported by an enterprise-wide commitment, rigorous and meticulous preparation of data, ongoing monitoring of algorithms, and significant investment. They will have to cover a lot of ground in reimagining the application and infrastructure architecture to make this vision a distinctive reality.

What has been your experience leveraging AI for different stakeholders’ experiences? Please share with us at [email protected] and [email protected].


Using AI to Build, Test, and Fight AI: It’s Disturbing BUT Essential | Sherpas in Blue Shirts

By | Automation/RPA/AI, Blog

Experts and enterprises around the world have talked a lot about the disturbing concept of AI being used to build and test AI systems, and challenge decisions made by those systems. I wrote a blog on this topic a while back.

Disquieting as it is, our AI research makes it clear that AI for AI with increasingly minimal human intervention has moved from concept to reality.

Here are four key reasons this is the case.

Software is Becoming Non-deterministic and Intelligent

Before AI emerged, organizations focused on production support to optimize the environment after the software was released. But those days are going to be over soon, if they aren’t already. The reality is that today’s increasingly dynamic software and Agile/DevOps-oriented environments require tremendous automation and feedback loops from the trenches. Developers and operations teams simply cannot capture and analyze the enormous volume of needed insights. They must leverage AI intelligence to do so, and to enable an ongoing interaction channel with the operating environment.

Testing AI Biases and Outcomes is not Easy

Unlike traditional software with defined boundary conditions, AI systems have very different edge scenarios, and they need to be tested against all of them to make sense of their environment. But, as there can be millions of permutations and combinations, it’s extremely difficult to manually assure AI systems, or to use traditional automation to test them for data biases and outcomes. Uncomfortable as it may be, AI-layered systems must be used to test AI systems.
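
As one tiny illustration of what automated bias assurance can look like, here is a hypothetical Python sketch that computes a demographic parity gap over stand-in model outputs; real AI-testing-AI systems would sweep thousands of such metrics across generated edge cases.

```python
# Minimal bias-check sketch: compare positive-prediction rates across two
# groups (demographic parity). Predictions and group labels are stand-ins.
import numpy as np


def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    # Absolute difference in positive-prediction rate between the two groups.
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())


rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=1_000)  # stand-in for a model's binary outputs
group = rng.integers(0, 2, size=1_000)  # stand-in for a protected attribute

gap = demographic_parity_gap(preds, group)
print(f"demographic parity gap: {gap:.2%}")
# An automated test harness would assert this gap stays below a set threshold
# across many generated edge-case datasets.
```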

The Autonomous Vehicle Framework is Being Mirrored in Technology Systems

The L0-L5 autonomous vehicle framework proposed by SAE International is becoming an inspiration for technology developers. Not surprisingly, they want to leverage AI to build intelligent applications that can autonomously manage their environments and releases. Some are even pushing AI to build the software itself. While this is still in its infancy, our research suggests that developers’ productivity will improve by 40 percent if AI systems are meaningfully leveraged to build software.

The Open Source Ecosystem is Becoming Indispensable

Although enterprises used to take pride in building boundary walls to protect their IP and using best-of-breed tools, open source changed all that. Most enterprises realize that their developers cannot build an AI system on their own, and now rely on open source repositories. And our research shows that 20-30 percent of an AI system can be developed by leveraging already available code. However, scanning these repositories and zeroing in on the needed pieces of code aren’t tasks for the faint-hearted, given the repositories’ massive size. Indeed, even the smartest developers need help from an intelligent AI system.

There’s little question that using AI systems to build, test, and fight AI systems is disconcerting. That’s one of the key reasons that enterprises that have already adopted AI systems haven’t yet adopted AI to build, test, and secure them. But it’s an inevitability that’s already knocking at their doors. And they will quickly realize that reliance on a “human in the loop” model, though well intentioned, has severe limitations not only around the cost of governance, but also around the sheer intelligence, bandwidth, and foresight required by humans to govern AI systems.

Rather than debating its merit or becoming overwhelmed with the associated risks, enterprises need to build a governing framework for this new reality. They must work closely with technology vendors, cloud providers, and AI companies to ensure their business does not suffer in this new, albeit uncomfortable, environment.

Has your enterprise started leveraging AI to build, test, or fight AI systems? If so, please share your experiences with me at [email protected].