Author: ChirajeetSengupta

Reimagining Global Services: How to get MORE out of Technology | Sherpas in Blue Shirts

Much has been written and said about the Bimodal IT model Gartner introduced in 2014 – with forceful arguments for and against. I don’t intend to bash that model, but it’s safe to say that the digital explosion over the last three years demands that enterprises’ technology strategies be much more nuanced and dynamic.

The MORE model for global services

Let me explain with the help of the following chart. I call it the Maintain-Optimize-Reimagine-Explore – the MORE – model.

Global Services and Technology in the MORE model

I’ve tried to plot (intuitively) a bunch of technology and service themes on their current and future innovation potential.

  • Maintain: On the bottom right are themes like mainframes and traditional hosting services that are unlikely to go through dramatic changes in the near term. These are exceptionally stable and commoditized, and will not attract exciting investments. Enterprises still need them, and CIOs should Maintain status quo because it’s too risky and/or expensive to modernize them.
  • Optimize: Seven years ago, that cool AWS deployment was the craziest, riskiest, hippest tech thing we could do. But, I guess we’ve all aged (just a little bit) since then. The needle of cloud investment for most enterprises has moved from AWS migration (US$200 per application, anyone?) to effective orchestration and management – a clear case of the enterprise seeking to Optimize its investments.
  • Explore: On the top right, we have the new wild, wild west of the tech world. Blockchain could completely transform how the world fundamentally conducts commerce, IoT is picking up steam, and artificial intelligence could reshape business models – perhaps even human existence itself. Enterprises need to Explore these to stay relevant in the future.
  • Reimagine: What we cannot afford to miss out on is the exciting opportunity to Reimagine “traditional” global services into leaner and more effective models using a combination of enabling themes like automation, DevOps, and analytics. These are immediate opportunities that many enterprises consider essential to running effective operations in a traditional AND a digital world. For example:
    • In a world where “the app is the business,” QA is being reimagined as an ecosystem-driven, as-a-service play built on extensive automation and process platforms. The reimagined QA assures a digital business process and a digital experience – not just an app.
    • We are getting into the third generation of workplace services (first hardware-centric, then operations-centric, and now software- and experience-centric). The reimagined workplace service model delivers a highly contextual, user-aware experience, without sacrificing the long-range efficiency benefits.
    • Application management services (AMS) are being reimagined through extensive outcome modeling, automation instrumentation, and continuous monitoring.

Three principles for reimagining global services

It’s interesting to note that many of these reimagination exercises are based on three common foundational principles:

  1. Automation first: Automation and intelligence lie at the heart of our ability to reimagine technology services, because automation helps us deliver breakthrough outcomes without blowing the cost model out of the water.
  2. Speed first: The need to run ALL of IT at speed is driving reimagination and the corresponding investments. If you’re at the reimagination table, throw away your plans for the perfect (and the biggest) mousetrap. A big part of the drive for reimagination is to move from scale-driven, arbitrage-first models to speed-driven, digital-first models.
  3. Alignment always: This is important, and it’s good news. For decades, we’ve all complained about the absence of business-IT alignment. Reimagination hits this issue head on by focusing on technology architecture that is open and scalable, and by delivering as-a-service consumption models that are closely linked to things the business really cares about.

Over the next several months, Everest Group is going to publish viewpoints on each of these topics and more. But we’d love to hear any comments and questions you have right now. Please share with us and our readers!

Reimagining Global Engineering Services – a Hierarchy of Needs | Sherpas in Blue Shirts

The engineering services industry is one of the most interesting segments in the global services landscape today.

Compared to IT and business process services, the global engineering services market is much smaller, at approximately US$90 billion. It is also growing much faster, at approximately 15 percent per year.

The bulk of the growth is going to be driven by a need to reimagine global sourcing of engineering services, in line with the progression of enterprise digitalization strategies.

Everest Group believes there are four distinct objectives behind digital engineering strategies:

Hierarchy of Digital Engineering Services Demand

Global Sourcing of Digital Engineering Services

  1. Crushing spend: Arguably, there’s nothing new about leveraging a global sourcing model to reduce spend. However, the optimization levers go well beyond arbitrage, extending into the realms of analytics, IoT, and automation. We are beginning to see enterprises contracting not just for cost savings, but for specific details around how cost savings are achieved (e.g., success of automation projects and ongoing commitment to automation). Digitalization can often achieve breakthrough spend reduction outcomes (e.g., maintenance of oil refineries leveraging IoT technologies), well beyond the traditional arbitrage levers.
  2. Transforming experience in plants or mines: The experience is typically optimized across considerations such as safety, accessibility, speed, and convenience. For instance, enterprises are applying design thinking principles to plant assembly line design, implementing IoT in mines for health and safety use cases, and medical device companies are using digitally reimagined techniques to create improved patient care outcomes.
  3. Accelerating product innovation: Sophisticated enterprises realize they can’t innovate well enough or fast enough unless they embrace a broader innovation ecosystem. Globalization is a major driver of demand, as is the need to accelerate and contextualize cross-industry innovation. For instance, automotive OEMs realize they need to embrace a broader ecosystem of talent and technology providers to create differentiated infotainment offerings.
  4. Disrupting the business model: Business model disruption comes about as a natural progression through the first three levels of the hierarchy, coupled with a disruptive idea. For instance, automotive companies the world over are waking up to the potential of a new business model built on asset sharing as opposed to asset ownership. Utility companies are creating parallel energy sharing models using blockchain. Medical diagnostic companies are reimagining their business model by shifting to service-led, as-a-service models.

Everest Group recommends enterprises follow a “3E” approach to shaping their engineering services global sourcing strategy:

  • Evaluate the current state of your digital engineering journey against the strategic objectives of efficiency, experience, innovation, or disruption. The way you measure success in the short term should derive from where you are, and your longer-term strategy should stem from a broader industry vision.
  • Evolve the ER&D sourcing model in line with your aspirations. If you are trying to drive strategic business impact at the higher reaches of digital engineering maturity, you should be able to use objective data to benchmark the impact on business processes. For instance, your ER&D sourcing models should be linked with improvements in supply chain metrics, experience, accelerated time to market, or an increase in digital-led revenues.
  • Enrich the sources of engineering and R&D innovation by engaging with service providers, start-ups, academia, designers, social scientists, etc. Such an ecosystem should transcend the traditional enterprise-partner model, and requires a central orchestration function for scalability.

Visit our engineering services page for more insights on engineering services global sourcing strategies.

Automation Economics for Service Providers – Not So Straightforward? | Sherpas in Blue Shirts

IT infrastructure environments are getting increasingly complicated to manage, particularly because enterprises are adopting agile delivery models – e.g., cloud and as-a-service – to meet the dynamic needs of their digital businesses. As assets proliferate, process complexities rise, and management costs escalate, enterprises realize the need for a more coherent, strategic approach to automation to regain control. This renewed focus on automation within IT infrastructure services has new-found implications for IT service providers.

Service providers stand to derive significant cost and productivity benefits from automation, and to showcase value within engagements through it. That said, they need to brace for revenue cannibalization as a short-term outcome. This comes against the backdrop of a highly competitive pricing environment and enterprises’ increasing insourcing initiatives. To complicate matters, the margin implications of automation can be tricky, as automation runs fundamentally counter to the arbitrage-driven margin model.

IT Infrastructure Services Automation Blog

It’s time to get running and cut the fat…

Service providers lagging industry growth will be caught in a vicious cycle of margin contraction and revenue decline unless they reduce overhead, and make significant and prompt investments in strengthening their core delivery and account management capabilities to capture the revenue run-off that automation creates.

High-growth service providers also need to remain wary, using automation as a growth lever without an excessive margin orientation in order to stay ahead of the pack. This includes firming up their strategic business model by assessing automation strategies in the context of their aspirations for a product-plus-services versus a services-only play. And the answers may crucially depend on their current starting positions.

As automation stands to disrupt the IT infrastructure services space more than ever before, you can be certain we’ll continue to pay close attention to developments. If IT infrastructure services help you win your daily bread, so should you!

For drill-down and detailed insights into latest trends in the IT infrastructure services automation space, please see Everest Group’s newly released report, “IT Infrastructure Services Automation – Codified Consciousness is the Future.”

IT Infrastructure Services Automation – What Enterprises Need to Know | Sherpas in Blue Shirts

IT infrastructure services automation is evolving rapidly as the objective function shifts from efficiency gains to service resiliency and agility. IT infrastructure processes have become dynamic and complex, and traditional automation strategies, characterized by siloed initiatives and reactive script-based automation techniques, are becoming increasingly obsolete.

Autonomics holds the key to the “as-a-service” world…

Autonomics is poised to disrupt the automation space, and lay the foundation for business-aligned infrastructure services delivery. The self-learning and self-healing capabilities offered by the technology can help enterprises drive significant efficiencies and control within complex and judgment-intensive IT operations (think availability management or capacity management). Efficient management of such processes, driven by autonomics, will be a critical enabler of the shift in IT service delivery towards the consumption-based/as-a-service paradigm.

IT Infrastructure Services and Automation Interplay

We believe that the nirvana state of the IT infrastructure delivery-automation interplay, though a fair distance away, will involve leveraging cognitive computing to create a “self-aware/alive” IT infrastructure model. Such a model will help deliver services contextualized to real-time user/business needs, leveraging data from human-to-machine and machine-to-machine interfaces – i.e., making infrastructure “truly conscious.”

What is the best mode of automation adoption?

We observe three broad adoption modes for automation within enterprises:

  • Tools-based approach: This primarily focuses on automating low-end, high-volume tasks, with the key objective being cost/FTE headcount reduction. While suitable for processes that are extremely well-defined and static, this approach does not unlock the full value of automation. Initiatives are siloed and lack business context, leading to the accumulation of a poorly integrated legacy portfolio.
  • Adoption embedded within managed services constructs: Here, the focus of automation is on streamlining operational processes (i.e., balancing cost reduction and operational efficiency gains). This model is being increasingly adopted by enterprises with significant outsourcing experience. Although well-understood, it is reactive, and cannot drive business innovation. Additionally, the focus on generating new use cases and creating common standards and best practices across the enterprise remains limited.
  • CoE-based adoption: A centrally-driven initiative with a strategic view to harmonize adoption benefits across each layer of the IT infrastructure stack, this approach helps drive long-term innovation by proactively identifying new use cases/scenarios, acting as a conduit for business enablement. That said, this model requires extensive upfront planning, seamless business-IT collaboration, and a strong change appetite. Enterprises also need to brace for a significant gestation period before business benefits (commensurate with investment levels) are realized.

While each of these models offers varied levels of benefits, we observe that the eventual mode of automation adoption chosen is highly dependent on enterprise imperatives/mindset and pre-existing service models, and rightly so.

Our recommendation to enterprises…

Traditional automation within IT infrastructure services has been around for ages, but is simply not designed to deal with today’s dynamic environments. It is time that enterprises took a coherent, business-context-aligned approach to IT infrastructure automation. Such a strategy should:

  • Focus on what automation can achieve for delivering services in an agile and resilient manner, not on technical sophistication alone
  • Involve a pragmatic and phased adoption approach with a clear roadmap to scaling, taking into account organizational change constructs
  • Keep in mind that automation is not a one-shot affair – it needs process improvement as pre-work, and downstream maintenance and harmonization with new and changing business requirements
  • Balance the trade-off to protect existing tool investments against the need to avoid lock-in.

For drill-down and detailed insights into the latest trends in the IT infrastructure services automation space, please see Everest Group’s newly released report, “IT Infrastructure Services Automation – Codified Consciousness is the Future.”

Artificial Intelligence Platforms a Game Changer – No Sh*t, Sherlock! | Sherpas in Blue Shirts

Last week, Wipro’s CTO briefed me on the Wipro Holmes artificial intelligence platform. My key takeaways from the session and subsequent musings on where AI is taking the industry:

  • Wipro Holmes is currently focused on highly tangible business cases, where AI can replace large volumes of human labor – the kind that comprises commodity skills, but that enterprises cannot do without. A case in point is automating KYC processes, from hours to minutes. Holmes is also being used in infrastructure management deals to automate service desk functions. Similar use cases include claims processing, loan processing, and virtual recruitment. The impact is pretty significant, and the focus seems to be on eliminating or dramatically streamlining large, clunky manual processes that enterprises currently spend a lot on, but which do not contribute significantly to value creation
  • Apart from the usual bundling in managed services contracts and platform models, Wipro is also taking the platform to market through client co-invested vehicles, whereby a client partners in developing a new use case and enjoys a gainshare from subsequent sales
  • The critical challenge in scaling any AI platform is developing new use cases and taking them to market rapidly and at scale. Wipro currently averages four months from conceptualization to deployment – this is impressive, but doesn’t necessarily leave the competition panting for breath

Overall, the AI platform market is focusing on three broad areas:

  1. Automation for IT: Infrastructure management is probably the most notable example
  2. Automation for scaled, commodity business processes: Claims processing and KYC are notable examples
  3. Facilitating complex decision-making: This is probably the most creative scenario – for instance, using AI platforms to scan medical literature and facilitate care management recommendations. These scenarios can lead to new business models, and spawn digital businesses like http://wayblazer.com/. Doing this successfully requires a fundamentally different business model, usually involving a large developer community to innovate rapidly and reduce the business risks

As of now, managed service providers like Wipro are focused on the first two – and understandably so. They see AI innovation in the context of their broader business model and differentiation in their core markets, rather than as risky investments in areas that are not fully understood – yet.

All of this might change. As the old aphorism goes, we tend to overestimate the short term and underestimate the long term. As the world goes increasingly digital, and different business models – involving a nexus of technology and service providers, user and developer communities, and adjacent industry participation – come to the fore, it may not be long before service providers realize that it’s a question of “and,” not “or.”

Productivity Improvement or Cost Takeout – Pick Your Battle! | Sherpas in Blue Shirts

I wish I had a dollar – or a couple of aspirin – for every time I heard someone claim “20 percent productivity improvement” when all they had really done was move the work to a less expensive location. When they make these claims, they’re confusing cost takeout and productivity.

Cost takeout certainly has its uses, including:

  1. Moving work to a talent model with a flatter pyramid
  2. Getting fewer people to work faster/harder
  3. Offshoring

But cost takeout is not productivity, which is precisely what enterprises need to start thinking about, as most of them have already done all of the above, and then some.

As discussed in our recently released research report, “In Search of ADM Productivity,” productivity can be about (among myriad other things):

  1. Optimizing shared services organizational structures
  2. Standardizing and automating business processes, toolsets, and technologies
  3. Automating infrastructure and application deployment processes

In essence, productivity is an output-input ratio. Productivity improvement has been described as “doing more with less.” I believe a better definition would be “improved output-input ratio, by virtue of being done differently.”
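
To make the distinction concrete, here is a minimal arithmetic sketch – with purely illustrative numbers, not figures from our research – of the output-input ratio under the two approaches:

```python
# Purely illustrative numbers: a team delivering 100 units of output.

def output_input_ratio(output_units, input_cost_usd):
    """Productivity as an output-input ratio: units per dollar spent."""
    return output_units / input_cost_usd

baseline = output_input_ratio(100, 1_000_000)

# Cost takeout: the same work, done the same way, from a cheaper location.
# The ratio improves, but only because the input got cheaper -- nothing
# about how the work is done has changed.
offshored = output_input_ratio(100, 700_000)

# Productivity improvement: the work is "being done differently" --
# e.g., automation lets the same spend produce more output.
automated = output_input_ratio(150, 1_000_000)

for label, ratio in [("baseline", baseline),
                     ("cost takeout", offshored),
                     ("done differently", automated)]:
    print(f"{label:>17}: {ratio * 1_000_000:.0f} units per US$1M")
```

Both levers move the ratio, which is exactly why the claims get confused; only the second keeps improving once the arbitrage is exhausted.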

Think about this distinction. Technology and sourcing leaders often talk about “the need to improve productivity.” And they then promptly start flogging the dead cost takeout horse, with roughly the same return as I get (exactly nothing) from listening to the “20 percent productivity gain from outsourcing” line.

The difference between the two is worth bearing in mind because identifying and focusing on the right productivity initiatives can bear startling benefits. Our research suggests as much as 20-50 percent incremental cost savings. More importantly, the emphasis on productivity can lead to increased agility and a focus on greater functionality as opposed to “managing the mess.”

The first step is to pick the right weapon, for the right battle. Or you could always stock up on more aspirin.

A Cinderella Wish: Why Application Management Needs a Fairy Godmother | Sherpas in Blue Shirts

For a very long time, application management has been the red-headed stepchild of the IT services world. Taken for granted, it has silently done its job without complaint, and essentially remained an IT function, far removed from the hurly burly of what business needs.

However, as the industry evolves to answer increasing business demands, the pressure is on service providers to transform the application management function. The application management system of the future needs to address three key issues faced by CIOs.

  • Productivity: Most large enterprises have exhausted the offshoring potential of application management. The focus has shifted to non-linear cost saving models.

    The application management model of the future must offer industry standard toolsets and automated processes, and identify deviations from coding best practices to enable continuous improvement. It must do so over an industrialized global delivery platform.

  • Business-IT alignment: Over the years, large enterprises have tended to accumulate heavy sediments of legacy applications that bloat the portfolio and eat up valuable budgets. The application portfolio now faces ruthless rightsizing, and IT needs to provide full visibility on where business is spending its IT budget.

    The application management function needs to provide usage and billing visibility at multiple levels of aggregation 24/7, on a real-time basis, on desktops and mobile devices.

  • Future readiness: The world is going SaaS in front of our eyes, and application management needs to support this fundamental industry change. The modern application management function not only needs to straddle legacy and SaaS application architectures, but also offer a proactive roadmap to application modernization.

Unfortunately, many enterprises are still missing most of these elements. Contrary to popular opinion, the highest level of application management sophistication is not achieved by offering increased offshoring. Nor is it achieved by migrating from staff augmentation to managed service models.

Application management, just like poor Cinderella, is tired. Is there a Fairy Godmother who can ease its burden and get it the support it needs?


Photo credit: Dayna Bateman

Video: PEAK Matrix Assessment of Enterprise Cloud Service Providers | Gaining Altitude in the Cloud

The Everest Group Performance | Experience | Ability | Knowledge (PEAK) Matrix™ provides a detailed assessment of the service provider landscape in a given market. In this video, Practice Director Chirajeet Sengupta outlines the positioning of cloud application and infrastructure service providers on the PEAK Matrix.

Download the preview of the report referenced in this video
Learn more about PEAK Matrix
Learn more about Cloud Vista™ research

Why the Traditional Infrastructure Outsourcing Market Is about to Shrink Dramatically | Gaining Altitude in the Cloud

For those of us who are industry observers, it is no secret that the traditional Infrastructure Outsourcing (IO) market has stopped growing and is currently contracting at a rate of 2 percent a year.

Market Size for Traditional Infrastructure Outsourcing Players

The secular trends driving this contraction are numerous, and include client frustration with the contracting model, lack of flexibility, poor customer/provider alignment, and alternative sourcing options such as in-house delivery, co-location, and remote infrastructure management outsourcing (RIMO). All of these alternatives offer increased flexibility and often more attractive economics.

However, the reason this market deceleration and consequent contraction have been so slow is that once a client has entered into a significant IO contract, it is very hard to move away from the model. It is possible to switch service providers, but very difficult to rebuild in-house managed capacity. As a result, we note that the traditional IO marketplace is dominated by re-competes, with few new logos entering the market.

Nevertheless, this stable market may be about to change in a big way. To understand why, we only need to look at the nature of the workloads running in these IO environments. Over the course of the last year, we have taken a close look at these workloads and determined that 40-50 percent of those currently hosted via these contracts do not have security requirements or mission criticality that would prevent them from being migrated to less expensive cloud environments. We estimate that savings can approach 20-40 percent, depending on the workload distribution and the volumes involved, particularly when these workloads are migrated to a pay-as-you-go environment on a public cloud platform such as Amazon or Rackspace.

To better understand the viability of this happening, let’s look at the use case of application test and development. Test and development environments are not subject to the same performance, security, or compliance requirements as production workloads. They are, in fact, excellent candidates to be operated in less expensive but more flexible environments. There is little reason for customers to look at these workloads in the same light as production workloads, or for the same cost and delivery constructs to be applied.

When you dig further into existing IO contracts, you find that many of these customers are operating well above their contracted minimum volumes, allowing them to shift these workloads without fear of having to renegotiate contracts or pay early termination fees. When you consider that test and development alone often take up 25-35 percent of the capacity in the traditional IO environment, it becomes clear that it is only a matter of time before customers shift these workloads, and move from this low-hanging fruit to other workloads that exhibit similarly favorable characteristics.
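
The arithmetic behind this threat is straightforward. Here is a back-of-the-envelope sketch using only the ranges cited above – the migratable share and per-workload savings are estimates, not client data:

```python
# Back-of-the-envelope model of contract-level savings if only the
# migratable share of workloads moves to pay-as-you-go cloud pricing.
# Inputs are the illustrative ranges cited in the text, not client data.

def blended_savings(migratable_share, migration_savings):
    """Savings on the total contract when a share of workloads migrates."""
    return migratable_share * migration_savings

# 40-50 percent of IO workloads are migratable, saving 20-40 percent each.
contract_low  = blended_savings(0.40, 0.20)   # conservative end
contract_high = blended_savings(0.50, 0.40)   # aggressive end

# Test/dev alone: 25-35 percent of contracted capacity.
testdev_low  = blended_savings(0.25, 0.20)
testdev_high = blended_savings(0.35, 0.40)

print(f"Whole-contract savings: {contract_low:.0%} to {contract_high:.0%}")
print(f"From test/dev alone:    {testdev_low:.0%} to {testdev_high:.0%}")
```

Even at the conservative end, a high-single-digit hit to total contract value explains why a re-compete-dominated IO revenue base is vulnerable.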

We have researched what conditions are necessary to allow a traditional IO client to move down this path. It appears that three key conditions need to be met.

  1. A vision or understanding by the customer that savings are possible, large and attainable.
  2. Orchestration tools that allow the client to organize, manage, and coordinate workloads. These tools have recently come on the market, with several strong case studies demonstrating their success. What makes the business case compelling is that customers can “test” the cloud model by moving incremental workloads without investing in such tools, though a larger-scale transformation would require such investments.
  3. The willingness to invest the time and money to implement the program.

Given the strong ROI resulting from these initiatives, it is likely that many, if not all, ITO customers will explore this option. Already, more than half of enterprise customers are actively migrating, or considering migrating, development and testing environments to the cloud – next only to email/collaboration and disaster recovery/archiving.

Cloud Adoption for Application Dev Test Environments

With significant portions of IO workloads vulnerable to this emerging threat it’s only logical to conclude that we will see this market contract sharply in the next few years.

Can Outcome-based Pricing Replace “Till SLAs Do Us Part?” | Sherpas in Blue Shirts

ITO deals in which service providers’ compensation is linked directly to the business outcomes they achieve for their clients have started gaining prominence. While the idea has been around for some time, and indeed should be part of a gradual evolution from pure FTE or T&M models through to gainsharing arrangements, we’ve observed with interest both parties’ strategic interests (see image below) converging through shared business outcomes on several mega deals.

ITO Deal Demand and Supply Forces

A number of clients have recently asked for our advice and insights on the upsides and downsides of outcome-based pricing models. Following are the factors we told them they must carefully consider before asking their providers to make the change:

  1. Trust: Ask yourself, “Do I really trust my partner?” Use your common sense. Outcome-based models are often used in combination with a base T&M model in situations involving complicated deployment of new technology, where both parties share the risk. However, not everything goes smoothly in such situations, so don’t fall over yourself to shout “Penalty!” You might need to arrive at a negotiated outcome once your partner admits to an honest, unforeseen mistake. In other words, incentivize your provider to make it right, rather than mask the flaw and leave you invested in a sub-optimal environment for the duration of the contract. The latter is the road to poor relationship health, contract disputes, and a frustrating end-user experience.
  2. Corollary to the Trust Principle, be prepared to Cede control: If the implementation partner is responsible for improved business outcomes, the team needs to have control over the business process and the underlying stack, including platforms, management, and reporting tools, and quite simply…the way you do business. You can play the powerful investor, but let your partner be the empowered CEO. Share your powers.
  3. Identify scope accurately: Building on the above point, the scope is often beyond the obvious. Mere implementation of an ERP system won’t raise productivity or prevent revenue leakage if the overlying process is inefficient. State the scope in line with your desired outcome. For example, scope is not, “implement ERP”; it’s “raise productivity by XX% by implementing ERP and optimizing the accompanying process.”
  4. Know the price of improved outcomes: Most providers won’t tell you that they build a risk premium into their base fees on outcome-based models. In other words, while you are encouraging your partners to take on more risk, they want to cap the downside. Remember that they don’t want to, and sometimes can’t, back out of a contract. Thus, if the desired outcome cannot be reached, they would have spent significant time and effort without recompense. So, you must carefully evaluate the business case for an outcome-based model. Is the scope large enough? Are the benefits of transformation deep enough?
  5. Make it stick: Arguably, this is the most challenging part, as it’s often difficult to establish causality between the provider’s performance and business outcomes, making “Cede control” (point #2 above) even more important. In addition, governance models must be suitably evolved, and often supported by sophisticated management tools and chargeback mechanisms. Keep in mind that these come with a cost and, consequently, must be built into your ROI model.
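
One way to sanity-check point #4 above is to model the risk premium explicitly. The sketch below uses invented figures – the base fee, premium, gainshare rate, and benefit are illustrative, not market benchmarks – to show how the business case swings on whether the outcome is achieved:

```python
# Illustrative comparison of T&M versus outcome-based pricing.
# All figures are invented for the sketch, not market benchmarks.

def tm_cost(base_fee):
    """Time-and-materials: the client pays the fee, outcome or not."""
    return base_fee

def outcome_based_cost(base_fee, risk_premium, gainshare_rate,
                       business_benefit, outcome_achieved):
    """Net client cost when the provider prices in a risk premium
    and earns a gainshare on a realized business benefit."""
    fee = base_fee * (1 + risk_premium)        # provider caps its downside
    if not outcome_achieved:
        return fee                             # premium paid, benefit missed
    fee += gainshare_rate * business_benefit   # provider shares the upside
    return fee - business_benefit              # net of the benefit realized

tm = tm_cost(1_000_000)

# US$2M benefit at stake, 15% risk premium, 20% gainshare.
hit  = outcome_based_cost(1_000_000, 0.15, 0.20, 2_000_000, True)
miss = outcome_based_cost(1_000_000, 0.15, 0.20, 2_000_000, False)

print(f"T&M cost:                    US${tm:>12,.0f}")
print(f"Outcome achieved, net cost:  US${hit:>12,.0f}")  # negative = net gain
print(f"Outcome missed:              US${miss:>12,.0f}")
```

The asymmetry is the point: when the outcome lands, the client comes out net ahead despite the premium; when it misses, the client has paid more than plain T&M for the same work. That trade is what you are pricing.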

At the end of the day, an outcome-based model is a bit like marriage – it represents the triumph of hope over experience. So be clear about why you are getting into it, choose your partner carefully, share space, and who knows – you could live happily ever after!
