The Evolution to Next Generation Operating Models in the Energy Industry | Sherpas in Blue Shirts

As pioneers of global sourcing, leading organizations in the energy industry such as ExxonMobil, Shell, Chevron, and BP have been able to extract significant value from offshoring while maintaining a manageable risk profile. However, the growing complexity of their global sourcing portfolios in terms of internal and external supply options (see Table 1 below), service delivery locations, governance models, and systems/tools has led to a set of challenging issues around design and ongoing optimization.

Comparison of sourcing models for leading energy companies

From a design standpoint, energy companies are rethinking their operating models to address a wide range of issues and concerns including:

The next wave of scope expansion opportunities

  • How can I systematically identify new sourcing areas balancing value and risk?
  • How can I predict and manage short- and long-term demand?

Building an integrated supply model and global delivery footprint

  • How can I maintain an optimal, complementary mix of internal and outsourcing supply alternatives reflecting requirements for capacity, competencies, and cost?
  • How can I build a consistent framework to place incremental scope into the right supply model?
  • How can I create an integrated, flexible global delivery model that meets current and future demand while minimizing exposure to risks?

Energy companies also face challenges in building a holistic management approach to enable ongoing value capture and expansion, such as:

How to build a holistic, enterprise-level governance model

  • How can I appropriately assess performance across outsourcing service providers, captives, locations, and service lines?
  • What is the optimal governance approach to enable best practice sharing, risk mitigation, and economies of scale?

How to improve end-to-end process effectiveness and control

  • What are the value and constraints of standardizing and simplifying end-to-end processes across my enterprise?
  • What are the right effectiveness metrics and process efficiency measures?
  • How can I enhance process control to minimize operational risk?

Many of the top energy organizations are experimenting with next generation operating models to address these issues. For example, a global energy major that utilizes both outsourcing service providers and internal shared services recently embarked on a multi-year journey to integrate services design, delivery, and governance across business units, functions, and geographies. The objectives are to enhance end-to-end process effectiveness and control, reduce complexity and risk at the enterprise-level, and improve service performance and cost efficiency.

Our experience in the energy industry clearly indicates that unlocking the next wave of value requires more deliberate design of an integrated global delivery model, a consistent framework to better align supply with demand, and a holistic approach to govern and optimize services. In addition, corporate culture impacts cannot be overlooked. In global energy companies’ large, complex environments full of competing interests and priorities, strong executive leadership and commitment are vital to success.

Does Your Organization Have Operating Rhythm? | Sherpas in Blue Shirts

A multinational client recently engaged us to conduct an operating rhythm evaluation. If you’re not familiar with the term, operating rhythm refers to the set of meetings that management has on its schedule to drive and manage the organization. While naturally you would think that a management team would be aligned and driving toward common goals, it’s not all that unusual for there to be fairly significant disparities in the way different members of a management team communicate the organization’s goals and objectives or manage their own groups.

At first glance, this client’s management team appeared to be focused on the right things. But our analysis of their individual meeting schedules found that some executives spent too much time in meetings that were not appropriately focused or aligned with the organization’s goals and objectives.

Overall, management was spending over 60 percent of its time in meetings, and, to many, it was death by meeting. While most of the time was spent on the things that mattered to the company, some outliers needed to be refocused. And while many believed the time was spent on the right things, they didn’t perceive the meetings to be overly effective (which is a topic for another blog).

Additionally, some members of the management team were tremendously overbooked. One individual averaged 200+ hours of scheduled meetings per month. Given a normal 176-hour work month, this meant he would be in meetings more than 10 hours a day! In reality this didn’t happen; instead, he missed meetings and had to reschedule, had to leave meetings early, and overall was not able to do his job. In fact, he didn’t have time to actually think.

Finally, we observed that commercial directors with similar responsibilities and scope of work – but in different geographies – had very different meeting schedules; in effect, one spent half as much time in meetings as the other did. In this particular case that was acceptable, as one was much more of a delegator, and the other was striving to overcome performance gaps and deficiencies. But in general, that type of disparity could signify troublesome misalignment.

We helped the organization understand it needed to determine, in tandem with its management team members, the priorities and the time each individual should spend on them. It also needed to eliminate certain meetings, and establish the right mix of weekly, monthly, quarterly, and annual meetings. Further, each meeting had to be clearly linked to a core process that was itself linked to a key performance indicator (KPI).

The first step in building an operating rhythm requires the organization to level set its KPIs to ensure every member of the management team has the same understanding.

Next, management needs to tie each KPI to the core processes that will enable them to be achieved. Then, management must detail these core processes to the next level of granularity, describing:

  • Sub-processes (in essence, process maps or swim lanes)
  • Inputs and outputs
  • Roles and responsibilities
  • Meetings
  • Frequency

The core processes will themselves drive different timescales or frequencies. For example:

  • Annual meetings are necessary to set targets and strategy, typically two to three days
  • Quarterly meetings are necessary to review results and adjust the strategy, typically one day
  • Monthly meetings are necessary to check and correct deviations, typically two to three hours
  • Weekly meetings are necessary to track and monitor execution, typically one hour
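
As a rough sanity check, the cadence above implies a fairly modest structured meeting budget. The sketch below assumes midpoint durations from the list and an eight-hour meeting day; these are illustrative assumptions, not a prescriptive formula:

```python
# Rough annual meeting-hours budget implied by the cadence above.
# Midpoint durations and an 8-hour meeting day are assumed for illustration.
cadence = {
    "annual":    (1,  2.5 * 8),   # one meeting per year, 2-3 days
    "quarterly": (4,  1.0 * 8),   # four meetings, one day each
    "monthly":   (12, 2.5),       # twelve meetings, 2-3 hours each
    "weekly":    (52, 1.0),       # fifty-two meetings, one hour each
}

total = sum(count * hours for count, hours in cadence.values())
print(f"Total structured meeting time: {total:.0f} hours/year")
```

Even with generous midpoint assumptions, a disciplined rhythm comes to well under 10 percent of a typical 2,100-hour work year, a useful contrast with the executive above who was booked for 200+ hours of meetings per month.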

Enterprises can make significant progress in achieving the right operating rhythm if every member of management aligns their meeting schedule to the organization’s core processes such that the right amount of time is spent on the right topics with the right people. And the trickle-down effect comes into play here, whereby mid-management also establishes meeting schedules that align to their organization’s priorities.

Microsoft Confuses Economies of Scale with Next Generation Data Centers | Gaining Altitude in the Cloud

In a recent article in InformationWeek, a Microsoft executive made the claim that the economies of scale of cloud data centers were so compelling that few companies, if any, would want to continue to operate their own. He went on to offer Microsoft’s cloud data centers as the proof point. He stated that Microsoft’s cloud data centers operate on next generation architecture. Instead of housing servers in hardened data centers, which are expensive to build, cool, and maintain, Microsoft utilizes new hyper-scalable architecture that jam-packs servers and storage into vapor-cooled containers similar to those you see on the interstate being pulled by semi trucks. Microsoft achieves resilience exceeding that of the hardened data centers by duplicating assets in multiple locations. And when combined with the flexibility of virtualized cloud offerings, the net result is dramatically lower cost – to the tune of as little as 25 percent of the cost to build and run their Tier 4 hardened cousins.

Our counterpoint: we have been conducting extensive research, and our analysis confirms that many next generation data centers are significantly less expensive than many cloud offerings. Further, they are mature enough to support enterprise-class computing today, and are far more flexible than traditional legacy data center infrastructures. When enterprises combine these benefits, they can indeed achieve dramatically lower computing costs. It’s important to recognize that these are not driven by economies of scale; rather, they arise from the advantages of radical new architecture and technology. Everest Group’s work strongly suggests that whereas economies of scale do exist in next generation data centers and their related cloud offerings, most of the benefits are reached quite quickly.

A vital distinction: next generation data centers and private clouds are available to most mid-to-large enterprises at a cost comparable to that of a mega-provider like Microsoft. Enterprises seeking to capture these benefits should not be seduced by claims of massive gains provided by ever-increasing size, but should instead focus their attention on how to leverage the architecture and next generation technologies while adapting their applications and organizations to take advantage of these dramatic new opportunities.

Economic Forecast Calls for More Clouds | Gaining Altitude in the Cloud

Have you ever stopped to think why cloud computing is at the center of any IT-related discussion? In our conversations with clients, from the boardroom to the line manager, cloud is sure to enter into the discussion. Today, many of those conversations are around understanding, and to a lesser degree, implementation. But once the discussion crosses the threshold of understanding, the topic immediately goes to, “How can I get into the cloud?”

Everest Group recently held a webinar on the economics of cloud computing. There were two objectives: 1) Help clarify just how disruptive, in a good way, cloud computing is and can be; and 2) Demonstrate the economic benefits that exist in the cloud economy, and that there are those striving for this competitive advantage today.

The Hole in the Water That You Throw Money Into

One of the key economic drivers that hampers today’s data center environment is the relatively low utilization rate across its resources. Think about it like this: You’ve probably heard the old adage that owning a boat is like having a hole in the water that you throw money into. That is because the majority of boats are seldom used. (Trust me, I know; I used to own one.) The per-use cost of a $25,000 (and quickly depreciating) boat that you actually use three or four times a year is quite high, and the reality is you could often have rented a boat for a fraction of the cost. The same thing is happening in your data center. If your utilization is 20 percent, or even 30 percent, you have essentially wasted 70-80 percent of your spend. That is an expensive data center.
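
The boat analogy reduces to simple arithmetic: the effective cost of each unit of capacity you actually use is your total spend divided by utilization. The sketch below uses an assumed spend figure purely for illustration, not any client’s actual numbers:

```python
# Illustrative sketch of how low utilization inflates effective data center cost.
# The spend figure is an assumption for demonstration, not real client data.

def effective_unit_cost(annual_spend: float, utilization: float) -> float:
    """Cost per unit of capacity actually used, given average utilization (0-1)."""
    return annual_spend / utilization

spend = 1_000_000.0  # assumed annual data center spend, in dollars

for util in (0.20, 0.30, 0.80):
    wasted = spend * (1 - util)
    print(f"utilization {util:.0%}: ~${wasted:,.0f} idle spend, "
          f"effective cost per used unit ~${effective_unit_cost(spend, util):,.0f}")
```

At 20 percent utilization, every dollar of used capacity effectively costs five; raising utilization to 80 percent cuts that effective cost by a factor of four, which is the whole economic case for renting capacity instead of owning it.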

Workload Utilizations

Cloud computing is like that little boat rental shop tucked away in a nice cove on your favorite lake. What if you could get rid of excess capacity, better manage resource peaks and valleys, and rent public capacity when you need it, and not pay for it when you don’t?

What if we leverage public cloud flexibility

The Economics

As you can see in the graphic below, the economics related to cloud are dramatic, and the key lies in leveraging the public cloud to pay only for what you use, eliminating the issue of excess capacity.

Public cloud options unlock extraordinary enterprise economics

There are a variety of examples in which this is done today, reaping the economics described above. For instance, Ticketmaster leverages the public cloud for large events, loading an environment to the cloud specifically sized for each given event. The event may last only several hours or days, and once it is complete, Ticketmaster takes down the environment and loads the data into its dedicated systems.

There are also enterprises and suppliers working to enable peak bursting more seamlessly. For example, eBay recently showed how it is working with Rackspace and Microsoft Azure to enable hybrid cloud bursting, allowing it to reduce its steady-state environment (think hole in the water) from 1,900 to 800 servers, saving $1.1 million per month.
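
The cited figures can be sanity-checked with back-of-envelope arithmetic; the per-server cost below is derived from those figures, not a published eBay number:

```python
# Back-of-envelope check of the eBay hybrid-bursting figures cited above.
# The per-server cost is derived from those figures, not a published number.
steady_state_before = 1900   # servers before hybrid bursting
steady_state_after = 800     # servers after moving peaks to public cloud
monthly_savings = 1_100_000  # reported savings, US dollars per month

servers_removed = steady_state_before - steady_state_after
cost_per_server_month = monthly_savings / servers_removed
print(f"{servers_removed} servers removed, implying roughly "
      f"${cost_per_server_month:,.0f}/server/month in fully loaded cost")
```

The implied fully loaded cost of roughly $1,000 per server per month is a plausible order of magnitude for enterprise-grade infrastructure, which suggests the headline savings figure is internally consistent.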

Hybrid economics example eBay

The Steps to Getting Started

Dedicate yourself to getting rid of your boat (or should I say boat anchor?). Begin a portfolio assessment. Understand what you have, and what is driving utilization. Consolidate applications, offload non-critical usage to the valleys, and look for ways to leverage the public/private cloud. When I unloaded my boat, I freed up capital for the more important things in life, without sacrificing my enjoyment. Doing the same in your data center will allow you to take on strategic initiatives that will make you even more competitive.

Leaders Discussing Market (R)evolution | Gaining Altitude in the Cloud

I attended Mahindra Satyam’s customer conference last week, and after a very difficult couple of years, the company seems to be getting its act together. Most impressive was the group of interesting, forward-thinking people it attracted to fill out its program, which was centered on the emerging “Generation C” and the implications it may have for the enterprise. The thought leaders included:

  • CEOs from leading provider and user companies such as Akamai, Pega Systems, and VCE
  • CIOs and other IT leaders from Abercrombie & Fitch, Apple, Applied Materials, BlueCross Blue Shield, Chevron, Citrix, EMC, Nissan, and OC Tanner
  • A variety of professors from the local tech school, doing what is arguably the deepest thinking globally about some of the most promising innovations of the next decade
  • Leading industry advisors, who helped facilitate discussions on key trends and market changes

I had the pleasure of moderating a lively panel discussion on cloud computing infrastructure opportunities and challenges with Michael Capellas (Chairman and CEO of VCE), Sanjay Mirchandani (CIO of EMC), Michael McKiernan (Head of Global Applications Delivery for Citrix), and George Fischer (EVP of Global Sales and Operations for CA). The vision outlined by these panelists clearly articulated that it is not a question of “if” enterprises will move as much as possible of their infrastructure to cloud architectures, but rather a question of how fast once they recognize it is the only way they will be able to meet the demands of their business users.

The questions they raised related mostly to finding creative solutions to deal with the legacy “sludge” that consumes a disproportionate amount of resources while seldom really advancing the enterprise’s growth aspirations. And they flirted with the audience about what the technology and service provider of the future might entail – think “real time,” “predictive vs. reactive,” and core “ecosystem member” rather than vendor – but stopped short of predicting how the leaderboard for the IT products and services marketplace might change over the next decade.

Who do you think will rise to the top (or stay there) as this market rapidly evolves?

Growing Renewals in Asia: Time to Plan Ahead? | Sherpas in Blue Shirts

As Indian service providers announce their results for the year, one can’t help but be amazed by their spectacular growth. For fiscal 2011, TCS’s annual earnings grew by 29 percent year over year (YoY), and Wipro’s IT services revenue grew by 19 percent YoY. While North America and Western Europe have traditionally fueled growth for the Indian heritage service providers, the focus is increasingly turning to Asia to capitalize on the growing economic shift.

In this context, the next several years are set to be quite interesting for the Asian markets, as we estimate that more than 500 first generation ITO and BPO contracts, worth US$25-30 billion, are up for renewal in Asia (Middle East, India and South East Asia) through 2014.
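
For context, those pipeline estimates imply a notable average deal size. The division below simply uses the estimates cited above:

```python
# Implied average contract value of the Asian renewal pipeline cited above.
contracts = 500                              # estimated contracts up for renewal
total_value_low, total_value_high = 25e9, 30e9  # estimated pipeline, US dollars

avg_low = total_value_low / contracts
avg_high = total_value_high / contracts
print(f"Implied average contract value: "
      f"US${avg_low / 1e6:.0f}-{avg_high / 1e6:.0f} million")
```

An average contract value of US$50-60 million per deal underscores that these are substantial first generation engagements, not small pilots, which is exactly why end-of-term decisions deserve careful planning.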

Outsourcing Contract Renewals

As we ran an analysis of potential renewals through our deal databases, some interesting trends emerged:

  • India, Malaysia, Singapore, Saudi Arabia, and Israel, in that order, have the largest renewal opportunities; and India is the largest market by far, possessing 50+ percent of the potential renewals by value.
  • In India, a large number of early stage telecommunications deals are coming up for renewal; in the rest of the region, financial services will likely dominate renewal activity.
  • IT contracts will likely contribute nearly 80 percent of these renewals, with infrastructure being the most significant component.
  • Clearly, large buyer enterprises (Forbes 1000 firms or private firms with revenues of over US$5 billion) will dominate these deals, except in the Middle East, where there is a greater presence of small-to-medium sized contracts.

What makes renewals unique in the Asian market?

First, over the last decade, Asia has undergone a much more rapid change in service provider capabilities than developed markets. Today, most global IT-BPO providers have sizeable Asian market practices, and have brought global offerings to service regional buyers; a decade ago, it was just a select few.

Second, buyer maturity in the region has increased significantly. Asian buyers today have had multiple deal experiences with providers, and consequently understand sourcing and governance in far greater depth than in the early 2000s. Today, transformational deals are not unheard of, and sophisticated service management frameworks are the norm, not the exception.

Third, consider changes in the global context that also affect the environment in Asia: the growing adoption of virtualization; the emergence of the Cloud; enterprise-scope BPO services; and consolidations in the provider landscape.

Together, all these factors make the case for a careful look at contractual parameters as end-of-engagement terms approach. Typical end-of-term alternatives include:

  1. Renew: Resign the existing contract with minimal changes.
  2. Renegotiate: Modify a limited number of elements of the contract.
  3. Restructure: Rethink the structure of key contract provisions and key business terms.
  4. Recompete: Terminate the existing contract and enter into a fresh competitive bid process.
  5. Repatriate: Terminate the current contract and bring previously outsourced services back in-house.

As Asian buyers consider these options, they need to carefully evaluate market capabilities (given significant market changes), and take a closer look at deal robustness to ensure that market best-in-class service management/contractual frameworks are incorporated. This implies that restructuring and recompeting are very real options in the Asian context.

Everest Group’s experience indicates that end-of-term evaluations take nearly six months in typical global deals. Given added complexities in regional deals, buyers would be wise to begin their evaluations up to a year in advance.

Historically, it has not been uncommon to see buyers continue with their incumbent service providers. While some fail to leverage the market effectively, others get bogged down with the challenges of repatriation or the risks of changing service providers (such as business continuity risks, contractual lacunae for transition-out, or incumbent provider hold-up). However, heightened competition, a changing service provider landscape, newer delivery models (e.g., cloud), and increasing maturity in the Asian and Middle Eastern markets could well change this going forward.

Size Does Matter – The Real Pecking Order of Indian IT Service Providers | Sherpas in Blue Shirts

Earlier today, Cognizant reported its financial results for the first quarter of 2011, bringing to an end the earnings season for the Big-5 Indian IT providers – affectionately referred to as WITCH (Wipro, Infosys, TCS, Cognizant, and HCL). Cognizant’s results were yet again distinctive: US$1.37 billion in revenues in 1Q11, which represents QoQ growth of 4.6 percent and YoY growth of 42.9 percent. The latest financial results reaffirmed – yet again – Cognizant’s growth leadership compared to its peers and are a testament to Cognizant’s superb client engagement model.

Q1 2011 financial highlights for WITCH:

WITCH Q1-2011 Financial Highlights

In a recent blog post, my colleague Vikash Jain commented on the changes in the IT services leaderboard, and especially the questions and speculation on the relative positions of Wipro and Cognizant in the Indian IT services landscape. Cognizant’s 1Q11 revenues are now just US$29 million below Wipro’s IT services revenues, and based on current momentum, Cognizant could overtake Wipro as early as 2Q11, making it the third largest Indian IT major in quarterly revenue terms. The guidance provided by the two companies for the next quarter – Cognizant (US$1.45 billion) and Wipro (US$1.39-1.42 billion) – provides further credence to the projected timelines.
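
The guidance numbers alone support the projected timeline. A trivial comparison, using the figures cited above in US$ billions:

```python
# Simple check of the 2Q11 guidance cited above (figures in US$ billions).
cognizant_q1_revenue = 1.37
wipro_gap_q1 = 0.029            # Cognizant trailed Wipro by US$29 million in 1Q11

cognizant_q2_guidance = 1.45
wipro_q2_guidance = (1.39, 1.42)  # low and high end of Wipro's guidance

# Cognizant overtakes in 2Q11 if its guidance exceeds even Wipro's high end.
overtakes = cognizant_q2_guidance > wipro_q2_guidance[1]
print(f"1Q11 gap: US${wipro_gap_q1 * 1000:.0f} million; "
      f"Cognizant exceeds Wipro's high-end 2Q11 guidance: {overtakes}")
```

If both companies simply hit their stated guidance, Cognizant clears even the top of Wipro’s range by US$30 million, which is why the 2Q11 crossover looks likely rather than speculative.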

How important is this upcoming change in the relatively static rank order of the Indian IT industry (the last change happened in January 2009 post the Satyam scandal)? Not very, in our opinion. As and when this happens, the event will indeed create news headlines and the occasional blog entry, but the change in rankings does not imply a meaningful change to the overall IT landscape. Further, other than providing Wipro with even more conviction to make the changes required to recapture a faster growth trajectory, the new rank order does not suggest any changes in the delivery capabilities of either of these organizations.

As we advise our clients on selecting service providers, we believe that it is more important to understand the service provider’s depth of capability and experiences in the buyer organization’s specific vertical industry. While total revenues and financial stability are important enterprise-level criteria, performance in the vertical industry bears greater relevance and significance as buyers evaluate service providers. In our 1Q11 Market Vista report, we examine the CY 2010 revenues of the WITCH group to determine the pecking order in three of the largest verticals from a global sourcing adoption perspective – banking, financial services and insurance (BFSI); healthcare and life sciences; and energy and utilities (E&U).

While we recognize there are differences in the way these providers segment results, for simplicity we are relying on reported segmentation (which we believe does not meaningfully alter the results). The exhibit below summarizes the results of our assessment:

Industry leaderboard for WITCH:

WITCH Industry Leaders

Our five key takeaways:

  1. The ranking of WITCH based on enterprise revenues has limited correlation to industry vertical rankings. The leader in each of the three examined industries is different.
  2. In BFSI, while TCS is the clear leader, Cognizant is rapidly closing in on Infosys for the second spot. (Note: Wipro is already #4 in this vertical).
  3. In Healthcare and Life Sciences, Cognizant emerges as the clear leader with 2010 revenues greater than those of Wipro, TCS, and HCL combined. (Note: Infosys does not report segment revenues for Healthcare).
  4. In E&U, Wipro leads the pack and is expected to widen the gap through its acquisition of SAIC’s oil and gas business. TCS achieved the highest growth in 2010 to move to third position ahead of HCL (TCS was #4 in 2009) and narrow the gap with Infosys (Note: Cognizant does not report E&U revenues).
  5. Finally, the above ranks are going to change quickly. Based on the results announced for the first calendar quarter of 2011 alone, we anticipate a change in the second position for each of the three examined verticals:
    • Cognizant’s Q1 BFSI revenue of US$570 million is nearly identical to that of Infosys’ US$572 million
    • TCS’ Q1 Healthcare and Life Sciences revenue at US$119 million is higher than Wipro’s US$111 million (which also includes services)
    • TCS reported Q1 E&U revenues of US$103 million, versus Infosys’ US$93 million

While it will be interesting to see the impact on a full year basis, the above changes in momentum already indicate further changes in the industry leaderboard before the end of the year.

On an unrelated note, by the time we revisit the Wipro versus Cognizant debate when the Indian majors announce their Q2 results starting mid-July, WITCH will assume an additional meaning – the last installment of the Harry Potter movies is due for release on July 15, 2011!

Where Are Enterprises in the Public Cloud? | Gaining Altitude in the Cloud

Amazon Web Services (AWS) recently announced several additional services including dedicated instances of Elastic Compute Cloud (EC2) in three flavors: on demand, one year reserved, and three year reserved. This should come as no surprise to those who have been following Amazon, as the company has been continually launching services such as CloudWatch, Virtual Private Cloud (VPC), and AWS Premium Support in an attempt to position itself as an enterprise cloud provider.

But will these latest offerings capture the attention of the enterprise? To date, much of the workload transitioned to the public cloud has been project-based (e.g., test and development) or focused on peak demand computing. Is there a magic bullet that will motivate enterprises to move their production environments to the public cloud?

In comparison with “traditional” outsourcing, public cloud offerings – whether from Amazon or any other provider – present a variety of real or perceived hurdles that must be overcome before we see enterprises adopt them for production-focused work:

Security: the ability to ensure, to the client’s satisfaction, data protection, data transfer security, and access control in a multi-tenant environment. While the cloud offers many advantages, and offerings continue to evolve to create a more secure computing environment, the perception that multi-tenancy equates to lack of security remains.

Performance and Availability: typical performance SLAs for the computing environment and all related memory and storage in traditional outsourcing relationships are 99.5-99.9 percent availability, and high availability environments require 99.99 percent or higher. These availability ratings are measured monthly, with contractually agreed-upon rebates or discounts kicking in if the availability SLA isn’t met. While some public cloud providers will meet the lower end of these SLAs, some use 12 months of previous service as the measurement timeline, while others define an SLA event as any outage in excess of 30 minutes, and still others use different measurements. This disparity leads to confusion and discomfort among most enterprises, and the perception that the cloud is not as robust as outsourcing services.
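
The gap between these availability figures is easier to grasp as allowed downtime, and the measurement window matters as much as the percentage. The sketch below is straightforward arithmetic, not any provider’s specific SLA terms:

```python
# Allowed downtime implied by common availability SLAs, per measurement window.
# A 30-day month is assumed for simplicity; these are generic calculations,
# not any specific provider's contractual terms.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(availability: float, window_minutes: int) -> float:
    """Downtime budget (in minutes) permitted by an availability target."""
    return window_minutes * (1 - availability)

for sla in (0.995, 0.999, 0.9999):
    monthly = allowed_downtime_minutes(sla, MINUTES_PER_MONTH)
    yearly = allowed_downtime_minutes(sla, MINUTES_PER_MONTH * 12)
    print(f"{sla:.2%}: {monthly:7.1f} min per month, "
          f"{yearly:8.1f} min over a trailing 12 months")
```

The same 99.9 percent figure permits roughly 43 minutes of outage in any single month when measured monthly, but over a trailing 12-month window a provider could absorb a single multi-hour incident and still meet the SLA, which is precisely the disparity that unsettles enterprise buyers.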

Compliance and Certifications: in industries that utilize highly personal and sensitive end-user customer information – such as social security numbers, bank account details, or credit card information – or those that require compliance in areas including HIPAA or FISMA, providers’ certifications are vital. As most public cloud providers have only basic certification and compliance ratings, enterprises must tread very carefully, and be extremely selective.

Support: a cloud model with little or no support only goes so far. Enterprises must be able to get assistance when they need it. Some public cloud providers – such as Amazon and Terremark – do offer 24x7 support for an additional fee, but others still need to factor support into their offerings.

Addressing and overcoming these measuring sticks will encourage enterprises to review their workloads and evaluate what makes sense to move to the cloud, and what will remain in private (or even legacy) environments.

However, enterprises’ workloads are also price sensitive, and we believe, at least today, that the public cloud is not an economical alternative for many production environments. Thus enterprise movement to the cloud could evolve in one of several ways. One path is a hybrid cloud, in which the bulk of the production environment is placed in a private cloud and peak demand bursts to the public cloud. Or will increased competition, improved asset utilization, and workload management continue to drive down pricing, as has happened with Amazon in each of the past two years? If so, will enterprises bypass the hybrid path and move straight to the public cloud as the economics prove attractive?

The ability to meet client demands, creating a comfort level with the cloud, and the economics all play a role in how and when enterprises migrate to the cloud. The market is again at an inflection point, and it promises to be an exciting time.

Will the Sun Come out Tomorrow? | Gaining Altitude in the Cloud

Cloud computing promises increased flexibility, faster time to market, and drastic reduction of costs by better utilizing assets and improving operational efficiency. The cloud further promises to create an environment that is fully redundant, readily available, and very secure. Who isn’t talking about and wanting the promises of the cloud?

Today, however, Amazon’s cloud suffered significant degradation in its Virginia data center following an almost flawless record of more than a year. Yes, the rain started pouring out of Amazon’s cloud at about 1:40 a.m. PT, when it began experiencing latency and elevated error rates in the U.S. east coast region.

The first status message about the problem stated:

1:41 AM PT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.

Seven hours later, as Amazon continued to feverishly work on correcting the problem, its update said:

8:54 AM PDT We’d like to provide additional color on what we’re working on right now (please note that we always know more and understand issues better after we fully recover and dive deep into the post mortem). A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We’re starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them.

No! Say it’s not so! A cloud outage? The reality is that cloud computing remains the greatest disruptive force we’ve seen in the business world since the proliferation of the Internet. What cloud computing will do to legacy environments is similar to what GPS systems did to mapmakers. And when is the last time you picked up a map?

In the future, businesses won’t even consider hosting their own IT environments. It will be an automatic decision to go to the cloud.

So why is Amazon’s outage news?

Only because it affected the 800-pound gorilla. Amazon currently has about 50 percent of the cloud market, and its competitors can only dream of this market share. When fellow cloud provider Coghead failed in 2009, did anyone know? We certainly didn’t. But when Amazon hiccups, everybody knows it.

Yes, the outage did affect a number of businesses. But businesses experience outages, disruptions, and degradation of service every day, regardless of whether the IT environment is legacy or next generation, outsourced or insourced. In response, these businesses scramble, putting in place panicked recovery plans and having their IT folks work around the clock to get things fixed…but rarely do these service blips make the news. So with the spotlight squarely shining on it because of its position in the marketplace, Amazon is scrambling, panicking, and working to get the problem fixed. And it will. Probably long before its clients would or could in their own environments.

Yes, it rained today, but really, it was just a little sprinkle. We believe the future for the cloud is so bright, we all need to be wearing shades.

Disruption, Offshoring and Predictions About the Cloud Computing Marketplace | Gaining Altitude in the Cloud

In the 1990s, Jack Welch’s GE recognized it could hire well-educated individuals from high-quality universities in India at salaries that were only a quarter of those in the United States or Western Europe. GE committed to building organizations to perform important business support activities in low-cost regions of the globe, and Gecis, now Genpact, was born. In a similar timeframe, entrepreneurs and government policy makers also recognized the promise of leveraging a low-cost, high-quality talent pool and creating attractive, relatively high-paying jobs.

Application of the 4:1 cost advantage began slowly, confined to only a small sliver of business support activities. But the offshoring industry very quickly took off on a trajectory matched by few industries. And offshoring itself has become a disruptive force: it continues to expand across a wide range of activities and industries, has enabled the creation of companies with enormous growth potential and market valuations equal to historic incumbents five times their size in revenue, shapes enterprises of all shapes and sizes, and influences political and social agendas in both mature and developing parts of the world.

Is cloud computing the next iteration of disruption to hit the services delivery industry? While few have yet grasped the transformational impact it may have on enterprise IT, from a market space standpoint we see a number of analogous characteristics:

  • Emerging analysis and case studies of the economic impact of cloud computing suggest a 4:1 or better cost advantage for users. And this holds for enterprise solutions across public clouds, private clouds, virtual private clouds, and hybrid models in which select workloads move dynamically between private and public cloud options.
  • While cloud solutions are in their infancy, early toe-testing of workloads, driven by what appears to be cheap processing power that can be self-provisioned in minutes, is spreading rapidly to workloads much closer to the core.
  • Leading service providers are achieving VERY attractive financial margins on the core cloud services, much like the Tier 1 offshore IT services players drive superior margins, growth, and market valuations. For example, India-based TCS last quarter put up 31 percent year-over-year growth at NET margins over 24 percent, numbers most firms would kill for. But it appears that the cloud units of Amazon, Rackspace, and others are growing twice as fast, with margins also likely to be substantially higher.
  • The leading providers in cloud sectors – Amazon, Google, Rackspace, Microsoft, and Terremark (Verizon) – are non-traditional services players, similar to the national champions in major low-cost offshoring destinations that built offshoring companies from scratch or entered from non-traditional, only loosely related spaces.

With all these similarities, my predictions are:

  1. Cloud computing will drive massive disruption in the IT services marketplace; large market share shifts will occur, and many legacy providers will be forced to change their business models or suffer extended decline.
  2. The leading service providers five to 10 years from now will most likely be those that were NOT incumbents or players in closely related sectors. Incumbents who embrace the required changes and aggressively attack their legacy book of business with new solutions may contend for leadership; hardware providers such as IBM, HP, Dell, and major Japanese players have unique ingredients that could spice up the mix, but they need to learn from their late entry into offshoring that marketing speak alone will not make a difference.
  3. Leading performers, once to sustainable scale, will outperform followers from legacy environments on growth and profitability dimensions by a factor of two or more (although they may remain smaller for some time). The value creation opportunity for investors will represent a next wave of stars.
  4. IT services customers will be the big winners, capturing two-thirds to three-quarters of the “surplus” value. CIOs will ignore the economics of cloud initiatives at their peril, but those who embrace the cloud will find themselves very valuable when the talent war begins.
  5. Benefits will not be limited to economic value – the new leaders will fulfill higher expectations for responsiveness and innovation. Economics will drive adoption, but the early adopters will find speed and flexibility benefits to be the primary value creation levers. These early adopters will experience real business leverage from their cloud IT initiatives, elevating these efforts to a strategic level.

Buckle up! It’s going to be a wild ride up in the clouds…
