
Live from Bangalore – the NASSCOM IMS Summit, September 21 | Gaining Altitude in the Cloud

CIOs, service providers, analysts, and the business media rubbed shoulders on the power-packed first day of the NASSCOM Infrastructure Management Summit (IMS) in Bangalore. This year’s conference has the twin themes of Enterprise Mobility and Cloud Computing, with one day dedicated to each, which seems to make for a more focused set of discussions than a broad-based event that leaves you struggling to absorb everything you just heard.

After the welcome address and keynote speech from Som Mittal, President of NASSCOM, and Pradeep Kar, Chairman of the NASSCOM RIM Forum, we settled in for a series of insightful presentations and panel discussions with global technology leaders.

BMC CEO Robert E. Beauchamp spoke about how the parallel paradigms of cloud, consumerization, and communication (yes, I am in alliteration mode today) require CIOs to think in terms of a unified approach to service management. Of particular interest were Beauchamp’s insights on how different service providers are trying to interpret the cloud differently in an attempt to a) disintermediate the competition; b) avoid being disintermediated; or c) both a and b.

IBM’s interpretation of the cloud: The cloud is all the bundled hardware, software, and middleware we have always sold to you, but now you can buy the whole stack yourself instead of us having to sell it to you.

Google’s counter: Who cares about the hardware anyway? We will buy the boxes from Taiwan – cheaper and better. It’s about what you do with it, and that’s where we come in…again.

VMware chips in: You already own the hardware – and we will tell you how best to make use of it.

Beauchamp sees more than one way of “belling the cloud cat,” and CIOs need to figure out which direction to take based on their legacy environments, security requirements, and cost imperatives. (“Belling the cloud cat” is my take-off on a fable titled Belling the Cat. It means attempting, or agreeing to perform, an impossibly difficult task.)

As for service providers, he also foresees successful survivors and spectacular failures as the cloud conundrum disrupts traditional business models.

Mark Egan, VMware’s CIO, spoke about how consumerization and cloud computing are nullifying the efficacy of traditional IT management tools. According to Egan, IT needs to move from a “we’ll place an agent on the device” mode to a “heuristics” mode of analyzing data in order to prevent every CIO’s security nightmare from coming true in a consumerized enterprise.

Next up, Brian Pereira, Editor, InformationWeek, and Chandra Gnanasambandam, Partner, McKinsey, inspired us with real stories about how mobility is transforming the lives of unbanked villagers, saving billions of dollars’ worth of healthcare expenditure, and improving and optimizing the enterprise supply chain.

Here’s a gem of an insight: Do you know what most urban workers in the Philippines, Vietnam, or India do if they need to transfer money to parents living in rural areas? They buy a train ticket. Then they call Mum and Dad, share the ticket number, and ask them to go to the local railway station, cancel the ticket and collect the refund (minus a small cancellation fee). Wow – that’s what I call consumer-led innovation!

To summarize today’s sessions:

  • Many discussions highlighted the correctness of what Everest Group analysts are already predicting, and it was invaluable to get validation of what we suspected, complete with more live examples.
  • Cloud and enterprise mobility are here to stay. With the momentum behind them – unlike other hyped-up technologies – these are being demanded by the consumer, not dumped on them. And that is always going to mean something.
  • Service providers and CIOs need to evolve. In themselves, cloud and mobility do not represent a threat. But it’s a lot of change. And the threat lies in how CIOs, and their service providers, gauge the pace of the change, and react to it.

That’s it for now. Tomorrow, I share a panel with CSC and Microland to discuss “Trigger points – Driving traditional datacenter to private cloud.” Right now I’m heading off to the gym in an attempt to burn off the calories I’ve put on during the day, thanks to the excellent food. Stay tuned!

Evolving Cloud, Evolving Advisory Role | Gaining Altitude in the Cloud

Avid readers of this blog can tell by now that Everest Group is excited to participate in and contribute to the market discussion on the impact of the rapidly evolving cloud industry. We get tremendous satisfaction from both the online and in-person conversations our blog topics have generated over the last year, and we promise to keep contributing our informed viewpoints with the same enthusiasm.

Enterprise IT leaders we talk with every day find themselves at a crossroads. The cloud revolution is nearing an inflection point, promising to radically transform the way IT services are delivered. At the same time, there is an equally strong “echo chamber” effect in which promises and benefits are refracted through various service provider prisms, creating a significant challenge in separating what’s possible from unhelpful hyperbole. Additionally, the focus in the current market tends to be on the ever-evolving technology upgrades and releases of various cloud components, which leaves most CIOs in the dark when it comes time to try and sell the economic benefits of cloud technology to their key internal stakeholders.

IT organizations have had enough theory and are ready to start working in more practical terms:

  • How do we transform the provision of IT services to our business to meet its needs more effectively?
  • How do we build a strategy to get us there?
  • What does the financial case look like to achieve our desired outcomes?

We’re excited to share the vision of our Next Generation IT Practice with you, because we believe it to be the natural evolution of our current practice of helping Global 1000 firms drive greater operational efficiency. Our expertise allows us to help transform IT organizations, strengthening both their long-term strategic and economic positions by leveraging the next generation of technologies.

Our vision for this new practice is simple: provide a bridge between strategic direction and technical execution for IT transformation without bias towards the desired end state. We believe this is where the current advisory market falls short and Everest Group can add the most value.

Our Next Generation IT team is able to:

  • Build on existing experience helping large IT clients develop strategies
  • Leverage our breadth of research on the service providers’ strengths and weaknesses
  • Adopt a time-tested methodology to include next-generation technologies
  • Utilize our business case modeling skills to construct a versatile tool for helping assess transformation economics in a way that is unique in the marketplace

We cannot wait to share more details in the coming weeks about how our team at Everest Group has helped clients develop roadmaps toward transformation, so that we can continue to contribute thought leadership in this space.


Learn details about how our Cloud Transformation and Next Generation IT offerings can help your organization achieve the strategic value it’s seeking.

Procure-to-Pay: Measuring Outcome Beyond Efficiency Gains | Sherpas in Blue Shirts

More and more companies are recognizing the value of end-to-end business process management as it breaks down functional and organizational silos to enable a more holistic approach to enterprise performance management.

Of the common sets of end-to-end processes – which include Source-to-Contract (S2C), Procure-to-Pay (P2P), Order-to-Cash (O2C), Record-to-Report (R2R), and Hire-to-Retire (H2R) – P2P is most often identified as the priority for optimization. There are two key drivers of this trend. First, compared to other end-to-end processes, P2P activities are typically more common across the enterprise, making them easier to standardize. Second, the business case for P2P is frequently the most compelling. Through process standardization, workflow automation, system integration, and rigorous compliance enforcement, companies have been able to achieve rapid and significant spend and operating cost savings while simultaneously gaining the ability to better manage risk.

A case in point: a global software and products company achieved an initial operating cost reduction of 35 percent. It subsequently realized spend savings of US$700 million (~9 percent on a spend base of US$8 billion) and captured more than US$10M in Early Payment Discounts (EPD). The savings and benefits accrued generated a break-even on the business case in less than six months.
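
For readers who want to see the arithmetic behind those figures, here is a minimal Python sketch using only the numbers quoted above; the one-time transformation investment is deliberately left as a parameter because the case does not disclose it.

```python
# Quick arithmetic check on the P2P case quoted above; figures come from the case.

spend_base = 8_000_000_000            # US$8 billion addressable spend
spend_savings = 700_000_000           # US$700 million in spend savings
early_payment_discounts = 10_000_000  # US$10M+ captured in early payment discounts

print(f"Spend savings rate: {spend_savings / spend_base:.1%}")        # ~8.8%, i.e. roughly 9 percent
print(f"Total benefit: US${(spend_savings + early_payment_discounts) / 1e6:,.0f}M")


def months_to_break_even(investment: float, monthly_savings: float) -> float:
    """Months until cumulative monthly savings exceed the up-front investment.

    The case reports break-even in under six months but does not disclose the
    one-time transformation investment, so it is left as a parameter here
    rather than guessed at.
    """
    return investment / monthly_savings
```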

Based on Everest Group’s experience, one of the most critical success factors of P2P transformation is the institutionalization of a common set of well-defined performance metrics across the entire organization, including both internal and third party delivery partners. The performance metrics should be closely linked to desired business outcomes, and applicable across segments and geographies. Moreover, both P2P efficiency and effectiveness should be easily quantified, measured, and benchmarked.

The table below presents a P2P metrics framework that starts with clearly defined business objectives that are measured by a small set of outcome-based metrics to reflect the overall efficiency and effectiveness of the P2P process. The diagnostic measures are designed to identify specific process breakdowns and improvement opportunities, and are tracked and reported at the operational level.

P2P Metrics Framework

 

We strongly recommend companies follow a structured approach to developing a holistic P2P performance management framework (an illustrative sketch follows the steps below):

  1. Define common metrics, and clearly delineate objectives, descriptions, and interdependencies with other performance measures
  2. Establish a standard methodology and systems to track and report performance; key components include:
    • Measurement scope, parameters, method, data source, and frequency
    • Benchmarking methodology and data source
    • Reporting dashboards, frequency, and forum
  3. Assign accountability for:
    • Measuring and tracking performance metrics
    • Benchmarking and reporting overall P2P performance
    • Identifying and prioritizing continuous improvement (CI) opportunities
    • Reviewing and approving CI projects
    • Implementing and monitoring CI initiatives
    • Calibrating performance metrics based on evolving business objectives
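
As a purely illustrative sketch of steps 1 and 2, one way to capture such a common metric definition in a structured form is shown below; the field names and the sample metric are assumptions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Illustrative only: one way to give a P2P metric a single, shared definition
# across internal teams and third-party delivery partners. The field names and
# the sample metric below are assumptions, not a standard.

@dataclass
class P2PMetric:
    name: str                   # common metric name used across the organization
    objective: str              # business objective the metric supports
    description: str            # precise definition of what is measured
    scope: str                  # segments/geographies the measurement covers
    data_source: str            # system of record the measurement is drawn from
    frequency: str              # how often it is measured and reported
    benchmark_source: str       # where the benchmark comparison comes from
    owner: str                  # role accountable for measuring and tracking
    related_metrics: list[str] = field(default_factory=list)  # interdependencies

# Hypothetical example of a diagnostic measure defined this way:
first_time_match_rate = P2PMetric(
    name="Invoice first-time match rate",
    objective="Reduce invoice processing cost and cycle time",
    description="Share of invoices matched to PO and receipt without manual touch",
    scope="All business units and geographies",
    data_source="ERP accounts payable module",
    frequency="Monthly",
    benchmark_source="External P2P benchmark study",
    owner="Global Process Owner, P2P",
    related_metrics=["Invoice cycle time", "Cost per invoice"],
)
```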

There’s no question that the old management adage “You can’t manage what you don’t measure” holds true in the case of end-to-end process management. Having a common set of appropriately designed performance metrics is both an enabler of and an indicator of successful P2P transformation.

Is End to End Really the End All? | Sherpas in Blue Shirts

It’s all the rage. Global organizations are starting to take a more user-centric view of process workflows and operations. As opposed to organizing their delivery capabilities around discrete functions like procurement, finance and accounting (F&A), and HR, the world’s leading firms are organizing around end-to-end (E2E) processes like Procure-to-Pay, Hire-to-Retire, and Record-to-Report. But is E2E simply a “Hail Mary” pass, a wishful attempt to find value beyond labor arbitrage? Or, as evidence suggests, are the benefits – e.g., better EBITDA, tighter compliance, and greater financial control – real and proven?

A comprehensive CFO survey IBM conducted last year clearly demonstrated that organizations that consistently outperform their peers in EBITDA do, in fact, organize and deliver their global services around principles consistent with an E2E approach. Additionally, these companies all have standardized finance processes, common data definitions and governance, a standard chart of accounts, and globally mandated, strictly enforced standards supporting these E2E processes.

Everest Group’s experience supports IBM’s survey results. And while it seems clear that every large, complex global organization should be chasing E2E in order to improve results and reduce risk, it’s important to note that doing so is neither easy nor without challenges.

To realize the benefits of E2E, Everest Group typically recommends developing a three- to five-year roadmap with a heavy focus on building the business case, defining the target operating model, and managing stakeholder expectations and change.

Roadmap

Yet even the best game plan will have to address key challenges, including:

  • Fragmentation – The core of many E2E processes, F&A, is often fragmented in large companies. Finance processes are commonly distributed not only by business unit, but also often by geography. A global rationalization of F&A to understand the base case (current state) is a critical first step.
  • Vision – It is essential to document and agree on a common target operating model definition for each E2E process which details:
    • activities, standards, and data definitions
    • a common set of E2E process metrics used to measure performance, provide transparency on delivery performance, and underpin dashboard reporting
    • a framework for controls, oversight, and balance sheet integrity
    • a compelling and thorough business case that clearly defines the current state, investments, and future benefits
  • Technology – Even with the best technology foundation, such as a single, global instance of an ERP system like SAP, a further “thin” layer of enabling technologies and tools may be needed to drive standardized processes.

No, E2E is not a Hail Mary pass, but rather a sustained and balanced drive down the field for a game-winning touchdown. Success will require strong leadership, talented personnel, technology, a sound game plan, and a solid coaching staff to pull it all together, building momentum and confidence along the way.


Related Blog: Building a Robust Global Services Organization

The Evolution to Next Generation Operating Models in the Energy Industry | Sherpas in Blue Shirts

As pioneers of global sourcing, leading organizations in the energy industry such as ExxonMobil, Shell, Chevron, and BP have been able to extract significant value from offshoring while maintaining a manageable risk profile. However, the growing complexity of their global sourcing portfolios in terms of internal and external supply options (see Table 1 below), service delivery locations, governance models, and systems/tools has led to a set of challenging issues around design and ongoing optimization.

Table 1: Comparison of sourcing models for leading energy companies

From a design standpoint, energy companies are rethinking their operating models to address a wide range of issues and concerns including:

The next wave of scope expansion opportunities

  • How can I systematically identify new sourcing areas balancing value and risk?
  • How can I predict and manage short- and long-term demand?

Building an integrated supply model and global delivery footprint

  • How can I maintain an optimal, complementary mix of internal and outsourcing supply alternatives reflecting requirements for capacity, competencies, and cost?
  • How can I build a consistent framework to place incremental scope into the right supply model?
  • How can I create an integrated, flexible global delivery model that meets current and future demand while minimizing exposure to risks?

Energy companies also face challenges in building a holistic management approach to enable ongoing value capture and expansion, such as:

How to build a holistic, enterprise-level governance model

  • How can I appropriately assess performance across outsourcing service providers, captives, locations, and service lines?
  • What is the optimal governance approach to enable best practice sharing, risk mitigation, and economies of scale?

How to improve end-to-end process effectiveness and control

  • What are the value and constraints of standardizing and simplifying end-to-end processes across my enterprise?
  • What are the right effectiveness metrics and process efficiency measures?
  • How can I enhance process control to minimize operational risk?

Many of the top energy organizations are experimenting with next generation operating models to address these issues. For example, a global energy major that utilizes both outsourcing service providers and internal shared services recently embarked on a multi-year journey to integrate services design, delivery, and governance across business units, functions, and geographies. The objectives are to enhance end-to-end process effectiveness and control, reduce complexity and risk at the enterprise-level, and improve service performance and cost efficiency.

Our experience in the energy industry clearly indicates that unlocking the next wave of value requires more deliberate design of an integrated global delivery model, a consistent framework to better align supply with demand, and a holistic approach to governing and optimizing services. In addition, the impact of corporate culture cannot be overlooked. In global energy companies’ large, complex environments, full of competing interests and priorities, strong executive leadership and commitment are vital to success.

Microsoft Confuses Economies of Scale with Next Generation Data Centers | Gaining Altitude in the Cloud

In a recent article in InformationWeek, a Microsoft executive made the claim that the economies of scale of cloud data centers are so compelling that few companies, if any, would want to continue operating their own. He went on to offer Microsoft’s cloud data centers as the proof point, stating that they run on a next generation architecture. Instead of housing servers in hardened data centers, which are expensive to build, cool, and maintain, Microsoft uses a new hyper-scalable architecture that packs servers and storage into vapor-cooled containers similar to those you see being pulled by semi trucks on the interstate. Microsoft achieves resilience exceeding that of hardened data centers by duplicating assets across multiple locations. And when combined with the flexibility of virtualized cloud offerings, the net result is dramatically lower cost – as little as 25 percent of the cost to build and run their level 4 hardened cousins.

Our counterpoint: we have been conducting extensive research, and our analysis confirms that many next generation data centers are significantly less expensive than many cloud offerings. Further, they are mature enough to support enterprise-class computing today, and they are far more flexible than traditional legacy data center infrastructures. When enterprises combine these benefits, they can indeed achieve dramatically lower computing costs. It’s important to recognize, however, that these savings are not driven by economies of scale; rather, they arise from the advantages of a radically new architecture and technology. Everest Group’s work strongly suggests that while economies of scale do exist in next generation data centers and their related cloud offerings, most of the scale benefits are captured quite quickly.

A vital distinction: next generation data centers and private clouds are available to most mid-to-large enterprises at a cost comparable to that of mega-scale Microsoft. Enterprises seeking to capture these benefits should not be seduced by claims of massive gains from ever-increasing size, but should instead focus their attention on how to leverage the architecture and next generation technologies while adapting their applications and organizations to take advantage of these dramatic new opportunities.

Economic Forecast Calls for More Clouds | Gaining Altitude in the Cloud

Have you ever stopped to think why cloud computing is at the center of any IT-related discussion? In our conversations with clients, from the boardroom to the line manager, cloud is sure to enter into the discussion. Today, many of those conversations are around understanding, and to a lesser degree, implementation. But once the discussion crosses the threshold of understanding, the topic immediately goes to, “How can I get into the cloud?”

Everest Group recently held a webinar on the economics of cloud computing. There were two objectives: 1) help clarify just how disruptive, in a good way, cloud computing is and can be; and 2) demonstrate the economic benefits that exist in the cloud economy, and show that organizations are already striving for this competitive advantage today.

The Hole in the Water That You Throw Money Into

One of the key economic drivers hampering today’s data center environment is the relatively low utilization rate across its resources. Think about it like this: you’ve probably heard the old adage that owning a boat is like having a hole in the water that you throw money into. That is because most boats are seldom used. (Trust me, I know; I used to own one.) The per-use cost of a $25,000 (and quickly depreciating) boat that you actually use three or four times a year is quite high, and the reality is you could have rented a boat for a fraction of the cost. The same thing is happening in your data center. If your utilization is 20 percent, or even 30 percent, you have essentially wasted 70-80 percent of your spend. That is an expensive data center.

Workload Utilizations
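
To put that waste in concrete terms, here is a back-of-the-envelope sketch; only the 20-30 percent utilization range comes from the discussion above, and the annual run cost is a hypothetical placeholder.

```python
# Back-of-the-envelope view of what low utilization does to unit cost.
# The annual run cost is a hypothetical placeholder; the utilization figures
# reflect the 20-30 percent range discussed above, plus a healthier 60 percent.

annual_run_cost = 10_000_000  # hypothetical: US$10M per year to run the capacity

for utilization in (0.20, 0.30, 0.60):
    cost_per_useful_dollar = 1 / utilization          # spend needed per dollar of used capacity
    idle_spend = annual_run_cost * (1 - utilization)  # spend carrying unused capacity
    print(
        f"At {utilization:.0%} utilization, every dollar of useful capacity "
        f"costs ${cost_per_useful_dollar:.2f}, and ${idle_spend:,.0f} of the "
        f"annual spend sits idle."
    )
```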

Cloud computing is like that little boat rental shop tucked away in a nice cove on your favorite lake. What if you could get rid of excess capacity, better manage resource peaks and valleys, and rent public capacity when you need it, and not pay for it when you don’t?

What if we leverage public cloud flexibility?

The Economics

As you can see in the graphic below, the economics related to cloud are dramatic, and the key lies in leveraging the public cloud to pay only for what you use, eliminating the issue of excess capacity.

Public cloud options unlock extraordinary enterprise economics

There are a variety of point examples in which this is being done today, with the above economics reaped. For instance, Ticketmaster leverages the public cloud for large events, loading an environment to the cloud specifically sized for each given event. The event may only last several hours or days, and once it is complete, Ticketmaster takes down the environment and loads the data into its dedicated systems.

There are also enterprises and suppliers working to enable peak bursting more seamlessly. For example, eBay recently showed how it is working with Rackspace and Microsoft Azure to enable hybrid cloud bursting, allowing it to reduce its steady state environment (think hole in the water) from 1,900 to 800 servers, saving $1.1 million per month.

Hybrid economics example: eBay
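
Working backwards from the figures reported above, a rough sketch of the implied unit economics looks like this; the per-server cost is an inference from those numbers, not eBay’s actual cost.

```python
# Rough arithmetic implied by the eBay example above. This backs out an implied
# monthly cost per steady-state server from the reported figures; it is an
# inference for illustration, not eBay's actual unit cost.

servers_before = 1_900
servers_after = 800
monthly_savings = 1_100_000  # US$1.1 million per month, as reported

servers_retired = servers_before - servers_after
implied_cost_per_server = monthly_savings / servers_retired

print(f"Servers moved out of steady state: {servers_retired}")
print(f"Implied monthly cost per steady-state server: ${implied_cost_per_server:,.0f}")
print(f"Implied annual savings: ${monthly_savings * 12:,.0f}")
```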

The Steps to Getting Started

Dedicate yourself to getting rid of your boat (or should I say boat anchor?). Begin a portfolio assessment. Understand what you have and what is driving utilization. Consolidate applications, offload non-critical usage to the valleys, and look for ways to leverage the public/private cloud. When I unloaded my boat, I freed up capital for the more important things in life, without sacrificing my enjoyment. Doing the same in your data center will allow you to take on strategic initiatives that will make you even more competitive.

Growing Renewals in Asia: Time to Plan Ahead? | Sherpas in Blue Shirts

As Indian service providers announce their results for the year, one cannot help but be amazed by their spectacular growth. For fiscal 2011, TCS’s annual earnings grew by 29 percent year over year (YoY), and Wipro’s IT services revenue grew by 19 percent YoY. While North America and Western Europe have traditionally fueled growth for the Indian heritage service providers, the focus is increasingly turning to Asia to capitalize on the growing economic shift.

In this context, the next several years are set to be quite interesting for the Asian markets, as we estimate that more than 500 first generation ITO and BPO contracts, worth US$25-30 billion, are up for renewal in Asia (Middle East, India and South East Asia) through 2014.

Outsourcing Contract Renewals

As we ran an analysis of potential renewals through our deal databases, some interesting trends emerged:

  • India, Malaysia, Singapore, Saudi Arabia, and Israel, in that order, have the largest renewal opportunities; and India is the largest market by far, possessing 50+ percent of the potential renewals by value.
  • In India, a large number of early stage telecommunications deals are coming up for renewal; in the rest of the region, financial services will likely dominate renewal activity.
  • IT contracts will likely contribute nearly 80 percent of these renewals, with infrastructure being the most significant component.
  • Clearly, large buyer enterprises (Forbes 1000 firms or private firms with revenues of over US$5 billion) will dominate these deals, except in the Middle East, where there is a greater presence of small-to-medium sized contracts.

What makes renewals unique in the Asian market?

First, over the last decade, Asia has undergone a much more rapid change in service provider capabilities than developed markets. Today, most global IT-BPO providers have sizeable Asian market practices, and have brought global offerings to service regional buyers; a decade ago, it was just a select few.

Second, buyer maturity in the region has increased significantly. Asian buyers today have had multiple deal experiences with providers, and consequently understand sourcing and governance in far greater depth than in the early 2000s. Today, transformational deals are not unheard of, and sophisticated service management frameworks are the norm, not the exception.

Third, consider changes in the global context that also affect the environment in Asia: the growing adoption of virtualization; the emergence of the Cloud; enterprise-scope BPO services; and consolidations in the provider landscape.

Together, all these factors make the case for a careful look at contractual parameters as end-of-engagement terms approach. Typical end-of-term alternatives include:

  1. Renew: Re-sign the existing contract with minimal changes.
  2. Renegotiate: Modify a limited number of elements of the contract.
  3. Restructure: Rethink the structure of key contract provisions and key business terms.
  4. Recompete: Terminate the existing contract and enter into a fresh competitive bid process.
  5. Repatriate: Terminate the current contract and bring previously outsourced services back in-house.

As Asian buyers consider these options, they need to carefully evaluate market capabilities (given significant market changes), and take a closer look at deal robustness to ensure that market best-in-class service management/contractual frameworks are incorporated. This implies that restructuring and recompeting are very real options in the Asian context.

Everest Group’s experience indicates that end-of-term evaluations take nearly six months in typical global deals. Given added complexities in regional deals, buyers would be wise to begin their evaluations up to a year in advance.

Historically, it has not been uncommon to see buyers continue with their incumbent service providers. While some fail to leverage the market effectively, others get bogged down with the challenges of repatriation or the risks of changing service providers (such as business continuity risks, contractual lacunae for transition-out, or incumbent provider hold-up). However, heightened competition, a changing service provider landscape, newer delivery models (e.g., cloud), and increasing maturity in the Asian and Middle Eastern markets could well change this going forward.

Size Does Matter – The Real Pecking Order of Indian IT Service Providers | Sherpas in Blue Shirts

Earlier today, Cognizant reported its financial results for the first quarter of 2011, bringing to an end the earnings season for the Big-5 Indian IT providers – affectionately referred to as WITCH (Wipro, Infosys, TCS, Cognizant, and HCL). Cognizant’s results were yet again distinctive: US$1.37 billion in revenues in 1Q11, representing QoQ growth of 4.6 percent and YoY growth of 42.9 percent. The latest financial results reaffirmed Cognizant’s growth leadership compared to its peers and are a testament to its superb client engagement model.

Q1 2011 financial highlights for WITCH:

WITCH Q1-2011 Financial Highlights

In a recent blog post, my colleague Vikash Jain commented on the changes in the IT services leaderboard, and especially the questions and speculation on the relative positions of Wipro and Cognizant in the Indian IT services landscape. Cognizant’s 1Q11 revenues are now just US$29 million below Wipro’s IT services revenues, and based on current momentum, Cognizant could overtake Wipro as early as 2Q11, making it the third largest Indian IT major in quarterly revenue terms. The guidance provided by the two companies for the next quarter – Cognizant (US$1.45 billion) and Wipro (US$1.39-1.42 billion) – provides further credence to the projected timelines.

How important is this upcoming change in the relatively static rank order of the Indian IT industry (the last change happened in January 2009, following the Satyam scandal)? Not very, in our opinion. As and when it happens, the event will indeed create news headlines and the occasional blog entry, but the change in rankings does not imply a meaningful change to the overall IT landscape. Further, other than providing Wipro with even more conviction to make the changes required to recapture a faster growth trajectory, the new rank order does not suggest any changes in the delivery capabilities of either organization.

As we advise our clients on selecting service providers, we believe that it is more important to understand the service provider’s depth of capability and experiences in the buyer organization’s specific vertical industry. While total revenues and financial stability are important enterprise-level criteria, performance in the vertical industry bears greater relevance and significance as buyers evaluate service providers. In our 1Q11 Market Vista report, we examine the CY 2010 revenues of the WITCH group to determine the pecking order in three of the largest verticals from a global sourcing adoption perspective – banking, financial services and insurance (BFSI); healthcare and life sciences; and energy and utilities (E&U).

We recognize there are differences in the way these providers segment results, so for simplicity we are relying on reported segmentation (which we believe does not meaningfully alter the results). The exhibit below summarizes the results of our assessment:

Industry leaderboard for WITCH:

WITCH Industry Leaders

Our five key takeaways:

  1. The ranking of WITCH based on enterprise revenues has limited correlation to industry vertical rankings. The leader in each of the three examined industries is different.
  2. In BFSI, while TCS is the clear leader, Cognizant is rapidly closing in on Infosys for the second spot. (Note: Wipro is already #4 in this vertical).
  3. In Healthcare and Life Sciences, Cognizant emerges as the clear leader with 2010 revenues greater than those of Wipro, TCS, and HCL combined. (Note: Infosys does not report segment revenues for Healthcare).
  4. In E&U, Wipro leads the pack and is expected to widen the gap through its acquisition of SAIC’s oil and gas business. TCS achieved the highest growth in 2010 to move to third position ahead of HCL (TCS was #4 in 2009) and narrow the gap with Infosys (Note: Cognizant does not report E&U revenues).
  5. Finally, the above ranks are going to change quickly. Based on the results announced for the first calendar quarter of 2011 alone, we anticipate a change in the second position for each of the three examined verticals:
    • Cognizant’s Q1 BFSI revenue of US$570 million is nearly identical to that of Infosys’ US$572 million
    • TCS’ Q1 Healthcare and Life Sciences revenue of US$119 million is higher than Wipro’s US$111 million (which also includes services)
    • TCS reported Q1 E&U revenues of US$103 million, versus Infosys’ US$93 million

While it will be interesting to see the impact on a full year basis, the above changes in momentum already indicate further changes in the industry leaderboard before the end of the year.

On an unrelated note, by the time we revisit the Wipro versus Cognizant debate when the Indian majors announce their Q2 results starting mid-July, WITCH will assume an additional meaning – the last installment of the Harry Potter movies is due for release on July 15, 2011!

Where Are Enterprises in the Public Cloud? | Gaining Altitude in the Cloud

Amazon Web Services (AWS) recently announced several additional services, including dedicated instances of Elastic Compute Cloud (EC2) in three flavors: on-demand, one-year reserved, and three-year reserved. This should come as no surprise to those who have been following Amazon, as the company has been continually launching services such as CloudWatch, Virtual Private Cloud (VPC), and AWS Premium Support in an attempt to position itself as an enterprise cloud provider.
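
The choice among those three flavors ultimately comes down to expected utilization over the commitment term. A generic break-even sketch illustrates the trade-off; the rates and fees below are hypothetical placeholders, not Amazon’s actual pricing.

```python
# Generic break-even comparison between on-demand and reserved capacity.
# All rates and fees are hypothetical placeholders, NOT Amazon's actual pricing;
# the point is only the shape of the trade-off.

HOURS_PER_YEAR = 8_760

on_demand_rate = 0.50        # hypothetical $/hour, no commitment
reserved_upfront = 1_200.00  # hypothetical one-time fee for a one-year term
reserved_rate = 0.20         # hypothetical discounted $/hour under reservation

def annual_cost(hours_used: float) -> tuple[float, float]:
    """Return (on-demand cost, reserved cost) for a given number of hours used."""
    return on_demand_rate * hours_used, reserved_upfront + reserved_rate * hours_used

# Reserved wins once the hourly discount has paid back the up-front fee.
break_even_hours = reserved_upfront / (on_demand_rate - reserved_rate)
print(f"Break-even at {break_even_hours:,.0f} hours "
      f"(~{break_even_hours / HOURS_PER_YEAR:.0%} of the year)")

for utilization in (0.25, 0.50, 1.00):
    od, rs = annual_cost(HOURS_PER_YEAR * utilization)
    print(f"{utilization:.0%} utilization: on-demand ${od:,.0f} vs reserved ${rs:,.0f}")
```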

But will these latest offerings capture the attention of the enterprise? To date, much of the workload transitioned to the public cloud has been project-based (e.g., test and development) and focused on peak-demand computing. Is there a magic bullet that will motivate enterprises to move their production environments to the public cloud?

In comparison with “traditional” outsourcing, public cloud offerings – whether from Amazon or any other provider – present a variety of real or perceived hurdles that must be overcome before we see enterprises adopt them for production-focused work:

Security: the ability to ensure, to the client’s satisfaction, data protection, data transfer security, and access control in a multi-tenant environment. While the cloud offers many advantages, and offerings continue to evolve to create a more secure computing environment, the perception that multi-tenancy equates to lack of security remains.

Performance and Availability: typical performance SLAs for the computing environment and all related memory and storage in traditional outsourcing relationships are 99.5-99.9 percent availability, and high-availability environments require 99.99 percent or higher. These availability levels are measured monthly, with contractually agreed-upon rebates or discounts kicking in if the availability SLA isn’t met. While some public cloud providers will meet the lower end of these SLAs, some use the previous 12 months of service as the measurement timeline, others define an SLA event as any outage in excess of 30 minutes, and still others use different measurements (see the downtime sketch after this list of hurdles). This disparity leads to confusion and discomfort among most enterprises, and to the perception that the cloud is not as robust as outsourcing services.

Compliance and Certifications: in industries that handle highly personal and sensitive end-user customer information – such as Social Security numbers, bank account details, or credit card information – or that require compliance with regulations such as HIPAA or FISMA, providers’ certifications are vital. As most public cloud providers have only basic certification and compliance ratings, enterprises must tread very carefully and be extremely selective.

Support: a cloud model with little or no support only goes so far. Enterprises must be able to get assistance when they need it. Some public cloud providers – such as Amazon and Terremark – do offer 24x7 support for an additional fee, but others still need to figure support into their offering equation.
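
To see why the measurement window matters so much, the sketch below works out the downtime each availability level actually permits under a monthly versus a trailing 12-month window.

```python
# Allowed downtime implied by an availability SLA depends heavily on the
# measurement window, a simple illustration of the disparity described above.

HOURS_PER_MONTH = 730   # approximate average month
HOURS_PER_YEAR = 8_760

def allowed_downtime_hours(availability: float, window_hours: float) -> float:
    """Hours of outage permitted over the window before the SLA is breached."""
    return (1 - availability) * window_hours

for availability in (0.995, 0.999, 0.9999):
    monthly = allowed_downtime_hours(availability, HOURS_PER_MONTH)
    yearly = allowed_downtime_hours(availability, HOURS_PER_YEAR)
    print(f"{availability:.2%} SLA: about {monthly * 60:.0f} minutes per month, "
          f"or {yearly:.1f} hours if measured over a trailing 12 months")

# A provider that also ignores any outage shorter than 30 minutes can absorb
# several short incidents per month without ever registering an SLA event.
```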

Addressing and overcoming these hurdles will encourage enterprises to review their workloads and evaluate what makes sense to move to the cloud, and what should remain in private (or even legacy) environments.

However, enterprises’ workloads are also price sensitive, and we believe that, at least today, the public cloud is not an economical alternative for many production environments. Thus enterprise movement to the cloud could evolve in one of several ways. Will it take the form of a hybrid cloud, in which the bulk of the production environment is placed in a private cloud and peak demand bursts to the public cloud? Or will increased competition, improved asset utilization, and workload management continue to drive down pricing, as has happened with Amazon in each of the past two years? If so, will enterprises bypass the hybrid path and move straight to the public cloud as the economics prove attractive?

The ability to meet client demands, the creation of a comfort level with the cloud, and the economics all play a role in how and when enterprises migrate to the cloud. The market is again at an inflection point, and it promises to be an exciting time.
