
Economic Forecast Calls for More Clouds | Gaining Altitude in the Cloud

Have you ever stopped to think why cloud computing is at the center of any IT-related discussion? In our conversations with clients, from the boardroom to the line manager, cloud is sure to enter into the discussion. Today, many of those conversations are around understanding, and to a lesser degree, implementation. But once the discussion crosses the threshold of understanding, the topic immediately goes to, “How can I get into the cloud?”

Everest Group recently held a webinar on the economics of cloud computing. There were two objectives: 1) Help clarify just how disruptive, in a good way, cloud computing is and can be; and 2) Demonstrate the economic benefits that exist in the cloud economy, and show that organizations are already striving for this competitive advantage today.

The Hole in the Water That You Throw Money Into

One of the key economic drivers that hampers today’s data center environment is the relatively low utilization rate across its resources. Think about it like this: You’ve probably heard the old adage that owning a boat is like having a hole in the water that you throw money into. That is because the majority of boats are seldom used. (Trust me, I know, I used to own one.) The per-use cost of a $25,000 (and quickly depreciating) boat that you actually use three or four times a year is quite high, and the reality is you could have rented a boat for a fraction of the cost. The same thing is happening in your data center. If your utilization is 20 percent, or even 30 percent, you have essentially wasted 70-80 percent of your spend. That is an expensive data center.
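To put rough numbers on that expensive data center, here is a back-of-the-envelope sketch (the spend and utilization figures are illustrative assumptions, not client data):

```python
# Back-of-the-envelope: what low utilization does to effective unit cost.
# Illustrative numbers only -- substitute your own data center figures.

annual_spend = 10_000_000  # total annual data center spend, USD (assumed)
utilization = 0.25         # average resource utilization (20-30% is typical)

effective_cost_multiplier = 1 / utilization
wasted_share = 1 - utilization

print(f"Each utilized unit of capacity costs {effective_cost_multiplier:.1f}x its sticker price")
print(f"Wasted spend: ${annual_spend * wasted_share:,.0f} ({wasted_share:.0%} of budget)")
# At 25% utilization, $7.5M of a $10M budget pays for idle capacity.
```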

[Figure: Workload utilizations]

Cloud computing is like that little boat rental shop tucked away in a nice cove on your favorite lake. What if you could get rid of excess capacity, better manage resource peaks and valleys, and rent public capacity when you need it, and not pay for it when you don’t?

[Figure: What if we leverage public cloud flexibility?]

The Economics

As you can see in the graphic below, the economics related to cloud are dramatic, and the key lies in leveraging the public cloud to pay only for what you use, eliminating the issue of excess capacity.

[Figure: Public cloud options unlock extraordinary enterprise economics]

There are a variety of examples in which this is done today, reaping the economics described above. For instance, Ticketmaster leverages the public cloud for large events, loading an environment to the cloud, specifically sized for each given event. The specific event may only last several hours or days, and once complete, Ticketmaster takes down the environment and loads the data into its dedicated systems.

There are also enterprises and suppliers working to enable peak bursting more seamlessly. For example, eBay recently described how it is working with Rackspace and Microsoft Azure to enable hybrid cloud bursting, allowing eBay to reduce its steady-state environment (think hole in the water) from 1,900 to 800 servers, saving it $1.1 million per month.
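The implied per-server economics of that example are easy to sanity-check (a rough sketch; the per-server cost below is derived from the quoted figures, not disclosed by eBay):

```python
# Sanity check of the eBay hybrid-bursting figures quoted above.

steady_state_before = 1900   # servers before hybrid bursting
steady_state_after = 800     # servers after
monthly_savings = 1_100_000  # USD per month, as quoted

servers_eliminated = steady_state_before - steady_state_after
implied_cost_per_server = monthly_savings / servers_eliminated

print(f"Servers eliminated: {servers_eliminated}")
print(f"Implied fully loaded cost: ~${implied_cost_per_server:,.0f} per server per month")
# ~1,100 servers removed at roughly $1,000/server/month fully loaded --
# a plausible all-in figure once power, space, and support are included.
```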

[Figure: Hybrid economics example – eBay]

The Steps to Getting Started

Dedicate yourself to getting rid of your boat (or should I say boat anchor?). Begin a portfolio assessment. Understand what you have, and what is driving utilization. Consolidate applications, offload non-critical usage to the valleys, and look for ways to leverage the public/private cloud. When I unloaded my boat, I freed up capital for the more important things in life, without sacrificing my enjoyment. Doing so in your data center will allow you to take on strategic initiatives that will make you even more competitive.

If Governments Can, Why Can’t You? | Gaining Altitude in the Cloud

All around the world, governments are increasingly stepping up to the Cloud.

Early last week, the U.S. government’s General Services Administration (GSA) issued a solicitation for cloud-based email, office automation, and records services, in a contract estimated to be worth up to US$2.5 billion over five years. The GSA expects savings of nearly 45 percent from this move.

By the end of this year, the government CIO’s commitment to a “Cloud First” policy is expected to result in closures of up to 137 data centers across the United States. While this is only about 6 percent of the government’s 2,000+ data centers, it’s a great start given the extent of change required.

Across the Atlantic, there are plans to consolidate the U.K. government’s 8,000 data centers into a dozen centers on an internal G-Cloud. The government also recently released an alpha version of a consolidated government portal (alpha.gov.uk), hosted on Amazon’s cloud platform, that aims to centralize access to all government services.

In China, there are plans to build a cloud computing center the “size of a city” within Hebei province, primarily to serve government departments.

These moves by governments around the globe represent, for perhaps the first time in recent memory, path-breaking leadership in technology transformation. Change is never an easy subject, especially within the public sphere. Yet the extent of potential benefits from a move to the Cloud is making governments take notice and make the plunge.

Private enterprises stand to learn a variety of lessons from these public sector Cloud moves:

a. They set an example for large private enterprises

The Cloud is already at the forefront of CIO priorities for 2011. However, many enterprises hesitate to take large technological plunges given the extent of change required from legacy environments. Questions often emerge as to whether Cloud strategies are better suited for small-to-medium environments and for new, next-generation initiatives. Enterprises also question how the change can be managed across so many different business units with disparate platforms.

The scale of attempted governmental transformation should put such questions to rest. If an entity with over a thousand departments and an US$80 billion IT budget (a.k.a. the US government) can make the shift, why can’t you?

b. They indicate greater tolerance towards risk and security challenges

As recent discussions on this blog indicate, security and compliance concerns constitute two of the biggest impediments to transition to the Cloud. Yet, with risk-sensitive departments such as Defense, Homeland Security and the NSA making the move, it’s clear the public sector’s concerns on these risks have been largely alleviated.

As the head of the U.S. Cyber Command, General Keith Alexander, recently testified in a House sub-committee hearing: “…moving the programs and the data that users need away from the thousands of desktops we now use —each of which has to be individually secured… to a centralized configuration that will give us wider availability of applications and data combined with tighter control over accesses and vulnerabilities and more timely mitigation of the latter…Indeed, no system that human beings use can be made immune to abuse — but we are convinced the controls and tools that will be built into the cloud will ensure that people cannot see any data beyond what they need for their jobs and will be swiftly identified if they make unauthorized attempts…”

c. They herald greater maturity in the supplier ecosystem

Google and Microsoft have sparred publicly in recent months over each other’s (alleged) lack of FISMA certification on Cloud services offered to U.S. government agencies. As the war for public sector Cloud prospects heats up, so will functionality and service provider maturity. For example, Google Apps for Government now includes specialized security functionality – data location and segregation of government data – necessary to ensure greater security and compliance.

In addition, as the Federal Risk and Authorization Management Program (FedRAMP) mechanisms are established later this year to enable government-wide certifications and authorization, more Cloud vendors will step up to meet the bar.

d. They indicate need for concerted CIO-level leadership

Since 2009, when Cloud computing was identified as a Federal IT priority, the U.S. government’s CIO has unveiled a wide range of initiatives: establishing standard definitions;  defining Cloud value propositions; launching Cloud store fronts; establishing the “Cloud First” strategy as a keystone of IT strategy; setting clear decision frameworks and timelines; and establishing new Cloud standards. Clearly, Federal Cloud initiatives are leading change across a diverse government organization, much of which has been driven by the CIO’s determined efforts to push through change, despite naysayers and challenges.

Governments’ migration to the Cloud represents a monumental effort in technology change in a large and complex organization. As private enterprises navigate to the Cloud, they have much to learn from the public sector’s lead.

Cloud’s Impact on the CIO | Gaining Altitude in the Cloud

Disclosure: I’ve never been a CIO. However, I’ve worked with and advised them on many engagements, so I have an understanding of how they think and the challenges they face in today’s business environment.

Much has already been written about the technology revolution emerging from cloud-delivered services, so I wanted to turn the tables slightly and ponder how these technologies influence the role and skills of IT management and the office of the CIO.

Macro Trends CIOs Face

1: Strategy Replacing Operations
CIOs are facing tremendous pressure to think strategically about how IT can better align itself with business needs. An operations-focused “we’re just here keeping the lights on” approach to IT management is, at best, the minimum expectation of the job. As organizations demand more technological enablement in all parts of the business model, CIOs must fully integrate into strategy setting and change enablement within their company.

2: Era of Big Data
A study conducted by the University of California, San Diego, estimated that the volume of enterprise data produced in 2008 topped out at 9.57 zettabytes (1 zettabyte = 1 million petabytes), which translates to an average of 63.4 terabytes per company per year, or 12 gigabytes per worker per day.
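Those averages are mutually consistent, as a quick unit check shows (decimal units assumed; the company and worker counts below are implied by the study’s figures, not quoted from it):

```python
# Quick unit sanity check of the UCSD figures quoted above.

ZB_TO_TB = 1_000_000_000          # 1 zettabyte = 1 million petabytes = 1e9 TB
total_tb = 9.57 * ZB_TO_TB        # enterprise data produced in 2008, in TB

tb_per_company_year = 63.4
tb_per_worker_year = 12 * 365 / 1000   # 12 GB/day -> ~4.4 TB/year

implied_companies = total_tb / tb_per_company_year
implied_workers_per_company = tb_per_company_year / tb_per_worker_year

print(f"Implied companies worldwide: ~{implied_companies / 1e6:.0f} million")
print(f"Implied workers per company: ~{implied_workers_per_company:.0f}")
# Roughly 151 million companies averaging ~14 workers each -- the averages
# are dominated by small businesses, not the enterprises making headlines.
```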

Although it’s a gross oversimplification, CIOs are continually asked to do more with less. They need to support their company’s desire to take advantage of big data to make better business decisions and more data-rich transactions, yet simultaneously are burdened with the liabilities of processing capacity limitations, storage and retrieval requirements, and data protection, all while capital budgets are under tighter scrutiny.

3: Speed and Agility
Almost every new technology comes attached with a promise of saving time. But to the adopter, the outcome isn’t more free time; rather, it’s a shortened expectation of the time it takes to complete a workload. In the enterprise, this is manifested in the demand for increasingly greater organizational agility and nimbleness. And as a CIO’s performance is measured by the rate at which he or she pushes initiatives that enable faster achievement of the organization’s goals, one perceived as creating more bottlenecks than accelerators will not last long in the role.

Cloud to the Rescue

CIOs are challenged with consistently meeting (and hopefully exceeding) their stakeholders’ needs, despite the mounting pressure caused by these macro trends. Thus, even in cloud technology’s relative infancy, CIOs need to at least consider evaluating cloud solutions because of their ability to address common pain points.

Cloud technologies have the potential to help CIOs focus more on the business and less on the underlying infrastructure. While traditional ITO promised this, anecdotal and empirical evidence suggests that the reality was more often than not “your mess for less.” The subtext here is CIOs are spending too much time managing their outsourcing providers to solve technological, rather than business, problems. But fundamental to cloud architecture design is delivery of a service to the end user, which ultimately will disaggregate the supporting infrastructure from the service, and enable the CIO to focus more on solving business problems.

Cloud technologies can also address the do-more-with-less issue. IT departments are starting to realize that the traditional one-application-per-server approach to running enterprise infrastructure is unsustainable in a big data world. CIOs can realize benefits from cloud (and virtualization) technologies via two major drivers:

1) Increasing utilization per server – meaning either requiring fewer servers to do the same data processing volume, or squeezing more data processing out of the same volume of servers. Either way, cloud delivers more for less.

2) Thinking strategically about load balancing – an enterprise’s requirement on its IT department is neither predictable nor equal in terms of business priorities. But cloud technology enables evaluation of the trade-offs presented by flexible, on-demand data management.

Cloud technology is already having a seismic effect on expectations around business agility. For example, when the time required to procure a server goes from weeks to minutes, there is a quantifiable shift in productivity gains. And as cloud technology evolves, these gains will be further amplified.

How does the CIO’s role change?

So what does this all mean for next generation CIOs? In the short term, they will have to become informed, poke at the promises coming from suppliers, and manage the cloud hype curve on behalf of their organization.

Beyond the short term, they will need to address and manage – via a robust and sensitive change management program – the impact of the cloud’s technological transformation on a much broader set of stakeholders, including the internal IT team.

Another subtle but significant shift will be from the role of service manager to one akin to an air traffic controller for workloads. For example, given a workload that requires 240 CPU hours and a cloud that provides 10 virtual machines, a CIO can choose to turn on one virtual machine and leave the other nine to run other workloads, but the process will take 10 days. Or, the CIO can turn on 10 virtual machines to process the same workload in one day, but the organization will be out of capacity for that day. Managing that trade-off will be new to many CIOs, and a regular situation for all.
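The arithmetic behind that air-traffic-control trade-off is worth seeing end to end; here is a minimal sketch using the numbers from the example above:

```python
# The workload "air traffic control" trade-off: 240 CPU-hours of work
# spread across 1 to 10 virtual machines (figures from the example above).

WORKLOAD_CPU_HOURS = 240
TOTAL_VMS = 10

for vms_used in (1, 5, 10):
    elapsed_days = WORKLOAD_CPU_HOURS / vms_used / 24
    spare_vms = TOTAL_VMS - vms_used
    print(f"{vms_used:>2} VMs -> done in {elapsed_days:4.1f} days, "
          f"{spare_vms} VMs free for other workloads")

# 1 VM: 10 days with 9 VMs free; 10 VMs: 1 day with none free.
# Total cost in CPU-hours is identical -- the CIO is trading completion
# time against spare capacity, like slotting planes onto runways.
```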

What other concerns should CIOs have, and how should they prepare themselves?

For information on what CIOs want from the cloud, visit http://cloud.savvis.com/information-center and download the CIO LinkedIn Market Pulse Survey.

For a look at how roles are changing because of the cloud, see http://www.pcworld.com/businesscenter/article/227238/panel_the_cloud_requires_fresh_it_skills.html

For an introduction to the economics of cloud enterprise computing that CIOs should consider, register and attend the May 24 Everest Group webinar on the topic.

The Facts, not the Fluff and Puff: What Really Happened with Amazon’s Cloud Outage | Gaining Altitude in the Cloud

In the more than 1,000 articles written about Amazon’s April 21 cloud outage, I found myriad “lessons,” “reasons,” “perspectives,” and a “black eye,” a “wake up call” and a “dawn of cloud computing.”

What I struggled to find was a simple, but factual, explanation of what actually happened at Amazon. Thankfully, Amazon did eventually post a lengthy summary of the outage, which is an amazing read for IT geeks, a group to which the author of this blog proudly belongs, but may induce temporary insanity in a more normal individual. So this blog is my attempt to describe the event in simple, non-technical terms. But if I slip once or twice into geek-speak, please show some mercy.

The Basics of Amazon’s Cloud Computing Solution

When you use Amazon to get computing resources, you’d usually start with a virtual server (aka EC2 instance). You’d also want to get some disk space (aka storage), which the company provides through Amazon Elastic Block Store (EBS). Although EBS gives you what looks like storage, in reality it’s a mirrored cluster of two nodes … oops, sorry … two different physical pieces of storage containing a copy of the same data. EBS won’t let you work using only one piece of storage (aka one node) because if it goes down you’d lose all your data. Thus in Amazon’s architecture, a lonely node without its second half gets depressed and dedicates all its effort to finding its mate. (Not unlike some humans, I might add.)

Amazon partitions its storage hardware into geographic Regions (think of them as large datacenters) and, within Regions, into Availability Zones (think of them as small portions of a datacenter, separated from each other). This is done to avoid losing the whole Region if one Availability Zone is in trouble. All this complexity is managed by a set of software services collectively called the “EBS control plane” (think of it as a Dispatcher).

There is also a network that connects all of this, and yes, you guessed right, it is also mirrored. In other words, there are two networks… primary and secondary.
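For the geeks in the audience, the relationships described above can be captured in a toy model (a sketch for intuition only, in no way Amazon’s actual implementation; all class and method names here are invented):

```python
# Toy model of the EBS concepts described above -- illustrative only.

class EBSNode:
    """One physical copy of a volume's data."""
    def __init__(self, volume_id: str):
        self.volume_id = volume_id
        self.mirror = None  # its "second half"

    def is_healthy(self) -> bool:
        # A lonely node refuses to serve I/O: no mirror means no durability.
        return self.mirror is not None


class ControlPlane:
    """The 'Dispatcher': finds spare capacity to re-mirror lonely nodes."""
    def __init__(self, spare_capacity: int):
        self.spare_capacity = spare_capacity

    def remirror(self, node: EBSNode) -> bool:
        if self.spare_capacity == 0:
            return False  # no room to create a new second half
        self.spare_capacity -= 1
        node.mirror = EBSNode(node.volume_id)
        node.mirror.mirror = node
        return True


# A node that loses its mirror stops serving and asks the Dispatcher for help:
dispatcher = ControlPlane(spare_capacity=1)
node = EBSNode("vol-1")
print(node.is_healthy())          # False -- no mirror, so it won't serve I/O
print(dispatcher.remirror(node))  # True  -- the Dispatcher pairs it up
print(node.is_healthy())          # True
```

Keep this toy model in mind for the play-by-play: the April outage was, in essence, thousands of nodes calling for re-mirroring at once, against a Dispatcher and a capacity pool that couldn’t keep up.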

Now, the Play-by-Play

At 12:47 a.m. PDT on April 21, Amazon’s team started a significant, but largely non-eventful, upgrade to its primary network, and in doing so redirected all the traffic to another segment of the network. For some reason (not explained by Amazon), all user traffic was shifted to the secondary network. Now you may reply, “So what…that’s what it’s there for!” Well, not exactly. According to Amazon, the secondary network is a “lower capacity network, used as a back-up,” and hence it was overloaded instantly. This is an appropriate moment for you to ask “Huh?” but I’ll get back to this later.

From this point on, the EBS nodes (aka pieces of storage) lost connections to their mirrored counterparts and assumed their other halves were gone. Now remember that EBS will not let the nodes operate without their mates, so they started frantically looking for storage space to create a new copy, which Amazon eloquently dubbed the “re-mirroring storm.” This is likely when reddit.com, the New York Times and a bunch of others started noticing that something was wrong.

By 5:30 a.m. PDT, the poor EBS nodes started realizing the harsh reality of life, i.e., that their other halves were nowhere to be found. In a desperate plea for help, they started sending “WTF?” requests (relax, in IT terminology WTF means “Where is The File?”) to the Dispatcher (aka EBS control plane). So far, the problems had only affected one Availability Zone (remember, a Zone is a partitioned piece of a large datacenter). But the overworked Dispatcher started losing its cool and was about to black out. Realizing the Dispatcher’s meltdown would affect the whole Region, causing an outage with even greater reach, Amazon at 8:20 a.m. PDT made the tough, but highly needed, decision to disconnect the affected Availability Zone from the Dispatcher. That’s when reddit.com and others in that Availability Zone lost their websites for the next three days.

Unfortunately, although Amazon sacrificed the crippled Availability Zone to allow other Zones to operate, customers in other Zones started having problems too. Amazon described them as “elevated error rates,” which brings us to our second “Huh?” moment.

By 12:04 p.m. PDT, Amazon got the situation under control and completely localized the problem to one Availability Zone. The team started working on recovery, which proved to be quite challenging in a real production environment. In essence, these IT heroes were changing tires on a moving vehicle, replacing bulbs in a live lamp, filling a cavity in a chewing mouth… and my one word for them here is “Respect!”

At this point, as the IT folks started painstakingly finding mates for lonely EBS nodes, two problems arose. First, even when a node was reunited with its second half (or a new one was provided), it would not start operating without sending a happy message to the Dispatcher. Unfortunately, the Dispatcher couldn’t answer the message because it had been taken offline to avoid a Region-wide meltdown. Second, the team ran out of physical storage capacity for new “second halves.”  Our third “Huh?”

Finally, at 6:15 p.m. PDT on April 23, all but 2.2 percent of the most unfortunate nodes were up and running. The system was fully operational on April 24 at 3:00 p.m. PDT, but 0.07 percent of all nodes were lost irreparably. Interestingly, this loss led to the loss of 0.4 percent of all database instances in the Zone. Why the difference? Because when a relational database resides on more than one node, loss of one of them may lead to loss of data integrity for the whole instance. Think of it this way: if you lose every second page in a book, you can’t read the entire thing, right?
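The gap between those two percentages follows from simple probability: a database striped across several nodes is lost if any one of them is lost. Here is a quick sketch (the nodes-per-database figure is my assumption; Amazon didn’t publish it):

```python
# Why losing 0.07% of nodes can destroy ~0.4% of database instances:
# a database spread across k nodes is lost if ANY of its nodes is lost.

node_loss_rate = 0.0007  # 0.07% of nodes lost, per Amazon's summary

for nodes_per_db in (1, 3, 6, 10):
    # P(db lost) = 1 - P(every one of its nodes survived)
    db_loss_rate = 1 - (1 - node_loss_rate) ** nodes_per_db
    print(f"database on {nodes_per_db:>2} nodes -> loss rate {db_loss_rate:.2%}")

# At ~6 nodes per database the expected loss rate is ~0.42%, consistent
# with the 0.4% of database instances Amazon reported. The assumed node
# count is illustrative only -- Amazon did not publish the real figure.
```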

Now let’s take a quick look at our three “Huh?s”:

  1. Relying on SLAs alone (which in Amazon’s case is 99.95 percent uptime) can be a risky strategy for critical applications. But as with any technological component of a global services solution, you must understand the supplier’s policies.
  2. Cloud platforms are complex, and outages will happen. One thing we can all be certain of in IT: if it cannot happen, it will happen anyway.
  3. When you give away your architectural control, you, well, lose your architectural control. Unfortunately, Amazon did not have the needed storage capacity at the time it was required. But similar to my first point above, all technological components of a global services solution have upsides and downsides, and all buyer organizations must make their own determinations on what they can live with, and what they can’t.

An interesting, final point: after being down for almost three days, reddit.com stated, “We will continue to use Amazon’s other services as we have been. They have some work to do on the EBS product, and they are aware of that and working on it.” Now, folks, what do you think?

Where Are Enterprises in the Public Cloud? | Gaining Altitude in the Cloud

Amazon Web Services (AWS) recently announced several additional services including dedicated instances of Elastic Compute Cloud (EC2) in three flavors: on demand, one year reserved, and three year reserved. This should come as no surprise to those who have been following Amazon, as the company has been continually launching services such as CloudWatch, Virtual Private Cloud (VPC), and AWS Premium Support in an attempt to position itself as an enterprise cloud provider.

But will these latest offerings capture the attention of the enterprise? To date, much of the workload transitioned to the public cloud has been project-based (e.g., test and development), and peak demand computing-focused. Is there a magic bullet that will motivate enterprises to move their production environments to the public cloud?

In comparison with “traditional” outsourcing, public cloud offerings – whether from Amazon or any other provider – present a variety of real or perceived hurdles that must be overcome before we see enterprises adopt them for production-focused work:

Security: the ability to ensure, to the client’s satisfaction, data protection, data transfer security, and access control in a multi-tenant environment. While the cloud offers many advantages, and offerings continue to evolve to create a more secure computing environment, the perception that multi-tenancy equates to lack of security remains.

Performance and Availability: typical performance SLAs for the computing environment and all related memory and storage in traditional outsourcing relationships are 99.5–99.9 percent availability, and high availability environments require 99.99 percent or higher. These availability ratings are measured monthly, with contractually agreed upon rebates or discounts kicking in if the availability SLA isn’t met. While some public cloud providers will meet the lower end of these SLAs, some use 12 months of previous service as the measurement timeline, while others define an SLA event as any outage in excess of 30 minutes, and still others use different measurements. This disparity leads to confusion and discomfort among most enterprises, and the perception that the cloud is not as robust as outsourcing services (see the downtime sketch following these points).

Compliance and Certifications: in industries that utilize highly personal and sensitive end-user customer information – such as social security number, bank account details, or credit card information – or those that require compliance in areas including HIPAA or FISMA, providers’ certifications are vital. As most public cloud providers have only basic certification and compliance ratings, enterprises must tread very carefully, and be extremely selective.

Support: a cloud model with little or no support only goes so far. Enterprises must be able to get assistance when they need it. Some public cloud providers – such as Amazon and Terremark – do offer 24x7 support for an additional fee, but others still need to figure support into their offering equation.
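To make the availability numbers above concrete, here is a small sketch converting SLA percentages into downtime budgets, and showing why the measurement window matters:

```python
# SLA percentages from the discussion above, converted to downtime budgets.

HOURS_PER_MONTH = 730   # average month
HOURS_PER_YEAR = 8760

for availability in (0.995, 0.999, 0.9995, 0.9999):
    downtime_month = (1 - availability) * HOURS_PER_MONTH
    downtime_year = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} uptime -> {downtime_month * 60:6.0f} min/month "
          f"or {downtime_year:5.1f} h/year allowed downtime")

# 99.50% measured monthly allows ~3.6 hours of downtime in any month, but
# the same 99.50% measured over a trailing 12 months lets a provider absorb
# a single ~44-hour outage and still claim compliance -- one reason the
# differing measurement timelines cause buyer confusion.
```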

Addressing and overcoming these hurdles will encourage enterprises to review their workloads and evaluate what makes sense to move to the cloud, and what will remain in private (or even legacy) environments.

However, enterprises’ workloads are also price sensitive, and we believe, at least today, that the public cloud is not an economical alternative for many production environments. Thus enterprise movement to the cloud could evolve in one of several ways. Will it be via a hybrid cloud, in which the bulk of the production environment is placed in a private cloud and peak demand bursts to the public cloud? Or will increased competition, improved asset utilization and workload management continue to drive down pricing, as has happened to Amazon in both of the past two years? If so, will enterprises bypass the hybrid path and move straight to the public cloud as the economics prove attractive?

The ability to meet client demands, creating a comfort level with the cloud, and the economics all play a role in how and when enterprises migrate to the cloud. The market is again at an inflection point, and it promises to be an exciting time.

Will the Sun Come out Tomorrow? | Gaining Altitude in the Cloud

Cloud computing promises increased flexibility, faster time to market, and drastic reduction of costs by better utilizing assets and improving operational efficiency. The cloud further promises to create an environment that is fully redundant, readily available, and very secure. Who isn’t talking about and wanting the promises of the cloud?

Today, however, Amazon’s cloud suffered significant degradation in its Virginia data center following an almost flawless year-plus record. Yes, the rain started pouring out of Amazon’s cloud at about 1:40 a.m. PT when it began experiencing latency and error rates in the east coast U.S. region.

The first status message about the problem stated:

1:41 AM PT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.

Seven hours later, as Amazon continued to feverishly work on correcting the problem, its update said:

8:54 AM PDT We’d like to provide additional color on what were working on right now (please note that we always know more and understand issues better after we fully recover and dive deep into the post mortem). A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We’re starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them.

No! Say it’s not so! A cloud outage? The reality is that cloud computing remains the greatest disruptive force we’ve seen in the business world since the proliferation of the Internet. What cloud computing will do to legacy environments is similar to what GPS systems did to mapmakers. And when is the last time you picked up a map?

In the future, businesses won’t even consider hosting their own IT environments. It will be an automatic decision to go to the cloud.

So why is Amazon’s outage news?

Only because it affected the 800-pound gorilla. Amazon currently has about 50 percent of the cloud market, and its competitors can only dream of this market share. When fellow cloud provider Coghead failed in 2009, did anyone know?  We certainly didn’t.  But when Amazon hiccups, everybody knows it.

Yes, the outage did affect a number of businesses. But businesses experience outages, disruptions, and degradation of service every day, regardless of whether the IT environment is legacy or next generation, outsourced or insourced. In response, these businesses scramble, putting in place panicked recovery plans, and having their IT folk work around the clock to get it fixed…but rarely do these service blips make the news. So with the spotlight squarely shining on it because of its position in the marketplace, Amazon is scrambling, panicking, and working to get the problem fixed. And it will. Probably long before its clients would or could in their own environments.

Yes, it rained today, but really, it was just a little sprinkle. We believe the future for the cloud is so bright, we all need to be wearing shades.

Cloud Services and CFOs’ Triple Hat Role | Gaining Altitude in the Cloud

We had the pleasure this week of participating in a CFO Forum hosted by TechAmerica, along with representatives from Microsoft, Softlayer and SOURCE, on the topic of “Navigating the Cloud.” The overall discussion focused on the benefits of the rapidly expanding universe of cloud services, along with key risk, compliance and security considerations for CFOs. During the panel discussion and audience Q&A, it became apparent that CFOs wear three different hats when thinking about the cloud:

CFO as Cloud User – like everyone else, CFOs are potential users of cloud services, primarily via ERP and F&A-related SaaS offerings. Discussion in this area focused on several topics:

  • Cloud ERP and accounting solutions from vendors like NetSuite and Intacct have been traditionally focused almost exclusively on SMBs. Though still early, enterprise options are emerging from cloud-focused vendors such as Workday. CFOs need to keep on top of the rapidly evolving set of alternatives that exist for the F&A function.
  • New cloud deployment models are emerging for ERP, such as the ability to run SAP on virtualized private clouds, and availability of select modules through public multi-tenant models. CFOs need to realize that it’s not just SaaS or nothing – new models are being introduced that capture virtualization and private cloud benefits without the perceived risks of moving sensitive financial data to the public cloud.

CFO as Cloud Buyer – the second major relationship CFOs have with the cloud is as a buyer, given the ownership they have over corporate and IT budgeting processes and spend. Points mentioned during the Forum included:

  • CFOs should give strong consideration to “Cloud First” policies such as one recently announced by Vivek Kundra, CIO of the United States, who is seeking to move 25 percent of the Federal Government’s IT budget to cloud services. The policy doesn’t say that cloud should be adopted whenever available, but rather that it be strongly considered “whenever a secure, reliable, cost-effective cloud option exists.” Sounds like a smart policy for the private sector as well.
  • CFOs should also work with CIOs and business owners to ensure that a comprehensive assessment has been made of the potential value of migrating to cloud services at the SaaS, IaaS (infrastructure-as-a-service) and PaaS (platform-as-a-service) levels, and that an overall transformation plan exists. Many experiments currently exist, but there is little understanding of where adoption goes after that.

CFO as Fiduciary – the panel also explored the impact of the cloud on CFOs’ fiduciary responsibilities for the organization.

  • Duke Skarda, CTO of Softlayer, described the four categories of risk in the cloud that CFOs need to evaluate: compliance, governance, security, and disaster recovery. As with cloud services overall, there’s no one right answer – organizations need to understand their risk posture, requirements, vendor capabilities, and supporting SLAs and contractual agreements. It was also noted that, in some cases, cloud services can actually serve to decrease organizational risk profiles.
  • CFOs need to understand any potential impacts of applicable compliance or data privacy regulations (especially in Europe) on where and how they can leverage cloud services.
  • IT policies and controls themselves don’t necessarily change with cloud services, but how they are implemented likely will. CFOs need to ensure IT has taken the right steps to implement appropriate governance and control of cloud services.

Overall, it was a great discussion, with interesting questions and comments from a very engaged CFO audience.

Welcome to “Gaining Altitude in the Cloud” — Another Blog on the Cloud? | Gaining Altitude in the Cloud

Does the world really need another blog about the cloud?

Here at Everest Group we believe the answer is a resounding yes.

The noise around the cloud is reaching a fever pitch, and the “signal to noise” ratio is falling fast. In fact, the hype alone has driven most enterprises to dip their toes in the water with initial pilots and “experiments.” While moving dev/test environments to the cloud is a good thing, we believe most enterprises should be moving faster and smarter.

What’s missing in the current market conversation about the cloud?

Real, data-driven perspectives on the true ROI and business impact enterprises can expect to see, and in many cases are seeing, from the cloud. In what scenarios do IaaS (Infrastructure-as-a-Service) or StaaS (Storage-as-a-Service) offerings make sense in a given enterprise, and from which vendor? Can IaaS and cloud services make sense today even if data center assets are fully depreciated? Can 90 percent of the economic benefits of the cloud be captured via private cloud and virtualization? Or is there more on the table to be gained? While opinions on these topics abound, fact-based analysis is hard to find. And we think the answers might surprise you.

Our goal with “Gaining Altitude in the Cloud” is to create a forum to help enterprise decision-makers, cloud service providers and technology infrastructure vendors better understand the underlying customer economics driving cloud adoption dynamics.  Our blog will take a comprehensive view and look across enterprise-class cloud services and major vendors in the areas of:

  • BPaaS (Business Process as a Service)
  • SaaS (Software as a Service)
  • PaaS (Platform as a Service)
  • IaaS (Infrastructure as a Service)
  • StaaS (Storage as a Service)

We’ll be featuring best practices, case studies, and insights and analysis on how cloud and other next-generation IT technologies and services are driving fundamental changes in the economics of IT. By providing customer-centric, vendor-neutral analysis of cloud economics, we hope to inject a much better fact base into the market conversation.

We’re looking forward to the discussion – we hope you are as well!

Service Provider Cloud Strategies – Differentiated? Appropriately Focused for Success? | Sherpas in Blue Shirts

“As Unique as Everyone,” the title of Everest Group’s just completed research study on service providers’ cloud computing strategies, tells a large part of the current story. Indeed, after exhaustive discussions with 14 major outsourcing service providers – a mix of multinationals and offshore providers – we found little differentiation among their cloud strategies. Most of them have broadly similar offerings that we segment as:

  • Cloud Advisor, where the provider engages with buyers to help with business case development, planning, roadmap, security, and governance
  • Cloud Enabler, where the provider typically does not offer a homegrown solution but instead sells cloud services offered by industry partners, and usually takes care of implementation and maintenance of the solution
  • Cloud Orchestrator, where the provider acts as an integrator/broker of a buyer’s hybrid infrastructure environment
  • Cloud Provider, where the provider either offers the infrastructure engine to run cloud applications, or homegrown cloud solutions in any of the cloud layers.

Against this backdrop, where are the major players seeing cloud computing potential, and where are they focusing their efforts?

While most of the known providers have entrenched partnerships with known cloud application vendors such as Google, Amazon and Microsoft, what they are doing with these partnerships, apart from implementing their cloud services, is not yet clear. Most of the multinationals are focused on leveraging their technology competence to enhance their hosting offerings. In this scenario, for example, an HP may use its infrastructure and technology power to customize the hosted Microsoft Exchange platform, or an Atos Origin may provide global agreements to buyers per its own agreement with a cloud application vendor partner.

All the providers with which we spoke for this research study have standard offerings for cloud business case development, ROI analysis, third-party cloud implementation, and some business utilities as a service. And while serving as an orchestrator is the most lucrative segment in which providers can play, the absence of comprehensive management platforms, and challenges with cloud-to-cloud integration, are holding service providers back in this area. They believe orchestration eventually will be the winner, but the industry needs to develop the needed tools, processes, and standards for seamless management.

Asset-heavy players are touting their infrastructure competence by building private clouds hosted on their premises, whereas asset-light players are focusing on private clouds from the design and management perspectives. Most of the providers do not believe a pure infrastructure-as-a-service (IaaS) play is going to lead them ahead, and, as such, they want to provide value-added services and differentiate on provisioning, management, and governance.

Do you see any demonstration of differentiation here among these major providers? Neither do we. While there is excitement about the promise of cloud computing in both the buyer and provider communities, our research found that many (nay, most) of the service providers are unclear and unfocused about the path they need to take to tap the market’s potential – especially in that their offerings span all cloud layers, which we believe is an untenable proposition. To truly cash in on cloud computing, providers need to significantly change their business strategy. Will they be able to do it? Only time will tell.

For more details, please see Everest Group’s research report on Service Provider Cloud Strategies – “As Unique as Everyone.”

Eleven in 2011: Everest Group’s Predictions for the Global Services Industry | Sherpas in Blue Shirts

The number 11 holds far-reaching significance in numerology, in the bible, among doomsday theorists, and in the dice game craps, to name just a few instances. What’s the significance of Everest Group’s 11 predictions for the global services industry in 2011? Let’s take a look.

1. Unleashed discretionary spend that fueled the 2010 global services industry will fizzle out by the end of Q2 2011, resulting in market growth flattening.

While most forecasters suggest accelerated market growth through 2011, we predict a flattening of growth in the global sourcing market as companies work through pent-up demand. Assuming economic forecasts hold, we expect the continued increased activity fueled by the discretionary spend that began in 2010 to last through Q2, then drop back to its “natural” level consistent with the economy. Transformation agenda activity, precipitated by changes in the industry, will increase slightly in tandem with a slowly recovering economy. “Run and maintain” activity will also move in the direction of the economy.

2. Provider pricing power will not regain pre-recessionary traction in 2011.

We predict low to modest pricing power in the provider segment, not a return to pre-recessionary pricing levels. This will force providers to continue to differentiate themselves by performance outcomes, rather than price. As a result, select providers will grow disproportionately, taking clients away from others.

3. Global companies will continue to channel efforts toward services portfolio rationalization and increased adoption of hybrid sourcing models.

While for the last decade the industry worked to understand, test and deploy models to capture value from labor arbitrage, organizations with mature global services portfolios are now focused on making them more effective and able to deliver end-to-end impact. The global services market has shifted accordingly, helping global companies extract value from third-party service providers, shared services organizations and combinations of both via hybrid models. At the same time, attractive new markets for labor arbitrage remain for mid-sized companies who are late adopters of more robust sourcing strategies or for portfolio extensions of mature clients.

4. More inflammatory dialogue on offshoring, driven by political posturing in a weak economy, will drive offshore companies to establish a greater onshore presence.

2011 may see more protectionist measures proposed by U.S. politicians due to the high level of continuing unemployment. Most of these measures will fail to gain traction and pass into law, and those that do will be difficult to implement and audit. Yet the increased negative press will drive the major offshore providers to increase their onshore delivery capabilities, a trend already well underway given their need to deepen relationships with clients and add more complex and intimate work to their offerings. The United Kingdom will also likely move in a similar direction with proposed quotas on non-EU immigration of skilled workers. While it’s very unlikely that U.S. politicians and legislative agendas will have a significant impact on the sourcing industry, these pressures will probably eliminate or significantly restrict new markets for offshoring with government buyers.

5. Strong sourcing market growth will be in geographies with strong economies, led by Brazil, China, India, and the Middle East.

Countries with strong economies represent big markets with big demand for transformational and discretionary spend activity. Consulting firms and service providers will focus efforts on reaching these robust markets.

6. Cloud computing will be the technological breakthrough causing the most disruption.

While it will take time for cloud computing to mature and for companies to adopt it on a widespread basis, it is currently creating significant discontent and disruption in delivery models, and will continue to do so in 2011. Expect to see service providers continue to push development of cloud solutions and, in some cases, acquire smaller players to gain intelligence and/or expand capability offerings.

7. Industry consolidation will pick up speed.

Industry consolidation will be driven by several factors: 1) Infrastructure hardware providers seeking to extend services; 2) Japanese service providers engaging in increased M&A activity as they continue to look to expand their global networks; 3) Buyer organizations’ continuing desire to have a smaller portfolio of service providers; and 4) Service providers seeking diversified offering capabilities as they continue to see traditional growth areas slow.

8. Emerging low-cost destinations will increase their momentum.

The new, emerging destinations such as South Africa, Egypt, and Argentina will continue their impressive rise as attractive locations for specialized services, providing an increasingly important complement to the mega destinations of India and the Philippines. This increased importance will drive substantial job growth and place increased demands on these countries’ infrastructure, and, in turn, command more support from governments in the form of increased investments, more attractive regulations and policies, and more proactive measures to attract and maintain inflows of work and investment.

9. Cities, Counties, States and Provinces will join the party.

Proactive government entities, such as cities and states that have traditionally outsourced to other locations, will realize the untapped potential of becoming mini hubs of global services work. These innovative government entities will be able to make targeted investments that attract high-paying services jobs into their jurisdictions, leveraging the under-employment of key skills combined with emerging work-from-home and telecommuting technologies and business models. Over time, these will enable a new class of complementary and compelling services offerings, further enriching global services portfolio options while greatly enhancing the standard of living and tax bases of the locations that embrace this new model.

10. There will be rising dissatisfaction and pricing pressure on the traditional IT infrastructure market.

HP’s move to take out a substantial portion of EDS’ cost structure has already set off a chain reaction as other IT infrastructure companies increasingly recognize the new competitive realities and strive to cut costs and match price. The primary vehicle for cost reduction will be to move a greater portion of the delivery staff offshore to low-cost destinations, primarily in India. This mass migration of work is stretching, and will further stretch, offshore delivery capabilities, resulting in decreasing quality and communication problems. We expect these issues, combined with the rising expectations emanating from the emergence of new disruptive models such as cloud and Remote Infrastructure Management Outsourcing (RIMO), will amplify the already growing dissatisfaction in the buying community.

11. Arbitrage will increase.

We expect increased wage inflation in the low-cost destination countries, and increased pressures as the currencies of developing countries appreciate relative to the Euro and the Dollar. Nevertheless, we still expect it will become cheaper for providers such as Infosys, Wipro, TCS and others in that class to provide work out of their low-cost destination locations relative to the cost of delivering it onshore. This is not to say we expect pricing drops from these firms; indeed, they will likely be vocal in pointing to their rising costs as a strong rationale for pricing increases. This arbitrage increase will apply to the overall cost of delivering the work, and may be misunderstood at an individual person or job class level. The reason for this surprising and counter-intuitive prediction is that we believe the class of providers that has mastered talent factories will be able to apply lean process improvements and continue down-shifting their work to more junior and cheaper resources, overall widening an already growing arbitrage gap. This downshifting of work, which has been under way for a number of years, will be accomplished without materially affecting the quality of the services delivered.
