Gaining Altitude in the Cloud

Cloud Computing in ITO – Everybody Wins, but Who Gets to Win More?

Less than three years ago, there was widespread excitement (and alarm and despondency) in many quarters about the impact of cloud computing on traditional IT outsourcing providers.

Cloud computing was predicted, though not by us, to greatly disadvantage the incumbent players, but as of today, such a prediction is difficult to stand by (just take a look at TCS’s and Accenture’s results since then). Sure, public cloud providers continue to grow rapidly, and the traditional license model is increasingly giving way to the pay-as-you-go paradigm. Yet most leading providers of outsourced IT services seem to be adapting well through a combined strategy of alliances, acquisitions, and in-house cloud solutions. Cloud computing appears to be increasingly well integrated as part of the delivery model for most traditional ITO providers. Consider the following statistics from our recently released report, Enterprise Cloud Adoption: Role of Cloud in Global Services:

  • In the second half of 2011, approximately eight percent of all ITO/BPO deals serviced by traditional outsourcers (excluding SaaS product companies, and public cloud and hosting providers) included cloud delivery models or platforms within their scope. This is up from four percent in the first half of 2011.
  • The average total contract value (TCV) of 2011 global services deals with cloud delivery in scope was US$168 million, compared to US$95 million for deals without cloud in scope.
  • Cloud deals also seem to be more transformational in nature, almost at the cutting edge of ITO capabilities: 53 percent of all ITO deals with cloud delivery in scope involved significant infrastructure transformation of test, development, and production environments. Clearly, traditional ITO providers view cloud computing as an important solution component for large, transformational deals.
  • Cloud computing seems to be helping service providers get access to markets that were previously unprofitable or too complicated to serve. Approximately 38 percent of all global services contracts with cloud in scope were awarded by enterprises with less than US$500 million in revenues. And government and non-profit sectors together account for 20 percent of all global services deals with cloud delivery in scope.

Clearly, there’s a big pot of gold somewhere amidst all these clouds, but what’s interesting to note is that few service providers have everything it will take to win all of it:

  • Design and Consulting – Service providers, such as Accenture, with a consulting legacy and orientation are going to have an advantage when it comes to advising clients on how to build their cloud solutions from scratch.
  • Host and Implement – Players like IBM and HP, with a deep legacy of asset-based infrastructure transformation, will have an advantage in providing hosting and implementation services.
  • Management and Professional Services – Offshore players such as TCS, with their global delivery models, have an advantage in taking on the “cloud management” role.

The problem is that these activities are seldom commissioned in isolation. Even though buyers are wary of lock-in risks, this is not an area where a best-of-breed approach always works. The opportunities are tightly coupled, and service providers need intelligence on the characteristics of relevant opportunities as they are torn between focusing on what they already have and plugging the gaps through alliances and acquisitions.

The fact of the matter is that there will be winners and losers, and the market today is too dynamic to predict who will play which part. It will be interesting to see whether there are ground-breaking disruptions as the stakes get higher, e.g., a major public cloud provider making a headline acquisition of a giant system integrator, thereby making its move in the private cloud market, potentially disintermediating many other system integrators and at one stroke making a deep thrust into the enterprise market. Or will an asset-light provider make a strategic U-turn by investing in physical infrastructure to build its own cloud solution, complete with consulting, system integration, and management services delivered through a global platform?

To learn more about the nature of cloud-related opportunities for providers of global services, check out Enterprise Cloud Adoption: Role of Cloud in Global Services.

Where Are the Transformers? Enterprise Cloud Adoption Roadblocks

As discussed here before, a number of different enterprise cloud adoption paths are emerging. These patterns range from “Observers,” who are taking a reactive, wait-and-see approach to migration, to “Transformers,” who are using private and public clouds to drive wide-scale IT transformation and modernization programs. Not surprisingly, these Transformers are the Holy Grail being pursued by many cloud service providers and enterprise IT vendors. The opportunity to drive significant pieces of an enterprise IT environment to cloud environments (private and public) in multi-year transformation efforts creates visions of big services, hardware, and in some cases software dollars.

While many service providers are crafting go-to-market strategies around these types of client opportunities, they’re running into an interesting challenge: they’re not finding a lot of Transformers out there yet. Enterprise cloud adoption, particularly for IaaS, is still largely focused on specific use cases or initial pilots. While many CIOs have long-term visions for cloud-centric future state environments, few are actually executing on them today.

So why aren’t we seeing more Transformers in the market? Our experience suggests that in many cases there is a set of tactical (and often mundane) issues preventing CIOs from getting to the cloud more aggressively. While by no means a comprehensive list, several of the issues we frequently see are:

  • Licensing handcuffs – legacy enterprise software vendors clearly understand the business model disruption that cloud represents. Not surprisingly, most enterprise software houses are in no hurry to move their customers to the new world. For example, nearly all Oracle database licensing policies are still based on physical server CPUs. One notable exception is Amazon AWS, for which Oracle supports a “BYOL” (bring-your-own-license) model based on virtual cores; at this time, Amazon is the only cloud service provider certified by Oracle. In addition, Oracle provides no or limited technical support for major non-Oracle virtualization platforms such as VMware, KVM, Xen, and Hyper-V. Needless to say, if you’re a CIO running an Oracle shop (as many Fortune 500 companies are), there are significant constraints to migrating to even private cloud environments. While not every legacy enterprise software vendor has staked out a position as extreme as Oracle’s, many are still using licensing as leverage to drive clients to preferred models (or keep them there).
  • Shortage of skills – cloud expertise and experience are hard to find. Without cloud architecture and solution skills, enterprises are finding it difficult to drive wide-scale transformation efforts. While retraining would seem to be the obvious answer, CIOs who have tried going down that path are finding it to be a dead end. As discussed at our Organizational Readiness track at Cloud Connect Santa Clara last February, IT leaders are finding that the cloud paradigm shift is a bridge too far, and that most of their current employees are unable to make it. The lack of internal talent, combined with a wariness about trusting vendors and service providers, is a real constraint on further adoption, particularly in IaaS and private cloud models.
  • Analysis paralysis – private cloud provides an interesting example of the proliferation of options facing enterprise IT. Private clouds come in a variety of flavors, with important choices to be made around delivery model (VPC vs. dedicated), location (on-premise or hosted), asset ownership (customer or service provider), platform (proprietary vs. open source) and, of course, vendor. Given the skills shortage mentioned above, even sophisticated enterprise IT shops are challenged by the variety of vendor and service options in the market, particularly given the pace of change. Of course, the recent flare-up of IaaS platform wars doesn’t make these choices any clearer for risk-averse CIOs. The result of too many choices? It’s not uncommon for us to see clients experiencing “vapor lock,” not really knowing what to analyze, let alone what methodology to use. Clients are finding that the frameworks, methodologies, and tools they’ve historically used to make similar decisions aren’t applicable or relevant in the cloud paradigm. As simple as it seems, many of our clients simply don’t know where to get started (a rough scoring sketch of this option space follows the list).
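
For illustration only, here is a minimal sketch, in Python, of how a team might impose a first-cut structure on that option space with a simple weighted-scoring pass. The criteria, weights, and scores are invented for the example; the point is the mechanism for escaping “vapor lock,” not the specific numbers.

    # Illustrative only: hypothetical criteria, weights, and 1-5 scores.
    # The output is a first-cut ranking to focus deeper analysis, not a decision.
    WEIGHTS = {"cost": 0.30, "control": 0.25, "time_to_deploy": 0.20,
               "skills_fit": 0.15, "lock_in_risk": 0.10}

    OPTIONS = {
        "VPC, hosted, provider-owned, proprietary platform":
            {"cost": 4, "control": 3, "time_to_deploy": 5, "skills_fit": 4, "lock_in_risk": 2},
        "Dedicated, on-premise, customer-owned, open source platform":
            {"cost": 2, "control": 5, "time_to_deploy": 2, "skills_fit": 2, "lock_in_risk": 4},
        "Dedicated, hosted, provider-owned, proprietary platform":
            {"cost": 3, "control": 4, "time_to_deploy": 4, "skills_fit": 4, "lock_in_risk": 2},
    }

    def weighted_score(scores):
        """Sum of criterion scores weighted by their relative importance."""
        return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

    for name, scores in sorted(OPTIONS.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{weighted_score(scores):.2f}  {name}")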

Why isn’t security and compliance on the list? Because in many cases, we’re finding that security and compliance is a red herring that IT is hiding behind. This is not to say that there are no workloads or use cases where security and compliance issues rule out certain public cloud models; in reality, however, these situations are the minority. A variety of examples exist of enterprises leveraging the cloud today while still maintaining compliance with PCI, HIPAA and other mandates (most of which are open to auditor interpretation anyway). Best practices, tools and architectures for addressing common security issues are also becoming more prevalent, as are more mature CSP offerings and security practices for common use cases. Net net: where there’s a will there’s a way, and in most cases, if CIOs are truly interested in getting to the cloud, there are secure, compliant ways of getting there.

Overall, we believe that the wave of transformation is coming in the enterprise. Early movers exist and are achieving the promised payoff. Unfortunately, the timing and shape of the wave for mainstream organizations are not as clear as those in the enterprise IT world would like, and the pace is being set primarily by a set of factors that are largely non-technical and beyond the IT leader’s control.


Cloud Wars: the Demise of Simplicity and Standardization

In its comparatively short yet highly significant lifetime, the cloud industry has quickly devolved into a confusing morass of technology jingoism, marketing hype, aggression, and even negative allegations. Though the SaaS world is reasonably understood, it’s the infrastructure cloud that is creating an enterprise cloud war. Just think about the flurry of announcements and assertions about the big boys of technology taking sides with various cloud platforms or hypervisors:

  • Rackspace announced that OpenStack will be its cloud platform for public infrastructure service
  • Terremark introduced its private cloud offering built on VMware’s hypervisor
  • Sungard and CSC are using the vBlock architecture (based on VMware) for their cloud offerings
  • Savvis has chosen VMware for its Symphony Dedicated cloud
  • IBM is investing in a KVM-based public cloud offering
  • Amazon Web Services is based on a proprietary implementation of the open source Xen hypervisor
  • GoGrid prefers the Xen hypervisor
  • HP proclaimed support for KVM/OpenStack for public cloud services
  • OpenStack announced that large technology providers such as IBM, Yahoo, HP, AT&T, Dell, Cisco, and Canonical are becoming platinum and gold members of the OpenStack Foundation. Citrix, a supporter of OpenStack until a few weeks ago, bemoaned that it is “tired” of the pace of OpenStack’s evolution, and gave its CloudStack platform to the Apache Software Foundation, though market watchers suggest Citrix made the move because OpenStack was perceived as leaning toward the open source KVM hypervisor rather than Citrix’s XenServer (a commercial hypervisor from Citrix based on open source Xen)
  • Amazon partnered with Eucalyptus, another open source cloud platform, for hybrid cloud computing thus giving Eucalyptus a big boost as a cloud platform
  • VMware claims there are over 100 providers across 24 countries that offer cloud services based on its technologies. Large enterprise technology providers have partnered with VMware for various cloud offerings
  • Many of the same providers (e.g., Dell, Fujitsu, Hitachi, Hewlett-Packard, IBM, and NEC) earlier also signed on to the Microsoft Hyper-V Cloud Fast Track Program to offer private clouds

Therefore, as often happens in enterprise IT, large providers are partnering with all the known players to offer services across different markets, technologies, and customer types. It is evident that the large enterprise providers are choosing commercial platforms for private cloud offerings and open source for public cloud offerings. Unfortunately, this whole muddle of messages has left buyers in an increasingly dense smoke cloud of confusion regarding vendor lock-in, maturity of technologies, reliable support for platforms, services around cloud, etc.

Granted, the implementation architectures of these cloud platforms/hypervisors are in some respects similar, yet the ways they handle files, storage, virtual LANs, etc., show sometimes subtle and other times very evident differences. Customers need different tools and resources to manage this myriad of platforms, hypervisors, and technologies.

However, the premise of cloud was based on standardization and simplicity, wherein customers were simply supposed to self-provision their infrastructure needs irrespective of the underlying platform, and manage them with minimal effort. The ecosystem doesn’t seem to be evolving in that manner. Rather, it appears to be becoming more confusing, more a personal duel among technologists over the supremacy of their technologies than an attempt to improve enterprise IT delivery. Indeed, with so much variation in cloud middleware, how can we expect simplicity and standardization? Which leads to the all-important question: will the cloud end up being another technology silo, or will it transform the enterprise IT landscape?

To cut through this complex maze of intertwined offerings, buyers must understand the nuances of the different cloud technologies, including their impact, limitations, costs, and use specific to their own ecosystem. Approached in this manner, the cloud can be a real game changer. Providers will keep overselling and hyping their offerings, and buyers need the relevant skills to evaluate those offerings against their own requirements.

Of course, this is easier said than done. While it’s a given that the enterprise technology world will never converge on a single technology, system, or innovation, and differences will always prevail, there is a dire need to simplify the entire cloud paradigm in terms of its architecture, standards, implementation, usage, and evolution. Too much complexity may scare buyers away, and the industry may miss out on exploiting a once-in-a-generation idea.


Pick or Pass on BPaaS?

Buyers have typically approached, evaluated, and made third-party business process service delivery sourcing decisions at the operational level, separating out decisions on the underlying software applications and/or technology infrastructure. Increasingly, however, they are realizing the value of looking at IT and BPO in an integrated manner.

Enter Business-Process-as-a-Service (BPaaS), a model in which buyers receive standardized business processes on a pay-as-you-go basis by accessing a shared set of resources – people, application, and infrastructure – from a single provider.

There are many potential upsides of BPaaS…but is it right for your organization? The answer lies in evaluating the model based on a holistic business case that looks at a range of factors including total cost of ownership (TCO), the nature of the process/functional area under consideration, business volume fluctuations, time-to-market, position on the technology curve, and internal culture and adaptability to change.

Let’s take a deeper look at the TCO factor, which must be analyzed in terms of both upfront and ongoing costs for all three layers of service delivery.

TCO cost elements to evaluate BPaaS versus traditional IT+BPO model

For our just-released research report, Is BPaaS the Model for You?, we developed and used a holistic evaluation framework that compared TCO for BPaaS and the traditional IT+BPO model across three buyer sizes: small (~US$1 billion revenue/5,000 employees), medium (~US$5 billion revenue/20,000 employees), and large (~US$20 billion revenue/100,000 employees).
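
As a purely illustrative companion to that framework, the mechanics of the comparison are straightforward: accumulate upfront plus ongoing costs for each delivery layer under both models and compare the totals. Every figure in the sketch below is invented for the example (they are not numbers from the report), roughly calibrated to a medium-sized buyer.

    # Illustrative cumulative TCO comparison over a contract term; all figures hypothetical.
    YEARS = 5

    def tco(upfront, annual, years=YEARS):
        """Cumulative total cost of ownership: one-time costs plus recurring costs."""
        return upfront + annual * years

    # Hypothetical costs (US$ millions) for a medium buyer, by service delivery layer.
    traditional = (
        tco(upfront=6.0, annual=4.0) +    # infrastructure: owned assets, refresh, facilities
        tco(upfront=5.0, annual=3.0) +    # applications: licenses, implementation, maintenance
        tco(upfront=1.5, annual=6.0)      # BPO operations
    )
    bpaas = tco(upfront=2.0, annual=11.0)  # single consumption-based fee spanning all three layers

    savings = (traditional - bpaas) / traditional
    print(f"Traditional IT+BPO: {traditional:.1f}  BPaaS: {bpaas:.1f}  Savings: {savings:.0%}")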

Our findings?

Small Companies

BPaaS brings big benefits. It is independent of deal duration, delivers 35-40 percent savings compared to traditional IT+BPO, enables leverage of the provider’s economies of scale, provides access to otherwise cost-prohibitive technology, and allows entry into BPO relationships that on a stand-alone basis would lack the necessary scale. Small buyers are also highly amenable to BPaaS’ process and technology standardization requirements.

Medium Organizations

BPaaS is pretty impressive. It delivers 25-30 percent savings over the traditional IT+BPO model, which, while less than what small buyers reap, is still significant in driving a successful business case. Medium buyers’ greater scale allows them to capture some economies of scale even in the traditional IT+BPO model, which narrows the differential between the two models.

Large Enterprises

BPaaS is not too shoddy. While it only provides ~10 percent cost savings compared to traditional IT+BPO, the absolute differential in cumulative TCO can still be substantial given the high base. And, in certain buyer-specific situations such as technology enhancements, exploration of new BPO/IT infrastructure relationships, and expiration of legacy technology licenses, BPaaS can be a good model for large buyers to evaluate. But…their tendency to balk at following a tightly defined, standardized approach – unless significant configuration features offset a good portion of customization needs – reduces BPaaS’ appeal for them.

As you can see, our evaluation framework shows an inverse relationship between buyer size and cost savings from BPaaS – i.e., the larger the buyer, the lower the percentage savings. In fact, as buyer size increases, the scale benefits of renting versus owning infrastructure and applications can dip into the negative column over ten years. Of course, the assumptions in our BPaaS versus IT+BPO analysis are idealized, and your organization’s individual reality may be quite different.

To learn more about how to evaluate BPaaS’ applicability to your company, select a BPaaS provider and solution, and implement the selected solution, please read our report, Is BPaaS the Model for You?

Will Platform Wars Freeze the Enterprise IaaS Market?

As we work with our clients to understand the implications of Next Generation IT technologies, it’s clear that large enterprise adoption of public cloud IaaS is progressing more slowly than other types of cloud services (e.g., SaaS, private cloud). When we ask ourselves “why,” we continue to come back to three critical issues:

  • Vision and reality gap – we continue to be impressed with the sophistication that many of our client IT executives have around how private, public and hybrid clouds can be used to fundamentally transform their IT infrastructures. They then talk to vendors and face the disappointing gap between the state of cloud technologies today and their expectations and requirements (legitimate or not).
  • Risk aversion – it’s one thing for a CIO to passively support their VP of Sales as they roll out a SaaS application. It’s quite another to own the decision to migrate critical IT workloads out of the data center to public cloud services. While early adopters are clearly out there experimenting with IaaS, don’t expect your typical Fortune 500 CIO to be eager to get on the diving board and jump in until they have to, or until they feel it’s safe.
  • Market “noise” – just when CIOs think the drumbeat of vendor and provider announcements around public, private and hybrid cloud offerings and standards can’t get any louder, someone dials it up a notch. The noise (and uncertainty) is now being amplified even further by the emerging battle around enterprise cloud platforms / operating systems like vCloud and OpenStack (more on this later).

Certainly we’re finding that these issues are reflected in enterprise IaaS adoption patterns that are not quite what many in the enterprise CSP vendor community had hoped for at this point. Namely we’re seeing:

  • Enterprises growing cloud usage from the “inside out” – nearly all the activity we see in the enterprise market around cloud and infrastructure today is focused around private cloud pilots or full deployments (hosted or on-prem). Rather than experiment with cloud with public service providers, they’re opting to try the model internally first. Some call it “server-hugging,” others a reactive move to keep IT spend in house, and still others a rational response to the current state of technology and services.
  • Heavy reliance on proprietary enterprise IT vendors – despite their vision, promise and industry support, new open source platforms such as OpenStack and Eucalyptus have seen limited adoption in enterprise private clouds. While OpenStack has had success with service providers, many CIOs don’t consider it ready for prime time yet in their data centers. CloudStack has had more success, but enterprise deployments still likely number only in the double digits. Perhaps not surprisingly, we see enterprise cloud deployments (private cloud) dominated by VMware and IBM.
  • Selective, incremental migration of targeted use cases – where we do see enterprise IT migrating to public cloud or hybrid infrastructure models is for very targeted or smaller scale, lower risk use cases. Examples include test / dev environments, backup and archival, websites and batch data analytics. IT is dipping their “toe in the water” with public cloud, and not feeling a compelling need to drive widescale transformation – yet.

So where are we headed?

In general, enterprises are obviously not comfortable with the current risk / return profile associated with public IaaS and hybrid cloud models. We believe one of the few levers that would move both components of this ratio is a cloud management platform that enables true workload portability / interoperability and policy enforcement across private, public and hybrid models. Not surprisingly, competing enterprise cloud vendor platforms, standards and ecosystems are emerging around VMware, OpenStack and Amazon (and to a limited extent Microsoft) to address this market gap. Several major announcements over the past few weeks have served both to partially clarify and to further muddy this evolving landscape:

  • The Amazon / Eucalyptus announcement around extended API compatibility for hybrid clouds
  • The Citrix announcement that it is breaking away from OpenStack and contributing CloudStack to the Apache Software Foundation
  • HP’s announcement of the Converged Cloud portfolio of public, private and hybrid cloud offerings based on a “hardened” version of OpenStack and KVM.
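
Coming back to the portability gap noted above: one narrow slice of what any such management platform has to provide is a provider-neutral API. Apache Libcloud is an existing open source example of just that slice – API abstraction only, without the policy enforcement or workload migration a full platform would require. Below is a minimal sketch with placeholder credentials, and with provider and region choices picked purely for illustration:

    # Minimal sketch of provider-neutral compute calls via Apache Libcloud.
    # Credentials and regions are placeholders; this shows API abstraction only.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    ec2 = get_driver(Provider.EC2)("ACCESS_KEY_ID", "SECRET_KEY")
    rackspace = get_driver(Provider.RACKSPACE)("USERNAME", "API_KEY", region="dfw")

    for name, driver in [("EC2", ec2), ("Rackspace", rackspace)]:
        # Identical calls regardless of which cloud sits underneath.
        print(name, len(driver.list_sizes()), "instance sizes,",
              len(driver.list_nodes()), "nodes running")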

Most major enterprise IT vendors are still hedging their bets and publicly keeping a foot in multiple camps. With the marketing engines in overdrive, it’s difficult to understand what commitments vendors are really making, at the end of the day, to the different platforms. In fact, it’s quite instructive to look at who is putting their money where their mouth is when it comes to open source efforts like OpenStack, not just in terms of sponsorship fees but also developer contributions.

Historically, IT platform markets end up with a dominant leader and one or two credible challengers that together capture two-thirds to three-quarters of the market, with the remainder shared among niche players. When we look at the enterprise cloud operating system or management platform market, we don’t see why it would be any different here, though we’re obviously still a long, long way from the end game.

The critical question in our mind is: Is a cloud platform market shakeout required for enterprise adoption of IaaS to accelerate and hit the tipping point? If so, we could be waiting a long time.

What are your thoughts?


Electronic Medical Records: Is Cloud-Based or Client/Server Delivery Right for You?

Electronic Medical Record (EMR) technology has the ability to transform and enhance virtually all communications, transactions and analysis related to healthcare information. All 50 states are quickly adopting EMR, and the government has made EMR adoption a cornerstone of its healthcare initiative. While EMR can have a significant positive impact on physicians’ productivity, patients’ access to information, and insurance companies’ ability to reduce errors and claims administration costs, it must be implemented properly in order to achieve those benefits.

Much of the answer lies in which delivery model is best for any given healthcare organization: a private cloud-based, next generation IT approach or a client/server-based legacy approach.

Cutting through the hype, there are a number of advantages to adopting cloud-based EMR:

  • No upfront software license purchase costs
  • No hardware to purchase or maintain
  • Better overall support, including for disaster recovery
  • Typically stronger security and data protection mechanisms, and more likely compliance with HIPAA regulations, through hosting companies
  • Accessibility for physicians on the move

Indeed, a private cloud may be the right EMR solution in many cases. Consider Beth Israel Deaconess Medical Center. It has 1,500 member physician practices and facilities distributed throughout 173 locations in eastern Massachusetts. The Beth Israel Deaconess Physician Organization (BIDPO) provides medical management services. By becoming a member of the BIDPO, physician practices receive reduced contractual rates from health insurance companies. But for compliance, those practices must be able to measure the quality of patient care and transmit those metrics electronically to the insurance companies.

As putting servers in each facility, per a client/server model, was not going to be an effective or cost-efficient way to meet the electronic transmission requirements, the Center instead adopted a private cloud-based model with a centralized database and application services. BIDPO selected VMware as the virtualization platform, Third Brigade as the security solution provider, and Concordant for day-to-day operational management of the environment and the physicians’ help desk. The solution it adopted is modular, enabling it to grow as more facilities are migrated to the system.

On the other hand, there are numerous potential downsides to a cloud-based EMR solution:

  • Latency or lag times
  • Lack of availability of a robust and reliable Internet connection in rural areas
  • Bandwidth limitations
  • Constrained back-up and data accessibility
  • Inability to access or work with data if the service provider’s network is down

Given these issues, a rural practice of five physicians who see 35 patients a day and want quick access to their medical records and prescription history, especially for patients on multiple drugs that could cause adverse or allergic reactions, will fare far better with a client/server EMR model.
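
As a rough back-of-envelope check, the scenario above can be framed numerically. Every figure below is an assumption made up for illustration (record sizes and lookup counts vary widely in practice):

    # Back-of-envelope load estimate for the small rural practice described above.
    patients_per_day = 35          # from the scenario above
    lookups_per_visit = 6          # assumed chart, history, and prescription views
    record_size_mb = 2.0           # assumed average payload per lookup (more with imaging)
    working_hours = 9

    daily_mb = patients_per_day * lookups_per_visit * record_size_mb
    avg_mbps = daily_mb * 8 / (working_hours * 3600)
    print(f"~{daily_mb:.0f} MB/day, average ~{avg_mbps:.2f} Mbps")
    # Average bandwidth is tiny; the real exposure for a cloud-based EMR here is
    # peak latency, jitter, and outright outages on a rural link, which is exactly
    # why the client/server model wins in this scenario.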

If you’re wondering which EMR delivery model is a better fit for your healthcare organization, the following table should help:

EMR Delivery Models

Private cloud-based EMR solutions do provide flexibility and scalability, and we will see more healthcare organizations following Beth Israel Deaconess Medical Center’s lead in the near future. But before you jump on the bandwagon, you must consider whether the cloud is suitable for your particular and unique situation.

Energy Efficiency in the Cloud

The advent of cloud computing has brought many exciting changes to companies’ IT strategies. One aspect of the cloud that is frequently overlooked, however, is energy efficiency. On the face of it, one might expect cloud computing to be more energy efficient than the alternative. But is it really?

Let’s take a quick look at the three drivers behind increased energy efficiency in cloud environments.

First and most obvious is economies of scale. It’s not rocket science to understand that fixed costs are best spread over a greater volume to bring down per-unit cost. Similarly, conducting a benchmarking exercise to measure Power Usage Effectiveness entails significant fixed costs in devoting resources to counting equipment and measuring individual devices’ power consumption. There are certainly economies of scale to be gained in doing this for a larger datacenter than for a smaller one.
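
For readers less familiar with the metric, Power Usage Effectiveness (PUE) is simply the ratio of total facility power to the power delivered to IT equipment; the closer to 1.0, the less energy is lost to cooling, power distribution, and other overhead. The numbers below are illustrative, not measurements:

    # Power Usage Effectiveness = total facility power / IT equipment power.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    print(pue(total_facility_kw=1800, it_equipment_kw=1000))  # 1.8, plausible for a small in-house room
    print(pue(total_facility_kw=1150, it_equipment_kw=1000))  # 1.15, achievable at large cloud scale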

The second driver of energy efficiency in cloud environments results from the abstraction of the physical and virtual layers in the cloud. A single physical server running multiple server images avoids the additional power draw that would come from purchasing additional physical servers. Also, if a virtualized environment incorporates redundant server images on different physical boxes, then individual boxes do not need multiple power supplies. The failure of one machine becomes a non-issue when redundancy is built in.

Finally, a datacenter serving cloud clients will have more users from more disparate places, each with different needs. This means that system loads will be spread more evenly throughout each day (and night), which enables the datacenter to sustain higher average system loads and thus more efficient utilization of equipment. Everest Group research shows that individual servers in a cloud datacenter experience three to four times the average load of those in an in-house datacenter.

By now it should be clear that a large cloud datacenter has distinct energy efficiency advantages over a smaller, in-house datacenter. But there are corresponding energy drawbacks to cloud migration that may not be immediately apparent. First, as processing and storage shift to the cloud, energy use for data transport increases, primarily in the routers carrying data over the public Internet; their power consumption rises with throughput and with the frequency of access to remotely stored data.

Also, in a SaaS, PaaS, or simple cloud storage scenario, frequent data access can cause data transport alone to account for around 60 percent of the overall power used in storing, retrieving, processing, and displaying information. At this point, the efficiency advantages gained by the three drivers cited above may be lost due to the extra power required to move the data between the user and the cloud datacenter in which it is stored or processed.
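
A simple way to see how transport can come to dominate is to compare the energy spent moving each gigabyte against the datacenter-side energy for storing and processing it. The coefficients below are placeholders chosen only to reproduce the roughly 60 percent share mentioned above; they are not measured values:

    # Illustrative energy split per stored GB for a frequently accessed cloud workload.
    accesses_per_gb_per_month = 50        # how often each stored GB is fetched (assumed)
    transport_kwh_per_gb_moved = 0.06     # routers and links, per GB moved (assumed)
    datacenter_kwh_per_gb_month = 2.0     # storage, processing, and display share (assumed)

    transport = accesses_per_gb_per_month * transport_kwh_per_gb_moved
    total = transport + datacenter_kwh_per_gb_month
    print(f"Transport share of total energy: {transport / total:.0%}")  # ~60% under these assumptions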

It is true that migration to the cloud can yield significant gains in energy efficiency for certain applications. However, for applications involving high transaction volumes, an in-house data center can provide better energy efficiency.

As power prices become increasingly important in determining data center operating costs, energy efficiency will play a greater role in companies’ cloud strategies.

Using Cloud Flexibility to Drive Enterprise-Class Cost Efficiencies – A Tale from the Frontlines


One of the current mantras that many enterprise cloud enthusiasts are chanting is that “it’s not about cost.” Cloud is all about business agility and flexibility with cost being an interesting side benefit, but not necessarily compelling on its own. Focusing on cost efficiency and TCO is indicative of a stodgy, legacy IT mindset that doesn’t understand the true paradigm shift of cloud.

Nothing could be further from the truth. In fact, we’re finding that some of the more interesting enterprise cloud use cases these days involve leveraging cloud agility to aggressively reduce infrastructure and IT costs.

Take a recent client of ours, a Fortune 500 global energy company seeking to reduce corporate IT infrastructure costs. Its focus was on reducing costs across two primary datacenters that delivered HR, finance, accounting, operations and other applications to business operations across 30 countries. Understanding cloud options for migrating its SAP deployment was a central part of the effort.

Facing an imminent and significant hardware upgrade cycle, it was keen to explore opportunities to reduce costs through traditional IT outsourcing (ITO) vehicles, as well as through next generation, cloud-enabled delivery models. Critical objectives included:

  • Reducing asset ownership
  • “Variabilizing” its IT cost structure
  • Outsourcing commodity IT skills

Based on these requirements, our client evaluated potential solution options from nearly 20 service providers, including traditional enterprise IT service providers, cloud service providers (CSPs), offshore ITO vendors and telcos/carriers.

Our client narrowed the field to three potential solution providers, each with different recommendations on where to migrate existing applications and workloads (which were largely in dedicated and virtualized models). Recommended solutions varied not just across cloud delivery model (public vs. private), but also across asset ownership (on-prem private vs. hosted and virtual private):

Cloud Providers Solution Overview

And what did the client find? As shown below, leveraging a mix of virtual private and public cloud models offered the opportunity to reduce its annual infrastructure costs by over 30 percent! “Provider A,” which suggested migrating approximately 30 percent of the client’s workloads to public cloud environments, ended up with the most compelling business case. While it recommended migrating 80 percent of the workload portfolio to cloud-enabled models, it did recommend keeping the client’s SAP instances in a traditional, dedicated model.

IT Infrastructure Annual Cost

Some additional observations:

  • Costs reflect all required migration and replatforming investments
  • Public cloud costs reflect current market pricing, held at the same unit price levels across the period. As shown by the recent AWS price drop of up to 37 percent on Reserved Instances, this is a very conservative assumption
  • Efficiencies do not reflect additional potential opportunities from active workload management

So where did the savings come from? Our client found that the savings were driven by four primary levers (a simple illustrative model follows the list):

  • Consolidation and rationalization of underutilized servers
  • Migration of unpredictable and “spiky” workloads to public cloud models with consumption-based billing
  • Reduced IT operations and management costs
  • De facto outsourcing of maintenance and support to CSPs
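
The client’s actual business case is proprietary, but the arithmetic behind those levers is easy to sketch. The workload mix and unit-cost factors below are invented solely to show how a reduction of more than 30 percent can emerge from re-platforming a portfolio; they are not the client’s figures:

    # Hypothetical before/after annual infrastructure cost model; all numbers invented.
    baseline_cost = 100.0   # index the current annual run rate to 100

    after = (
        20.0 * 1.00 +   # 20% of workloads stay dedicated (e.g., SAP) at current unit cost
        50.0 * 0.70 +   # 50% move to virtual private cloud: consolidation of underutilized servers
        30.0 * 0.45     # 30% spiky workloads move to public cloud with consumption-based billing
    )
    after *= 0.95        # assumed further savings from leaner IT operations and outsourced support

    savings = 1 - after / baseline_cost
    print(f"Modeled annual savings: {savings:.0%}")   # ~35% with these assumptions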

We’re seeing similar results across our other clients, who are finding that cloud-enabled delivery models, leveraged correctly, can drive substantial and lasting reduction in IT infrastructure costs.

Maybe cloud and cost efficiency aren’t so boring after all…


Amazon in the Headlines

I’m sure many of you have read the reports of Amazon’s new CEO’s steps to revitalize the company’s growth. News of a restructuring that could involve widespread layoffs cutting deeply across Amazon, including some of its key development areas, is also driving changes across the company’s management ranks.

Meanwhile, there’s at least one part of Amazon that is taking aggressive steps to fuel growth rather than cutbacks.

Amazon Web Services (AWS) announced today that it is reducing prices for the 19th time in the last six years. And it’s not just a nudge downward:

  • EC2 prices for longer term (3-year) Reserved Instances in some configurations are dropping by 35 to 40 percent
  • EC2 On Demand prices for high memory instances are now 10 percent lower
  • Similar price reductions span services beyond EC2 as well (e.g., RDS, EMR, ElastiCache)

While the reductions are meaningful for its flagship EC2 On Demand services, I interpret the very large reductions for longer term Reserved Instances as yet another salvo aimed at the enterprise market. Moreover, the introduction of volume tiering that enables additional discounts should turn the heads of many CIOs who are in the midst of pilots testing the value of cloud services in a modest way. Spend over US$250K on Reserved Instances and get 10 percent off the amount above that level; spend more than US$2 million and the incremental discount steps up to 20 percent. And finally, in a distinct departure from previous positions, AWS is inviting “one-off” deals by asking those spending more than US$5 million to “call me!” Some of AWS’ largest users are ending up with pricing that is over 50 percent lower than before these actions.
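
The volume tiering described above is straightforward to model. Here is a minimal sketch using the thresholds and rates as described in this post (10 percent off the portion of Reserved Instance spend above US$250K, 20 percent off the portion above US$2 million, with deals above US$5 million negotiated individually); treat the exact mechanics as a simplification:

    # Simplified model of the tiered Reserved Instance volume discounts described above.
    def discounted_ri_spend(list_spend):
        tiers = [(250_000, 0.00), (2_000_000, 0.10), (float("inf"), 0.20)]
        cost, prev_ceiling = 0.0, 0.0
        for ceiling, discount in tiers:
            portion = min(list_spend, ceiling) - prev_ceiling
            if portion <= 0:
                break
            cost += portion * (1 - discount)   # discount applies only to this tier's portion
            prev_ceiling = ceiling
        return cost

    for spend in (200_000, 1_000_000, 3_000_000):
        net = discounted_ri_spend(spend)
        print(f"list ${spend:,} -> net ${net:,.0f} ({1 - net / spend:.1%} effective discount)")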

The business case for broader adoption across the enterprise continues to get stronger. Enterprises should be including ongoing pricing improvements in their Infrastructure-as-a-Service business cases; can internally delivered infrastructure be cost-competitive with options that are likely to drop another 20 percent over the next few years?

AWS appears to continue its leadership in cloud infrastructure services with this pricing action, and it continues to add solutions and features that should appeal to enterprise buyers. Recent discussions with enterprise CIOs, however, suggest a gap continues to exist – at Amazon and most of the other cloud service providers – around the ease of enterprise solutioning. The low touch, self-service approach enables attractive price points but still leaves the enterprise with do-it-yourself tasks that impede widespread adoption for mainstream solutions. AWS’ strategy appears to rely on the VAR / SI channel to do the solutioning, while AWS focuses on the horizontal cloud delivery platform (which we suspect may be higher margin, at least for AWS). This provides an opening for other cloud pioneers – Rackspace, Savvis, Terremark, and others – to step up and fulfill the enterprise market’s need for true enterprise-class solutions that include the all-important solutioning capabilities. Competing on price is essential – but the value player is likely to seize the enterprise leadership role in the long run.

Video Interview: Outlook for Mad Cloud Skills

Cloud skills are different from traditional IT skills. At the recent CloudConnect conference in Santa Clara, we had a panel discussion on how cloud is changing the CIO’s wish list for new hires.

In this last video blog of the series, Clayton Pippenger, Applications Development Manager at Quest, shares his thoughts on the outlook for hiring cloud skills in the coming years.

In case you are just tuning in, this is the fourth video interview of a series we taped at CloudConnect 2012 in Santa Clara. Everest Group’s Scott Bils chaired the Organizational Readiness track and enlisted an impressive lineup of speakers.

Watch the first video, featuring Francesco Paola of Cloudscaling.

Watch the second video, featuring Simon Wardley of the Leading Edge Forum.

Watch the third video, featuring Erik Sebesta of CloudTP.