Tag: IT Infrastructure

The Infrastructure Outsourcing Market in 2012: On the Cusp of Transformation? | Gaining Altitude in the Cloud

Earlier this year, Everest Group conducted its annual study of high-value Infrastructure Outsourcing (IO) deals to gain insight into how a range of parameters correlate with deal activity in the IO market. The study, which is part of our Infrastructure Outsourcing Market Update 2012 report, analyzed 164 IO deals across 17 providers spanning MNCs, Tier-1 offshore, and Tier-2 offshore firms.

Infrastructure Outsourcing 2012 – Key Findings:

  • Buyers: Buyers across geographies found increased value in offshore providers’ remote infrastructure management outsourcing (RIMO) model due to its flexibility. Faced with the high costs associated with IO, buyers appeared very tactical in their approach. Analysis of the basket of IO spend showed clear signs of carefully planned allocation across traditional IO, RIMO and cloud-based services
  • Service providers: Though MNCs remain by far the leaders in the IO market, offshore providers appeared to be steadily gaining ground in sales strategy as well as deal wins. We also observed similarities between MNCs and offshore providers on a number of parameters such as buyer segments, deal size and geographies
  • Cloud-based services: As transformation of infrastructure is the major driver of cloud adoption across enterprises, we devoted an entire section in the study to cloud adoption in IO. Not surprisingly, cloud is helping buyers create a flexible and scalable infrastructure environment, with Infrastructure-as-a-Service (IaaS) solutions leading cloud adoption

Infrastructure outsourcing: On the cusp of transformation?

Overall, the IO market appears to be on the cusp of transformational change. IO seems to be leading the way not only in cloud adoption but also in the transformation of IT delivery and pricing models. The growth of IaaS says a lot about IO’s momentum for buyers and providers alike.

To learn more about these trends and our analyses of and insights into the infrastructure outsourcing market, check out the Infrastructure Outsourcing Market Update 2012 report (a preview deck is available).

Offshore Providers and the Cloud – No Datacenter Is Not a Choice! | Gaining Altitude in the Cloud

As large IT services buyers increasingly embrace cloud-based delivery, offshore IT services providers are being forced to innovate beyond their traditional strengths of labor arbitrage, process excellence, and delivery maturity. Indeed, as these providers witness their application services reaching wallet share saturation in the large buyer market, there is growing perception in the industry that if they do not offer “next generation” services they risk losing even their traditional business.

Granted, these providers are not sitting idle. They have created “cloud advisory” teams and executed multiple application migration/porting engagements as part of their global services contracts. But the crux of the cloud opportunity lies in the transformational nature of these engagements, which invariably involve owning IT infrastructure.

Our discussions with enterprise IT services buyers point to three types of roles for offshore providers, which extend beyond typical SaaS implementation and integration. These roles will also require services related to consulting, architecture, application migration, etc.

Cloud Offshore Providers

Offshore providers possess varying degrees of competence for these roles, but to remain relevant, they must continue to invest in newer capabilities. Today, a select few are investing in areas such as cloud management platforms, consulting services, readiness assessments, and migration services to move beyond simplistic cloud engagements. However, most lack a comprehensive datacenter-driven cloud infrastructure service, which is needed to drive transformational engagements.

One of the key findings in Everest Group’s recently released Cloud Vista research study was that more than 50 percent of large cloud-related engagements – and even most application transformation deals – contain a significant amount of infrastructure transformation, but offshore providers have scant presence in these engagements.

Cloud Adoption Drivers

It is becoming abundantly clear that offshore providers need to swiftly tackle the area of cloud infrastructure services. One of the biggest challenges they must overcome is their unwillingness to invest in owning datacenters, opting instead to relegate core datacenter operations to partners. Many buyers express disappointment with this partnership model, believing it can at best support running IT operations, but that it is not appropriate for enterprise-class cloud infrastructure services that help them variabilize their costs and access self-service, consumption-linked infrastructure.

Given their general reluctance to own large-scale datacenters, offshore providers should at least evaluate “white labeling” hosting providers’ datacenters so they can offer cloud infrastructure services, calibrating their investments while simultaneously serving their buyers. Because white labeling of datacenters is an accepted practice, and even large-scale datacenter service providers white label datacenters from core datacenter operators (e.g., Equinix), this model should find acceptance with buyers.

Offshore providers need to understand that for a game changing paradigm such as cloud, there always will be a risk associated with investments. The days of cherry picking attractive contracts are over, and they can no longer walk away from complex deals that do not meet their sweet spot. Therefore, they must inculcate a culture of risk taking, and invest in areas outside their comfort zone, especially in cloud infrastructure services. The cloud is changing buyers’ sourcing strategies, and offshore providers that fail to change accordingly risk losing their relevance and even their traditional business.

Cloud Wars: the Demise of Simplicity and Standardization | Gaining Altitude in the Cloud

In its comparatively short yet highly significant lifetime, the cloud industry has quickly devolved into a confusing morass of technology jingoism, marketing hype, aggression, and even negative allegations. Though the SaaS world is reasonably understood, it’s the infrastructure cloud that is creating an enterprise cloud war. Just think about the flurry of announcements and assertions about the big boys of technology taking sides with various cloud platforms or hypervisors:

  • Rackspace announced that OpenStack will be its cloud platform for public infrastructure service
  • Terremark introduced its private cloud offering built on VMware’s hypervisor
  • Sungard and CSC are using the vBlock architecture (based on VMware) for their cloud offerings
  • Savvis has chosen VMware for its Symphony Dedicated cloud
  • IBM is investing in a KVM-based public cloud offering
  • Amazon Web Services is based on a proprietary implementation of open source Xen
  • GoGrid prefers the Xen hypervisor
  • HP proclaimed support for KVM/OpenStack for public cloud services
  • The OpenStack Foundation announced that large technology providers such as IBM, Yahoo, HP, AT&T, Dell, Cisco, and Canonical have become platinum and gold members. Citrix, an OpenStack supporter until a few weeks ago, complained that it had grown “tired” of the pace of OpenStack’s evolution and gave its CloudStack platform to the Apache Software Foundation, though market watchers say Citrix made the move because OpenStack was perceived as favoring the open source KVM hypervisor over Citrix’s XenServer (a commercial hypervisor by Citrix based on open source Xen)
  • Amazon partnered with Eucalyptus, another open source cloud platform, for hybrid cloud computing thus giving Eucalyptus a big boost as a cloud platform
  • VMware claims there are over 100 providers across 24 countries that offer cloud services based on its technologies. Large enterprise technology providers have partnered with VMware for various cloud offerings
  • Similar providers (e.g., Dell, Fujitsu, Hitachi, Hewlett-Packard, IBM, and NEC) earlier also signed on to the Microsoft Hyper-V Cloud Fast Track Program to offer private cloud

Therefore, as happens in enterprise IT, large providers are partnering with all the known players to offer services across different markets, technologies, and customer types. It is evident that the large enterprise providers are choosing commercial platforms for private cloud and open source for public cloud offerings. Unfortunately, this whole muddle of messages has left buyers in an increasingly dense smoke cloud of confusion regarding vendor lock-in, maturity of technologies, reliable support for platforms, services around cloud, etc.

Granted, the implementation architectures of these cloud platforms/hypervisors are in some respects similar, yet the ways they handle files, storage, virtual LANs, etc., differ in sometimes subtle and other times very evident ways. Customers need different tools and resources to manage this myriad of platforms, hypervisors, and technologies.

However, the premise of cloud was based on standardization and simplicity, wherein customers were simply supposed to self-provision their infrastructure needs irrespective of the underlying platform, and manage it with minimal effort. But the ecosystem doesn’t seem to be evolving in that manner. Rather, it appears to be becoming more confusing, more a personal duel among technologists over technology supremacy than an attempt to improve enterprise IT delivery. Indeed, with so much variation in cloud middleware, how can we expect simplicity and standardization? Which leads to the all-important question: will the cloud end up as another technology silo, or will it transform the enterprise IT landscape?

To cut through this complex maze of intertwined offerings, buyers must understand the nuances of different cloud technologies, including their impact, limitations, costs, and uses specific to their own ecosystem. Approached this way, the cloud can be a real game changer. Providers will keep overselling and hyping their offerings, and buyers need the relevant skills to evaluate those offerings against their requirements.

Of course, this is easier said than done. While the enterprise technology world will never converge on a single technology, system, or innovation, and differences will always prevail, there is a dire need to simplify the entire cloud paradigm in terms of its architecture, standards, implementation, usage, and evolution. Too much complexity may scare off buyers, and the industry may miss out on exploiting a once-in-a-generation idea.

Energy Efficiency in the Cloud | Gaining Altitude in the Cloud

The advent of cloud computing has brought many exciting changes to companies’ IT strategies. One aspect of the cloud that is frequently overlooked, however, is energy efficiency. On the face of it, one might expect cloud computing to be more energy efficient than the alternative. But is it really?

Let’s take a quick look at the three drivers behind increased energy efficiency in cloud environments.

First and most obvious is economies of scale. It’s not rocket science to understand that fixed costs are best spread across a greater volume to bring down per-unit cost. Similarly, conducting a benchmarking exercise to measure Power Usage Effectiveness (PUE) entails significant fixed costs in devoting resources to counting equipment and measuring individual devices’ power consumption. There are clear economies of scale in doing this for a larger datacenter rather than a smaller one.
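Power Usage Effectiveness itself is a simple ratio: total facility power divided by the power delivered to IT equipment. A quick sketch shows why scale helps; every figure below is purely illustrative, not measured data:

```python
# Illustrative PUE sketch; all figures are hypothetical, not measured data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Small in-house server room: cooling, lighting, and power-distribution
# overhead is large relative to the IT load it supports.
small = pue(total_facility_kw=150.0, it_equipment_kw=75.0)

# Large cloud datacenter: the same overhead categories amortize across
# far more IT load, so each delivered kilowatt carries less overhead.
large = pue(total_facility_kw=12_000.0, it_equipment_kw=10_000.0)

print(f"small site PUE: {small:.2f}, large site PUE: {large:.2f}")
```

A PUE of 1.0 would mean every watt entering the facility reaches IT equipment; the closer a site gets to it, the less energy is lost to overhead.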

The second driver of energy efficiency in cloud environments results from the abstraction of the physical and virtual layers in the cloud. A single physical server running multiple server images avoids the additional power load of purchasing additional physical servers. Also, if a virtualized environment incorporates redundant server images on different physical boxes, then individual boxes do not need multiple power supplies; the failure of one machine becomes a non-issue when redundancy is built in.

Finally, a datacenter serving cloud clients will have more users from more disparate places, each with different needs. This means that system loads will be more evenly spread throughout each day (and night), which enables the datacenter to average higher system loads and thus more efficient utilization of equipment. Everest Group research shows that individual servers in a cloud datacenter experience three to four times the average load of those in an in-house datacenter.

By now it should be clear that a large cloud datacenter has distinct energy efficiency advantages over a smaller, in-house datacenter. But there are corresponding energy drawbacks to cloud migration that may not be immediately apparent. First, as processing and storage shift to the cloud, energy usage for data transport increases. This comes primarily from the routers transporting data over the public Internet; their power use increases with throughput and with the frequency of access to remotely stored data.

Also, in a SaaS, PaaS, or simple cloud storage scenario, frequent data access can cause data transport alone to account for around 60 percent of the overall power used in storing, retrieving, processing, and displaying information. At this point, the efficiency advantages gained by the three drivers cited above may be lost due to the extra power required to move the data between the user and the cloud datacenter in which it is stored or processed.
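To see how transport can erode the datacenter-side gains, consider a back-of-the-envelope tally. Only the roughly 60 percent transport share comes from the discussion above; every other number here is an invented illustration:

```python
# Back-of-the-envelope energy tally for a frequently accessed cloud workload.
# The ~60% transport share is from the text; absolute kWh figures are invented.

datacenter_and_client_kwh = 40.0   # storing, retrieving, processing, displaying
transport_kwh = 60.0               # routers moving data over the public Internet
cloud_total_kwh = datacenter_and_client_kwh + transport_kwh

transport_share = transport_kwh / cloud_total_kwh   # 0.6, per the text

# Suppose the cloud datacenter is 30% more energy efficient than in-house on
# the non-transport portion. The in-house equivalent of that same work, with
# negligible WAN transport, is then:
in_house_total_kwh = datacenter_and_client_kwh / 0.7

print(f"transport share: {transport_share:.0%}")
print(f"in-house: {in_house_total_kwh:.1f} kWh vs cloud: {cloud_total_kwh:.1f} kWh")
```

Under these invented numbers, the in-house total comes in well below the cloud total: the transport bill swamps the datacenter-side efficiency gain, which is exactly the high-transaction-volume caveat raised below.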

It is true that migration to the cloud can yield significant gains in energy efficiency for certain applications. However, for applications involving high transaction volumes, an in-house data center can provide better energy efficiency.

As power prices become increasingly important in determining data center operating costs, energy efficiency will play a greater role in companies’ cloud strategies.

Cloud Beyond the Borders – Part 1: Asia | Gaining Altitude in the Cloud

We know that cloud computing has taken off in U.S.-based enterprises since the term was coined in mid-2007, but how is it faring in other parts of the world? Where will the growth come from in this US$40.7 billion industry that Forrester Research forecasts will grow to US$241 billion by 2020? While the North American market accounts for the bulk of cloud investment and infrastructure today, Ovum Associates forecasts that this will drop to approximately 50 percent by 2016 in the face of strong growth in Asia and Europe.

The Asia Cloud Computing Association has identified ten factors that affect cloud adoption rates, which can be broadly grouped into three classes: regulatory, physical infrastructure, and market conditions. Regulatory concerns include data protection laws, the extent of Internet filtering, and other government policies. Physical infrastructure refers to power grid reliability, broadband penetration rates, and international connectivity. And market conditions relate to the overall sophistication of a country’s IT industry and the perceived political risk of doing business in a country.

With that, let’s first take a look at cloud computing beyond the borders in Asia. And as all of the above factors apply to other geographies as well, next time we’ll talk about the cloud in Europe.

The IT world is looking to Asia with high expectations…and uncertainty. Everyone agrees that Asian cloud computing growth will be impressive: analysts quote industry CAGR figures of 20-35 percent from 2010 to 2014 and beyond. Nobody knows quite how much it will grow, however, because of persistent and thorny issues.

Asia consists of many countries in various stages of development. Those with well-developed infrastructure and institutions – Japan, Singapore, South Korea, Australia, and New Zealand – have experienced the greatest growth in cloud adoption to date. While Asian interest in the cloud is, in general, sky-high, many other countries lack the infrastructure to deliver. According to Per Dahlberg, CEO of the Asia Cloud Computing Association, this puts Asian cloud uptake approximately three years behind that of the United States.

What drives Asian cloud uptake? What hinders it? Answers to both questions are as diverse as Asia itself.

Many Asian countries are developing economies with poor or outdated IT infrastructure. They see cloud computing as a way to modernize government and private IT systems, while spurring the development of home-grown industry. In its current five-year plan, released in 2010, the Chinese government designated next-generation information technology as one of seven “Strategic Emerging Industries,” designed to drive innovation for indigenous Chinese industry. The plan highlights cloud computing as a key investment area that should receive special focus. Business Cloud News reported that this will help propel cloud investment in China to a forecasted US$154 billion by 2015. China Mobile alone plans to invest US$52 billion in that time-frame to build its cloud offerings.

Another pan-Asian driver of cloud growth is increased broadband penetration. The proliferation of mobile phones with always-on 3G and 4G data connections will continue to drive migration to the cloud. As people gain faster data pipes they can take anywhere, they increasingly want to store their content in the cloud. Likewise, companies want employees to be mobile, thus necessitating the secure availability of company data anywhere it’s needed.

Finally, Asia has untold millions of small businesses with the desire, but not the IT know-how, to be global competitors. IT as a service will come to play an important role in helping these firms reach new markets and compete globally.

Of course, different parts of Asia are poised to take advantage of the impending storm to different degrees. Many factors hinder cloud adoption, including shoddy power grids and connectivity issues. But cutting across all countries is the problem of fragmented regulatory regimes with wildly varying requirements for everything from data protection to vendor lock-in. For example, a nation might prohibit its citizens’ data from physically leaving the country. This prevents building regional data centers and realizing the key cloud benefits around economies of scale. Another open question revolves around jurisdiction. If a data center in Singapore holds information on Indonesian nationals, which country’s laws should govern that data and that data center? What if the information is replicated to a data center in China? A coherent pan-Asian regulatory framework would help to alleviate this, but questions around privacy, security, and freedom of speech will likely persist.

Cloud uptake in Asia will see tremendous growth over the next few years. The ultimate heights of that growth and how quickly it is achieved will depend in large part on the degree to which the region as a whole enables it via development of physical and regulatory infrastructure.

Next time up, Europe’s cloud.

Don’t Fret the Cloud “Name Game” | Gaining Altitude in the Cloud

There’s a lot of noise in the industry today about whether or not infrastructure appliances, engineered systems, datacenter-in-a-box, and other similar solutions can be labeled “cloud.”

The basis for this debate is rooted in assertions that the public cloud model, or more aptly Amazon’s, is the only cloud model. There’s no doubt that Amazon is the poster child for the cloud industry and was an early presence as the cloud buzz made inroads; subsequent “definitions” of cloud have therefore been inspired by Amazon’s delivery model. Moreover, the idealistic pursuit of converting IT into a pure utility, as well as addressing overarching pain points of enterprise IT, is also driving some of the arguments. But does doing so restrict the benefits an enterprise can derive from cloud principles?

Private cloud providers make their business case based on security, lower total cost of ownership (TCO), manageability, tight integration, and other “public cloud-like” benefits. But while they do not always possess the scalability, payment options, and other aspects of public cloud services, is it fair to limit the cloud ecosystem to a particular definition and term other solutions “cloud washing”? Similarly, various SaaS providers that believe they are the “true cloud application providers” have defined criteria for a “true SaaS application,” expecting other cloud applications to satisfy their self-anointed criteria to qualify as SaaS. Does this add value to cloud discussions?

Unfortunately, the public cloud provider-driven “definition debates” are doing more harm than good. These debates revolve around the pay-per-use aspect of public cloud versus the minimum capacity commitment required in a private cloud; the virtually infinite capacity of public infrastructure cloud versus the requirement to buy “infrastructure boxes” that limits scalability and flexibility in a private cloud; the minimal capex of public infrastructure cloud versus expensive hardware procurement in a private cloud; and so on. In retaliation, private cloud providers have started poking holes in public cloud providers’ security, financial stability, commitment, quality of service delivery, and other seemingly relevant aspects. These assertions are equally futile.

The fact is, the name or definition assigned to a given cloud-type solution is moot. The real issue is whether the customer sees value in and gains benefit from a cloud offering. Does it improve IT management? Does it save money? Does it improve IT delivery? Does it help the business become more agile?

Failing to take this client-centric view and instead utilizing the prescriptive, self-created definitions of cloud services can significantly inhibit cloud uptake and the potential benefits from its usage. Just because a cloud solution does not satisfy an “industry definition” should not prevent an enterprise buyer from evaluating it, as long as it offers cloud-related benefits and serves the intended purpose.

Our discussions with a wide range of enterprises show a growing propensity to embrace the hybrid cloud model. And our recent research, in which we analyzed the cost of operations under various infrastructure set-ups (legacy, virtualized, private cloud, and hybrid cloud), found that the hybrid cloud model is definitely more cost effective and flexible than other cloud models.

But, a word of caution: buyers must differentiate between a siloed collection of different private and public cloud solutions and a true hybrid cloud. A true hybrid implies the coordinated orchestration of private and public clouds to manage workloads.
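As a sketch of what coordinated orchestration means in practice, consider a toy placement policy: one scheduler fills private capacity first and bursts the remainder to the public cloud, rather than two disconnected silos each managed separately. The classes, policy, and numbers here are invented for illustration, not drawn from any provider’s implementation:

```python
# Minimal sketch of hybrid-cloud workload placement (illustrative policy only):
# one scheduler decides, per workload, whether to run privately or burst out.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    units: int  # abstract capacity units required

def place(workloads: list[Workload], private_capacity: int) -> dict[str, str]:
    """Fill private capacity first (largest workloads first); burst the rest."""
    placement, used = {}, 0
    for w in sorted(workloads, key=lambda w: w.units, reverse=True):
        if used + w.units <= private_capacity:
            placement[w.name] = "private"
            used += w.units
        else:
            placement[w.name] = "public"
    return placement

demand = [Workload("erp", 50), Workload("web", 30), Workload("batch", 40)]
print(place(demand, private_capacity=80))
```

The point is that a single decision-maker sees both estates; in a siloed set-up, the same workloads would be assigned by two independent processes with no shared view of capacity.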

So, where should buyers begin? Move beyond the futile “name game” and evaluate serious private and public cloud offerings to create a hybrid cloud environment that can transform their IT organization.

Live from Bangalore – the NASSCOM IMS Summit, September 22 | Gaining Altitude in the Cloud

Hello everybody! I’m back, reporting from day two of the NASSCOM IMS Summit in Bangalore. Today’s conference focused on alternative models of cloud computing and what works best for whom.

First, Adam Spinsky, CMO, Amazon Web Services (AWS), told us his view of what’s happening out there in the cloudosphere. An interesting factoid to chew on: as of today, AWS is adding as much data center capacity every day as the entire Amazon company ran in its fifth year of operation, when it was a US$2.7 billion enterprise.

Even more compelling proof that the cloud revolution is really happening was Spinsky’s list of the types of workloads AWS supports: SAP, entire e-commerce portals that are the revenue engines of companies, and disaster recovery infrastructure, all hosted on the cloud. Fairly mission-critical stuff, rather than “ohh, it’s only email that’s going to the cloud,” you must admit.

Next up, Martin Bishop of Telstra spoke of the customer’s dilemma in choosing the right cloud model. This segued nicely into the panel discussion, “Trigger Points – Driving Traditional Data Center to the Private Cloud,” of which I was a part.

M.S. Rangaraj of Microland chaired the panel and set the context by talking about the key considerations of cloud implementation. According to Rangaraj, the key issues are orchestration and management, as the IT environment morphs into new levels of complexity with multiple providers delivering services across a multitude of devices.

I spoke of the business case for a hybrid cloud model. While private cloud is good, and current levels of public cloud pricing provide slightly better business value, a combination of the two enables clients to reduce the huge wastage of unused data center resources they now live with. Today, infrastructure is sized to peak capacity, which is utilized once in a blue moon. A dynamic hybrid model enables companies to downsize owned capacity to the average baseline. The associated savings in energy, personnel, and maintenance imply dramatic cost advantages over both pure public and pure private models.
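That cost argument can be put in rough numbers. All unit costs and utilization figures below are hypothetical, chosen only to illustrate the shape of the comparison between owning peak capacity and owning baseline capacity plus public cloud bursting:

```python
# Hypothetical comparison: own peak capacity outright vs. own the baseline
# and buy the peak difference from a public cloud only while it is needed.

peak_units = 100          # capacity required only "once in a blue moon"
baseline_units = 40       # average steady-state demand
burst_time_share = 0.05   # fraction of the year demand exceeds the baseline

private_cost_per_unit = 100.0  # yearly cost to own a unit: capex, energy, staff
public_cost_per_unit = 180.0   # pricier per unit-year, but paid only while used

pure_private = peak_units * private_cost_per_unit
hybrid = (baseline_units * private_cost_per_unit
          + (peak_units - baseline_units) * public_cost_per_unit * burst_time_share)

print(f"pure private: {pure_private:,.0f}  hybrid: {hybrid:,.0f}")
```

With these invented figures, the hybrid set-up costs less than half of sizing private capacity to the peak, even though the public units are priced at a premium; the structure of the trade-off, not the specific numbers, is the takeaway.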

Kothandaraman Karunagaran from CSC took up the thread and spoke of the role of service providers in this new paradigm. While outsourcing may not “die” as a result of the cloud movement, it’s jolly well going to be transformed. Service providers will need to spend far more time in managing, planning, and analyzing usage and consumption data, and less time on monitoring and maintenance. In other words, service providers’ roles will evolve from reactive to proactive management.

Some of my key takeaways from the conference include:

  • Everybody agrees that there is no silver bullet model, meaning that there are no clear winners in a cloud environment, and the hybrid model will keep getting traction as the world becomes increasingly, well, hybrid.
  • Until not long ago, we spoke of the need to simplify IT. Well, the only part of IT that’s going to get simplified is the consumption bit. If you are a CIO reading this, we’ve got bad news for you. Management of IT is going to get more, not less, complicated. Multiple service providers, networks and devices, reduced cycle time, and self-provisioning means that management just got a whole lot tougher.
  • Service providers need to rapidly engage with this new reality and figure out how their business models can adapt to it. The unit of value is no longer the FTE. It’s what the FTE achieves for the client, or even more complicated, what the consumer actually ends up using. We live in interesting times, and they will only become more interesting as time goes on.

That’s it from my end. I enjoyed the conference, look forward to more illuminating discussions next year, and, hopefully, to seeing you there!

If you weren’t able to attend this year’s conference – or even if you were – you can download all speaker presentations at: http://www.nasscom.in/nasscom/templates/flagshipEvents.aspx?id=61241


Changes are Afoot in the Canadian Infrastructure Outsourcing Marketplace | Sherpas in Blue Shirts

HP’s 2008 acquisition of EDS reduced an already small number of large infrastructure outsourcing (IO) service providers with a significant footprint in Canada. For a time, that left Canadian buyers uncomfortable entering into agreements with IO providers with lesser presence in Canada with a smaller pool to choose from, but the landscape is quickly changing.

First, the HP-EDS acquisition was quickly followed in 2009 by those of ACS by Xerox and Perot by Dell. Both Xerox and Dell have capabilities and resources on the ground in Canada they can utilize to increase ACS’ and Perot’s presence in the country’s IO space. Additionally, Canada appears to have caught the attention of offshore providers over the last few years, with many of the Tier 1 Indian providers making headway and developing Canadian practices with strong delivery capabilities. Also increasing competition in the IO space are providers that are leveraging emerging technologies to replace traditional IO deals.

What does this mean to the service provider community? The big Tier 1 multinational firms with strong footholds in Canada need to beware. Competition is coming both from expected places, such as major players that in the past have not focused their attention on Canada, and from less expected up-and-comers that are gaining ground with cloud offerings, for example in development and testing environments. The Indian players, continuing their push into the Canadian marketplace, must become better focused in order to compete effectively with the large multinationals that already have a strong track record, strong relationships, and a greater presence.

What does this mean to buyers of IO services? Many more options. Take, for example, midrange services. Buyers can use a large Tier 1 ITO provider and go the soup-to-nuts route. They can split the physical aspects of the data centre and servers from the services and leverage an offshore provider for remote infrastructure management outsourcing. Or they can use a cloud solution offered by anyone from a large Tier 1 provider to a niche vendor. As the market has matured and IO services buyers have gained experience, the risk of leveraging a new set of providers and emerging technologies has decreased, giving Canadian buyers still more options.

This increasingly competitive environment challenges service providers to clearly articulate the value that their solution, their proposal, and their company bring to their clients. But buyers aren’t off the hook. They must ensure that their sourcing process allows for the consideration of providers with very different capabilities and value propositions than those to which they have become accustomed.

The Consumerization of IT and Business Processes: Why the Shift to End-to-End Processing Puts Power in the Hands of the User | Gaining Altitude in the Cloud

Service Delivery Automation (SDA) encompasses cognitive computing as well as robotic process automation (RPA). Software providers that offer SDA come to market with an enterprise licensing structure that essentially requires the customer to license a number of agents for a specific length of time. But in using this licensing model, providers unintentionally constrain adoption and open the door for competitors to differentiate. Meanwhile, as the average Joe becomes increasingly accustomed to downloading an app to a smartphone in seconds and receiving immediate gratification from powerful, easy-to-use technology, uncomfortable questions arise for corporate IT: “Why can’t you do this? If it only cost me $5 to get this from Apple, why does it cost me millions to get a much worse product months, if not years, later from you?”

Behind this new sense of entitlement is the growing reality that these new apps offer dramatically increased levels of automation, turning activities that were previously the province of experts into self-service and giving the user far greater control. Even more profound is the complete reorientation of perspective: technology is developed and deployed around the consumer’s ease-of-use rallying cry, increasingly far from delivery organizations’ focus on efficiency and corporate control.

These same secular forces that are creating a profound change in IT are also beginning to drive change in shared services organizations and how they address business processes. Think end-to-end processing for talent management and learning in HR, and purchase-to-pay and record-to-report in F&A. As with their IT counterparts, these processes are increasingly being automated and shifting toward a self-service delivery structure. This not only reduces costs but also places increased power in the hands of the user community.

Now those internal groups that deliver IT and business processes are facing a harsh reality. They are no longer dominated by stovepipe delivery organizations designed to capture the efficiency of specialization, centralization and labor arbitrage. Rather they are quickly turning into flatter organizations that are delivery-oriented around a user’s view of the process, with emphasis on the transparency of information flow, and process designs that prioritize ease of use over traditional corporate command and control.

As these changes rework the business process landscape, they portend coming shifts in how third parties will be utilized. We are likely to see a reversal of the current trend toward increased provider control of processes, with firms increasingly designing solutions that place control within the firm and, wherever possible, in the user community, thereby also reversing the current provider push for outcome-based pricing. And increased levels of automation may reduce reliance on labor arbitrage.

All of this is best summed up by a client who recently told me, “I am no longer looking for a delivery vendor that provides high quality silent running. I am now looking for a transformational partner that will help me implement my new vision and then play a supporting role.”

Time to Call the Real Experts – What We Can Learn from Ants about Cloud Governance | Gaining Altitude in the Cloud

IT rarely loves end users, and for good reason…they constantly invent new problems. They customize their laptops, creating unmanageable software Frankensteins; they bother IT with all kinds of new whims; they want to use all types of new mobile devices, each scarier than the last. But the biggest reason of all is that end users always think they know better than IT what tools they need to conduct their business.

But IT can fight these battles with a mighty tool – centralized governance. The less control IT gives end users, the better and more stable the system architecture will be, and centralized IT control always produces more efficient results. Right? Actually, it’s no longer true. While centralized IT governance works well in a traditional IT ecosystem, it quickly fails in the new generation IT environment. The powerful promise of cloud computing is that any user can get easy access to a diverse set of IT resources – not just those available from the internal IT group – for the precise period of time they are needed, and shut them down once the project is completed. But all this requires a new type of governance – decentralized – which allows every user a choice of technology tools and operational flexibility, while still enforcing integrity and consistency of the IT architecture.

Is this even possible? Is there any precedent that shows this can work? There sure is, but not exactly where we would expect to look for it.

Introducing Governance, the Ants Way

Ants solve very complex problems every day. A few of them include:

  • Conducting comprehensive project management of building large anthills capable of accommodating the whole colony
  • Running sophisticated logistical optimization exercises of finding food for the whole colony day after day
  • Administering a complicated supply chain of anthill maintenance and repair, food storage, and perimeter security amid major environmental changes (e.g., rain) and constant competition (e.g., other ant colonies)
  • Managing HR – or rather AR (Ant Resources) – of ~40,000 ants

What’s most striking is that they do all this under conditions of completely decentralized governance!


In fact, every ant has total operational flexibility to select its own tools, make optimization decisions, and manage its own work. The only guidelines they follow are their direct job responsibilities (e.g., worker, soldier, queen) and the overall goal of the colony.

This type of decentralized governance is what IT today must adopt and embrace to successfully manage cloud-based IT delivery. Business users will need to make IT decisions every day, and there is no way they can run every one of these decisions through IT for architectural approval, procurement for buying authorization, finance for budgeting, etc.

New flexible guidelines must be designed to support end users’ IT decision making. IT will still need to maintain the overall architecture, but it should not, for example, dictate to the end user exactly which server build and OS stack needs to run the user’s email. Procurement will still negotiate deals with cloud providers, but it should not micromanage every end user’s buying decision as long as the decisions comply with the overall goals. And finance will still set the budgets for business units and business users, but it should step away and let users select their own tools within the budget guidelines. Indeed, the enterprise will operate just like a colony of humans, with every worker optimizing his or her IT decisions within the overall company guidelines. Yes, to attain the full benefits of flexibility and agility of the cloud, enterprises need to learn to govern it the ants’ way.
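To make this concrete, here is a minimal sketch of what such a guardrail might look like in code. All of the names here – the approved-service catalog, the budget figures, the request fields – are hypothetical illustrations, not any real provider’s API: the idea is simply that central functions publish the guidelines once, and any request that complies is approved automatically, without a trip through IT, procurement, or finance.

```python
from dataclasses import dataclass

# Catalog of services procurement has already negotiated (illustrative names).
APPROVED_SERVICES = {"compute-small", "compute-medium", "object-storage"}

@dataclass
class Request:
    """A user's self-service provisioning request (hypothetical fields)."""
    user: str
    service: str
    monthly_cost: float

def within_guidelines(req: Request, remaining_budget: float) -> bool:
    """Auto-approve if the service is in the negotiated catalog and the
    cost fits the user's remaining budget; anything else would be routed
    to IT/procurement for manual review."""
    return req.service in APPROVED_SERVICES and req.monthly_cost <= remaining_budget

# A user self-provisions within guidelines -- no central sign-off needed.
req = Request(user="analyst-1", service="compute-small", monthly_cost=120.0)
print(within_guidelines(req, remaining_budget=500.0))
```

The point of the sketch is the inversion of control: the check encodes the colony-level goal (stay in the catalog, stay in budget), while every other decision – which tool, when, for how long – stays with the individual user.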

This approach is certainly not without its challenges. While ant colonies have no issues with guidelines enforcement as ants are “compliance hard wired,” we certainly can’t say the same about human IT end users. Hence, IT’s issue will be how to enforce policies while still enabling users to enjoy the benefits of the cloud model. This is a non-trivial challenge for which there are no easy answers…yet.
