
Sneak “PEAK” into the Banking Applications Outsourcing Service Provider Landscape | Sherpas in Blue Shirts

In our observations of the evolution of the service provider landscape before and after the recession, the single most important factor for creating differentiation in the IT applications outsourcing (AO) market has been significant strengthening of vertical/domain expertise. Recognizing the need for “vertical-specificity” in the AO market, earlier this year we launched an annual research initiative focused on assessing market trends and service provider capabilities for AO in the banking, financial services, and insurance (BFSI) vertical.

One of the first results that emerged from this research initiative was the Everest Group PEAK Matrix for large banking AO contracts. In a research study released earlier this week, we analyzed the landscape of AO service providers specific to the banking sub-vertical. In a world in which everyone and their uncle delivers AO services to financial services clients, this report examines 22 service providers and establishes the Leaders, Major Contenders, and Emerging Players in the banking AO market.

PEAK Matrix

As we congratulate the five Leaders (Accenture, Cognizant, IBM, Infosys, and TCS), and acknowledge the capabilities and achievements of the Major Contenders and Emerging Players, we also want to highlight three interrelated market themes that suggest the 2012 PEAK Matrix for large banking AO relationships may look significantly different:

Buyer-driven portfolio consolidation: Most banks currently use a complex collection of service providers for their applications portfolio. Decentralized decision-making, global expansion, and large-scale M&A have introduced further complexity into their portfolios. Rationalizing the portfolio creates a less complex sourcing environment, enables strategic partnerships with service providers, and delivers meaningful financial benefit (our analysis indicates that the financial benefits of utilizing fewer service providers can be as much as 22-28 percent on an annualized basis). As more buyers jump on the portfolio consolidation bandwagon, the larger, more established service providers are winning at the expense of their smaller competitors.

The Matthew effect: Buyer-driven portfolio consolidation is giving rise to the Matthew effect, which (in sociology) states that “the rich get richer and the poor get poorer.” In the context of the banking AO landscape, the Matthew effect translates to “the big get bigger.” Banking AO buyers are placing disproportionate emphasis on domain expertise as a key decision-making criterion for selecting their service providers. Scale influences a company’s appetite to invest in developing vertical- and micro-vertical-specific domain expertise, which in turn determines market success, which ultimately impacts growth and scale. This self-reinforcing cycle of scale fueling scale is increasing polarization in the marketplace, and could further widen the gap between the Leaders and the Major Contenders and Emerging Players.

Accelerating M&A: In response to the Matthew effect, as the Major Contenders and Emerging Players seek to achieve the next level of growth, mergers, acquisitions, and alliances will accelerate. M&A will play a significant role for service providers looking to achieve quantum leaps in capability and performance. This M&A activity is likely to significantly alter the landscape in the coming months, creating a new set of Leaders and Major Contenders. In fact, since we finalized the Banking PEAK, Emerging Player Ness Technologies has already changed ownership.

Given the above three market forces, how much will the landscape of service providers you bank on (pun intended) change in the months to come? Only time and we can tell. Keep watching this space for more!


Are You Ready to Renew Your Vows With Your Provider? | Sherpas in Blue Shirts

The unfortunately all too frequent seven-year itch – “the spice is gone…should we stay together?” – doesn’t happen just in personal relationships; it also happens in outsourcing relationships. Past the mid-point of a 10-year outsourcing relationship (or whatever the length of the agreement), buyers and service providers often struggle to identify how to maintain the health and happiness of their contractual relationship. Buyers are interested in increasing the level of commitment from the provider, in the form of increased productivity or continuous improvement initiatives. The provider, however, is often challenged to deliver service improvements and reduce the cost of service delivery on narrowing profit margins. With years remaining in the outsourcing relationship, what modifications are required to ensure the relationship benefits both parties?

Organizations should review their changing landscape, organization, and business requirements to identify their long-term strategic objectives, so they can decide on the model that is most appropriate for delivering their services and the supporting sourcing option(s) to help achieve their goals. For example, the fact that an organization is currently in an outsourced relationship does not require that it stay in one. If the organization has the internal capabilities, access to the necessary resources, and time to implement its strategic initiatives, a sourcing relationship may not be strategically (or, potentially, financially) beneficial. Conversely, an organization that lacks in-house capabilities and is looking for greater flexibility and scalability, access to new skills and resources that are not locally available, or a way to capitalize on new technological trends may consider partnering with one or more suppliers able to support its objectives.

An organization that chooses to engage in a relationship with a third-party service provider should ensure alignment for the long term: strategic objectives (i.e., business and organizational objectives), cultural fit (i.e., mission and values), and solution requirements (i.e., feasibility and adaptability of the service delivery model to meet the organization’s needs). An understanding of all three factors is imperative in determining the future strategy of the functional organization and shaping the future direction of the current outsourcing relationship.

What is the right change for your relationship?

There are several options you can consider:

1. Don’t rock the boat (i.e., Renew): 

  • After an honest look at your relationship, you realize that the “same old, same old” is actually working for you
  • This is akin to renewing the sourcing relationship where you and your incumbent provider agree to continue with the existing contract with minimal changes  

2. Face lift (i.e., Renegotiate): 

  • Following discussions on trade-offs and compromises, you and your partner decide that some tweaking to your old routine is required in order for your relationship to continue
  • Similarly, you and your incumbent provider agree to modify one or a number of limited elements of the outsourcing contract, e.g., price and service levels

3. Overhaul (i.e., Restructure):

  • Small changes are not going to cut it. In order to make this relationship work going forward there must be some fundamental changes
  • In a strategic sourcing relationship, you may realize that while you’ve had a provider that has offered value over time and will continue to do so, it must be under a new set of circumstances. In this case, you and your provider can undergo a strategy exercise to restructure the services you’re receiving to ensure that they align with your long-term objectives

4. Out with the old and in with the new (i.e., Re-compete):

  • You’ve talked it through with your partner and realize the relationship is not going anywhere. You need someone more supportive and responsive to your needs, and decide it’s time to see other people
  • The decision to re-compete your delivered services is driven not only by cost, but also by your organization’s long-term strategy. If you assess that your current provider is not capable of supporting your cost and strategic goals, it’s time to start seeing other service providers

5. It’s not you, it’s me (i.e., Repatriate):

  • You’ve assessed your relationship, and discovered that you are happiest being on your own.
  • Over time, as your organization evolves, you may find yourself in a position where your long-term goals are best met by bringing services back in-house. This can be the result of M&A activities, a fundamental shift in business strategies, etc.

All kidding aside, buyers must go through a complex exercise when approaching the end of their strategic sourcing relationship. The initial step is to understand their organization’s 10-year strategies and objectives, then begin assessing the current relationship for fit. We typically find there is no single correct answer; instead, the resulting engagement strategy is a hybrid of the above options. As the marketplace embraces new technologies, the multi-vendor answer is becoming increasingly common. Unlike in personal relationships, it may be beneficial for an organization to have more than one sourcing partner to maintain competitive tension and to optimize the fit with the buyer’s strategies. Organizations can choose their flavor of service provider – Tier 1, niche, or offshore – depending on their objectives and requirements. However, they need to balance the complexity of managing a multi-vendor environment against the benefits provided by each vendor. We strongly encourage full disclosure and consistent communication in a multi-partner model to ensure smooth day-to-day operations and successful service delivery from all providers. After all, a little competition never hurt anyone.

Cloud Computing and Finance and Accounting Operations | Gaining Altitude in the Cloud

I recently spent time surveying the cloud computing landscape, and was struck by the minimal information focused on back-office services; nearly everything I read was generic and/or IT-centric. So I decided to look at cloud-based service offerings for Finance and Accounting (F&A).

Given the conservative nature and outlook of accounting professionals, you would probably think that F&A would be a laggard in the adoption of cloud computing models, right? Wrong! In fact, the American Institute of Certified Public Accountants promotes and recommends F&A Software-as-a-Service (SaaS) solutions to both accounting firms and their clients in industry. A number of firms – including NetSuite, Intacct, and Adaptive Planning – are marketing solutions for F&A that they developed specifically to run in the cloud, as opposed to retrofitting pre-existing software applications for deployment to a cloud infrastructure. In addition, SAP and Oracle are developing strategies and service offerings for cloud-based ERP solutions. Available F&A applications for the cloud are a mixture of both point solutions and ERP-like suites that cover all transactional activities, as well as other more judgment-based activities such as management reporting and budgeting & forecasting. Auditing and treasury activities are probably the least served areas right now.

The adoption of cloud computing for F&A has been limited primarily to small- and medium-sized businesses. This is counter to an industry trend in which large-cap companies have been the most likely to adopt cloud computing models. Why is this? As noted above, many of the F&A applications have been developed specifically for the cloud. While this makes them easy to deploy, in many instances they still have limited functionality. There is also the ongoing concern regarding the security of data stored in a public cloud, but this issue is slowly losing ground as an inhibitor to adoption. The single largest barrier to the adoption of cloud computing models by large-cap firms is the significant investment many have made in the deployment and customization of ERP Finance and Accounting platforms.

Going forward, I predict these barriers will gradually recede, especially as ERP vendors develop offerings to move large- and mid-sized firms toward the cloud. In addition, as the functionality of the current generation of applications continues to improve, it becomes increasingly likely that larger firms will leverage them as a low-cost solution to replace a legacy application or meet a need that is currently unmet by their ERP system. We need to remember that some of the most significant SaaS vendors in the marketplace are focused on F&A point solutions. Prime examples are ADP for payroll and Concur for T&E. F&A cloud solutions may also be attractive in a situation in which a large firm is in the process of starting up a new business segment or integrating a smaller acquisition.

Is Cloud-Based F&A Right for You?

When deploying F&A applications in the cloud, you need to consider the appropriate industry dynamics and requirements as well as your current inventory of application workload types. Industry dynamics such as large volumes of M&A or industry-wide margin pressures may make the deployment of cloud solutions more attractive. On the other hand, industry requirements and regulatory constraints may dictate a less aggressive approach. In any event, you will need to evaluate both the applications that are good candidates for deployment to the cloud, and which of the three cloud models – public, private, or hybrid – makes the most sense for a particular application workload type. You would probably only move mission-critical and/or proprietary systems with significant data privacy requirements to a private cloud environment, while non-proprietary back-office applications may be more easily deployed into a public cloud environment.
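
To make that screening concrete, the sketch below shows the kind of simple decision logic a team might start with; the workload attributes, example workloads, and thresholds are my own illustrative assumptions rather than a formal framework:

  # Illustrative only: toy rules for mapping an F&A workload to a cloud model.
  # Attribute names and thresholds are assumptions for discussion, not a formal framework.
  def suggest_cloud_model(workload):
      """Return 'private', 'hybrid', or 'public' for a workload described as a dict."""
      if workload["mission_critical"] or workload["data_privacy"] == "high":
          return "private"   # proprietary or sensitive systems stay in a private environment
      if workload["demand_variability"] == "high":
          return "hybrid"    # burst peaks to public capacity, keep the base dedicated
      return "public"        # commodity back-office workloads

  workloads = [
      {"name": "Statutory consolidation and reporting", "mission_critical": True,
       "data_privacy": "high", "demand_variability": "low"},
      {"name": "T&E processing", "mission_critical": False,
       "data_privacy": "medium", "demand_variability": "low"},
      {"name": "Quarter-end budgeting and forecasting", "mission_critical": False,
       "data_privacy": "medium", "demand_variability": "high"},
  ]

  for w in workloads:
      print(f'{w["name"]}: {suggest_cloud_model(w)}')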

There are other factors to consider when evaluating how to potentially leverage F&A cloud solutions, e.g., what is your company’s approach to its global services model for back-office functions like F&A, HR and Procurement? While many firms have made significant investments in time and resources to deploy shared services and outsourcing solutions for back-office functions, few have been able to fully achieve the benefits associated with standardization and transformation leading to an end-to-end global delivery model. Cloud services can be part of the solution to help a company address the technology and process barriers in a more timely and cost-effective manner. For example, a company might look at deploying cloud-based reporting and control tools for better oversight across all divisions and regions.

CFO Imperatives

CFOs and other financial leaders in large firms have an obligation to ensure that their function does not lag other areas of the enterprise in adopting cloud computing models. They must understand the hype and the reality of cloud computing across functional areas and within their industry. This requires that they:

  • Recognize cloud-related benefits like faster deployment times and reduced CAPEX and OPEX
  • Weigh those benefits against the challenges inherent in cloud computing, such as data security and system interoperability
  • Target adoption to high value areas by developing a robust set of evaluation criteria and insisting on a broad assessment of opportunities
  • Define a global strategy roadmap, and map opportunities to the criteria and strategy

Hopefully my thoughts have shed some light on the current landscape for cloud-based F&A services, and will generate some further discussion.



Building a Robust Global Services Management Organization | Sherpas in Blue Shirts

Today’s large, global companies face an ever-growing need for talented resources to manage their increasingly complex global services delivery organizations. The requirements extend far beyond traditional models, which often found firms simply assigning the “folks who have performed the service for the past 20 years” to various leadership roles. While that was never a best practice, in today’s world it can be catastrophic.

Global services delivery leaders of today must navigate a complicated mix of internal and external delivery engines while forecasting supply versus demand and keeping clients happy across diverse regions and cultures. To meet these demands, leading global services organizations are increasingly adopting highly integrated cross-business leadership teams sponsored at the most senior levels of the corporation.

The graphic below illustrates this concept in a hypothetical global organization.

Global services management organization

Organizations seeking to build a robust team to take their global services delivery into the next generation should consider the following best practices, as observed by Everest Group in working with many of the world’s leading organizations:

  • Global governance structures should be aligned with corporate strategy and chaired by a senior leader at the appropriate level in the organization
  • The governance structure should include an Executive Steering Committee and have formal dedicated roles for key global functions
  • The structure should have responsibility for global strategy, planning, design authority, reporting, delivery, talent development, and innovation
  • The structure should have global process leaders who are responsible for the design, implementation, and compliance of standardized processes across the enterprise, with appropriate representation and input from all lines of business
  • Increasingly, business services are maturing from a legacy functional approach (Finance, Procurement, Tax, Human Resources, etc.) to an end-to-end approach (e.g., Purchase-to-Pay, Record-to-Report-to-File, Order-to-Cash, Hire-to-Retire, etc.).  Appropriate transition and transformation teams must be deployed to manage this migration
  • The structure should also ensure the effective management and integration of all delivery centers

Above all, Everest Group’s experience suggests that companies must treat global services management as a vital business unit with appropriate priority given to recruiting and retaining the talent necessary to execute with excellence and deliver the next generation of value from global services.

The Evolution to Next Generation Operating Models in the Energy Industry | Sherpas in Blue Shirts

As pioneers of global sourcing, leading organizations in the energy industry such as ExxonMobil, Shell, Chevron, and BP have been able to extract significant value from offshoring while maintaining a manageable risk profile. However, the growing complexity of their global sourcing portfolios in terms of internal and external supply options (see Table 1 below), service delivery locations, governance models, and systems/tools has led to a set of challenging issues around design and ongoing optimization.

Comparison of sourcing models for leading energy companies

From a design standpoint, energy companies are rethinking their operating models to address a wide range of issues and concerns including:

The next wave of scope expansion opportunities

  • How can I systematically identify new sourcing areas balancing value and risk?
  • How can I predict and manage short- and long-term demand?

Building an integrated supply model and global delivery footprint

  • How can I maintain an optimal, complementary mix of internal and outsourced supply alternatives reflecting requirements for capacity, competencies, and cost?
  • How can I build a consistent framework to place incremental scope into the right supply model?
  • How can I create an integrated, flexible global delivery model that meets current and future demand while minimizing exposure to risks?

Energy companies also face challenges in building a holistic management approach to enable ongoing value capture and expansion, such as:

How to build a holistic, enterprise-level governance model

  • How can I appropriately assess performance across outsourcing service providers, captives, locations, and service lines?
  • What is the optimal governance approach to enable best practice sharing, risk mitigation, and economies of scale?

How to improve end-to-end process effectiveness and control

  • What are the value of and constraints on standardizing and simplifying end-to-end processes across my enterprise?
  • What are the right effectiveness metrics and process efficiency measures?
  • How can I enhance process control to minimize operational risk?

Many of the top energy organizations are experimenting with next generation operating models to address these issues. For example, a global energy major that utilizes both outsourcing service providers and internal shared services recently embarked on a multi-year journey to integrate services design, delivery, and governance across business units, functions, and geographies. The objectives are to enhance end-to-end process effectiveness and control, reduce complexity and risk at the enterprise-level, and improve service performance and cost efficiency.

Our experience in the energy industry clearly indicates that unlocking the next wave of value requires more deliberate design of an integrated global delivery model, a consistent framework to better align supply with demand, and a holistic approach to governing and optimizing services. In addition, corporate culture impacts cannot be overlooked. In global energy companies’ large, complex environments full of competing interests and priorities, strong executive leadership and commitment are vital to success.

mHealth Providers Learning It’s All About Competitive Cooperation | Sherpas in Blue Shirts

mHealth (also written as m-health or mobile health) is a term used for the practice of medical and public health supported by mobile devices. It is fast becoming a top priority for large, complex healthcare organizations seeking to make electronic records, patient information, etc. accessible to a wide range of constituencies via the device or appliance of choice. And its importance is not to be underestimated, as it offers the mobility and flexibility necessary for the user to react instantaneously to the provider, thereby facilitating wellness and avoidance of critical outcomes that require intense and expensive treatments.

Many quality applications already exist that create opportunities for physicians and clinicians in their quest to provide efficient quality healthcare. These applications are available from any number of sources and on a variety of platforms, and are designed to keep people healthy, manage existing diseases, increase health literacy, manage medical information, and ensure medical compliance.

Yet despite the growing importance of mHealth, healthcare payer, provider, and pharmaceutical service providers are finding themselves increasingly challenged to find mHealth platforms that can deliver 99.9999 percent global availability of critical data, as well as provide different levels of information access to physicians, clinicians, pharmacists, patients, plan members, and others.
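
To put that availability target in perspective, a quick back-of-the-envelope calculation shows how little downtime “six nines” actually allows:

  # Allowable annual downtime for a given availability target (simple arithmetic).
  SECONDS_PER_YEAR = 365 * 24 * 3600

  for availability in (0.999, 0.9999, 0.99999, 0.999999):
      downtime_seconds = SECONDS_PER_YEAR * (1 - availability)
      print(f"{availability * 100:.4f}% availability -> about {downtime_seconds:,.0f} seconds "
            f"({downtime_seconds / 3600:.2f} hours) of downtime per year")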

Competitive Cooperation Is the Key

There are several types of mHealth providers. One is mobile carriers that offer voice and data-driven products built around a device and its set of applications (for example, Android, BlackBerry, and iPhone). Hardware providers such as Dell, HP, and Apple also offer applications based on proprietary operating systems. And providers of integration services have created services based on these separate and distinct platforms (for example, Macintosh versus Microsoft).

While these different types of mHealth providers have traditionally competed separately for new business opportunities, it is becoming abundantly clear that successfully providing mHealth services, with their diversity of needs across traditional market boundaries, will require a cooperative effort among these provider types. Healthcare organizations have each embraced their major carriers of choice, and have invested heavily in hardware devices and appliances including iPad, BlackBerry, and Android devices. And most have a healthy mix of these given the individual needs of thousands of physicians and clinicians located across national organizations within diverse settings such as clinics, hospitals, billing offices, and home health.

Indeed, the provision of mHealth services is going to require unique relationships between provider organizations to address the entire spectrum from research and development, implementation, and ongoing support and maintenance to the provision of new technologies as the marketplace and regulatory compliance demand. There will not be a “one size fits all” solution, but rather a requirement for unique cooperation and partnerships among the client and multiple service and application providers.

Mastering the art of service provider cooperation to cover the continuum of care needs across an ever-changing and somewhat controversial market space will be a formidable challenge, but it is an absolute must if mHealth is to deliver on its promise of a robust set of tools available anytime and anywhere.

Independent Software Testing Services: What’s the Real Deal? | Sherpas in Blue Shirts

Hardly a day has passed in the past couple of quarters without a service provider boasting of its testing services amid spurts of organizational restructuring, analyst meetings, and earnings calls. Industry news portals carry articles on software testing almost every other day, and the revenue from this service line is growing fast, very fast. At the same time, industry sources tell us of amazingly low rates quoted by providers for testing services, even those delivered onsite.

With this – coupled with multiple queries from a variety of providers, buyers, and investors in the past six months, and our own questions on the seeming price play versus potential, leverageable innovations – it was clear the time had come for us to gain deeper insights into the current and future state of the independent testing services industry.

Thus, we recently launched a research initiative to determine the real deal in testing services.

Key areas covered in the report include:

  • Innovations on the horizon (such as cloud, crowd, IP, and engagement models)
  • Drivers and inhibitors of outsourcing the testing process (from both buyer and provider perspectives)
  • Adoption trends across the buyer community (by size, geography, and industry)
  • Contribution of various industries, geographies, resourcing structures, etc., to providers’ testing business
  • Prevalent pricing and engagement models, typical deal size, duration, etc.
  • Growth drivers, challenges in IP monetization, and what lies ahead for testing services

It was interesting to learn that some major outsourcing industries are lagging when it comes to outsourcing testing services, how manual functional testing continues to dominate testing offerings, how T&M and fixed pricing are faring, and what’s happening with testing engagement models. Further, while on the surface testing continues to be a “hire manual functional testers in India” strategy for most providers, dig deeper and you can see the flux in automation, IP creation, cloud, crowd, and various other innovations.

While discussing testing with large and small service providers, we could not help but see the value of innovation – yes, innovation, the concept that has been lip-serviced to death in IT services, but in testing there is more to it. We expected the providers to discuss the new and cool things they are doing in testing services, but such a strong focus on innovation took us a little by surprise. Therefore, we based the fundamental premise of our report on innovation and the path ahead for testing services.

Code Red – Stat! | Sherpas in Blue Shirts

From time to time we get to witness the death of a market. This is seldom pretty, as it brings pain and denial to many. In extremely rare cases an enterprise arises from the ashes of a market implosion, reincarnated with new products and services, a renewed vision, and bright growth prospects. But most often, a string of failures is apparent: some go down spectacularly as they miss the warning signs and skid off the edge of a cliff; some suffer a slow and painful end, dragging along those who dedicated much to the firm but could not overcome denial of the changes needed as they dream of the good old days; and some commit suicide, refusing to grapple with fundamental truths as they meet their demise.

For those of you who have an eye for the macabre and are interested in witnessing such a spectacle, we suggest you pay close attention to the legacy IT infrastructure outsourcing (IO) market over the next few years. As illustrated in the chart below, the legacy IO market has been migrating through a maturity cycle, moving from slow growth to flat or stagnant growth and now has large segments that appear to be contracting.

Infrastructure outsourcing market size and growth for traditional IO players

The IO market is primarily built on large contracts, often over $100,000,000 in size, with durations of three to 10 years. Many of these contracts include substantial capital for assets such as servers, data centers, networks, and capitalized transformational costs. These contracts are notoriously inflexible, driven by a combination of factors including the need to predefine service level agreements (SLAs) and scope over a long period of time, pricing that has to anticipate changes in volume and technology, and the substantial capital cost that must be retired over the life of the contract. As you can see from the chart below, these contracts delay the profits to the service provider, and only deliver modest profitability late in the contract term.

Rate of return for a typical traditional IT outsourcing contract
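
To make that profit profile concrete, here is a simple illustrative model of cumulative profitability over a 10-year IO contract; the revenue, cost, and capital figures are invented for illustration and are not drawn from the chart:

  # Illustrative cumulative-profit profile for a 10-year IO contract with
  # heavy upfront capital; all figures below are invented for illustration.
  annual_revenue = 100.0                       # flat contract revenue, $M per year
  annual_delivery_cost = 80.0                  # steady-state delivery cost, $M per year
  upfront_capital = 120.0                      # assets plus capitalized transformation, $M
  depreciation = upfront_capital / 10          # straight-line over the 10-year term
  transition_costs = [25.0, 10.0] + [0.0] * 8  # extra cost concentrated in the early years

  cumulative = 0.0
  for year, extra in enumerate(transition_costs, start=1):
      profit = annual_revenue - annual_delivery_cost - depreciation - extra
      cumulative += profit
      print(f"Year {year:2d}: profit {profit:6.1f}  cumulative {cumulative:7.1f}  ($M)")

In this toy profile the cumulative position stays negative for roughly the first half of the term, which is exactly why early termination or midterm renegotiation leaves the provider holding stranded costs.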

The combination of unrealized earnings and undepreciated assets has the potential to create substantial stranded costs if the contracts are terminated early or significantly renegotiated midterm. These stranded costs have been the bane of the industry, creating a steady stream of blow-up deals – some of which further suppress earnings in the sector.

Sometimes a decision to transfer assets to a service provider was driven by the artificial increase in return on capital the buyer would show after assets moved from its books to those of a provider. Nonetheless, even this benefit eventually disappeared due to changes in how assets are accounted for in an outsourcing transaction (see Everest Group’s research report “Show Me The Assets! The Changing Role of Asset Ownership in Infrastructure Outsourcing”).

The patient starts to look anemic

Over the last few years, customers have been increasingly turning away from these long-term IO transactions. Most current market activity involves only those already committed to these contracts – and even then they renew their arrangements only because of the substantial difficulties associated with bringing processing back in-house once the talent and data centers have been transferred to a third party. In many cases, these organizations have opted to descope their contracts and/or actively seek alternative delivery vehicles. To this point, the industry has been able to cope with these negative secular trends through consolidation, cost cutting, and restructuring. One need look no further than the acquisition of EDS by HP, and the ongoing struggles at CSC, to confirm this truth.

Aggressive new organisms attack the weakened prey

The growth of remote infrastructure management outsourcing (RIMO), co-location services and, most recently, cloud computing provides the market with less expensive and far more flexible alternatives. When combined, these transformational offerings present the IO market with a compelling rationale and a clear migration path away from the legacy IO model. Everest Group’s analysis of the economic justification for wide-scale industry adoption points to significant acceleration of existing IO customers’ movement away from their traditional models by breaking contracts, non-renewal, or substantial descoping. We have been tracking the dynamics of this departure from traditional IO for some time. See our research on this and other IT-related topics.

A terminal outlook?

The picture we’ve painted above depicts a mortally ill market whose heart has failed but is being kept on life support. Yet massive organ failure is imminent, and will set off a cascading sequence of events. Consider a scenario that might play out as follows:

  • Industry-wide awareness of compelling next generation economics combined with frustration with IO inflexibility drive an accelerated movement away from traditional IO.
  • Enterprise customers recognize the rapid pace of change and refuse to lock in to longer-term IO arrangements due to fears that such commitments will enable their competitors to race by them as they adopt the flexible, low cost next generation solutions.
  • Service providers with a large book of IO business are left with substantial stranded cost and massive losses in top-line revenue. While they may have the technical prowess to deliver next generation offerings, a strategy of cannibalizing the legacy is unattractive as replacement revenue is far less than the original legacy business model.
  • This loss of revenue, combined with the hit to profits as the providers absorb stranded costs and severance due to layoffs, overwhelms attempts to build positive momentum in participating in the next generation of RIMO and cloud services.
  • Leaders in the alternative solutions far outpace the incumbents who try to compete, but any growth just isn’t large enough to offset the legacy IO runoff.

The final breath is snuffed out as legacy outsourcers find it increasingly difficult to cross-sell other services, with frustration mounting in the IO customer base over the actions taken to safeguard stranded investments and prolong the inevitable.

Stat

While the timing element of the prognosis is still a bit uncertain, the wolves are at the door, the doctor has left the room, and the hearse is on its way.

Economic Forecast Calls for More Clouds | Gaining Altitude in the Cloud

Have you ever stopped to think why cloud computing is at the center of any IT-related discussion? In our conversations with clients, from the boardroom to the line manager, cloud is sure to enter into the discussion. Today, many of those conversations are around understanding, and to a lesser degree, implementation. But once the discussion crosses the threshold of understanding, the topic immediately goes to, “How can I get into the cloud?”

Everest Group recently held a webinar on the economics of cloud computing. There were two objectives: 1) help clarify just how disruptive, in a good way, cloud computing is and can be; and 2) demonstrate the economic benefits that exist in the cloud economy, and show that companies are already striving for this competitive advantage today.

The Hole in the Water That You Throw Money Into

One of the key economic drivers that hampers today’s data center environment is the relatively low utilization rate across its resources. Think about it like this: You’ve probably heard the old adage that owning a boat is like having a hole in the water that you throw money into. That is because the majority of boats are seldom used. (Trust me, I know, I used to own one.) The per-use cost of a $25,000 (and quickly depreciating) boat that you actually use three or four times a year is quite high, and the reality is you could often have rented a boat for a fraction of the cost. The same thing is happening in your data center. If your utilization is 20 percent, or even 30 percent, you have essentially wasted 70-80 percent of your spend. That is an expensive data center.

Workload utilizations
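
A quick way to see the point: at low utilization, every unit of work you actually run costs a multiple of its nominal price. The annual spend figure below is an illustrative assumption:

  # Effective cost of utilized capacity at different utilization rates.
  # The annual data center spend is an illustrative assumption.
  annual_spend = 10_000_000  # dollars per year for owned capacity

  for utilization in (0.20, 0.30, 0.60, 0.80):
      idle_spend = annual_spend * (1 - utilization)
      effective_multiplier = 1 / utilization
      print(f"{utilization:.0%} utilization: ${idle_spend:,.0f} of spend sits idle; "
            f"each utilized unit effectively costs {effective_multiplier:.1f}x its nominal price")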

Cloud computing is like that little boat rental shop tucked away in a nice cove on your favorite lake. What if you could get rid of excess capacity, better manage resource peaks and valleys, and rent public capacity when you need it, and not pay for it when you don’t?

What if we leverage public cloud flexibility?

The Economics

As you can see in the graphic below, the economics related to cloud are dramatic, and the key lies in leveraging the public cloud to pay only for what you use, eliminating the issue of excess capacity.

Public cloud options unlock extraordinary enterprise economics
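
A minimal sketch of the comparison behind that chart follows; the demand profile and per-server costs are illustrative assumptions, not benchmarks:

  # Owned capacity sized to peak demand vs. paying a public cloud only for what is used.
  # Demand profile and unit costs are illustrative assumptions.
  monthly_demand = [300, 320, 310, 900, 330, 320, 310, 950, 340, 330, 320, 1000]  # servers needed each month

  peak_servers = max(monthly_demand)
  owned_cost_per_server_month = 700    # fully loaded in-house cost (assumption)
  cloud_cost_per_server_month = 900    # public cloud on-demand cost (assumption)

  owned_total = peak_servers * owned_cost_per_server_month * 12    # sized to peak, paid all year
  cloud_total = sum(monthly_demand) * cloud_cost_per_server_month  # pay only for what is used

  print(f"Owned capacity sized to peak: ${owned_total:,}")
  print(f"Pay-per-use public cloud:     ${cloud_total:,}")
  print(f"Difference:                   ${owned_total - cloud_total:,}")

Even though the assumed public cloud unit price is higher than the in-house price, paying only for the demand actually served comes out well ahead, because the owned environment must be sized to the peaks and then sits idle the rest of the year.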

There is a variety of point examples in which this is done today, with the above economics reaped. For instance, Ticketmaster leverages the public cloud for large events, loading an environment to the cloud specifically sized for each given event. The specific event may only last several hours or days, and once it is complete, Ticketmaster takes down the environment and loads the data into its dedicated systems.

There are also enterprises and suppliers working to enable peak bursting more seamlessly. For example, eBay recently showed how it is working with Rackspace and Microsoft Azure to enable hybrid cloud bursting, allowing eBay to reduce its steady-state environment (think hole in the water) from 1,900 to 800 servers, saving it $1.1 million per month.

Hybrid economics example: eBay
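
Taking the cited figures at face value, a rough sanity check of the eBay example:

  # Back-of-the-envelope check of the cited eBay figures.
  servers_before, servers_after = 1_900, 800
  monthly_savings = 1_100_000  # dollars per month, as cited

  servers_removed = servers_before - servers_after
  implied_cost_per_server_month = monthly_savings / servers_removed
  annualized_savings = monthly_savings * 12

  print(f"Servers removed from steady state: {servers_removed}")
  print(f"Implied all-in cost per server per month: ${implied_cost_per_server_month:,.0f}")
  print(f"Annualized savings: ${annualized_savings:,.0f}")

The cited savings imply an all-in cost on the order of $1,000 per steady-state server per month, or roughly $13 million per year avoided.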

The Steps to Getting Started

Dedicate yourself to getting rid of your boat (or should I say boat anchor?). Begin a portfolio assessment. Understand what you have, and what is driving utilization. Consolidate applications, offload non-critical usage to the valleys, and look for ways to leverage the public/private cloud. When I unloaded my boat, I freed up capital for the more important things in life, without sacrificing my enjoyment. Doing so in your data center will allow you to take on strategic initiatives that will make you even more competitive.

Innovation Junkies – One for Your “Bucket List” | Sherpas in Blue Shirts

I have yet to talk with an executive in charge of maximizing value of an enterprise’s global services portfolio (whether largely in-house/shared services, outsourced, or an amalgamation of approaches) who did not have innovation among their top challenges or disappointments. Fostering innovation starts with creative processes aimed at articulating a problem and then defining different ways to solve that problem. Cracking the code on innovation takes both hard work and the “spark.” I recently visited MIT Media Lab’s new home and witnessed showers of “sparks” across a wide-ranging variety of issues. For example:

What if a computer could predict how you will behave better than you can communicate your own feelings? What if you could marry biology, technology, physics, and engineering to take the “dis” out of disabled?

Believe it or not, these questions aren’t pie-in-the-sky dreams. My visit to the MIT Media Lab left me absolutely invigorated. My time there was punctuated by discussions with some of the professors and students doing big brain research. And I came away thinking that it was all about technology, yet nothing about technology; it was inspiring and sobering – all at the same time.

One professor shared the work in his lab that is essentially advancing the frontier of making the “bionic man” a reality. Applying technology and (really) advanced mathematical modeling, projects are enabling amputees to achieve functional performance essentially equal to the sacrificed biological limbs. They have mind-blowing working prototypes that actually enable thoughts to drive mechanical tasks, e.g., sensors carefully positioned in a person’s brain that result in an artificial hand opening and closing when the person thinks “open my hand” and “close my hand.”

Another project demonstrated results in which sophisticated real-time image processing of facial expressions predicted what people were thinking more accurately than individuals’ own responses. Analysis of facial expression changes predicted whether someone really liked a product better than focus groups, self-reported responses, interviews, etc. Imagine the power of such feedback from the market in a business setting (or about a discussion you just had with your children!).

Although I could go on and on about the pioneering scientists and artists (another project has created an opera in which machines are the performers and vice versa – I can’t even describe it right!) brought together in this creative crucible for innovation, here I’ll just whet your taste buds a bit. But I will say that prior to my visit, when I learned that the Media Lab is actually part of MIT’s School of Architecture and Planning, I hoped that the innovation spark produced by integrating multiple disciplines would live up to its promise – and I was not disappointed. Since my undergraduate degree and early career were in the design profession, I have a special appreciation for different problem-solving approaches with a special dose of creativity driving for breakthrough outcomes. If you care about innovation, put the MIT Media Lab on your bucket list – you won’t regret it.
