
Enterprise Cloud Goes Vertical | Gaining Altitude in the Cloud

Most conversations about enterprise cloud offerings to date have focused on their horizontal benefits: flexibility, scalability, auto-scaling, cost savings, reliability, security, self-provisioning, and so on.

Advantageous as these are, CIOs are increasingly interested in learning more about cloud benefits that are specific to the industry in which their organizations operate. For example, latency requirements, failover mechanisms, and data encryption are important to a CIO in the financial industry. A healthcare industry IT executive will be interested in hearing more about mobility and data archiving. How the cloud can improve the supply chain or logistics is important for a CIO in the manufacturing industry. And a media industry IT executive, quite aware of the various platforms being used to access content, will want to hear more about Content Delivery Networks (CDNs) supported by the cloud.

A growing number of enterprise cloud providers are beginning to understand this interest in vertical cloud benefits. While their focus has been on “SaaS-i-fying” their offerings to meet unique, industry-specific application requirements, the trend will continue towards “PaaS-i-fying” and even “IaaS-i-fying” their offerings.

Let’s take a quick look at some of today’s verticalized enterprise cloud offerings.

IBM’s Federal Community Cloud is dynamic and scalable to meet government organizations’ consolidation policies as mandated by the Obama administration’s CIO. It is in the process of obtaining FedRAMP certification to meet Federal Information Security Management Act (FISMA) compliance standards, a requirement for government IT contractors, and will be operated and maintained in accordance with federal security guidelines.

Savvis provides customized IaaS solutions that cater to the financial industry. Growth in this vertical has been led by providing infrastructure services – such as proximity hosting and low latency networks – which support electronic trading. Savvis has added six new trading venues and an international market data provider. Its customers can now cross-connect, or have network access, to over 59 exchanges, Electronic Communication Networks (ECNs), and market data providers. For example, it hosts Barclays Capital’s dark liquidity crossing network, LX, which aggregates its global client bases’ market structure investments.

Infosys took advantage of Microsoft's Azure PaaS platform and its SQL Data Services (SDS) to provide automotive dealers with a cloud-based solution that moves inventory management from point-to-point dealer connections to a hub-based approach. In this solution, an inventory database for all dealers is hosted on a dedicated instance of SDS in the cloud. The solution provides middle-tier code and business logic to integrate data between participating parties, plus a web-based interface for dealer employees who want to check inventory at other dealerships.
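The hub pattern described above can be sketched in a few lines. This is a toy illustration only: the in-memory store stands in for the hosted SDS database, and the class, dealer IDs, and part numbers are all invented for the example.

```python
# Toy sketch of the hub pattern: every dealer reads and writes one shared
# inventory store instead of maintaining point-to-point links with each peer.
# A real deployment would back this with a hosted database; the in-memory
# dict here is a stand-in for illustration only.

class InventoryHub:
    def __init__(self):
        # {(dealer_id, part_number): quantity}
        self._stock = {}

    def update_stock(self, dealer_id, part_number, quantity):
        # Each dealer publishes its own stock levels to the hub.
        self._stock[(dealer_id, part_number)] = quantity

    def find_part(self, part_number):
        # Any dealer can query every other dealer's stock in one call --
        # with point-to-point links this would be N separate lookups.
        return {dealer: qty
                for (dealer, part), qty in self._stock.items()
                if part == part_number and qty > 0}

hub = InventoryHub()
hub.update_stock("dealer_a", "brake-pad-33", 4)
hub.update_stock("dealer_b", "brake-pad-33", 0)
hub.update_stock("dealer_c", "brake-pad-33", 7)

print(hub.find_part("brake-pad-33"))  # dealers that actually have the part
```

The point of the hub is visible in `find_part`: one query against the shared store answers a question that would otherwise require contacting every dealership individually.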

Amazon Web Services (AWS) has cloud solutions that cater to the media industry’s needs for transcoding, analytics, rendering, and digital asset management. It developed a CDN, based on CloudFront™, which provides the streaming from edge nodes strategically located throughout the United States for a robust streaming experience.

AWS GovCloud™ provides a cloud computing platform that meets federal security compliance standards, including FISMA, PCI, DCC, and ISO 27001. The Department of State and its prime contractor, MetroStar Systems, built an online video contest platform to encourage discussion and participation around cultural topics, and to promote membership in its ExchangesConnect network. The contest drew participants from more than 160 countries and took advantage of AWS for scalability. AWS hosts websites for many federal agencies such as the Recovery Accountability and Transparency Board (recovery.gov) and the U.S. Department of Treasury (treasury.gov). AWS provides multiple failover locations within the United States, a provision which meets the security requirement that only people physically located within the United States have access to the data.

Game hosting companies are running their games in the cloud for faster delivery and scalability. And AWS' S3 platform provides the storage capacity for gaming companies such as Zynga and Playfish.

GNAX's healthcare cloud specifically caters to the healthcare industry and understands the nuances of HIPAA. It provides a private cloud solution to healthcare companies that scales up and down depending on patient volume.

Of course, there are both pros and cons to adopting vertical-specific cloud offerings.

Pros:

  • Customized solutions based on industry regulations
  • Immediate creation of competitive advantage

Cons:

  • Vendor lock-in
  • Proprietary workloads may not be portable to other providers

These issues can be mitigated through a careful sourcing methodology, now being provided through cloud agents who negotiate the contracts with multiple vendors as per the needs of the client organization.

As illustrated above, there are significant benefits to be gained from industry-specific cloud solutions, and I predict we’ll see an increasing number of them emerging in the near-term.

Enterprise CIOs Get no Cloud Satisfaction from Incumbent Vendors | Gaining Altitude in the Cloud

Enterprises are frustrated when it comes to cloud migration, and it appears they have good reason to be.

During the past three months, we have had conversations with IT and executive leadership in upwards of 50 Global 2000 firms that rely on distributed, global IT operations. These companies operate dozens of data centers running hundreds of workloads that support tens – or hundreds – of thousands of employees around the world.

Our discussions covered three basic topics related to their migration path from dedicated and virtual infrastructures to cloud. Their answers revealed disappointment and a growing sense of frustration with the incumbent vendors that built their global network of data centers. And their comments suggest a major misalignment of technology and marketing, as well as a potentially huge opportunity for disruption by new competitors in enterprise cloud.

1.     “Tell us about the conversations you’re having with your incumbent equipment and software vendors about next generation IT migration.”

IT leadership stated that vendors are “stuck in technology speak,” focusing on their latest version of private cloud rather than demonstrating reference installations that support a business case. They also reported frustration at how each vendor defines cloud terminology differently, making rational comparisons impossible. Market noise has become deafening, creating distractions for their IT staffs that are trying to cut through the cloudwashing and map out a cloud migration strategy.

Perhaps most troubling is that these enterprises reported that their incumbent vendors are focusing on technology, with little to no focus on business value.

2.     “Are you impressed with what they’re telling you?”

Despite the answer to the first question, the CIOs told us they are impressed in select cases, primarily with vendors that have developed vertical-specific solutions to address data privacy, security and compliance issues.

For the most part, however, the IT professionals we spoke with reported seeing lots of impressive slide decks with long-term cloud visions, but receiving unsatisfactory answers about the ability to execute in the short-term.

They also cited transparency of security and controls as a major issue. Those we spoke with require a level of visibility into solution performance that their incumbent vendors are simply unable to deliver.

3.     “What action plan have you developed with your legacy vendor?”

Here’s where it became apparent that incumbent vendors are missing the mark.

While it seems obvious that vendors would recommend their own solutions, enterprise buyers want objectivity when it comes to the cloud. “Vendors guide us to their own solutions,” and “their incentives to do so are apparent,” were consistent themes. Consequently, enterprise buyers are not relying on one vendor when it comes to cloud migration action plans, even if their incumbent is a Tier 1 ITO vendor.

This seems to be a direct result of enterprise buyers’ frustration with the lack of direct answers regarding what is available for deployment today, and what is merely a toolkit or development environment.

There’s not much improvement when talking about native cloud providers. Several people noted that while these vendors are able to bring ready-to-wear solutions to the table, their experience bases are either with the developer community or with service providers, but not enterprises. This experience gap raises questions among enterprise IT leadership regarding these providers’ ability to provide a seamless implementation and ongoing support.

We drew several important conclusions from these conversations:

  1. Vendor “over-marketing” in the race to grab cloud share is confusing the market, and may actually be slowing adoption by introducing risk and doubt among enterprise buyers. This became apparent when several CIOs told us they have essentially blacklisted some of their incumbent vendors from further conversations about their cloud migration strategies.
  2. We’re seeing a surprising volume of Global 2000 enterprises – most prominently in the U.S. and Europe – issuing RFPs for complete outsourcing of their data centers to IaaS providers. Of course, this does not mean they’re going to do it, but the aggressiveness with which they’re exploring the option points to a fundamental dissatisfaction with the ability of their trusted partners to deliver them to the cloud.
  3. The next issue to contend with is organizational and cultural readiness within the enterprise IT function. CIOs are aware of this, they’re concerned about it, and they don’t see any reliable best practices to guide them.

It’s clear to us that incumbent vendors have stumbled, leaving the door to the enterprise CIO’s office open. Opportunity awaits providers that can bring ready-to-deploy cloud solutions to the enterprise, backed by vertical market experience and an ability to assist with cultural transformation.

Enterprise Cloud Migration: What if We’re All Wrong? | Gaining Altitude in the Cloud

This blog originally appeared on Sandhill.com.

Current conventional wisdom suggests that enterprise adoption of cloud services will accelerate as service providers and offerings become more “mature” and “enterprise friendly.” Adoption will grow and extend beyond initial test/dev, website, and backup use cases as enterprises become more comfortable with cloud services. The common belief is that, over time, cloud will in fact become a strategic component of most IT environments, but that it will be a decade-long (if not longer) transition. Most also believe that data security, privacy, and audit issues significantly constrain some verticals, such as healthcare and financial services, from effectively migrating in the near term, particularly to cloud platform and infrastructure services.

But as we discover far too frequently, conventional wisdom often turns out to be quite wrong. While adoption rates for new technologies tend to be overestimated in the short term and underestimated in the long term, it’s an interesting exercise to think about the factors and unexpected developments that could dramatically accelerate enterprise migration to the public cloud.

Let’s consider some of the basic assumptions many in the market make around enterprise and the cloud.

What if enterprises architect around SLAs?

The terms of many current cloud service provider SLAs are effectively meaningless. The burden of proof often falls on the user to fully document service interruptions and outages. Even if proven, compensation for violations often equates to a slap on the wrist at best. But what if enterprises come to the conclusion that SLAs are the wrong way to think about ensuring availability?

The highly visible Amazon outage in its Northern Virginia data center in April resulted in significant interruption and service degradation for users of Quora and FourSquare, while other websites and companies appeared to suffer no impact. The reason? Availability through redundancy. Rather than relying on SLAs, many unaffected companies simply architected redundancy through failover approaches that rolled to other Amazon data centers or service providers. What if enterprises decide that pushing cloud service providers on SLAs is akin to beating a dead horse and, instead, simply decide to take the SLAs as a given and architect around them?
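Architecting around SLAs, as the unaffected companies above did, amounts to building failover into the client rather than negotiating guarantees with the provider. A minimal sketch of that idea follows; the endpoint names and the `fetch` function are invented for illustration and do not correspond to any real provider API.

```python
# Hedged sketch: instead of relying on an SLA, the client tries an ordered
# list of regions/endpoints and fails over on error. Endpoint names and the
# fetch callable are illustrative assumptions, not a real provider API.

def fetch_with_failover(fetch, endpoints):
    """Try each endpoint in turn; return the first successful response."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as exc:  # treat a connection failure as "region down"
            last_error = exc
    raise RuntimeError("all endpoints failed") from last_error

# Simulated outage: the primary region raises, the secondary answers.
def fake_fetch(endpoint):
    if endpoint == "us-east-1":
        raise ConnectionError("region outage")
    return f"served from {endpoint}"

print(fetch_with_failover(fake_fetch, ["us-east-1", "us-west-2"]))
# served from us-west-2
```

The design choice this illustrates: availability becomes a property of the client's architecture, not of any one provider's contractual promise.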

What if enterprises standardize to conform to cloud service provider offerings?

Enterprises historically have been addicted to IT customization – both in what they buy, and how they buy it. Service providers that weren’t willing to modify offerings, pricing, or contract terms for large enterprise buyers were quickly shown the door. Many believe that enterprises will never migrate to cloud services that are essentially “take it or leave it” propositions to the customer.

Yet in many cases, enterprises have driven customization in processes, applications, and services that adds little or no business value. Cloud is opening many enterprises' eyes to the possibility that there is significant value in using cloud services as a lever to drive standardization across the organization, particularly for non-strategic applications and processes.

What if enterprises learn to live with standardization and limited configuration, and dramatically streamline support for non-strategic applications and assets?

What if data security and privacy issues are mitigated?

Data residency, security, and privacy issues are providing significant cloud migration constraints for some global enterprises, particularly those in compliance-sensitive verticals like healthcare and financial services. But what if these barriers were significantly reduced or fully eliminated?

Salesforce.com recently gave a glimpse into one way this may happen with the announcement of its Data Residency Option (DRO), which gives customers the ability to keep data on-premises, behind their firewall, while providing encrypted access to the Salesforce.com cloud application. Some cloud infrastructure service providers, like Savvis and Rackspace, offer dedicated hosting and private/public cloud services in the same data center, enabling hybrid models that support data “ownership” along with the benefits of dynamic bursting into public cloud models.

While enterprise customers are seeking more transparency, Amazon has in fact achieved compliance with FISMA, HIPAA, PCI DSS, and other standards. Some in the audit community are also discussing the need to reexamine common policies and controls in light of cloud services and architectures. The net net? Data security, privacy, and residency issues may end up being addressed faster than expected. What would happen to adoption if these concerns were taken off the table?

What if pricing for common IaaS services drops by 50 percent?

To date, cloud service providers have very effectively used private cloud economics as a pricing umbrella for their public cloud services. The result? Highly attractive margins for current cloud providers and an onrush of new providers. If microeconomics holds here (and I don’t know why it wouldn’t), pricing for public cloud infrastructure services will begin to drop, and potentially dramatically. Amazon has already established a pattern of driving consistent reductions in pricing for its core cloud services. What will happen when new entrants get aggressive in trying to grab share? The enterprise business case ROI for cloud service migration could be much more compelling in the very near future.

What if mission-critical applications migrate first?

The assumption is that adoption of cloud applications starts at the edge with line-of-business and functional applications and then, over time, migrates to more strategic and mission-critical applications. But what if CIOs determine to go in the opposite direction?

Examples exist of large global enterprises that have migrated to cloud service providers that offer hosted private cloud SAP ERP services in conjunction with community cloud spiking environments. While the common belief is that mission-critical apps will be the final frontier for enterprise cloud migration, what if it turns out to be the first?

All of these scenarios are unlikely to play out as described, but we can be sure that the conventional wisdom will be wrong in a market that is evolving as rapidly as enterprise cloud. Current expectations for the rate and pace of adoption are based largely on past trends in enterprise technology, which is probably a bad assumption in itself. It is increasingly clear that adoption curves for new enterprise technologies are actually accelerating.

I'd bet my money that the pace of enterprise cloud adoption will surprise many, and it will be interesting to see what unexpected scenarios might open up the floodgates.

You Want the Cloud? You Can’t Handle the Cloud! | Gaining Altitude in the Cloud

If you can’t measure it, you can’t manage it. This oft-repeated mantra is one of the many cornerstones of IT governance. It serves as a reminder of the importance of measuring those outcomes that matter. So what’s all the hubbub about cloud governance? It’s just another compute and software delivery platform, right? Governance is governance, right?

With the increasing adoption of cloud solutions, organizations are quickly discovering that the traditional application of supplier governance and measurements do not effectively translate to cloud solutions. For example:

  • Several cloud services do not offer traditional service guarantees – e.g., accuracy, speed, fidelity, and availability – or means to measure the desired outcomes
  • When service levels are offered, they are rarely customizable to the buyer’s needs, e.g., measurement period or frequency
  • Cloud providers often do not support a joint governance model but rather put forward a take-it-or-leave-it solution
  • IT services are being purchased outside the IT organization by employees throughout the enterprise, causing a variety of internal governance issues, including:
    • Duplicate services creating increased costs and operational confusion with regards to solution performance and responsibility
    • New integration issues with current systems aren’t considered
    • Strategic data becomes increasingly disaggregated from central decision repositories

Compounding these issues is the emerging practice of best-of-breed solutions. These models, when successfully implemented, spread much of an enterprise’s application services across a broad set of suppliers. But, when you want to measure cross functional performance, say in order-to-cash, how do you actually do it?

With more service providers supporting your environment, each dictating its own service levels – when service levels are even available – and ringing the death knell for end-to-end service levels, what’s a governance group to do?

To answer this and related questions, our perception of governance must fundamentally change. For example, managing consumption of cloud services will have to be far more internally focused on managing end users. When employees can purchase services in the same time it takes to type sixteen digits, the IT organization and governance function will need the agility of a world-class athlete.

The newly required form of governance has to address multiple challenges, including:

  • Defining cloud computing use policies
  • Managing the organization to enterprise-approved use of readily available cloud services
  • Removing barriers to computing needs to quickly enable compliant consumption of cloud services
  • Determining how and when services migrate from traditional to cloud environments
  • Shifting workload to quickly lower compute costs
  • Managing demand in a new world with practically infinite supply

Stay tuned as we continue the conversation in an effort to dive deeper and discuss these and other topics for governing the next generation of IT services.

Live from Bangalore – the NASSCOM IMS Summit, September 21 | Gaining Altitude in the Cloud

CIOs, service providers, analysts, and the business media rubbed shoulders on the power-packed first day of the NASSCOM Infrastructure Management Summit (IMS) in Bangalore. This year’s conference has the twin themes of Enterprise Mobility and Cloud Computing, with one day dedicated to each, which seems to lead to a more focused set of discussions than a super broad-based event that leaves you struggling to absorb all of what you just heard.

After the welcome address and keynote speech from Som Mittal, President of NASSCOM, and Pradeep Kar, Chairman of the NASSCOM RIM Forum, we settled in for a series of insightful presentations and panel discussions with global technology leaders.

BMC CEO Robert E. Beauchamp spoke about how the parallel paradigms of cloud, consumerization, and communication (yes, I am in alliteration mode today) require CIOs to think of a unified approach to service management. Of particular interest were Beauchamp’s insights on how different service providers are trying to interpret the cloud differently in an attempt to a) disintermediate the competition; b) avoid being disintermediated; or c) both a and b.

IBM’s interpretation of the cloud: The cloud is all the bundled hardware, software, and middleware we have always sold to you, but now you can buy the whole stack yourself instead of us having to sell it to you.

Google’s counter: Who cares about the hardware anyway? We will buy the boxes from Taiwan – cheaper and better. It’s about what you do with it, and that’s where we come in…again.

VMware chips in: You already own the hardware – and we will tell you how best to make use of it.

Beauchamp sees more than one way of “belling the cloud cat,” and CIOs need to figure out which direction to take based on their legacy environments, security requirements, and cost imperatives. (“Belling the cloud cat” is my take-off on a fable titled Belling the Cat. It means attempting, or agreeing to perform, an impossibly difficult task.)

As for service providers, he also foresees successful survivors and spectacular failures as the cloud conundrum disrupts traditional business models.

Mark Egan, VMware's CIO, spoke about how consumerization and cloud computing are nullifying the efficacy of traditional IT management tools. According to Egan, IT needs to move from a “we’ll place an agent on the device” mode to a “heuristics” mode of analyzing data in order to prevent every CIO’s security nightmare from coming true in a consumerized enterprise.

Next up, Brian Pereira, Editor, InformationWeek, and Chandra Gnanasambandam, Partner, McKinsey, inspired us with real stories about how mobility is transforming the lives of unbanked villagers, saving billions of dollars worth of healthcare expenditure, and improving and optimizing the enterprise supply chain.

Here’s a gem of an insight: Do you know what most urban workers in the Philippines, Vietnam, or India do if they need to transfer money to parents living in rural areas? They buy a train ticket. Then they call Mum and Dad, share the ticket number, and ask them to go to the local railway station, cancel the ticket and collect the refund (minus a small cancellation fee). Wow – that’s what I call consumer-led innovation!

To summarize today’s sessions:

  • While many discussions highlighted the correctness of what Everest Group analysts are already predicting, it was invaluable to get validation on what we suspected, complete with more live examples.
  • Cloud and enterprise mobility are here to stay. With the momentum behind them – unlike other hyped up technologies – these are being demanded by the consumer, not dumped on them. And that is always going to mean something.
  • Service providers and CIOs need to evolve. In themselves, cloud and mobility do not represent a threat. But it’s a lot of change. And the threat lies in how CIOs, and their service providers, gauge the pace of the change, and react to it.

That’s it for now. Tomorrow, I share a panel with CSC and Microland to discuss “Trigger points – Driving traditional datacenter to private cloud.” Right now I’m heading off to the gym in an attempt to burn off all the calories I’ve put on during the day, thanks to the excellent food. Stay tuned!

Joint Sourcing Opportunities for a Portfolio of Companies | Sherpas in Blue Shirts

Most small, independent companies can obtain the services they need at a comfortable price point from a “one size fits all” sourcing model provided by a second-tier or niche provider. But for an enterprise that owns multiple companies running different administrative solutions with varying degrees of automation, process robustness, standards, and technology platforms, this model can become prohibitively expensive and fall severely short of required results.

To help a holding company with a portfolio of over 20 companies find ways of lightening the administrative costs across all its holdings, we recently conducted an exercise to test the efficacy of “joint sourcing” multiple SG&A functions to a single first-tier provider.

The companies belonged to a variety of different industries, and, as expected, they varied in their administrative solutions and infrastructure. Some had an internal support staff; others had already sourced portions of their services; and others still relied on the parent company for administrative support. Thus, we decided to focus our attention on six of the 20 companies we felt were best suited for this exercise. The selected companies operate in the automotive and parts, manufacturing, personal and household goods, pharmaceuticals and biotechnology, and communications industries.

To estimate the joint sourcing cost savings, we followed a five-step approach:

  • Perform a high-level peer group analysis – Assess the relative operational efficiency of each company by evaluating SG&A spend as a percentage of revenue in comparison to a peer group
  • Derive baseline spend – Establish baseline spend for IT, HR, F&A, CRM, procurement, logistics, knowledge services, and engineering based on the client’s financial reports
  • Develop estimates for addressable spend – Identify addressable portions of functional spend (those sensitive to optimization levers such as sourcing and process improvements) and fine-tune the estimates by considering each company’s pre-existing optimization and sourcing programs (see Table 1 below)
  • Estimate potential savings for each function – Estimate the savings potential for each addressable portion, and derive the range of savings estimates based on existing operational efficiency (see Table 1 below)
  • Estimate the impact on earnings per share (EPS) – Calculate the impact of savings on EPS (by comparing to operating income)
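The arithmetic behind the baseline, addressable-spend, and savings steps can be sketched as follows. Every figure here is an invented illustration; none of the baselines, percentages, or share counts come from the engagement described above.

```python
# Illustrative only -- baseline spend, addressable shares, savings rates,
# and share count are assumptions made up for this sketch.

def estimate_savings(baseline_spend, addressable_pct, savings_pct):
    # Addressable portion of spend times the achievable savings rate.
    return baseline_spend * addressable_pct * savings_pct

# (baseline spend, addressable share, savings rate) per function
functions = {
    "IT":  (50_000_000, 0.60, 0.25),
    "F&A": (20_000_000, 0.50, 0.30),
    "HR":  (10_000_000, 0.40, 0.20),
}

total_savings = sum(estimate_savings(*p) for p in functions.values())
shares_outstanding = 100_000_000

print(f"estimated annual savings: ${total_savings:,.0f}")              # $11,300,000
print(f"pre-tax EPS impact: ${total_savings / shares_outstanding:.3f}")  # $0.113
```

Because the savings flow to operating income, dividing total savings by shares outstanding gives a rough pre-tax EPS impact, which is the per-share comparison the fifth step calls for.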

Table 1: Addressable spend and savings potential by function

We next analyzed each company individually with the assumption that each could achieve the pricing and solutions available to large organizations. Table 2 is a portion of the outcome of the analysis we performed for a subset of the functions for one of the companies.

Table 2: Annual Cost Savings Estimate

The results of the exercise and analysis, shown in the following table, speak for themselves.

Table 3: Joint Sourcing Savings Potential

By joint sourcing to a major provider, we were able to identify combined savings that would not be available if the services were sourced separately, as the owned companies would benefit from:

  • Greater attention
  • Speed of implementation
  • Solution standardization, in turn resulting in improved management reporting and compliance
  • Access to mature capabilities (e.g., ERP technology platforms)
  • Significant investment already made in offshoring
  • Increased viability of captive/shared services centers

Companies with multiple holdings and private equity firms alike are constantly looking for ways to achieve a positive impact on their SG&A expenses. We believe joint sourcing is an interesting option for them to consider when evaluating potential cost savings scenarios.

The Art of Conducting VERY Bad Meetings | Sherpas in Blue Shirts

We’ve all attended meetings that went on for too long, were poorly managed, and accomplished little, if anything. But breaking just about every single principle of good meetings is truly an art, and one in which a recent client excelled. To be fair, the company had three characteristics that made conducting good meetings a significant challenge:

  • Culture – the culture of the country in which the company is headquartered is very relaxed
  • Age – management in the organization was very new
  • Lack of attention – the prior leadership lacked true managers and corporate did not pay attention to the organization

Still, if an awards category for bad meetings existed, this company would be a shoo-in as the winner.

Here are the do’s it did – or allowed – all of which are definitely good meeting don’ts:

  1. Meeting invites lacked objectives, agendas, location information and conference dial-in information for remote participants
  2. Far too many people were involved in each meeting
  3. Invitees arrived late, very late
  4. Invitees failed to show up at all, even after accepting meeting invites
  5. Critical materials were not made available to attendees either before, during or after meetings
  6. Meetings extended far beyond the allocated time
  7. Agreements and actions were not documented, and in many cases meetings ended without assigned responsibilities
  8. Action deadlines weren’t met, and thus appeared on agendas week after week
  9. Participants were focused on just about everything but the meeting (e.g., email, texts, phone calls, other work on their laptops)
  10. Meetings were consistently rescheduled, time and again
  11. Requests for responses (actual response, redirection to another individual, etc.) were ignored
  12. Action item or other approvals took far longer than acceptable, either due to lack of specific request or the request being buried in an email
  13. Recurring issues were not addressed
  14. Priorities were not communicated, leading to focus on less important things
  15. Communications were consistently misunderstood as the who, what, and by when weren’t clearly presented

Although helping this client conduct better meetings wasn’t a specific part of our engagement, we did work with the client to help it understand meeting best practices. Insights we provided include:

  • The meeting requester should send reminders to all invitees
  • Meeting materials should be submitted to all invitees well in advance, especially if prior review or action is required, and invitees should be required to submit input or questions prior to the meeting
  • Designate a note taker responsible for distributing to all participants a document that clearly identifies actions, responsible parties, due dates, and next steps
  • Designate a person responsible for tracking actions, and documenting issues and risks
  • Establish meeting rules, e.g., set cell phones to vibrate (or even better, route cell phone to assistant), no laptop use, and no smart phone use
  • Prioritize and determine the root causes of recurring issues
  • Engage leadership to communicate the priorities and track their execution
  • Make certain all requests are abundantly clear in terms of the who, what, and when

A well-known quote says, “Imitation is the sincerest form of flattery.” But the above 15 “do’s it did – or allowed” practices are ones you don’t want to imitate. Instead, create your own best practices meeting art!

Evolving Cloud, Evolving Advisory Role | Gaining Altitude in the Cloud

Avid readers of this blog can tell by now that Everest Group is excited to participate in and contribute to the market discussion on the impact of the rapidly evolving cloud industry. We get tremendous satisfaction from both the online and in-person conversations our blog topics have generated over the last year, and promise to continue contributing our informed viewpoints with continued enthusiasm.

Enterprise IT leaders we talk with every day find themselves at a crossroads. The cloud revolution is nearing an inflection point, promising to radically transform the way IT services are delivered. At the same time, there is an equally strong “echo chamber” effect in which promises and benefits are refracted through various service provider prisms, creating a significant challenge in separating what’s possible from unhelpful hyperbole. Additionally, the focus in the current market tends to be on the ever-evolving technology upgrades and releases of various cloud components, which leaves most CIOs in the dark when it comes time to try to sell the economic benefits of cloud technology to their key internal stakeholders.

IT organizations have had enough theory and are ready to start working in more practical terms:

  • How do we transform the provision of IT services to our business to meet its needs more effectively?
  • How do we build a strategy to get us there?
  • What does the financial case look like to achieve our desired outcomes?

We’re excited to share the vision of our Next Generation IT Practice with you, because we believe it to be the natural evolution of our current practice of assisting Global 1000 firms in driving greater operational efficiency. Our expertise allows us to help transform IT organizations to strengthen both their long-term strategic and economic positions by leveraging the next generation of technologies.

Our vision for this new practice is simple: provide a bridge between strategic direction and technical execution for IT transformation without bias towards the desired end state. We believe this is where the current advisory market falls short and Everest Group can add the most value.

Our Next Generation IT team is successfully able to:

  • Build on existing experience helping large IT clients develop strategies
  • Leverage our breadth of research on the service providers’ strengths and weaknesses
  • Adopt a time-tested methodology to include next-generation technologies
  • Utilize our business case modeling skills to construct a versatile tool for helping assess transformation economics in a way that is unique in the marketplace

In the coming weeks, we look forward to sharing more details about how our team at Everest Group has helped clients develop a roadmap toward transformation, so that we can continue to contribute thought leadership in this space.


Learn how our Cloud Transformation and Next Generation IT offerings can help your organization achieve the strategic value it’s seeking.

Procure-to-Pay: Measuring Outcome Beyond Efficiency Gains | Sherpas in Blue Shirts

More and more companies are recognizing the value of end-to-end business process management as it breaks down functional and organizational silos to enable a more holistic approach to enterprise performance management.

Of the common sets of end-to-end processes – which include Source-to-Contract (S2C), Procure-to-Pay (P2P), Order-to-Cash (O2C), Record-to-Report (R2R), and Hire-to-Retire (H2R) – P2P is most often identified as the priority for optimization. There are two key drivers of this trend. First, compared to other end-to-end processes, P2P activities are typically more common across the enterprise, making them easier to standardize. Second, the business case for P2P is frequently the most compelling. Through process standardization, workflow automation, system integration, and rigorous compliance enforcement, companies have been able to achieve rapid and significant spend and operating cost savings while simultaneously gaining the ability to better manage risk.

A case in point: a global software and products company achieved an initial operating cost reduction of 35 percent. It subsequently realized spend savings of US$700 million (~9 percent on a spend base of US$8 billion) and captured more than US$10 million in Early Payment Discounts (EPD). The accrued savings and benefits generated a break-even on the business case in less than six months.
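The arithmetic behind the cited figures can be sanity-checked in a few lines. This sketch uses only the dollar amounts stated in the case above; it does not attempt the break-even calculation, since the one-time investment is not disclosed:

```python
# Sanity-check the spend-savings percentage from the case above.
spend_base = 8_000_000_000    # US$8 billion spend base
spend_savings = 700_000_000   # US$700 million realized spend savings
epd_captured = 10_000_000     # US$10 million+ in Early Payment Discounts

savings_pct = spend_savings / spend_base * 100
print(f"Spend savings: {savings_pct:.2f}% of base")  # 8.75%, i.e., ~9% as cited
print(f"Total benefit including EPD: US${(spend_savings + epd_captured) / 1e6:.0f} million")
```

The 8.75 percent result confirms the “~9 percent” figure in the text.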

Based on Everest Group’s experience, one of the most critical success factors of P2P transformation is the institutionalization of a common set of well-defined performance metrics across the entire organization, including both internal and third party delivery partners. The performance metrics should be closely linked to desired business outcomes, and applicable across segments and geographies. Moreover, both P2P efficiency and effectiveness should be easily quantified, measured, and benchmarked.

The table below presents a P2P metrics framework that starts with clearly defined business objectives that are measured by a small set of outcome-based metrics to reflect the overall efficiency and effectiveness of the P2P process. The diagnostic measures are designed to identify specific process breakdowns and improvement opportunities, and are tracked and reported at the operational level.

P2P Metrics Framework


We strongly recommend companies follow a structured approach to develop a holistic P2P performance management framework:

  1. Define common metrics, and clearly delineate objectives, descriptions, and interdependencies with other performance measures
  2. Establish a standard methodology and systems to track and report performance; key components include:
    • Measurement scope, parameters, method, data source, and frequency
    • Benchmarking methodology and data source
    • Reporting dashboards, frequency, and forum
  3. Assign accountability for:
    • Measuring and tracking performance metrics
    • Benchmarking and reporting overall P2P performance
    • Identifying and prioritizing continuous improvement (CI) opportunities
    • Reviewing and approving CI projects
    • Implementing and monitoring CI initiatives
    • Calibrating performance metrics based on evolving business objectives

There’s no question that the old management adage “You can’t manage what you don’t measure” holds true in the case of end-to-end process management. Having a common set of appropriately designed performance metrics is both an enabler for and indicator of successful P2P transformation.

Is End to End Really the End All? | Sherpas in Blue Shirts

It’s all the rage. Global organizations are starting to take a more “user-centric” view of process workflows and operations. As opposed to organizing their delivery capabilities around discrete functions such as procurement, finance and accounting (F&A), and HR, the world’s leading firms are organizing around end-to-end (E2E) processes such as Procure-to-Pay, Hire-to-Retire, and Record-to-Report. But is E2E simply a “Hail Mary” pass, a wishful attempt to find value beyond labor arbitrage? Or, as evidence suggests, are the benefits – e.g., better EBITDA, tighter compliance, and greater financial control – real and proven?

A comprehensive CFO survey IBM conducted last year clearly demonstrated that organizations that consistently outperform their peers in EBITDA do, in fact, organize and deliver their global services around principles consistent with an E2E approach. Additionally, these companies all have standardized finance processes, common data definitions and governance, a standard chart of accounts, and globally mandated, strictly enforced standards supporting these E2E processes.

Everest Group’s experience supports IBM’s survey results. And while it seems clear that every large, complex global organization should be chasing E2E in order to improve results and reduce risk, it’s important to note that doing so is neither easy nor without challenges.

To realize the benefits of E2E, Everest Group typically recommends developing a three- to five-year roadmap with a heavy focus on building the business case, defining the target operating model, and managing stakeholder expectations and change.

Roadmap

Yet even the best game plan will have to address key challenges, including:

  • Fragmentation – The core of many E2E processes, F&A, is often fragmented in large companies. Finance processes are commonly distributed not only by business unit, but also often by geography. A global rationalization of F&A to understand the base case (current state) is a critical first step.
  • Vision – It is essential to document and agree on a common target operating model definition for each E2E process which details:
    • activities, standards, and data definitions
    • a common set of E2E process metrics used to measure performance, provide transparency on delivery performance, and underpin dashboard reporting
    • a framework for controls, oversight, and balance sheet integrity
    • a compelling and thorough business case that clearly defines the current state, investments, and future benefits
  • Technology – Even under the best of technology frameworks – a single, global instance of an ERP system like SAP – a further “thin” layer of enabling technologies and tools may be needed to drive standardized processes.

No, E2E is not a Hail Mary pass, but rather a sustained and balanced drive down the field for a game-winning touchdown. Success will require strong leadership, talented personnel, technology, a sound game plan, and a solid coaching staff to pull it all together, building momentum and confidence along the way.


Related Blog: Building a Robust Global Services Organization
