The Three Foundational Elements of Cost Benchmarking | Sherpas in Blue Shirts

In the global services industry, cost benchmarking is a method enterprises use to compare their outsourcing cost competitiveness against that of similar organizations. Yet, in Everest Group’s experience and observations, businesses all too often erroneously treat salary benchmarking as indicative of overall expenditures.

While salaries constitute the biggest component (60-70 percent) of operating costs, salary benchmarks fall short of providing the requisite insights, as higher salaries don’t necessarily mean higher overall costs. There are multiple other factors driving costs. The top three factors driving outsourcing operating costs, other than salaries, are:

  • Pyramid and talent model
  • Scope of work
  • Non-compensation cost

These factors are specific to companies’ context and typically depend on their positioning. In addition, there are market-driven forces impacting costs, such as attrition, wage inflation, and the exchange rate in different countries.

A typical benchmarking exercise takes all these factors, and others, into consideration.

Following are three cost benchmarking best practices.

Best practices of cost benchmarking

Take a holistic view

Cost benchmarking should consider a comprehensive set of factors affecting cost. Everest Group classifies these components into three broad buckets:

  • Compensation-linked costs (e.g., salaries, benefits)
  • Non-compensation costs (e.g., real estate, technology, support staff, transportation, recruitment, and training)
  • Policy levers (e.g., delivery pyramid, support staff leverage, and space usage)

An ideal cost benchmarking exercise takes a holistic view across all three categories.

Identify underlying cost drivers

By definition, cost benchmarking determines differences within the market. However, on their own, these differences offer limited insights. To discover opportunity areas for cost optimization and subsequent calibration, enterprises need to identify the underlying drivers of differences.

For example, if an organization’s real estate costs are higher than the market average, benchmarking should identify whether it is due to rentals, space per seat, seat utilization, or a combination of these factors. Similarly, for companies with higher support staff costs, benchmarking should identify whether it is driven by higher support staff salaries, skewed support staff ratios, or both. There are multiple such cost elements (e.g., transportation, recruitment, training) for which benchmarking could help identify the underlying drivers for calibration.
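The real estate example above can be sketched as a simple decomposition. The figures and the formula below are purely illustrative – per-seat cost modeled as rent multiplied by space per seat, divided by seat utilization – not Everest Group benchmark data:

```python
# Hypothetical decomposition of real estate cost per utilized seat:
# cost_per_seat = rent_per_sqft * sqft_per_seat / seat_utilization
def cost_per_utilized_seat(rent_per_sqft, sqft_per_seat, seat_utilization):
    return rent_per_sqft * sqft_per_seat / seat_utilization

# Organization and market pay identical rentals and use identical space
# per seat, so any cost gap must be driven by seat utilization alone.
org = cost_per_utilized_seat(rent_per_sqft=30.0, sqft_per_seat=80.0, seat_utilization=0.70)
market = cost_per_utilized_seat(rent_per_sqft=30.0, sqft_per_seat=80.0, seat_utilization=0.90)

print(round(org / market - 1, 2))  # 0.29 -> ~29% higher cost per utilized seat
```

Isolating each driver this way shows where calibration effort should go: here, improving utilization rather than renegotiating rent.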

Normalize data

Even in situations where cost drivers are identified, it is critical to ensure like-to-like comparisons in order to derive meaningful conclusions. For illustration, in the real estate example above, economies of scale can result in different real estate costs for a 100-seat center and a 1,000-seat center.

Thus, organizations should normalize data along key dimensions impacting the cost. Typical dimensions to normalize include:

  • Locations (e.g., onshore/offshore, Tier-I/II/III)
  • Scope of work (e.g., ITO/BPO, front office, back office)
  • Nature of work (e.g., transactional, complex)
  • Player type (e.g., GIC, service provider, specialist)
  • Scale (e.g., mid-size, large scale)
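Normalization along these dimensions amounts to comparing a center only against peers that match it on each dimension. A minimal sketch, with invented field names and cost figures:

```python
# Illustrative peer-group normalization: benchmark cost per seat only
# within centers that match on the chosen dimensions. All data is invented.
centers = [
    {"location": "offshore", "scale": "large", "cost_per_seat": 9000},
    {"location": "offshore", "scale": "large", "cost_per_seat": 9500},
    {"location": "offshore", "scale": "mid",   "cost_per_seat": 11000},
    {"location": "onshore",  "scale": "large", "cost_per_seat": 21000},
]

def peer_average(centers, **dims):
    # Keep only centers matching every requested dimension, then average.
    peers = [c["cost_per_seat"] for c in centers
             if all(c[k] == v for k, v in dims.items())]
    return sum(peers) / len(peers)

# A large offshore center is compared only against large offshore peers,
# not against onshore or mid-size centers.
print(peer_average(centers, location="offshore", scale="large"))  # 9250.0
```

Without the filter, the onshore center would drag the "market average" far above anything a large offshore center should be measured against.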

Cost benchmarking is not an easy, close-your-eyes-and-toss-the-dart exercise. Benchmarking that fails to take a comprehensive view of costs, identify underlying drivers, and normalize data risks making misleading comparisons that may lead to flawed results.

Can Benchmarks Be Leveraged for Forward-looking Pricing Negotiations? | Sherpas in Blue Shirts

A colleague of mine recently had an interesting discussion with a senior stakeholder from a large buyer organization. The buyer had engaged multiple partners to benchmark the company’s contracted ADM prices every year since 2009 and had difficulty understanding why the actual pricing trend each year was the opposite of what the benchmarks suggested. After some probing, it emerged that the buyer was using the benchmarks to interpret likely future pricing, rather than to assess current pricing.

Let’s look at the difference between the two via a weather example.

The average temperatures in Delhi in Q1 and Q2 2012 are summarized below.

[Image: Average temperatures in Delhi, Q1–Q2 2012]

Clearly, temperatures increased consistently during Q1 and Q2. Based on that information, a tourist planning an August 2012 trip to the region could have concluded that it would be even warmer during that month and added more sunscreen to his or her list of pre-travel purchases.

In reality, as shown below, Delhi was cooler in Q3 (and it was rainy). If the tourist incorrectly surmised that past weather trends meant that August temperatures would be higher, he or she would have packed an extra pair of sunglasses, rather than an umbrella…and gotten wet.

[Image: Average temperatures in Delhi, Q1–Q3 2012]

Note: for those who use the Fahrenheit scale and are itching to know what Delhi’s Q1–Q3 temperatures were, an easy – but not quite precise – conversion is to multiply the Celsius number by two and then add 30. (The exact formula is to multiply by 1.8 and then add 32.)
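The two conversions in the note can be compared directly. A quick sketch showing where the mental-math shortcut drifts from the exact formula:

```python
def c_to_f_exact(c):
    # Exact conversion: multiply by 1.8, then add 32.
    return c * 1.8 + 32

def c_to_f_approx(c):
    # Mental-math shortcut from the note: multiply by 2, then add 30.
    return c * 2 + 30

for c in (0, 10, 25, 40):
    print(c, c_to_f_exact(c), c_to_f_approx(c))
# The two agree exactly at 10 C (both give 50 F) and drift apart
# by 0.2 F per degree on either side of that point.
```

At Delhi-summer temperatures around 40°C, the shortcut overshoots by 6°F – close enough for packing decisions, if not for forecasting.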

Similarly, although price benchmarks represent current market pricing, they may, or may not, indicate the future trend. Thus, they are not particularly useful for buyers trying to negotiate future prices with their service providers. Rather, proper pricing negotiations require having a pulse on multiple drivers, broadly classified as demand-side, supply-side, and macro-economic. Equally important is understanding the relevance of these drivers in the deal-specific context of a particular buyer. For example, in Everest Group’s PricePoint: Q4 2012 analysis of ADM services, we noted that prices were at the same level as 12 months prior. Looking ahead, supply-side cues indicated stability in operating costs, potentially indicating status quo on pricing. However, the recovery in demand for transformation projects is expected to materialize in more ERP deals. Thus, buyers with significant ERP initiatives could witness higher prices in 2013.

So how can benchmarking exercises work to a buyer’s benefit?

Here is Everest Group’s take:

  • Price benchmarks typically represent current market (or sometimes past) pricing. They can help identify whether a service provider has been, or is, over-charging. In many situations, buyers can realize savings of more than 5-7 percent simply by calibrating their contracted prices per the benchmarks
  • Where pricing is currently in line with the benchmarks but is due for revision, understanding the pricing outlook is most beneficial. However, this involves accurate assessment of multiple pricing drivers, without which any forward-looking pricing discussion will be incomplete (and could leave the buyer out in the rain, sans umbrella)
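The calibration savings in the first bullet reduce to simple arithmetic. The rates and volume below are hypothetical, chosen only to illustrate a gap within the 5-7 percent range mentioned above:

```python
# Illustrative only: savings from recalibrating a contracted rate
# down to the market benchmark. All figures are hypothetical.
contracted_rate = 32.0   # blended hourly rate, USD
benchmark_rate = 30.0    # market benchmark rate, USD
annual_hours = 100_000   # contracted annual volume

savings = (contracted_rate - benchmark_rate) * annual_hours
savings_pct = (contracted_rate - benchmark_rate) / contracted_rate

print(savings)  # 200000.0 -> $200K per year, a 6.25% reduction
```

Even a two-dollar-per-hour gap compounds into meaningful annual savings at contract-scale volumes, which is why calibration against current benchmarks is worth the exercise.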

 

Application Development Productivity – You’ve Got to Be on the Indifference Curve! | Sherpas in Blue Shirts

The main indicator of productivity in application development (AD) – a periodic reduction in the number of application defects per 1,000 lines of code written in X person-hours – has stood the test of time at a modular level, as have the CMMI standard metrics against which any calculation of productivity can be benchmarked.

However, in the midst of mounting pressure to optimize discretionary IT spend, buyers have no option but to justify their spending via data that evaluates productivity at the enterprise level. CIOs must meet this requirement, while also facing the following two challenges:

  • First, they must simultaneously deliver increasing value and achieve year-on-year reduction in costs
  • Second, they have to contend with educated business users who may not have the patience to deal with a behemoth enterprise IT organization to get their functional needs fulfilled. Indeed, with the advent of market forces such as social media, mobility, analytics, and cloud (SMAC), business users today understand more about applications and technology than they ever did in the past

As a result, IT buyers must ignore all the ambient noise created by multiple metrics and focus only on the following two factors:

  • Business functions: The functionality that the business users are demanding, complete with SLAs
  • Application cost: The cost to acquire the applications that provide the required functions.

Fortunately for them, data indicates that “business functions versus cost” can be useful in benchmarking the productivity of any application development project.  As the following image illustrates, for AD projects of specified complexity and type, a combination of the number of functions (F) in an application and the cost (C) to acquire that application can be plotted into an “indifference curve.” An indifference curve is a graph showing different bundles of two variables between which a consumer is indifferent. Basically, all points on the curve deliver the same level of utility (satisfaction) to the consumer.

[Image: Application development productivity indifference curves, plotting functions (F) against cost (C)]

Based on benchmarking data over many years, all points that form these indifference curves deliver an optimal level of productivity for the complexity specified.

This productivity benchmarking can be used to assess and optimize the productivity of AD projects. For instance, if a particular AD project – application 3 in the above image – falls below its indifference curve, it is delivering sub-optimal productivity. Thus, by using this method of benchmarking, a buyer can immediately raise a flag with its service provider and push it to either:

  • Reduce costs to achieve optimal productivity, or
  • Provide more functionality to align the project with the productivity indifference curve
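The assessment above can be sketched as a simple check of a project's functions-per-cost ratio against a benchmark ratio for its complexity class. The benchmark values below are invented placeholders, not Everest Group data, and real indifference curves need not be straight lines through the origin:

```python
# Minimal sketch: treat the indifference curve for each complexity class
# as a benchmark functions-per-cost ratio. All ratios are hypothetical.
BENCHMARK_RATIO = {"low": 2.0, "medium": 1.0, "high": 0.5}  # functions per $K

def assess(functions, cost_k, complexity):
    ratio = functions / cost_k
    benchmark = BENCHMARK_RATIO[complexity]
    if ratio < benchmark:
        # Fewer functions per dollar than the curve: the buyer's flag.
        return "below curve: sub-optimal productivity"
    if ratio > benchmark:
        # More functions per dollar: the provider's consumer surplus.
        return "above curve: consumer surplus"
    return "on curve: optimal productivity"

print(assess(functions=80, cost_k=100, complexity="medium"))   # below curve
print(assess(functions=120, cost_k=100, complexity="medium"))  # above curve
```

A project below the curve triggers the two remedies listed above – cut cost or add functionality – while a project above it signals the provider-side surplus discussed next.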

This benchmarking tool also delivers benefits to service providers. If their AD project in question is application 1, which falls above the indifference curve, they: 1) have consumer surplus into which they can dig (i.e., buyers could be willing to pay more); and 2) can use the data to advertise their higher productivity performance to generate more business and differentiate themselves from their competition.

We’d love to hear your thoughts on this type of productivity benchmarking. Have you employed it? What lessons learned can you share with your peers?