
Will Corporate Venture Funding Lead to the Death of VCs as We Know Them? | Sherpas in Blue Shirts

For some time now, large companies have shied away from corporate venturing, unsure of the returns and the efficiency of capital usage. Enterprises have seen their venture initiatives fail, and many give up hope quickly after the initial enthusiasm. Even corporates that have managed to run successful funds have struggled to monetize their leveraged investments as they scale up. Given these challenges, it’s not surprising that enterprises with large, under-utilized cash piles have chosen to maintain the status quo rather than invest in an emerging startup economy.

However, we’re starting to see a significant change in the funding landscape. This week, Google announced plans for a new operating structure that effectively makes the eponymous search engine giant a subsidiary of a new holding company, Alphabet. One of the main rationales for this decision was to delink Google from the other ventures in which the parent organization is involved, giving those ventures more room to experiment with new ideas. After all, Google has diversified into areas including life sciences, drone delivery, space research, and home automation.

Last month, Workday, the enterprise SaaS poster boy, announced Workday Ventures, the company’s first strategic fund, focused on identifying, investing in, and partnering with early- to growth-stage companies that place data science and machine learning at the core of their approach to enterprise technology. In June 2015, Intel Capital led a US$40 million Series D investment in Onefinestay, best described as a luxury Airbnb competitor. Other corporate venture funding efforts have also figured prominently in the recent hyper-competitive boom in the deals landscape.

Corporate VCs don a new avatar

Corporate venture funding has taken on a new lease of life and aroused widespread interest, notably in The Economist and Harvard Business Review. This is not without reason. AMD, Dell, and Google were among the technology giants with early venture funds, and firms such as Microsoft and Salesforce made similar moves later. A CB Insights study on corporate venture investment trends found that corporate venture capital activity witnessed a significant uptick in 2014, with deals by corporate venture arms jumping 25 percent YoY and funding rising 76 percent. The most active corporate venture investors among technology companies in 2014 were Cisco, Comcast, Google, Intel, Salesforce, Qualcomm, and Samsung, underscoring the attention being paid to this route.

In terms of exits by corporate venture investors, technology players again emerged on top, led by Google Ventures (OnDeck Capital, HubSpot, and Nest Labs), Intel Capital (Yodlee, [x+1], and Prolexic Technologies), Samsung Ventures (Fixmo, Cloudant, and Engrade), and Qualcomm Ventures (Divide, MoboTap, and Location Labs). The marquee corporate venture deals in 2014 were Cloudera (US$900 million, led by Intel Capital), Tango (US$280 million, led by Alibaba), and Uber (US$1.2 billion, led by Google Ventures). The chief areas of investment included the Internet of Things, analytics, security, and platform technologies.

Differences between corporate venture funding and conventional VCs
  • While VCs tend to focus on growing portfolio companies and time their exit from an ROI standpoint, corporate venture funds take a strategic view of investments and look to use their expertise to guide start-ups
  • Acquisition of portfolio companies is not uncommon for corporate venture funds (e.g., Google Ventures and Nest Labs). Funding a startup and acquiring it later, rather than building one organically, makes for a stronger business case. Traditional VCs, by contrast, frequently invest with the intention of taking portfolio companies public
  • Corporate venture funds are less risk-averse than conventional VCs, given their deep pockets and long-term positions. This is also reflected in their higher involvement in seed funding rounds
  • Typical VCs tend to lag corporate venture funds in terms of average deal size and term, again owing to corporates’ deep pockets and long-term holdings
  • Corporate venture funded start-ups tend to go public more often than their VC-funded peers


Strategic technology investment or desperate spend?

Given improved macroeconomic confidence, there is a lot of “easy money” floating around the technology continuum, and it is beginning to inflate a “perpetual investment bubble.” This isn’t to say that doomsday is just around the corner, but with anything and everything getting funded (does anyone remember Yo?), utility, monetization models, and future relevance seem to be the last things on investors’ agendas. More often than not, there is a fine line between a blunder and a brilliant bet. Everyone in this easy-dollar-fueled utopia tends to labor under the messianic illusion that the next multi-billion-dollar bet is just around the corner and will change the world. The reality of investing is that most players add incremental value to existing processes, systems, and interfaces, rather than changing them as we know them.

With their tremendous business acumen, corporate funds have the talent, skills, pedigree, and, ultimately, the deep pockets to exist and thrive in a volatile knowledge economy as they look to identify and nurture a truly revolutionary idea that goes beyond incremental technology value. That said, there is likely to be significant churn once the rose-tinted glasses come off. Still, with the strategic depth and domain guidance large enterprises can provide, their portfolio companies are likely to be better positioned to ride the wave.

Why You Need to Buy Security Differently from Managed Services | Sherpas in Blue Shirts

In many newspapers these days, one doesn’t have to read very far before tripping over the latest sensational article on a security breach. The black hat community conducting these attacks is incredibly well funded and incredibly sophisticated, and our traditional firewall-based security precautions are woefully inadequate. The implications for companies are stark. I think we must start with how we approach security.

The list of attacks is long and includes, for instance, Target’s customers, Anthem’s healthcare customer records, and the U.S. federal government apparently being penetrated by the Chinese. Behind all this is the frightening prospect of a highly sophisticated black hat community, potentially funded by national governments in China and Russia and increasingly allied with organized crime. The black hats are conducting attacks on a scale that is both mind-boggling and deeply worrying, not only right now but even more so over time, as this community’s R&D effort drives increasing levels of sophistication.

To date, we have approached security as a hygiene vehicle – one and done. We think about it in terms of firewalls securing our data center, or of making the different layers of the IT or technology architecture secure. We invest once to try to imbue our technology with a level of defense, then seek to spread that investment across the technologies, expecting the cost to decrease as we move down the learning curve. The problem is that this cannot stand against the R&D effort and rate of improvement in the black hat community.

Therefore, we must change our expectations and how we buy security. We must have a separate security tower in which the expectation is that costs will rise over time, and that we will invest ever more money and time in ways to counteract the growing black hat menace. The black hats are not constrained to attacking just one functional element of an organization’s service chain; therefore, businesses need an overarching security solution that secures everything. The consequences of not countering this threat are immense.

When we approach security as a hygiene vehicle, we ask for a component of security and monitoring in each technology function. Whether it’s a data center, applications, network, or other infrastructure, we use firewalls, encryption, and other tools and techniques to harden our environment and make it less vulnerable. That’s all well and good, and it should continue. But on its own it is woefully inadequate given the increasing sophistication of, and threat from, the black hat community. We cannot expect to be defended, or even to maintain our corporate responsibility, if we assume a hygiene approach is adequate.

It’s clear that we must also procure a different kind of security – one that is overarching and that matches the rapidly changing landscape of vulnerabilities uncovered and exploited by extremely well-funded and incredibly gifted black hats. We must realize that a hygiene approach to security will prove dramatically ineffective against the black hats’ innovation. And we must expect the cost of an overarching security function to increase, because we will need to constantly invest in our capabilities to innovate – and innovate faster – to counteract their threats.

We see these changing expectations starting to take hold, with the chief security officer now sitting in a role outside of technology and reporting directly to the CFO, CEO, or board. But we have not yet seen the kind of budget and capability invested in that function that is necessary to counteract the growing threat.

Furthermore, we have yet to see service providers offering a managed service to this new entity. The managed services they offer are based on the normal managed services principle: providing a constant service that gets cheaper over time as the learning curve and technologies mature. That’s the underlying theme of all managed services. That principle is stood on its head in the context of security, where the adversaries’ sophistication keeps rising exponentially. The cost of the sophistication needed to counteract them must rise in step – which doesn’t work under the managed services principle.

Moreover, no one firm can have the sophistication to take on the Russians, the Chinese, organized crime, and the broader black hat ecosystem. That’s not a reasonable expectation for even the largest organizations. Therefore, organizations must turn to service providers that can aggregate customers in order to match the investment of the black hat community. The services industry must come together to defeat this massive threat to businesses, but managed service offerings are not the answer. We must innovate at the same rate as the black hats; thus a provider’s expectation of cost dropping over time is false, because the learning curve will not go down.

Bottom line: The cyber attack situation will get worse. All businesses – including service providers and their customers – must expect their investments in security to increase to match the ever-escalating threats.



SaaS versus Enterprise IT as-a-Service | Sherpas in Blue Shirts

New business models are capturing growth and stand to reshape the services industry. Two of the most promising of these are SaaS and “Enterprise IT as-a-Service.” Buyers need to understand that each model takes a different approach to delivering services, that their risks are not the same, and that each approach has different consequences.

The two models are alike in one respect: both attempt to change the relationship of IT to the business by better aligning the IT environment with customer and business needs and by making the IT infrastructure more agile and responsive to changes in the business environment. So the prize is better alignment to business value, more responsiveness, shorter time to new functionality, and more efficient use of IT dollars.

But how they deliver that prize is very different.

SaaS approach

The SaaS promise is strong on efficiency gains. SaaS is inherently a multitenant, leveraged approach in which common platforms are built and standard functionality is developed that can be configured to a customer’s needs. The platforms come with flexible, powerful APIs that allow the SaaS offerings to be integrated into enterprise systems. But at its heart, SaaS is a one-size-fits-all, unyielding standard. Efficiency gains in a SaaS approach come from having many customers use the same software and hardware, limiting customization to what configuration and APIs allow.
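
To make the configuration-over-customization point concrete, here is a minimal sketch of how a tenant might tailor a multitenant platform’s behavior through its API. Every name in it – the endpoint, token, and configuration fields – is a hypothetical placeholder rather than any particular vendor’s API; the point is that all tenants run the same code, and only the configuration differs.

```python
import requests

# Hypothetical endpoint and token; illustrative placeholders,
# not any real vendor's API.
BASE_URL = "https://api.example-saas.com/v1"
TOKEN = "tenant-api-token"

# Tenants tailor behavior through configuration, not custom code:
# the same multitenant codebase serves every customer.
config = {
    "fiscal_year_start": "04-01",
    "approval_levels": 2,
    "custom_fields": [{"name": "cost_center", "type": "string"}],
}

resp = requests.put(
    f"{BASE_URL}/tenant/config",
    json=config,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()  # fail loudly if the platform rejects the configuration
```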

SaaS apps are typically function-based, so they evolve quickly. When the SaaS owner introduces innovations, it does so in one stable environment, without having to update or account for the very diverse environments that traditional software packages must accommodate. SaaS can therefore deliver rapid changes in functionality that benefit the entire ecosystem. In contrast, the traditional software model evolves much more slowly and imposes constant upgrade requirements that are both expensive and have knock-on consequences for the systems that integrate with them.

Enterprise IT as-a-Service approach

The Enterprise IT as-a-Service model takes a supply-chain approach, moving each component of the supply chain into an as-a-service model. This allows the whole supply chain to be better aligned. It loosely couples the parts of the supply chain, allowing each component to evolve at its own pace and allowing the supply chain to capture the benefits as it does.

Similarity in benefits

Both models deliver similar flexibility and agility benefits, but they achieve them in different ways. The Enterprise IT as-a-Service model allows far more customization than the SaaS model. It assembles components into a customized end-to-end offering, whereas a SaaS vehicle achieves those aims through API configuration.

Both models also achieve similar benefits in cost savings, and both can achieve significant efficiencies. SaaS does so through high leverage over an unyielding standard. The Enterprise IT as-a-service approach achieves efficiencies partly by reducing the over-capacity that all functionally driven IT departments maintain inside their ecosystems.

Substantial differences in cost, risk, and technical requirements

The two models differ substantially in cost, risk, and technical debt. The risk and technical debt involved in adopting a SaaS vehicle can be very substantial, particularly if it replaces a system of record. Existing systems of record carry substantial technical debt in terms of system integration, unamortized assets, and ongoing software licenses. In moving from that environment to SaaS, organizations often face substantial write-downs and a risky implementation.

The risk is different in the Enterprise IT as-a-service model in that it can be managed component by component. Organizations can achieve substantial flexibility by not having to move off existing systems of record or existing software. Those platforms can be migrated down this path over time, thereby lessening the technical debt and presenting a different risk profile.

Which approach is better?

I believe both approaches are important to the future of IT. At the moment, large enterprises adopt SaaS mostly as point solutions. Enterprise IT as-a-service offers the opportunity to operate at the enterprise level, not at the point-solution level.

I don’t believe the two approaches are mutually exclusive and expect that organizations will embrace both capabilities over time.

Hadoop and OpenStack – Is the Sheen Really Wearing off? | Sherpas in Blue Shirts

Despite growing adoption of Hadoop and OpenStack, our recent discussions with enterprises and technology providers revealed two prominent trends:

  1. Big Data will need more than Hadoop: Along with NoSQL technologies, Hadoop has really taken the Big Data bull by the horns. Indications of a healthy ecosystem are apparent: leading vendors such as MapR are witnessing 100 percent booking growth, Cloudera is expecting to double itself, and Hortonworks is almost doubling itself. However, the large vendors that really drive the enterprise market and mindset and sell multiple BI products – such as IBM, Microsoft, and Teradata – acknowledge that Hadoop’s quantifiable impact is as yet limited. Hadoop adoption continues on a project basis, rather than as a commitment to improved business analytics. Broader enterprise-class adoption remains muted, despite meaningful investments and technology vendors’ focus.

  2. OpenStack is difficult, and enterprises still don’t get it: OpenStack’s vision of making every datacenter a cloud is facing some hurdles. Most enterprises find it hard to develop OpenStack-based clouds themselves. While this helps cloud providers pitch their OpenStack offerings, adoption is far from enterprise class. The OpenStack Foundation’s own survey indicates that only around 15 percent of organizations utilizing OpenStack are outside the typical ICT industry or academia. Moreover, even cloud service providers, unless really dedicated to the OpenStack cause, are reluctant to invest meaningfully in it. Although most have an OpenStack offering or are planning to launch one, their willingness to push it to clients is subdued.

Why is this happening?

It’s easy to blame these challenges on open source and the contributors’ lack of a coherent strategy or vision. However, that oversimplifies the problem. Both Hadoop and OpenStack suffer from a lack of needed skills and applicability. For example, a few enterprises and vendors believe that Hadoop needs to become more “consumerized,” enabling people with limited knowledge of coding, querying, or data manipulation to work with it. Its current, esoteric adoption is driving these users away, and the fundamental promise of new-age technologies – making consumption easier – is being defeated. Despite Hortonworks’ noble (and questioned) attempt to create an “OpenStack-type” alliance in the Open Data Platform, things have not moved smoothly. Apache Spark promises to improve Hadoop’s consumerization with fast processing and simple programming, but only time will tell.
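
As a sense of what “simple programming” means here, below is a minimal PySpark sketch of the canonical word count; the HDFS paths are placeholders, not a real deployment. The equivalent job written as hand-coded Java MapReduce typically runs to dozens of lines spread across mapper, reducer, and driver classes.

```python
from pyspark import SparkContext

sc = SparkContext(appName="WordCount")

# Read text files, split lines into words, and count occurrences.
counts = (sc.textFile("hdfs:///data/input/*.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

counts.saveAsTextFile("hdfs:///data/word_counts")
sc.stop()
```

Whether even this is simple enough for the analysts Hadoop needs to win over is precisely the open question.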

OpenStack continues to struggle with a “too tough to deploy” perception within enterprises. Beyond this, there are commercial reasons for the challenges OpenStack is witnessing. Though there are OpenStack-only cloud providers (e.g., Blue Box and Mirantis), most other cloud service providers we have spoken with are only half-heartedly willing to develop and sell OpenStack-based cloud services. Cloud providers with offerings across technologies (such as BMC, CloudStack, OpenStack, and VMware) believe they would have to create sales incentives and possibly hire different engineering talent to build OpenStack cloud services. Many of them believe this is not worth the risk, as they can acquire an “OpenStack-only” cloud provider if real demand arises (as I write, news has arrived that IBM is acquiring Blue Box and Cisco is acquiring Piston Cloud).

Now what?

The success of both Hadoop and OpenStack will depend on simplification in development, implementation, and usage. Hadoop’s challenges lie both in the way enterprises adopt it and in the technology itself. Most enterprises target a complex problem by default, without realizing that it takes time to get data clearances from the business, and this colors the business’s perception of the value Hadoop can bring. Hadoop’s success will depend not on point solutions developed to store and crunch data, but on the entire value chain of data creation and consumption. That entire process needs to be simplified for more enterprises to adopt it. Hadoop and its key vendors need to move beyond their Web 2.0 obsession to focus on other enterprises. With the increasing focus on real-time technologies, Hadoop should get a further leg up. However, it needs to integrate more deeply with existing enterprise investments, rather than becoming a silo. Though still in its infancy, the concept of an “Enterprise Data Hub” – wherein the entire value chain of Big Data-related technologies integrates to deliver the needed service – is something to note.

As for OpenStack, enterprises do not like that they currently require so much external support to adopt it in their internal clouds. If the drop in investments is any indication, this will not take OpenStack very far. Cloud providers want enterprises to consume OpenStack-based cloud services. However, enterprises really want to understand a technology before making a long-term commitment to it, and they are cautious of anything that requires significant reskilling or has the potential to become a bottleneck in their standardization initiatives. OpenStack must address these challenges. Though most enterprise technologies are tough to consume, the market is definitely moving toward easier deployments and upgrades. Therefore, to make OpenStack a truly enterprise-grade offering, its deployment, professional support, knowledge management, and requisite skills must be simplified.

What do you think about Hadoop and OpenStack? Feel free to reach out to me at [email protected].



How Service Providers Can Illuminate Clients’ Path to Transformation | Sherpas in Blue Shirts

One of the biggest issues facing executives today is that they see the need to change their organization through automation, analytics, or other big ideas that are clearly vetted, but they struggle to drive the change. Their organizations are reluctant, or frightened, to change – much like horses in a steeplechase that shy at the fences. Consequently, service providers are frustrated: they see potential for their business, but they’re unable to move from project to program. How can a provider help clients jump the obstacles instead of shying away from them?

How can a provider illuminate the path forward to transformation and get the client on board for change? Let’s start by talking about what doesn’t work – white papers. No executive these days reads white papers. It just doesn’t happen. As I’ve blogged before, they’re too dense, too theoretical, and too preachy. And they’re nakedly self-interested.

So what are the tools that enable clients to jump the fence? At Everest Group we’ve been thinking about and researching this. Executives need simple yet rigorous, relentlessly objective instruments that they can use to challenge their organizations. And once they examine such an instrument, a clear path forward opens up behind it.

What would such an instrument look like? It might be a maturity index showing competitors’ degree of progress in the field. Everest Group’s PEAK Matrix™ is another example of a first step in instrumentation; it illuminates providers’ capabilities in the marketplace.

Every organization is unique, and each struggles with how to apply transformation or disruptive technologies to that uniqueness. A set of objective instruments allows executives to hold a conversation within the organization and to challenge it. The organization can then ask for help and imagine a roadmap – without a provider telling it what the roadmap should be.

Simple but rigorous instruments will illuminate the way to transformation and help the organization internalize the path forward through the challenges.



Groundbreaking Rio Tinto and Accenture As-a-Service IT Deal | Sherpas in Blue Shirts

Rio Tinto, a global diversified mining company, recently announced a groundbreaking initiative it is undertaking with Accenture, best described as moving Rio Tinto’s enterprise IT function into an as-a-service model. Game-changing benefits permeate this deal, and it’s an eye-opener for enterprises in all industries.

Let’s look at what Rio Tinto gains by pulling the as-a-service lever to achieve greater value in its IT services.

First, it changes the relationship between the business and IT. It breaks down the functional silos of a traditional centralized IT organization and aligns each service with the business, creating an end-to-end relationship in each service, whether it be SAP, collaboration, or any other functional service.

Second, the initiative moves Rio Tinto’s entire IT supply chain to a consumption-based model. This is incredibly important in a cyclical commodity industry, where revenues are subject to the world commodity markets. The price of Rio Tinto’s core product, iron ore, can be slashed in half in the course of a year, halving revenues and triggering cost reduction initiatives. Correspondingly, in boom times commodity prices can double or triple, resulting in frenetic energy to expand production. The as-a-service model ends this commodity whiplash. It gives Rio Tinto a powerful ability to match costs to its variable consumption patterns.

This move will also change the pace of innovation within Rio Tinto, allowing it to future-proof its investments in IT. As many enterprises discover, multi-year IT projects often end up out of date by the time they are implemented. Rio Tinto sought to shorten the IT cycle time so it can take full advantage of the innovations the market generates. In the as-a-service model, it can pull those innovations through to the business quickly – something traditional siloed, centralized IT functions struggle to do.

These are game-changing benefits. It’s important to recognize that capturing them required a complete rethinking of how Rio Tinto’s IT (applications and infrastructure) is conceived, planned, delivered, and maintained. Moving from a siloed take-or-pay model to an integrated consumption-based model required wide-ranging re-envisioning and reshaping of the ecosystem for deploying its technology; it touched IT talent, philosophies, processes, policies, vendors, and partners.

Clearly this journey will be well worth the effort given the substantial game-changing benefits. Challenging times call for breakthrough answers. The cost benefits alone are significant; but even more important is the ability of this approach to accelerate the transformation of the company into a more digital business. Rio Tinto chose to partner with Accenture to move its organization to this fundamentally different action plan for delivering and consuming IT and meeting the rapidly evolving needs of the business.



Old Wine in Old Wineskins | Sherpas in Blue Shirts

A famous teaching of Jesus explains that it’s a mistake to pour new wine into old wineskins because it will burst the skins and both the wine and the wineskins will be ruined. New wine belongs in new wineskins. I think we’re seeing this principle playing out in technology – where the consequences are profound.

New wine expands and grows fast; so it requires a supple, pliant container to allow for that expansion. Old wine is stable and mature; it does better in a stable, consistent environment.

For the most part, now that the cloud experiment is over, we see that new technologies and functionalities have many of the properties of new wine. They are effervescent; they change continually, move quickly, and often rely on heavy iteration. They constantly expand and change. They are best suited to new architectures such as cloud infrastructure and SaaS services. New technologies also have new requirements; thus, they need new structures and new, more flexible governance vehicles to capture their full value.

Legacy applications – the systems of record in which enterprises have invested hundreds of millions of dollars – are mature and were designed for their traditional environments, which tightly govern change. They sit in data centers that have the requisite management support and talent pools.

The services industry is starting to recognize the profound truth of the new and old wineskins: at this point in time, legacy applications are best left in their old, original containers, where they can continue to operate in a mature fashion. Old applications and systems of record need to remain in their existing frameworks and architectures, and they should be changed only slowly. New functionalities and technologies, meanwhile, need to go into new wineskins – architectures that allow for and encourage agility and the other attributes that support evolving change.



Digital Is Shading Out Cloud | Sherpas in Blue Shirts

Three years ago, the global services industry was abuzz with talk that the world would be set on fire by cloud computing. Today, although CIOs and senior executives accept the cloud model and are looking to implement it, they are increasingly excited about infrastructure and the digitization of business. The digital revolution is shading out cloud, capturing the imagination and mindshare of the C-suite.

Cloud is certainly important, but its impact is just starting to gain traction, and already the C-suite is moving on to a new horizon. My, how short our attention spans are.

Although digital can incorporate aspects of cloud computing, its impact is enormous compared to cloud’s, in both proportion and potential.

I wonder what will be next in line to capture our imaginations and how quickly that will come to gain prominence.


The Downside for Enterprises in Automating | Sherpas in Blue Shirts

Service delivery automation is obviously powerful. But there’s a downside and potential risk for enterprises in the shift to automation.

The benefits include reducing the number of FTEs in a process and therefore reducing its cost. Automation can be applied without changing the systems of record. And the implementation cost is significantly less than traditional reengineering of ERPs, while the effort can be driven by a business unit or the process owner instead of the IT department.

The challenge comes in the automation layers that sit on top of and between an organization’s systems of record. These layers are highly sensitive to changes in the underlying systems, and most organizations will struggle to maintain them.

As an example, if a system of record, a government website, or any other website from which the automation tool pulls data changes in any way, the tool or robot could make a mistake or stop working. So these tools need to be carefully monitored and constantly adjusted, or tuned, to the changes in underlying systems. It’s not realistic to believe the underlying systems of record will be static; like all systems, they change. And even small changes will require retuning the robots.

Failing to monitor and adjust the automation layers opens up risks. For example, it could turn type-1 errors (a mistake in a manual process on one invoice, which is a problem but a recoverable one of perhaps $10,000, $20,000, or $100,000) into type-2 errors (the same mistake made on all your invoices, resulting in millions of dollars of problems).
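
To see why these layers are fragile, and what the monitoring must catch, consider a minimal sketch of a robot that scrapes a reference rate from a page it does not control. The URL, CSS selector, and plausibility bounds are all hypothetical placeholders. The two guard clauses are the monitoring in miniature: they halt the batch the moment the page layout changes or the value looks implausible, so a single type-1 error cannot silently become a type-2 error across every invoice.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative plausibility bounds for the scraped rate; an assumption,
# not a real business rule.
EXPECTED_MIN, EXPECTED_MAX = 0.5, 2.0

def fetch_reference_rate(url: str, selector: str) -> float:
    """Scrape a rate from a page the robot does not control."""
    html = requests.get(url, timeout=10).text
    node = BeautifulSoup(html, "html.parser").select_one(selector)
    if node is None:
        # The page layout changed: halt the whole batch rather than guess,
        # so one broken selector cannot corrupt every downstream record.
        raise RuntimeError(f"selector {selector!r} no longer matches; halting batch")
    rate = float(node.get_text(strip=True).replace(",", ""))
    if not EXPECTED_MIN <= rate <= EXPECTED_MAX:
        # Plausibility check: a bad value caught here stays a single,
        # recoverable error instead of propagating to every invoice.
        raise RuntimeError(f"rate {rate} outside expected bounds; halting batch")
    return rate

# Hypothetical usage; the URL and selector are placeholders.
# rate = fetch_reference_rate("https://example.gov/rates", "td.rate-usd")
```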

The automation layers will be fragile and can even break. So it’s clear that service delivery automation requires constant attention and maintenance to deliver on its promise. Many IT organizations have the capability to implement the automation tools; it’s the ongoing monitoring and adjustment they will struggle with. These significant risks are a strong argument for using third-party providers.

Challenger’s Advantage | Sherpas in Blue Shirts

Every morning in Africa a gazelle wakes up knowing that it must outrun the fastest lion. Every lion wakes up knowing it must outrun the slowest gazelle. So when the sun comes up in Africa, you’d better be running. We see this happening in the services world — as cloud and as-a-service models move into mainstream adoption and trump labor arbitrage, everybody is running and the hunters become the hunted.

It’s clear that the services world is changing due to new technologies and models. Historically, the dominant players of one era have failed to make the transition and become dominant players in the next. Established dominant hunters do not know how to behave, or succeed, as game; the emergence of a super predator disrupts the natural order.

The dominant providers really struggle with making the change. They talk about it. Their senior executives recognize the need. They have structured their business to perfection to facilitate the incumbent model. It’s very difficult and very unusual for them to successfully transition to a new model. We see this time and time again.

Here’s a real-world example. I was on an airplane, headed home after a meeting with senior executives of a major IT provider. At the meeting they had laid out their commitment and strategy around cloud and as-a-service models, and the massive investments they have made and plan to make to facilitate this transition.

On the airplane I sat next to another executive from the same company, returning from a trip to South America where he had advised clients about future technology. He spent most of the flight spouting scorn, ridiculing the new cloud technologies as inappropriate for running enterprise-class applications and stating confidently that they would never replace or threaten the existing order.

Think of the confusion and conflict customers face when they hear dueling and contradictory positions coming from the same company. They are much more likely to adopt a provider that is completely aligned with the new models. This is why, historically, challengers succeed.

A similar situation occurred when I returned from a provider conference at which the top execs laid out their grand vision. Less than a week later, Everest Group observed the provider working in a client account, and the account team espoused exactly the opposite of what the senior leaders had said.

We see similar behavior within Indian firms. They make the most money when they deliver work from a low-cost location (ideally a tier-3 city) with the most junior people (the freshers). That’s the heart of the pyramid and of their factory model, and it achieves the highest margin a service provider can make. Incumbent providers with factory models have high turnover as they constantly push work to the next generation of junior people coming in.

They do this even though they know their customers want less turnover and more work delivered onsite at the client, or at least in country, because customers want more intimacy. So customers’ needs and providers’ commercial interests are unaligned.

We see providers’ executives making big announcements about putting more people in country and on site to deliver services. But what their salespeople say, and what their management and operations people do, is the opposite.

In the above examples, the providers’ employees did not buy in to the new models. And this is but one of a thousand different points of alignment that need to happen. The incentive structure, organization structure, and underlying technology enablement must change. And the hearts and minds of employees must change with them.

Customers aren’t stupid. And they do change providers. We’ve seen a big jump in challenger models across the board in outsourcing. Increasingly the challenger has an advantage over the incumbent. They’d better be running.


