
Why Is ADP So Successful? | Sherpas in Blue Shirts


At Everest Group, we’ve been assessing why some service providers are so successful. Using a framework we created that focuses on six characteristics, ADP’s success is easy to understand. At the heart of it is the fact that they live up to their promise of being the most trusted firm in payroll services.

As the figure below illustrates, branding, go-to-market approach, and portfolio are three key characteristics in successful companies.

[Figure: Assessment framework for technology service companies]

I think what’s remarkable about ADP is that they align their brand of trusted payroll services with every aspect of their operations. They go to market in a way that aligns with their brand choices and allows them to dominate, or at least serve, every geography, both large and small. They design their product portfolio around payroll itself, surrounding the payroll system with offerings that reinforce and deliver on their complete promise.

They have the most comprehensive ecosystem around the payroll process, connecting with tax jurisdictions and integrating into a wide range of HRIS, financial, and ERP systems. As such, ADP may not at any one point be the leading provider of technology, but they are the most trusted provider. They achieve this through the breadth of their ecosystem and the breadth of their global offerings: there is no jurisdiction in the world in which they don’t keep up with tax and payroll regulations.

This allows them to do something very rare – serve both very large and very small companies. And they are the safest pair of hands in payroll.

Assessing the other characteristics necessary for success, it’s clear that ADP is always relevant in terms of technology. They continue to invest in technology, never allowing their technology to become out of date or antiquated. Staying relevant with technology doesn’t necessitate that they be leading edge; in fact, the leading-edge role would take away from their “most trusted” status.

They largely grow their own talent and don’t rely on large-scale outside recruitment. As a result, they are able to deliver a high degree of quality and consistency in their talent. They ensure that ADP is a rewarding place to work and grow a career, which allows them to nurture that talent.

ADP’s business model is completely aligned with where the services industry is headed. For example, any way you look at it, ADP was one of the first users of SaaS – before most of us knew what SaaS and BPaaS were.

All of these characteristics make ADP incredibly formidable in all things payroll and able to serve an incredibly wide variety of customers in almost every industry and geography. Bottom line: ADP delivers a nice, steady return to shareholders and trusted services to clients.


Photo credit: Flickr

Digital Transformation – Will IBM Attain its Aspirational Leadership Position? | Sherpas in Blue Shirts


Everest Group had the opportunity to attend IBM’s APAC analyst day in India on 11-12 June 2015. Business and technology leaders from IBM presented their offering portfolio, demos, and real-life transformative case studies with active participation from their clients. One thing that stood out was how Big Blue is communicating not only its technology vision, offerings, and organizational commitment toward open technologies, but also its internal transformation to serve clients and reclaim its technology leadership position. It realizes that the “old IBM” ways will no longer work, and that it needs to become more nimble and innovative, and play an important part in shaping the technology disruption the digital age has brought upon us.

What’s happening?

Earlier this year, IBM aligned its go-to-market strategy around key industry verticals. It also created internal structures to make its myriad offerings, technology groups, services business, sales and marketing, and research labs work in sync. It believes this will help it create the solutions required to leverage digital technologies, and thereby not only redefine itself, but also create a new ecosystem of product and service providers around it.

Going back in history, IBM truly transformed the technology industry when it invented the mainframe. And while today’s technology becomes tomorrow’s legacy, no one can deny that the mainframe was a landmark system that shaped and created the technology industry as we know it today.

However, since then, IBM became a nuts-and-bolts company providing middleware, desktops, and back-end efficiency solutions focused on enterprise computing. While it introduced incremental innovation and acquired many technology companies, it did not play a meaningful role in shaping the industry vision. It continued to invest in its research labs, and its products were always considered leaders in enterprise computing. But it was not a leader in true enterprise technology transformations such as the rise of ERP, virtualization, SaaS, or IaaS.

This has changed. The analyst meeting demonstrated that digital has become the new pivot around which IBM will reclaim its earlier position as the company that forms, shapes, and guides the technology industry. This story was ably supported by multiple client interactions during the event. Clients say this is not the IBM they had earlier worked with, or had expected to work with.

IBM’s much publicized partnerships with digital native firms like Facebook and Twitter, and leading user experience and design companies such as Apple, are an important but small part of its digital journey. The bigger part is moving away from its traditional way of working, and realizing that it must play a key role in the digital-everywhere environment. Its increased focus and core commitment toward open technologies are highly apparent. And it has always had the technology, scale, and reach to transform businesses. Now, the muscle it’s putting behind SoftLayer and Bluemix, its mobility play, and its investments in analytics, the Internet of Things (IoT), and Watson have the potential to transform not only its clients but itself as well.

Is there any challenge?

With its go-to-market alignment with industry verticals, IBM can bring effective solutions to clients looking to transform their businesses. However, disruption in most industries is happening from the outside (e.g., Uber to the taxi industry, Airbnb to hospitality, Apple Pay to banks, and Google cars to automotive), rather than from within. Therefore, a rigid structure around industries may not work well. IBM will need to ensure that its technology, industry vertical, and innovation groups talk to each other, an area where it has historically struggled.

Moreover, monetization of some of these innovations will be a long, drawn-out process. IBM has had significant growth challenges, and has shed many of its businesses. For its growth and profitability to return – which should be the big drivers, along with reclaiming its innovator status – IBM has to do a lot more. It has historically been viewed as a company that helps clients’ operations run more efficiently; it now needs to carefully position and communicate its willingness and ability to partner in clients’ growth.

Where does IBM go from here?

In addition to the digital technologies IBM possesses, its other strong strategic initiatives include: internal transformation around reskilling the workforce toward innovation and design thinking; commitment to open technologies; collaborative alignment between its services business and its technology groups; renewed commitment toward client centricity; improved sales effectiveness; and focus on solving core industry problems.

IBM’s changes have been pushed right from the CEO’s office, and IBM executives believe results will be visible in the next 6 to 12 months. IBM needs to play a dual role in which it helps some clients disrupt their industries and business models, and helps others sail through the digital disruption. It needs to become a technology innovator again. While that is a difficult task, we believe it has the needed technology, vision, and now internal alignment to achieve these objectives.

Avoid the “Gotchas” in Purchasing Next-Gen Tech Services | Sherpas in Blue Shirts


The new technologies sweeping the market hold great promise of competitive advantage. But there’s a disturbing trend occurring in the services sales process for these technologies that poses a risk for buyers. Look out for providers talking about cloud, mobility, big data, the Internet of Things, and social in the same breath as SaaS/BPaaS, automation, robotics, and artificial intelligence. Providers that jumble these technologies together as though they are homogeneous really don’t understand the implications of what they’re trying to sell you. They’re basically throwing mud against your wall and seeing what sticks.

The possibilities with all of these technologies are exciting, but they have distinctly different impacts on the buyer’s business.

As illustrated in the diagram below, we can bucket one class of impacts as those that create new business opportunities. They provide new types of services that enterprises can use to change the composition of their customers or provide different kinds of services. For example, the Internet of Things holds enormous promise around allowing enterprises to provide a completely different class of services to their customers. In mobility and social technologies, the digital revolution holds the promise of changing the way businesses interact with their end customers.

[Figure: Changing technology opens up new opportunities but also creates strategic challenges]

The second class of new technologies (SaaS/BPaaS, automation, robotics, and artificial intelligence) changes how services are delivered. For example, SaaS takes functionality that was already available and delivers it through a different mechanism. Automation and robotics change the way service is provided by shifting from FTE-based models to an automated, machine-based delivery vehicle.

The two buckets of technologies have different value propositions. The first class (cloud, mobility, big data, IoT, and social) is about getting new and different functionality. The impacts of the second class are lower costs and improved flexibility and agility. Each class of technologies has different objectives and value propositions and thus needs a different kind of business case. Buyers that mix these technologies together in a business case do themselves a substantial disservice.

The way you need to evaluate the two distinct types of technologies (and the providers offering them) is completely different. A provider that recognizes that automation, robotics, and SaaS are about changing the nature of delivery will have a much more thoughtful conversation with you and build its value proposition around flexibility, speed, quality of service, and cost.

A provider that recognizes the impact of mobility, cloud, big data, and the IoT technologies will talk to you about a value proposition around standing up exciting new capabilities, creating new offers and changing the conversation with your end customers.

So, buyer beware. If you’re talking with a provider that mixes these technologies’ distinct value propositions together, you’re dealing with a provider that really doesn’t understand what it’s offering.


Photo credit: Flickr

Onshoring, Talent Development, Automation – My Top 10 Picks from RevAmerica 2015 | Sherpas in Blue Shirts


Last month I had the opportunity to attend and co-present with Eric Simonson at a special event in the outsourcing sector, RevAmerica 2015, held in New Orleans, LA. You can download our keynote presentation here. For those who might not know, RevAmerica is a domestic outsourcing event in its second year. The event focused on a multitude of topics and was attended by a strong community of service providers, buyers, economic development agencies, analysts/consulting firms, and academic institutions. Here are my top 10 takeaways from the event:

  1. Buyers are looking at their IT and BP service delivery portfolio more holistically than ever and asking the shoring question more seriously. They are willing to evaluate onshoring as an alternative, and in some cases are even willing to bend their rules around cost savings to get extra flexibility in delivery.

  2. Service providers have a major role to play in onshoring growth as they can not only harness the available talent pool, but also create a delivery model that makes economic sense.

  3. Domestic pure-play service providers are diligently making the business case for onshoring. The ones that do this without demeaning the benefits of offshoring are likely to be more successful, not only in winning pursuits but also in sharpening their own value proposition for buyers. In this regard, I liked the approach of Genesis10, Nexient, and Rural Sourcing, which play to the strengths of onshoring rather than making unnecessary comparisons with offshoring.

  4. Economic development agencies (EDAs) are evolving in their thinking and go-to-market approach. Those that are serious about this sector, such as the North Dakota Dept. of Commerce and Louisiana Economic Development (LED), have a more collaborative approach toward working with providers and enterprises. However, there is a lack of collaboration among economic development agencies toward this common goal.

  5. Talent development continues to be an area of immense interest. Partnerships with universities, training/re-skilling programs to create talent in places where people have limited opportunities, and hiring veterans and their spouses are all examples of initiatives to strategically develop the available talent for domestic sourcing. A great example is the partnership between IBM, LED, and the LSU College of Engineering, in which the State of Louisiana will invest in the institution to expand higher education programs and increase annual computer science graduate output to support IBM’s delivery center in Baton Rouge.

  6. Tier-3 cities are the epicenter of activity in the domestic sourcing space, with the most centers and headcount located in these cities. They are also the ones that will see the most growth in the future, but we should watch for saturation trends.

  7. The buzz around robotic process automation (RPA) is getting stronger, especially in the context of domestic sourcing as onshore providers can compete with the offshore labor arbitrage model by harnessing the potential of RPA (where applicable).

  8. The role of educational institutions has to increase to make onshoring a compelling alternative in the eyes of both providers and buyers. EDAs can only promise a sustainable talent pool; they cannot deliver it unless educational institutions show flexibility and support at a sustained, tactical level – changing curricula, adding industry interaction programs, etc. – while still serving their overall mission.

  9. Agile methodology and its implications for working models for IT teams are a great blessing for the onshore model. However, agile can only be one of the selling points. Domain expertise, ability to ramp up/ramp down, technology expertise, and cost of delivery are all factors for evaluating a provider’s capabilities in the onshore context.

  10. The notion of “domestic sourcing = impact sourcing” is flawed. Beyond generating jobs for the underprivileged, domestic sourcing’s larger mandate is to create jobs for the unemployed educated people of the country. There are some domestic sourcing plays such as Onshore Outsourcing and Liberty Source that are doing impact sourcing in an onshore model.

Overall, the event touched upon some very relevant topics from the domestic outsourcing perspective and is paving the way for a stronger ecosystem to support this sector. Kudos to the Ahilia team for organizing a great event! Last but not least, in case you are interested in learning more about the domestic outsourcing landscape, you can download Everest Group’s full report here. You may also want to read Eric’s blog on tier-3 cities: John Mellencamp Named Honorary Everest Group Analyst of the Month.


Photo credit: Omni Royal Orleans

John Mellencamp Named Honorary Everest Group Analyst of the Month | Sherpas in Blue Shirts


“Well I was born in a small town
And I live in a small town
Prob’ly die in a small town
Oh, those small communities

All my friends are so small town
My parents live in the same small town
My job is so small town
Provides little opportunity”

— John Mellencamp, Small Town (1985)

Turns out Mr. Mellencamp was a pretty good analyst when it comes to assessing global services employment opportunities in small communities. So much so that I am officially naming him “Honorary Everest Group Analyst of the Month.”

No, I am not smoking something.

We just completed a first-of-its-kind analysis of the U.S. domestic outsourcing location landscape for RevAmerica and finally have the key facts the industry has been lacking. In short, although smaller communities are sometimes used for service delivery, the reality is that the vast majority of the market is concentrated in larger communities, with populations measured in the 100,000s rather than the 10,000s. In particular, tier-3 cities are the sweet spot – the largest number of centers, the largest employment, and the largest centers.

Defining the city tiers

In order to analyze approximately 250 metro cities, we segmented them into six groups – tier-1 through tier-5 plus rural. As indicated below, the city segments are characterized by differences in population size plus commercial and educational factors.

[Figure: Location definitions]

Although population is only one dimension of a city’s potential for service delivery, it is easy and revealing to look at the differences in average population size across the city tiers. Each city tier is 20-40% of the population of the next larger tier, which leads to a dramatic difference in the profile of cities that are two to three tiers apart.

[Figure: Population of city tiers]

It’s good to be a tier-3 city!

One of the most interesting findings from the research was the extent to which tier-3 cities dominate on almost every dimension. As shown in the exhibit below, they have the largest share of FTEs and delivery centers of all cities. Further, their centers are on average larger than any other city tier.

[Figure: Distribution of FTEs and U.S. delivery centers by city tier]

Additionally, tier-3 cities have the largest portion of multi-function centers (some combination of IT, business process, and contact center work), and their centers are expected to grow the most in coming years.

Given that tier-3 cities average one million in population, most are surprised that cities of this size are driving the growth of domestic outsourcing delivery – many would expect smaller cities to be the primary forces. So why are tier-3 cities favored?

In short, we believe this is due to three factors which work in combination with each other:

  • Sufficient cost savings: Relative to tier-1 cities, tier-3 cities offer 15-20% savings; moving to tier-4 cities may only offer 5% more savings and in many cases is either cost neutral or even higher cost than a tier-3 city.
  • Enough talent: With nearly one million in population, the installed base of experienced talent is sizeable. Further, most tier-3 cities have large colleges which produce fresh talent for the entry labor force. Combined with the lifestyle benefits of a larger city (airport, entertainment, shopping, etc.), tier-3 cities have the ability both to keep talent and to attract it from other cities, whether smaller or larger. Not everyone would want to live in New York, NY; similarly, many people could not imagine living in a town like Midland, MI (population roughly 50,000). However, many people could be comfortable in a city of one million.
  • Accessibility: Although the idea of a remote, small city may seem attractive as a way to capture an isolated labor pool, this doesn’t hold up well when assessed in detail. First, even small communities have competition for talent plus limited talent pools – costs can quickly spiral up. Second, the practical logistics of transit to these small cities create an inconvenience that most organizations wish to avoid (especially for IT service delivery, which requires more cross-center collaboration). Most tier-3 cities are connected by direct flights to other major business centers within two to three hours of flight time.

In other words, tier-3 cities have an attractive mix of cost savings and talent, while still being comparatively easy from an operational perspective. This is broadly true, but less true for pure contact center work which can more easily operate at scale in tier-4 cities and even some tier-5 cities due to the broad labor pool which can fill contact center roles.

So, would Mr. Mellencamp’s small town have been a viable service delivery location? He is from Seymour, Indiana, with a population of about 16,000 – clearly a rural community by our definition. It is highly unlikely that many organizations could operate an IT or business process center of 200 FTEs in Seymour, although a smaller contact center could be viable. So, yes, there might be jobs…but little opportunity…

Also check out my co-presenter Sakshi Garg’s top 10 takeaways from RevAmerica.


Photo credit: Flickr

Hadoop and OpenStack – Is the Sheen Really Wearing off? | Sherpas in Blue Shirts


Despite Hadoop’s and OpenStack’s adoption, our recent discussions with enterprises and technology providers revealed two prominent trends:

  1. Big Data will need more than Hadoop: Along with NoSQL technologies, Hadoop has really taken the Big Data bull by the horns. Indications of a healthy ecosystem are apparent when you see that leading vendors such as MapR are witnessing 100% bookings growth, Cloudera is expecting to double itself, and Hortonworks is almost doubling itself. However, the large vendors that really drive the enterprise market/mindset and sell multiple BI products – such as IBM, Microsoft, and Teradata – acknowledge that Hadoop’s quantifiable impact is as yet limited. Hadoop’s adoption continues on a project basis, rather than as a commitment toward improved business analytics. Broader enterprise-class adoption remains muted, despite meaningful investments and technology vendors’ focus.

  2. OpenStack is difficult, and enterprises still don’t get it: OpenStack’s vision of making every datacenter a cloud is facing some hurdles. Most enterprises find it hard to develop OpenStack-based clouds themselves. While this helps cloud providers pitch their OpenStack offerings, adoption is far from enterprise class. The OpenStack Foundation’s survey indicates that approximately 15 percent of organizations utilizing OpenStack are outside the typical ICT industry or academia. Moreover, even cloud service providers, unless really dedicated to the OpenStack cause, are reluctant to meaningfully invest in it. Although most have an OpenStack offering or are planning to launch one, their willingness to push it to clients is subdued.

Why is this happening?

It’s easy to blame these challenges on open source and the contributors’ lack of a coherent strategy or vision. However, that oversimplifies the problem. Both Hadoop and OpenStack suffer from a lack of needed skills and applicability. For example, a few enterprises and vendors believe that Hadoop needs to become more “consumerized” to enable people with limited knowledge of coding, querying, or data manipulation to work with it. The current esoteric adoption is driving these users away. The fundamental promise of new-age technologies making consumption easier is being defeated. Despite Hortonworks’ noble (and questioned) attempt to create an “OpenStack-type” alliance in the Open Data Platform, things have not moved smoothly. While Apache Spark promises to improve Hadoop consumerization with fast processing and simple programming, only time will tell.
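To illustrate what “simple programming” means in practice, here is a minimal, hypothetical sketch of a classic word count using Spark’s Python API – the paths and application name are invented for illustration. The same job written as a hand-rolled MapReduce program would typically require several Java classes; in Spark it fits in a few lines, which is the consumerization argument in a nutshell.

```python
# Minimal PySpark sketch of a word count over files in HDFS.
# Paths and app name are hypothetical; assumes a working Spark installation.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount-sketch")

lines = sc.textFile("hdfs:///data/logs/*.txt")   # read raw text files
counts = (lines
          .flatMap(lambda line: line.split())    # split lines into words
          .map(lambda word: (word, 1))           # pair each word with a count of 1
          .reduceByKey(lambda a, b: a + b))      # sum the counts per word

counts.saveAsTextFile("hdfs:///data/word_counts")  # write results back to HDFS
sc.stop()
```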

OpenStack continues to struggle with a “too tough to deploy” perception within enterprises. Beyond this, there are commercial reasons for the challenges OpenStack is witnessing. Though there are OpenStack-only cloud providers (e.g., Blue Box and Mirantis), most other cloud service providers we have spoken with are only half-heartedly willing to develop and sell OpenStack-based cloud services. Cloud providers that have offerings across technologies (such as BMC, CloudStack, OpenStack, and VMware) believe they have to create sales incentives and possibly hire different engineering talent to create cloud services for OpenStack. Many of them believe this is not worth the risk, as they can acquire an “OpenStack-only” cloud provider if real demand arises (as I write this, news has arrived that IBM is acquiring Blue Box and Cisco is acquiring Piston Cloud).

Now what?

The success of both Hadoop and OpenStack will depend on simplification in development, implementation, and usage. Hadoop’s challenges lie both in the way enterprises adopt it and in the technology itself. Most enterprises default to targeting a complex problem first, without realizing that it takes time to get data clearances from the business. This impacts the business’ perception of the value Hadoop can bring. Hadoop’s success will depend not on point solutions developed to store and crunch data, but on the entire value chain of data creation and consumption. The entire process needs to be simplified for more enterprises to adopt it. Hadoop and its key vendors need to move beyond their Web 2.0 obsession to focus on other enterprises. With the increasing focus on real-time technologies, Hadoop should get a further leg up. However, it needs to provide more integration with existing enterprise investments, rather than becoming a silo. While in its infancy, the concept of an “Enterprise Data Hub” is something to note, wherein the entire value chain of Big Data-related technologies integrates to deliver the needed service.

As for OpenStack, enterprises do not like that they currently require too much external support to adopt it in their internal clouds. If the drop in investments is any indication, this will not take OpenStack very far. Cloud providers want enterprises to consume OpenStack-based cloud services. However, enterprises really want to understand a technology to which they are making a long-term commitment, and are cautious of anything that requires significant reskilling or has the potential to become a bottleneck in their standardization initiatives. OpenStack must address these challenges. Though most enterprise technologies are tough to consume, the market is definitely moving toward easier deployments and upgrades. Therefore, to really make OpenStack an enterprise-grade offering, its deployment, professional support, knowledge management, and requisite skills must be simplified.

What do you think about Hadoop and OpenStack? Feel free to reach out to me on [email protected].


Photo credit: Flickr

The Future of IBM’s Watson and Cognitive Computing | Sherpas in Blue Shirts


I had the privilege of being at IBM and seeing firsthand Watson working on powerful use cases. I must say, even now after a few days of reflection, I’m even more impressed with its power and capability than I was when I saw Watson in use at IBM. If, like me, you spend two hours with Watson, you will get a glimpse of our future. It’s highly likely that within five to 10 years all of us will use some kind of cognitive computing to assist us in our daily lives. But I believe there is a major challenge.

Just a quick refresh: Watson is cognitive computing, a form of artificial intelligence. Previously I did not understand the way it would be deployed; it will augment human decision making, not replace people. That’s not to say that an individual assisted by Watson won’t be able to do the work of many more individuals. At least at this stage of development, cognitive computing makes humans more capable and smarter.

For example, I saw Watson working as a companion to an oncology doctor, helping him perform more thorough diagnostics. In the situation I observed, the oncologist was able to cut the diagnosis and testing process from six days down to two hours. That doctor was far more effective because Watson could explore many more options, and present more hypotheses and data to the doctor and medical team, than they could have explored on their own (and it would have taken them far more time to do it). In addition, it’s not hard to believe that the team would be more likely to reach a better diagnosis with Watson as a companion than they could through traditional techniques.

With all that being said, I think Big Blue faces a major challenge with Watson at the moment: Watson is a solution looking for a problem.

As I understand it, IBM invested over a billion dollars in Watson’s development. On TV we saw IBM’s Deep Blue defeat a chess Grandmaster, and later Watson win on “Jeopardy!” However, Watson now needs to make the journey to operate in the real world of business problems.

These use cases and applications are still undefined and will emerge over time. It is, in fact, the challenge of problem definition and incremental adoption that stands in the way of progress. It’s easy to imagine that there are limitless applications for Watson; but for Watson to take off quickly, we need to identify big issues with large payoffs. Without these game-changing applications we will wait several years for cognitive computing to make the contribution it is clearly capable of.

To recover its billion-dollar investment and create a market for cognitive computing, IBM has every incentive to hasten adoption. However, it has yet to identify the breakthrough problems that will drive rapid adoption. It is all very well to believe that the power of the technology will inevitably drive adoption; but if cognitive computing is like other disruptive technologies, it will come slowly and in spurts.

Where does cognitive computing fit?

To hasten adoption, my best – and unsolicited – advice to IBM is to identify big business problems where Watson can make a structural change and drive massive benefits. Clearly, working as a companion to oncologists is such an area. And given that healthcare is nearly 20 percent of U.S. GDP, that alone may be worth the journey.

But for enterprises beyond healthcare, I feel challenged as to what other big structural changes Watson and cognitive computing could provide.

I strongly suggest that you find a way to experience Watson’s power. It is so powerful that I, like IBM, am struggling with where we should take it.

As I’ve pondered its possibilities, I think underwriting and the claims process in the insurance sector holds tremendous opportunities. And within IT, I think the service desk and problem solving that IT departments contend with could be dramatically enhanced with this technology. With a cognitive computing tool as their companion, they could deliver a higher quality of service and greatly improve productivity. Clearly the area of security would benefit substantially as we find ways to keep the black hats out of our data.

I’m very interested in other points of view as to where we can put cognitive computing to work, so please add your comment below.


Photo credit: Wikipedia

The Dullest Business Process Could be a Winner in the Automation Contest | Sherpas in Blue Shirts


In my last post, “The Automation Technology Starter Question,” I provided some guidelines on selecting business process automation technologies. In this post I cover which business process to automate when starting out on a Proof of Concept (PoC). This question is easier to answer for organizations that have specific requirements or pain points. Other organizations, aiming simply to increase operational efficiency, will have many more candidate processes. Whether your organization is in the first or the second group, the dullest of your dreariest business processes could be the winner of the automation contest. The difference between the two groups is the range of processes to choose from. The first group has to find the process among its known problem areas, while the second group needs to do the same from a much larger portfolio.

Definition of Dull

A dull process is highly repetitive – the sort that drives staff into a zombie state of mind on a daily basis, leading to high attrition rates. This could include:

  • Simple but large data entry workloads such as order processing
  • More complex but still repetitive tasks, such as those that involve checking a number of systems and updating the same pieces of data and status information in all of them, e.g., notifications of customer change of circumstance

The swivel chair is another good indicator for a process needing automation (if not deeper system integration) – when a crick in the neck from turning from one screen to another comes as part of the job.

Dull process examples

Most organizations have these, and I have no doubt that they will all be automated in the next few years. Examples include:

  • Processing of insurance information such as premium advice notes
  • Changing customers from one mobile/cell phone plan to another
  • Entering orders placed on the web into an order processing system (a minimal sketch of this one follows below)
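As a concrete illustration of that last example, here is a minimal, hypothetical sketch of what such an automation might look like: a script that picks up the day’s web orders from a CSV export and keys them into an order processing system through its API instead of having staff retype them. The file name, endpoint URL, and field names are all invented for illustration – a real deployment would use whatever interfaces the order system actually exposes.

```python
# Hypothetical sketch: enter web orders into an order processing system via its API.
# File layout, endpoint, and field names are invented for illustration.
import csv
import requests

ORDER_API = "https://erp.example.com/api/orders"  # hypothetical endpoint

def load_web_orders(path):
    """Read the day's web orders from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def enter_order(order):
    """Create one order in the downstream system; returns True on success."""
    payload = {
        "customer_id": order["customer_id"],
        "sku": order["sku"],
        "quantity": int(order["quantity"]),
    }
    response = requests.post(ORDER_API, json=payload, timeout=10)
    return response.ok

if __name__ == "__main__":
    failures = []
    for order in load_web_orders("web_orders_today.csv"):
        if not enter_order(order):
            failures.append(order)  # exceptions are routed back to a human for review
    print(f"Done. {len(failures)} orders need manual review.")
```

The interesting part is not the code itself but the pattern: the repetitive keying disappears, while the exceptions still land with a person – which is exactly where retained staff can add more value.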

Getting staff on-board

While the threat of job losses as a result of automation is real, there continues to be a shortage of skills and experience in many parts of the world. In many organizations, automation can enable staff to move upward or across to other, more interesting processes. Identifying opportunities for staff early will help to successfully implement change. Some of the most successful deployments are those that had their staff on board from the beginning; in these, the automations have become part of the team.

Again, when it comes to an automation PoC, let process dullness be your guide.

Automation Feeds Desire for Onshore Services | Sherpas in Blue Shirts


There’s a lot of rethinking going on in North American businesses in light of new technologies. In Everest Group’s conversations with clients and in round table discussions we’ve been holding in the industry, we find that these mature companies believe automation gives them the ability to bring their work back on shore.

After more than a decade of achieving value through the offshore labor arbitrage model, one would think that mature organizations that have built GICs or captives, or organizations with extensive use of third-party outsourcing providers, would be at peace with the model. We expected them to move to a model of arbitrage plus automation. But the level of peace and comfort with offshore arbitrage is much lower than we expected, and companies are expressing a desire to use robotic automation to repatriate their work.

This is particularly the case in regulated industries with significant compliance requirements, which is where the desire to move work back on shore shows up first. Financial services, in particular, is burdened with increasingly complex regulations. These businesses receive a higher degree of scrutiny if operations are in offshore low-cost locations than if they are automated. It’s easier to demonstrate compliance in an automated environment than in a labor arbitrage environment.

Moreover, these companies believe life is easier in an onshore environment than in an offshore environment.

This is not to say the desire to move work back on shore is a sea change. But we are seeing the early stages of this movement.

I think this is a very interesting development. Our earlier assumption that the market had overcome its xenophobic fears is not correct. It’s quite possible that the steady blast of negative press in the media and nationalistic pressure from consumers may be starting to play a role in this re-examination.


Photo credit: Flickr

How Service Providers Can Illuminate Clients’ Path to Transformation | Sherpas in Blue Shirts


One of the biggest issues facing executives today is that they see the need to change their organization through automation, analytics, or other big ideas that are clearly vetted, but they struggle to drive the change. Their organization is reluctant or frightened to change, much like horses in a steeplechase race that shy at jumping the fences. Consequently, service providers are frustrated. They see potential for their business, but they’re unable to move from project to program. How can a provider help clients to jump the obstacles instead of shying away from them?

How can a provider illuminate the path forward to transformation and get the client on board for change? Let’s start by talking about what doesn’t work – white papers. No executive these days reads white papers. It just doesn’t happen. As I’ve blogged before, they’re too dense, too theoretical, and too preachy. And they’re nakedly self-interested.

So what are the tools that enable clients to jump the fence? At Everest Group we’ve been thinking about and researching this. Executives need simple yet rigorous, relentlessly objective instruments that they can use to challenge their organization. And once they examine an instrument, a clear path forward opens up.

What would such an instrument look like? It might be a maturity index showing competitors’ degree of progress in the field. Everest Group’s PEAK Matrix™ is another example of a first step in instrumentation; it illuminates providers’ capabilities in the marketplace.

Every organization is unique, and each struggles with how to apply transformation or disruptive technologies to that uniqueness. A set of objective instruments allows executives to have a conversation within their organization and challenge it. The organization can then ask for help and imagine a roadmap – without a provider telling them what the roadmap should be.

Simple but rigorous instruments will illuminate the way to transformation and help the organization internalize the path forward through the challenges.


Photo credit: Flickr