Service Delivery Automation (SDA) encompasses cognitive computing as well as robotic process automation (RPA). Software providers offering SDA typically come to market with an enterprise licensing structure that requires the customer to license a fixed number of agents for a specific length of time. But in using this licensing model, providers unintentionally constrain adoption and open the door for competitors’ differentiation.
The problem is that many of the uses for SDA are cases where companies need variable numbers of agents and machines. For example, a client may have a million transactions to process. It could buy a license for one agent to run through those transactions in a week; but reducing that cycle time to an hour requires licensing 10,000 virtual agents. At Everest Group, we clearly see clients becoming frustrated with this lack of licensing flexibility.
This situation calls for a consumption-based approach to SDA in which the customer pays the software provider handsomely, but the provider doesn’t constrain or force unnatural motions on its customers and doesn’t create unnecessary cycle-time issues.
This blog is a call to service providers to come up with a consumption-based pricing model for SDA services. I’m not advocating that providers give away their software or take less money for it. I am saying that the current pricing structure inhibits adoption and is a constraint on the growth of the industry. Employing a consumption-based pricing model doesn’t mean that the client won’t pay fair compensation or that the provider won’t be profitable. To achieve this, providers need to create some kind of metering vehicle in which time or activities are measured, and then link their pricing to that meter.
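The metering idea can be sketched in a few lines. This is a minimal illustration with a made-up rate and class name, not any provider’s actual billing model:

```python
from dataclasses import dataclass

@dataclass
class UsageMeter:
    """Minimal sketch of consumption-based SDA billing.

    The per-transaction rate is an illustrative assumption, not any
    software provider's actual price list.
    """
    rate_per_transaction: float = 0.001
    transactions: int = 0

    def record(self, txns: int) -> None:
        # Meter the work actually performed, not agents licensed.
        self.transactions += txns

    def invoice(self) -> float:
        return self.transactions * self.rate_per_transaction

# The same million transactions cost the same whether one agent works
# through them in a week or 10,000 virtual agents finish in an hour --
# cycle time no longer drives licensing cost.
meter = UsageMeter()
meter.record(1_000_000)
print(meter.invoice())  # 1000.0
```

Billing on the metered unit of work is what decouples the client’s cycle-time choices from the provider’s revenue.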
Businesses now demand that IT departments dramatically shorten the cycle time for taking ideas from concept to production – often from as long as 12-18 months down to just four to six weeks. Organizations can’t achieve a change of this magnitude with a change in methodology alone. To do this, they must move to DevOps – a disruptive phenomenon with immense implications for the enterprise IT ecosystem and the service providers that support it.
Put another way, many IT organizations batch their software releases, bringing out updates to their enterprise platforms once or twice a year. Businesses now demand a cycle time of one to two updates per month. What they want and need is a continuous-release construct.
Methodology alone cannot create the conditions in which organizations can form ideas, build requirements, develop code, change a system and do integration testing in the new timeframes. Hence the DevOps revolution.
DevOps is the completion of the Agile methodology. It builds the enabling development tools, integrates test conditions, and integrates the IT stack so that when developers make code changes, they also configure the hardware environment and network environment at the same time.
To do this, an organization must have software-defined data centers and software-defined networks, and all of this must be available to be tested with automated test capabilities. By defining coding changes with network and system changes all at the same time and then testing them in one integrated environment, organizations can understand the implications and allocate work as desired. The net result is the ability to make the kind of cycle time shifts that businesses now demand.
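As a rough sketch of this idea – field names and checks here are purely illustrative, not any real tool’s schema – a release can bundle application, infrastructure, and network changes into one declarative manifest that an automated gate validates as a unit:

```python
# Hypothetical release manifest: application, infrastructure, and
# network changes declared together, so they can be tested together.
release = {
    "app":     {"version": "2.4.1", "min_instances": 2},
    "network": {"ingress_port": 443, "allow": ["10.0.0.0/8"]},
    "infra":   {"cpu": 4, "memory_gb": 8},
}

def validate(manifest: dict) -> list[str]:
    """Automated checks standing in for one integrated test gate."""
    errors = []
    if manifest["app"]["min_instances"] < 2:
        errors.append("need at least 2 instances for zero-downtime deploys")
    if manifest["network"]["ingress_port"] not in (80, 443):
        errors.append("only standard HTTP(S) ports are allowed")
    if manifest["infra"]["memory_gb"] < 2 * manifest["infra"]["cpu"]:
        errors.append("memory under-provisioned for CPU count")
    return errors

print(validate(release))  # []
```

The point of the sketch is structural: because code, system, and network changes live in one artifact, a single automated check can exercise all of them before anything ships.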
DevOps enables IT departments to meet these cycle time requirements. But the implications for how organizations buy services and how providers sell them are profound. Simply put, the old ways no longer work as well under the new mandate for velocity. This causes organizations to rethink their technology, test beds, and service providers, and then manage the environment on a more vertical basis – one that cuts across development, maintenance, and testing and allows the full benefit of a software-defined environment.
Let’s examine pricing, for example. Historically, coding and testing have been provided on a time-and-materials basis. The productivity unleashed in a DevOps environment can yield roughly a 50 percent improvement in efficiency. It is therefore as cannibalistic, and as disruptive, to the development and maintenance space as cloud is to the infrastructure environment.
Furthermore, organizations can only operate a DevOps environment if they have a software-defined hardware environment – that is, a private or public cloud. This effectively forces them to perform all future development in elastic cloud environments.
Enterprises today are reevaluating where they locate their talent. Having technical talent in a remote location with difficult time zone challenges complicates and slows down the process, working against the need for speed.
So DevOps is a truly disruptive phenomenon that will upend both the existing vendor ecosystem and the software coding and tool frames. Testing, for example, has been a growth area for the services industry, but DevOps environments largely automate testing services.
Another disruption is that DevOps takes a vertical view of the IT life cycle. It starts to integrate the different functional layers, creating further disruption in how organizations purchase IT services.
DevOps offerings are a new development among service providers, but the services industry to date has been slow to embrace the movement. DevOps is an internal threat to providers’ existing business and requires them to rethink how they go to market.
One prediction I have made about the future of service delivery automation (SDA) is that enterprise software will increasingly have the technology embedded. This is particularly true of intelligent and cognitive types of tools. I expect these to become a common feature of enterprise software in the next 5-7 years.
We saw this kind of trend in the earlier days of business intelligence and reporting. The popularity of third-party tools led to the functionality being built into enterprise software. In addition to reports on activities, dashboards started to feature in applications, giving instant views of what was going on in the enterprise. We do not have to look far to find such software today; Blue Prism, for example, includes analytics that report on the operations and performance of its robots.
A current example of more intelligent enterprise software is Oracle Policy Automation Cloud Service. It reads policies written in natural language. Then, based on business rules and the policy, it decides what questions to ask the customer, performs eligibility checks, and produces a decision report.
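The pattern is easy to illustrate with a toy rules engine. This is a sketch of the general idea only – the rules, field names, and thresholds are invented, and this is not Oracle’s actual product or API:

```python
# Each rule: (fact it needs, check, reason given when the check fails).
# All values here are hypothetical eligibility criteria.
RULES = [
    ("age",    lambda a: a["age"] >= 18,        "applicant must be 18 or older"),
    ("income", lambda a: a["income"] >= 20_000, "income below threshold"),
]

def questions_to_ask(answers: dict) -> list[str]:
    # Ask only for facts the rules need and we do not yet have.
    return [field for field, _, _ in RULES if field not in answers]

def decide(answers: dict) -> tuple[bool, list[str]]:
    # Eligible only if no rule produces a failure reason.
    reasons = [msg for _, check, msg in RULES if not check(answers)]
    return (not reasons, reasons)

print(questions_to_ask({"age": 30}))          # ['income']
print(decide({"age": 30, "income": 25_000}))  # (True, [])
```

The rules drive both the dialogue (which questions to ask) and the outcome (the decision and its reasons), which is the essence of what such policy-automation tools do at much larger scale.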
Another example is HighSpot, an enterprise search tool that uses natural language processing for searches and machine learning for finding the most relevant information and ranking the results.
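In the same spirit, a bare-bones relevance ranker can be sketched with plain TF-IDF scoring – no machine learning, and no relation to HighSpot’s actual implementation; the corpus and scoring are purely illustrative:

```python
import math
from collections import Counter

# A toy corpus; document names and text are invented for illustration.
docs = {
    "pricing": "consumption based pricing for automation services",
    "devops":  "devops automates testing and integrates the it stack",
    "iot":     "iot enables pay as you go consumption models",
}

def rank(query: str) -> list[tuple[str, float]]:
    """Score each document against the query with TF-IDF, best first."""
    n = len(docs)
    tokenized = {name: text.split() for name, text in docs.items()}
    scores = []
    for name, words in tokenized.items():
        tf = Counter(words)
        score = 0.0
        for term in query.split():
            df = sum(1 for w in tokenized.values() if term in w)
            if df:
                # Term frequency weighted by (smoothed) rarity.
                score += tf[term] / len(words) * math.log((1 + n) / (1 + df))
        scores.append((name, score))
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(rank("consumption pricing")[0][0])  # 'pricing'
```

Tools like the ones described above layer machine learning on top of this kind of baseline, learning from user behavior which results actually turn out to be relevant.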
The availability of open source machine learning software libraries, such as Apache Mahout, and software tools from industry giants, such as Microsoft (Machine Learning Service on Azure), will accelerate the next generation of smarter enterprise software.
Some would say that intelligent enterprise software will be function-specific, but I believe some varieties will be able to do more than one thing within large software applications. The need for standardized interfaces to these tools, and for the ability to interact with other intelligent applications, will also grow over time. We could even see more automations crossing paths across workflows, leading to more complex machine-based decision making.
The question is what impact pervasive intelligence will have on the outsourcing industry.
Intelligent enterprise software is here. And we are on the brink of it becoming pervasive and commonplace. As it does, I’ll continue to share my insights on its evolution.
Automation has the potential to introduce different kinds of business risk, and risk at a different order of magnitude. The new risks manifest differently and have greater consequences than in a normal business process. The issue is the difference between type 1 errors (false positives) and type 2 errors (false negatives, in which a genuine problem goes undetected).
We at Everest Group have discussed with clients this impending shift of business processes to a far more automated landscape in which type 2 errors can be inadvertently introduced.
In a previous blog, I talked about automation bias and how people tend to blindly accept whatever comes out of an automated tool. This increases the likelihood that type 2 errors will occur – and go unnoticed.
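A small, purely illustrative example makes the distinction concrete. Here an automated check with a lenient threshold produces both a false alarm (type 1) and, more dangerously, a bad record it silently waves through (type 2) – the kind of error automation bias then institutionalizes. All records and thresholds are invented:

```python
# Hypothetical transaction records; "valid" is the ground truth the
# automated check does not know.
records = [
    {"amount": 100,    "valid": True},
    {"amount": 9_500,  "valid": False},  # fraudulent, but under threshold
    {"amount": 12_000, "valid": True},   # legitimate large payment
    {"amount": 15_000, "valid": False},  # fraudulent and correctly flagged
]

def audit(records, flag_above=10_000):
    """Count the two error types for a simple threshold rule."""
    # Type 1: good record flagged (a false alarm someone will notice).
    type1 = sum(1 for r in records
                if r["valid"] and r["amount"] > flag_above)
    # Type 2: bad record passed (a miss no one is prompted to question).
    type2 = sum(1 for r in records
                if not r["valid"] and r["amount"] <= flag_above)
    return type1, type2

print(audit(records))  # (1, 1)
```

A human reviewer sees and can correct the type 1 false alarm; the type 2 miss produces no signal at all, which is exactly why it is the more dangerous failure mode in industrialized, automated processes.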
On an industrialized services basis with broad-scale business processes, we must be aware of type 2 errors and guard against them. This is why many of the leading firms that are looking at adopting automation, cognitive computing, and robotics are considering implementing a Center of Excellence (CoE) to help the business understand the changes that accompany automation. A CoE can help educate employees to guard against automation bias and type 2 errors that could inadvertently be institutionalized in automated approaches to business processes.
The irresistible force paradox asks, “What happens when an unstoppable force meets an immovable object?” I think it’s the opposite when it comes to the Internet of Things (IoT) and the already booming as-a-service economy: “What happens when an unstoppable force befriends an unstoppable object?”
Most of the discussion to date around the as-a-service economy has been focused on cloud services, SaaS, and the likes of Uber. At the heart of this economy are the fundamental premises that customers – either business or consumer – can “rent” rather than own the product or service, and can do so, on demand, when they need it, paying as they go.
Although wishing for the utopian as-a-service model may be a futile exercise, the IoT can initiate meaningful models for heavy investment industries and quite a few consumer-focused businesses, and as technologists we should continue to push the envelope.
Let’s step back and think about how the IoT can push the sharing economy to its potential. Can product manufacturers leverage IoT principles, and create a viable technical and commercial model where idle assets are not priced, or are priced at a lower rate, thus saving customers millions of dollars? This would, of course, require collaboration between customers and product manufacturers to enable insight into how, when, and how much a customer consumes the product. But consider the possibilities!
One example is the car-for-hire market. Could a customer’s wearable device communicate with a reserved car, notifying it of the approximate wait time until it’s required, enabling the vehicle to be productively deployed elsewhere – in turn allowing the business to offer lower prices to the customer and reduce the driver’s idle time? I think the technology is there, and although the task is humongous and the returns uncertain, I am sure someone (ZipCar?) will experiment with this model at scale in the near future.
Another example is the thousands of small healthcare labs that cannot afford to own a blood analyzer. Innovative manufacturers of these machines could leverage IoT principles to analyze the blood test patterns of individual labs, and offer them a subscription model by which they are charged per blood test executed, or offered a bundled price of $X per 100 blood tests (much like HP’s Instant Ink offering).
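That bundled model is simple to express. The numbers below are hypothetical, modeled only on the “$X per 100 blood tests” idea above – not HP’s or any manufacturer’s actual pricing:

```python
def monthly_charge(tests_run: int,
                   bundle_size: int = 100,
                   bundle_price: float = 50.0,
                   overage_per_test: float = 0.75) -> float:
    """One prepaid bundle of tests, plus a per-test overage beyond it.

    All prices and sizes are illustrative assumptions.
    """
    overage = max(0, tests_run - bundle_size)
    return bundle_price + overage * overage_per_test

print(monthly_charge(80))   # 50.0  (within the bundle)
print(monthly_charge(140))  # 80.0  (40 tests over, at $0.75 each)
```

The IoT’s role is simply to make `tests_run` observable to the manufacturer; once consumption is metered, pricing like this becomes straightforward.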
The IoT has the potential to truly bring the power of the sharing economy upon us. In the near term, businesses face challenges in developing a viable commercial and support model. However, they must overcome these challenges in order for society at large to truly benefit from this once-in-a-lifetime opportunity. They must remember that most industry disruption these days comes from outside the industry: if they don’t cannibalize themselves, someone else will. Thus, as the traditional competitive strategy levers fast lose relevance, the IoT most definitely should be an integral part of their strategy.