In a blog post almost a year ago, I called for a consumption-based pricing mechanism for automation. As the software industry has proved, moving away from a traditional license structure to a SaaS basis makes sense, and I believe the same is true in the automation space. Instead of paying for a robot, a company would pay only for the usage of that robot or automated engine.

Even if a company doesn’t want to use a SaaS model, robots could reside on its internal servers while the company still pays only for usage of the robot. A consumption-based model would shorten the adoption cycle and overcome the artificial constraints that licenses often impose on companies.

If I have only one robot, I have to queue up work and wait for the robot to get to it. Why would I do that? Why not launch all the robots I need, get the work done immediately, and compress my cycle time? And why not pay for that usage rather than contort my operations around a single robotic or cognitive instance? The single-license constraint makes no sense for the user. It reduces the usefulness of the technology, and it limits the market share a software provider can capture.
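To make the cycle-time point concrete, here is a minimal Python sketch. The task counts and the per-minute consumption rate are made-up numbers for illustration, not any vendor’s actual pricing. It shows that launching robots in parallel compresses elapsed time while the usage bill stays the same.

import math

TASKS = 100                   # units of work waiting in the queue
MINUTES_PER_TASK = 6          # robot processing time per task
RATE_PER_ROBOT_MINUTE = 0.05  # assumed consumption price, $ per robot-minute

def cycle_time(tasks: int, minutes_per_task: int, robots: int) -> int:
    """Elapsed time when the queue is split evenly across the robots."""
    return math.ceil(tasks / robots) * minutes_per_task

# Total robot-minutes consumed -- and therefore the usage bill -- is the
# same either way; only the elapsed cycle time changes.
usage_cost = TASKS * MINUTES_PER_TASK * RATE_PER_ROBOT_MINUTE

for robots in (1, 20):
    elapsed = cycle_time(TASKS, MINUTES_PER_TASK, robots)
    print(f"{robots:>2} robot(s): {elapsed} min elapsed, ${usage_cost:.2f} usage cost")

Under a per-robot license, the 20-robot line is unreachable without buying 20 licenses that sit idle most of the time; under consumption pricing, both lines cost the same $30.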

Automation Anywhere, one of the largest automation providers, has launched a capability called the Bot Farm that provides exactly this. The Bot Farm allows companies to buy robotic process automation (RPA) tools on a usage basis rather than on a capacity or license basis.

I believe Automation Anywhere will be rewarded in the marketplace with greater share and profitability. More importantly, its customers will benefit because they can buy what they want, when they want it, without stranding large amounts of capital in tools they use only part of the time.

Automation Anywhere’s model is an extremely important development in the marketplace, and it’s good to see that a major automation provider has moved to attack the constraints.

For the last 30 years, companies have built shared service organizations, and IT shared services units have been the best known of these. IT shared services have created a lot of value for companies, delivering high-quality, low-cost IT capacity. But today they’re almost like zombies – resembling a body that’s alive but really isn’t. How did this happen, given the value that IT shared services have created for companies for many years?

IT shared services created value through a set of functional disciplines: infrastructure, security, applications maintenance, applications development, and project management. IT shared services was tasked with delivering high-quality, low-cost capacity in those disciplines, and it did.

The problem for IT shared services is that the world is waking up to digital transformation.

As the capabilities of the digital revolution become apparent, it’s clear that IT is integral to how companies and business units compete and win in customer experience and market share. However, the expectations for IT have shifted. Instead of providing access to a low-cost, high-quality utility or function, the focus now is to integrate technology into the day-to-day business in a different and more compelling way – and to do it fast. Innovations such as cloud, infrastructure as a service, and SaaS address these alignment and speed issues directly.

IT cost and reliability are still important, but businesses now focus on customer needs and experience and the speed at which IT can respond to those needs. Today, in the functional-disciplines model of IT shared services, projects often take a year to 18 months. This is unacceptable to business stakeholders focusing on customer experience and value.

The challenge of aligning IT with business stakeholders and operating at speed is killing shared service organizations, because they were not designed to deal with alignment or speed.

So the question is: Can we operate IT shared services in today’s environment? There’s a line of thought that says, no, not in the way they are currently constructed. Rather than organizing by functional disciplines, organizations need to align technology services directly with the business units along service lines.

In many leading organizations today, business units are no longer buying IT services such as data center, application development, security or other functions from centralized IT. They are standing that down and instead embedding technology end to end within the business units. This is an IT-as-a-Service model.

This model slices through layered IT organizations, reorganizing services according to business functionality. The result of the tight alignment between IT and the business is far more flexibility to move quickly to adopt new functionality and also scale IT consumption to actual usage. The model is very close to functionality on demand.

The IT-as-a-Service model is powerful. As it grows in responsibility, I believe it will completely disrupt IT shared services as we know them.


At this point, every CIO knows that the cloud is a more agile, faster and cheaper place to be. But CIOs carry a tremendous amount of technical debt in legacy ecosystems, which so far have proved largely resistant to moving into the cloud. Plenty of companies, AWS among them, will evaluate your applications portfolio for free and help you calibrate which applications can move to the cloud. So it’s not that we lack the knowledge or the technology to migrate to the cloud. The problem is that the people who manage legacy ecosystems resist moving them to the cloud.
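To illustrate what that kind of portfolio calibration looks like in practice, here is a hypothetical scoring heuristic in Python. The attributes, weights, and application names are my own assumptions for the sketch; this is not AWS’s assessment methodology or tooling.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    os_supported: bool        # runs on an OS the cloud provider supports
    licensed_for_cloud: bool  # software licenses permit cloud deployment
    latency_sensitive: bool   # hard dependency on on-premises latency

def migration_score(app: Application) -> int:
    """Crude readiness score: higher means an easier cloud candidate."""
    score = 0
    score += 2 if app.os_supported else 0
    score += 2 if app.licensed_for_cloud else 0
    score += 1 if not app.latency_sensitive else 0
    return score

portfolio = [
    Application("order-entry", os_supported=True, licensed_for_cloud=True,
                latency_sensitive=False),
    Application("plant-scada", os_supported=True, licensed_for_cloud=False,
                latency_sensitive=True),
]

# Rank the portfolio from easiest to hardest migration candidate.
for app in sorted(portfolio, key=migration_score, reverse=True):
    print(f"{app.name}: score {migration_score(app)}")

A real evaluation weighs far more factors (data gravity, compliance, interdependencies), but even a crude ranking like this makes the point: the analysis is tractable, so the real barrier is organizational.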

Organizations are passive-aggressive toward change. By that I mean that everyone gives lip service to migrating to the cloud, but many people see their job as telling the CIO the problems the organization will face (security or performance, for instance) and why it can’t do what it wants to do. They will eagerly tell you that you’ll have problems, that you can’t do it, or that it will be very expensive.

Read more at CIO online.


Bimodal IT has become common parlance. It is the term for the underlying assumption that large enterprises require stability in their core IT systems and therefore should not take the risk of migrating them to new technology. Instead, the old systems should be maintained in their current condition – or a condition that is fundamentally the same, with only minor adjustments – with current or next-generation IT used only for new requirements.

While the logic seems to make sense on the surface, there is a major flaw in this argument. IT systems aren’t like fine wines that get better with time. IT systems become old and outdated, just like your own personal computer. Think about it – how many years did it take you to become totally frustrated with the computer you LOVED when you first got it? It was the best thing on the market, was lightning fast, and did all the cool new stuff. But five years later, it has slowed to a crawl, the operating system isn’t compatible with the latest versions of the programs you want to use, and no one has the components or software required to service it. There comes a point when it is riskier to hold on to the old than to migrate to the new.

Now let’s test this argument to determine whether it makes as much sense for the enterprise as it does for the technology-forward consumer.

Delta Air Lines’ entire system was shut down due, at least in part, to some of its core IT systems not switching over to backup when Georgia Power encountered a switchgear failure – the equivalent of blowing a fuse. The result? More than 650 flights were initially cancelled, with thousands more at risk of being cancelled later in the day. And the impact of an outage of this magnitude is not limited to a single day: all of those passengers still have to get where they were going and will need to fit onto other flights. There is no way Delta will be able to accommodate them all, so other airlines will have the opportunity to pitch in and serve those stranded by Delta.
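For readers outside IT operations, “switching over to backup” comes down to logic like the following minimal Python sketch. The system names and health checks are hypothetical; the point is that the failover branch only has value if it is exercised and tested before the outage.

def is_healthy(system: dict) -> bool:
    """Placeholder health check; a real one would probe power, network,
    and application-level status."""
    return system["power_ok"] and system["responding"]

def route_traffic(primary: dict, standby: dict) -> dict:
    """Serve from the primary while it is healthy; otherwise fail over."""
    if is_healthy(primary):
        return primary
    # This is the branch that reportedly failed to engage. If it is never
    # exercised in drills, no one knows whether it works until the worst
    # possible moment.
    return standby

primary = {"name": "primary-dc", "power_ok": False, "responding": False}
standby = {"name": "backup-dc", "power_ok": True, "responding": True}

print("Traffic routed to:", route_traffic(primary, standby)["name"])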

The outage began at 2:30 am, and Delta resumed flights by 9:00 am. In that window, Delta lost millions of dollars, drove many of its passengers to other airlines, and dealt a major blow to its hard-fought battle to build a reputation as the most reliable U.S.-based international carrier. All in just six and a half hours.

This should force enterprises to look at their IT systems and ask just how stable they really are. Those systems, assuming they are similar to most major airlines’, were likely built in the 1990s – more than 20 years ago. Given the rapid pace of technology change, a 20-year-old system is practically ancient. Who has the skills to service systems that old? How well does the system integrate with new technology? How much customization and patching has occurred over the past 20 years to keep the system relevant? How has the system been able to address the vulnerabilities and performance issues resolved by newer technology? In other words, what is the underlying risk associated with an IT system that is a couple of decades old, and is it reasonable to expect it to get better?

Bimodal IT might make sense in instances where the cost of obsolescence risk is low. It can also make sense as a short-term strategy. However, nothing is stable forever, and the growing complication of maintaining old systems adds risk that slowly creeps up over time.

Yes, transitions to newer technology are both costly and risky. But so is an unexpected total business shutdown caused by an ancient system’s failure to continue performing as it has in the past. Transitions to newer technology can be planned and managed, and they are much less of a gamble in the long run.


One of the prevailing myths in the services industry is that more automation means higher profits for the service providers. The theory: as providers introduce automation, they reduce the number of FTEs per revenue dollar and therefore earn higher profits. The reality: it’s not true in the market overall.

Automation may drive higher profits in specific situations where the contract was priced assuming that labor would be used but the work is performed on an outcome basis. But this is seldom the case.
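A worked example with made-up numbers shows why the pricing basis, not the automation itself, determines who captures the benefit. The figures below are purely illustrative.

CONTRACT_PRICE = 1_000_000   # annual price, originally set assuming 10 FTEs
LABOR_COST = 800_000         # fully loaded cost of those FTEs
AUTOMATED_COST = 500_000     # cost of delivering the same work after automation

# Case 1: outcome-based contract with the price fixed before automation.
# The provider pockets the savings.
margin_fixed = (CONTRACT_PRICE - AUTOMATED_COST) / CONTRACT_PRICE

# Case 2: competition reprices the work down to the new cost base plus the
# original margin percentage. The customer captures the savings.
original_margin = (CONTRACT_PRICE - LABOR_COST) / CONTRACT_PRICE
repriced = AUTOMATED_COST / (1 - original_margin)
margin_repriced = (repriced - AUTOMATED_COST) / repriced

print(f"Fixed price: ${CONTRACT_PRICE:,.0f} at {margin_fixed:.0%} margin")
print(f"Repriced:    ${repriced:,.0f} at {margin_repriced:.0%} margin")

The industry-wide pattern described below looks like Case 2: prices have adjusted to the new cost base, so margins stay where they were and the customer keeps the difference.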

At this point in the market, we’ve had three years in which automation has been systematically introduced into providers’ service offerings. But there seems to be no correlation between automation and profitability – no industry-wide increase in profitability in these service areas. That’s disturbing for the providers.

Here’s what it means: Taken as a whole, the market is adjusting and the customers are capturing the full benefit of automation, not the providers. Customers are dispelling the myth that more automation equals higher provider profits.
