Automation Introduces New Business Risks | Sherpas in Blue Shirts


Automation has the potential to introduce new kinds of business risks, and risks of a different order of magnitude. These new risks manifest differently and carry greater consequences than errors in a conventional business process. The issue is the difference between type 1 and type 2 errors.

  • Type 1 error. This is a normal, one-off error, such as making a mathematical mistake on an invoice. The consequence is that you under-bill or over-bill a single client. Once you reconcile the error, you may have lost a revenue opportunity or may have to rebate the client for the overcharge.
  • Type 2 error. This is a systemic error, such as under-billing all of your clients because the mistake is embedded in the process itself. The consequence is often 10X or more the impact of a type 1 error, as the sketch below illustrates.
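
To make the contrast concrete, here is a minimal sketch in Python. The invoice amount, client count, and error rate are hypothetical figures chosen for illustration only, not data from this post.

    # Minimal sketch: the impact of a one-off invoicing mistake (type 1) versus
    # the same mistake institutionalized in an automated billing process (type 2).
    # The figures below are hypothetical assumptions for illustration only.

    INVOICE_AMOUNT = 10_000   # average invoice value (assumed)
    NUM_CLIENTS = 500         # clients billed by the automated process (assumed)
    ERROR_RATE = 0.03         # a 3% under-billing error (assumed)

    # Type 1: a single invoice is miscalculated.
    type1_impact = INVOICE_AMOUNT * ERROR_RATE

    # Type 2: the same error is baked into the automated process, so every
    # client is under-billed until someone notices.
    type2_impact = INVOICE_AMOUNT * ERROR_RATE * NUM_CLIENTS

    print(f"Type 1 impact: ${type1_impact:,.2f}")   # $300.00
    print(f"Type 2 impact: ${type2_impact:,.2f}")   # $150,000.00

In this sketch the systemic error costs 500 times as much as the one-off mistake, which is why "often 10X or more" is, if anything, a conservative framing.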

We at Everest Group have discussed with clients this impending shift of business processes to a far more automated landscape, in which type 2 errors can be inadvertently introduced.

In a previous blog, I talked about automation bias and how people tend to blindly accept or believe whatever comes out of an automated tool. This bias makes it more likely that type 2 errors will occur.

As services are industrialized and business processes are automated at broad scale, we must be aware of type 2 errors and guard against them. This is why many of the leading firms looking at adopting automation, cognitive computing, and robotics are considering implementing a Center of Excellence (CoE) to help the business understand the changes that accompany automation. A CoE can help educate employees to guard against automation bias and the type 2 errors that could inadvertently be institutionalized in automated approaches to business processes.



Automation Bias | Sherpas in Blue Shirts


We’re at an inflection point in the ITO and BPO services world where we’re about to see a new level of technology: automation. On the whole, automation is a good thing. But there are some significant aspects we should be aware of. One is automation bias. And it’s dangerous.

When we move to automation, whether it's cognitive computing or the replacement of repetitive tasks, the people in the process become dependent on the automation. In fact, not only do they become dependent, they start to believe that whatever comes from the computer is the truth. They take it for granted that the results are accurate. This is automation bias.

As a simple example, when you use a calculator, you quickly start to trust whatever results it produces. We place blind trust in automated tools.

Why is automation bias so dangerous?

A computer will slavishly do what it's told to do, or will run the same cognitive analysis it has run in the past. When the world changes, the computer may not recognize that the world has changed. The change can come from a shift in one of the data sources, or from an upstream or downstream change in a business process. The people in the business process should recognize the change, but automation bias may cause them to miss it because they believe that everything coming out of the computer is correct. This is a significant business risk.

The fact is that automated tools are fallible. The world constantly changes, and automation bias compounds the risk that the computer misses a change and nobody catches it.

We’re on the verge of adopting robotics and automation at a scale we have never attempted before. This will dramatically change how we perform business processes and how we run data centers. Organizations going down the automation path need to be aware of automation bias and build safeguards against it.
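
What might such a safeguard look like? Here is a minimal sketch of one possible guard, not drawn from this post: before an automated result is accepted, compare it against the range of results the process has produced in the past and route outliers to a human reviewer. The function name, figures, and threshold are illustrative assumptions.

    # Minimal sketch of one possible safeguard against automation bias.
    # All names, figures, and thresholds are illustrative assumptions.
    from statistics import mean, stdev

    def needs_human_review(new_value: float, history: list[float],
                           z_threshold: float = 3.0) -> bool:
        """Flag an automated output that deviates sharply from past outputs."""
        if len(history) < 2:
            return True  # not enough history yet to trust the automation
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_value != mu
        return abs(new_value - mu) / sigma > z_threshold

    # Example: an automated billing run suddenly produces a total far below the
    # norm, perhaps because an upstream data source changed its format.
    past_totals = [101_500, 99_800, 102_300, 100_900, 101_100]
    todays_total = 54_200

    if needs_human_review(todays_total, past_totals):
        print("Flagged for human review before posting.")
    else:
        print("Within the expected range; proceeding.")

The point is not the particular statistic; it is that the automated output is checked against an expectation the people in the process still own, rather than being accepted on faith.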

