Explainable AI? Why Should Humans Decide Robots are Right or Wrong? | Sherpas in Blue Shirts

I have been a vocal proponent of making artificial intelligence (AI) systems more white box – able to “explain” why and how they came to a particular decision – than they are today. So I am heartened to see increasing debate around this topic. For example, Erik Brynjolfsson and Andrew McAfee, the well-known authors of the book “The Second Machine Age,” have spoken about it with increasing frequency.

However, sometimes I debate with myself on various aspects of explainable AI.

What are we explaining? If we have a white box AI system, a human can understand “why” the system made a certain decision. But who decides whether or not the decision was correct? For example, in the movie “I, Robot,” the lead character (played by actor Will Smith) thought the humanoid robot should have saved a child instead of him. Was the robot wrong? The robot certainly did not think so. If humans make the “right or wrong” decision for robots, doesn’t that defeat the self-learning purpose of AI?

Why are we explaining? Humans seek explanation when they lack understanding of something or confidence in the capabilities of other humans. Similarly, we seek explanation from artificial intelligence because, at some level, we aren’t sure about the capabilities of these systems. (Of course, there’s also humans’ innate desire for control.) Why the uncertainty? Because these are “systems,” and systems have problems. But humans also have “problems,” and what each individual person considers “right” is defined by their own value system, surroundings, and situation. What’s right for one person may be wrong for another. Should this contextual “rightness or wrongness” also apply to AI systems?

To whom are we explaining? Should an AI system’s outcome be analyzed by humans or by another AI system? Why do we believe we have what it takes to assess the outcome, and who gave us the right to do so? Can we become more trusting and accept that one AI system can assess another? Perhaps, but that defeats the very purpose of the explainable AI debate.

Who do we hold responsible for the decisions made by Artificial Intelligence?

The mother lode of complexity here is responsibility. Because humans believe that AI systems won’t be “responsible” in their decisions – based on individuals’ own biased definitions of what is right or wrong – regulations are being developed to hold humans responsible for the decisions of AI systems. Of course, these humans will want an explanation from the AI system.

Regardless of which side of the argument you are on, or even if you pivot daily, there’s one thing I believe we can agree on…if artificial intelligence ends up making better machines rather than better humans, we will have defeated ourselves.
