Marc Andreessen’s famous quote about software eating the world has popped up often over the last couple of years. Now, however, the fashionable and fickle technology industry is counting on artificial intelligence to generate similar excitement. Most people following AI would agree that society stands to derive tremendous value from the technology. AI will touch most of our lives in more ways than we can imagine today. In fact, it is often hard to argue against the value AI can potentially create for society. Indeed, with the increasing noise and real development around AI, there are murmurs that AI may replace software as the default engagement model.
Artificial intelligence may replace software
Think about it. When we use our phone or Amazon Alexa to do a voice search, we simply speak, hardly using an app or software in the traditional sense. A chatbot can become a single interface to multiple software programs, letting us pay our electric, phone, and credit card bills from one conversation.
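To make the "single interface" idea concrete, here is a minimal sketch of a chatbot routing one user message to whichever back-end service handles it. The intent keywords and payment functions are hypothetical placeholders, not any real provider's API:

```python
# A toy conversational front end: one entry point, many services behind it.
def pay_electric_bill() -> str:
    return "Electric bill paid."

def pay_phone_bill() -> str:
    return "Phone bill paid."

def pay_credit_card_bill() -> str:
    return "Credit card bill paid."

# Map of intent keywords to the service that fulfills each intent.
INTENTS = {
    "electric": pay_electric_bill,
    "phone": pay_phone_bill,
    "credit card": pay_credit_card_bill,
}

def handle(message: str) -> str:
    """Naive keyword routing; a production bot would use an NLU model instead."""
    for keyword, action in INTENTS.items():
        if keyword in message.lower():
            return action()
    return "Sorry, I didn't understand that."

print(handle("Please pay my phone bill"))  # -> "Phone bill paid."
```

The user never opens three separate billing apps; the conversation itself becomes the interface, which is exactly the shift from software to AI described above.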
So it is quite possible that artificial intelligence replaces software as the next technology shift. But can we rely on AI? Or, more precisely, can we always rely on it? A particularly concerning issue is bias. Indeed, there have been multiple debates around the bias an AI system can introduce.
But can AI be unbiased?
It’s true that humans have biases. As a result, we’ve established checks and balances, such as managerial oversight and laws, to discover and mitigate them. But how would an AI system determine whether the answer it is providing is neutral and free of bias? It can’t, and because of these systems’ extreme complexity, it’s almost impossible to explain why and how they arrived at a particular decision or conclusion. For example, a couple of years ago Google Photos’ image-recognition algorithms labeled photos of Black people in a derogatory way.
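Since the system itself cannot tell whether its answers are neutral, the check has to come from outside it. One common external audit is measuring demographic parity: whether favourable outcomes occur at similar rates across groups. A minimal sketch, with hypothetical arrays standing in for any classifier's decisions and a protected attribute logged alongside them:

```python
import numpy as np

# Hypothetical audit inputs: 1 = favourable outcome, and the group label
# recorded for each decision (not something the model reasons about itself).
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def demographic_parity_gap(predictions, group):
    """Absolute difference in favourable-outcome rates between two groups."""
    rate_a = predictions[group == "a"].mean()
    rate_b = predictions[group == "b"].mean()
    return abs(rate_a - rate_b)

print(f"Demographic parity gap: {demographic_parity_gap(predictions, group):.2f}")
```

Note what this audit does not do: it flags that outcomes differ, but it cannot explain why the model produced them, which is precisely the explainability problem described above.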
It is certainly possible that the people who design AI systems introduce their own biases into them. Worse, AI systems may, over time, start developing biases of their own. Worse still, they cannot be questioned or “retaught” the correct way to arrive at a conclusion.
AI and ethics
There have already been instances in which AI systems produced results they were never designed to give. Now think about this in a business environment. For example, many enterprises will use an AI system to screen the resumes of job candidates. How can those businesses be sure the system isn’t rejecting good candidates because of some machine bias?
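One way a business could check is to audit the screener's logged decisions after the fact. US hiring guidance uses the "four-fifths rule," which flags possible adverse impact when one group's selection rate falls below 80% of another's. A hedged sketch, where the screener and its decision log are hypothetical:

```python
# (group, was_shortlisted) pairs, as a resume screener's decisions might be logged.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(decisions, group):
    """Fraction of candidates in a group that the screener shortlisted."""
    shortlisted = sum(1 for g, ok in decisions if g == group and ok)
    total = sum(1 for g, _ in decisions if g == group)
    return shortlisted / total

rate_a = selection_rate(decisions, "group_a")  # 0.75 in this toy log
rate_b = selection_rate(decisions, "group_b")  # 0.25 in this toy log
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Possible adverse impact: review the screener before trusting it.")
```

An audit like this can catch a biased screener, but only after candidates have already been rejected, which is why the question above matters.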
A case of this type could be considered an acceptable, genuine mistake; it could be argued that the system isn’t doing it deliberately. But what happens if these mistakes harden into unethical behavior? We can pardon mistakes, but we shouldn’t extend the same forgiveness to unethical decisions. Taking it one step further: given that these systems are meant to learn on their own, will their unethical behavior compound as time progresses?
How far-fetched is it that AI systems become so habitually unethical that users grow frustrated with them? What are the chances that humanity stops developing AI systems altogether once it realizes that systems without bias may be impossible to build? Every technology brings some evil along with the good, but AI’s negative aspects could multiply very fast, and mostly without explanation. If these apprehensions scare developers away, society and business could lose out on AI’s tremendous potential. That would be even more unfortunate.
As adoption of AI systems increases, we will likely witness more cases of wrong or unethical behavior, which will raise fundamental questions and push regulators and developers to put boundaries around these systems. But therein lies a paradox: building systems that learn on their own while putting boundaries around that learning is quite a contradiction. Still, we must overcome these challenges to exploit the true potential of AI.
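A speculative sketch of what such a boundary might look like in practice: a system that learns from each new example, but only from examples that clear an externally imposed check, with everything else quarantined for human review. The check here is a stand-in for whatever constraint a regulator or developer might require; everything in this example is hypothetical.

```python
def passes_boundary_check(example: dict) -> bool:
    """Placeholder for an externally imposed constraint on what may be learned."""
    return example.get("audited", False)

class BoundedLearner:
    """A learner whose self-directed learning is fenced in from outside."""

    def __init__(self):
        self.training_data = []
        self.quarantine = []

    def learn(self, example: dict) -> None:
        # Learn only from examples that clear the boundary; hold the rest
        # back for human review instead of absorbing them blindly.
        if passes_boundary_check(example):
            self.training_data.append(example)
        else:
            self.quarantine.append(example)
            print(f"Quarantined for review: {example['text']}")

learner = BoundedLearner()
learner.learn({"text": "approved feedback", "audited": True})
learner.learn({"text": "unvetted feedback", "audited": False})
```

The paradox shows up immediately: the tighter the boundary, the less the system learns on its own; the looser the boundary, the less the boundary protects us.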
What do you think?