AI adoption is on the rise among enterprises. In fact, the research we conducted for our AI Services State of the Market Report 2021 found that as of 2019, 72% of enterprises had embarked on their AI journey. And they’re investing in key AI domains, including computer vision, conversational intelligence, content intelligence, and decision support systems.
However, the machine intelligence that surrounds us today belongs to the Narrow AI domain. That means it’s equipped to tackle only a specified task. For example, Google Assistant is trained to respond to queries, while a facial recognition system is trained to recognize faces. Even seemingly complex applications of AI – like self-driving cars – fall under the Narrow AI domain.
Narrow AI can process a vast array of data and complete a given task more efficiently; however, it can’t replicate human intelligence: the ability to reason, make judgments, and stay aware of context.
This is where General AI steps in. General AI takes the quest to replicate human intelligence meaningfully ahead by equipping machines with the ability to understand their surroundings and context.
Exhibit 1: Evolution of AI
Researchers came up with Deep Neural Networks (DNNs), a popular AI architecture that tries to mimic the human brain. DNNs learn from large labeled datasets to perform their function. For example, if you want a DNN to identify apples in an image, you need to provide it with enough apple images for it to glean the patterns that define the general characteristics of an apple. It can then identify apples in any image. But can DNNs, or more appropriately General AI, be imaginative?
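Before turning to that question, here is roughly what the supervised training step just described looks like in practice. This is a minimal, illustrative sketch only, assuming PyTorch and torchvision and a hypothetical folder of labeled apple and non-apple images; none of the names below are prescribed by the article.

# Minimal sketch of supervised DNN training on labeled images (PyTorch/torchvision assumed).
# The "data/apple" and "data/not_apple" folder layout is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects labeled folders, e.g. data/apple/*.jpg and data/not_apple/*.jpg
train_data = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)       # small CNN backbone, two output classes
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                        # a few passes over the labeled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()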
Generative Adversarial Networks (GAN) bring us close to the concept of General AI by equipping machines to be “seemingly” creative and imaginative. Let’s look at how this concept works.
Exhibit 2: GAN working block diagram
GANs work with two neural networks to create and refine data. The first network, the generator, maps from a random input back to the data space to create a candidate output, such as an image. The second network, the discriminator, is a classifier that assigns a score between 0 and 1: a score of 0.4 means the probability of the generated image passing as a real image is 0.4. If the score is close to zero, feedback goes back to the generator to create a new image, and the cycle continues until a satisfactory result is obtained.
The goal of the generator is to fool the discriminator into believing that the image being sent is indeed authentic, while the discriminator is the authority equipped to catch whether the image is fake or real. In the process, the discriminator acts as a teacher, guiding the generator to create increasingly realistic images that can pass as real ones.
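To make the generator-discriminator loop concrete, here is a minimal, illustrative sketch assuming PyTorch. The network sizes, the placeholder "real" data, and the training schedule are assumptions for illustration, not a production recipe.

# Minimal GAN sketch (PyTorch assumed): the generator maps random noise to fake samples,
# the discriminator scores each sample between 0 and 1 (probability it is real),
# and the two networks are trained against each other.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def real_batch(batch_size=32):
    # Stand-in for real training data (e.g., flattened images of apples)
    return torch.randn(batch_size, data_dim)

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))
    ones, zeros = torch.ones(real.size(0), 1), torch.zeros(real.size(0), 1)

    # Discriminator: score real samples toward 1 and generated samples toward 0
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into scoring fakes close to 1
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()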
The GAN concept is being touted as one of the most advanced AI/ML developments in the last 30 years. What can it help your business do, other than create an image of an apple?
But GANs do come with their own set of problems:
Despite these problems, enterprises should be keen to adopt GANs, as they have the potential to disrupt the business landscape and create significant opportunities to strengthen competitive positioning across various verticals. For example:
If you’ve utilized GANs in your enterprise or know about more use cases where GANs can be advantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!
I recently came across a news article that said doctors will NOT be held responsible for a wrong decision made based on the recommendations of an artificial intelligence (AI) system. That’s shocking and disturbing on so many levels! Think of the multitude of AI-based decisions possible in banking and financial services, the public sector, and many other industries, and the worrying implications wrong decisions could have on the lives of people and society.
One of the never-ending debates around AI adoption continues to be the ethicality and explainability concerns with these systems’ black-box decision making. There are multiple dimensions to this issue:
Enterprises, particularly highly regulated ones, have hit a roadblock in their AI adoption journeys and scalability plans considering the consequences of wrong AI decisions. In fact, one in every three AI use cases fails to reach a substantial, scalable level due to explainability concerns.
While the issue may not be a concern for all AI-based use cases, it is usually a roadblock for scenarios with high complexity and high criticality, which lead to irrevocable decisions.
In fact, Hanna Wallach, a senior principal researcher at Microsoft Research in New York City, stated, “We cannot treat these systems as infallible and impartial black boxes. We need to understand what is going on inside of them and how they are being used.”
Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for an internal audit of AI systems. There are many other similar efforts by opponents and proponents of AI alike; however, a feasible solution is still out of sight.
Technology majors and service providers have also made meaningful investments to address the issue, including Accenture (AI fairness Toolkit), HCL (Enterprise XAI Framework), PwC (Responsible AI), and Wipro (ETHICA). There are also many niche XAI-centric firms that focus solely on the explainability conundrum, particularly for highly regulated industries such as healthcare and the public sector; Ayasdi, Darwin AI, KenSci, and Kyndi deserve a mention.
The solution focus varies from enabling enterprises to compare the fairness and performance of multiple models to letting users set their own ethicality thresholds. It’s interesting to note that all of these are bolt-on solutions that explain a decision in a human-interpretable format; they are not AI products with explainability embedded in them.
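As a purely illustrative example of what comparing models on fairness as well as performance can involve (this is not how any of the vendor toolkits above work), a simple check such as demographic parity difference can be computed with plain NumPy on hypothetical model outputs:

# Illustrative only: compare two candidate models on accuracy and a simple fairness
# metric (demographic parity difference) with respect to a binary sensitive attribute.
import numpy as np

def demographic_parity_diff(preds, group):
    # Difference in positive-prediction rates between the two groups (0 and 1)
    return abs(preds[group == 1].mean() - preds[group == 0].mean())

def evaluate(name, preds, labels, group):
    acc = (preds == labels).mean()
    dpd = demographic_parity_diff(preds, group)
    print(f"{name}: accuracy={acc:.2f}, demographic parity diff={dpd:.2f}")

# Hypothetical predictions from two models on the same test set
labels  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group   = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # e.g., a protected attribute
model_a = np.array([1, 0, 1, 1, 1, 0, 0, 0])
model_b = np.array([1, 1, 1, 1, 0, 0, 0, 0])

evaluate("Model A", model_a, labels, group)
evaluate("Model B", model_b, labels, group)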
Considering this is an artificial form of intelligence, let’s take a step back and analyze how humans make such complex decisions:
These are behaviors that today’s AI systems are not trained to adopt. A disproportionate focus on speed and cost has led to neglect of the human element that ensures accuracy and acceptance. And instead of addressing accuracy as a characteristic, we add another layer of complexity to AI systems with explainability.
And even if the AI system is able to explain how and why it made a wrong decision, what good does that do anyway? Who is willing to put money in an AI system that makes wrong decisions but explains them really well? What we need is an AI system that makes the right decisions, so it does not need to explain them.
AI systems of the future need to be designed with these humane elements embedded in their nature and functionality. This may include pointing out edge cases, “discussing” and “debating” complex cases with other experts (humans or other AI systems), embedding the element of EQ in decision making, and at times even handing a decision back to humans when the system encounters a new scenario where the probability of a wrong decision is higher.
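As a rough sketch of one of these elements, handing the decision back to a human when confidence is low might look something like the following. The classifier interface, threshold, and review queue are assumptions for illustration.

# Defer to a human reviewer when the model is not confident enough (hypothetical setup).
def decide(model, case, human_queue, confidence_threshold=0.9):
    probs = model.predict_proba([case])[0]    # scikit-learn-style classifier assumed
    if probs.max() < confidence_threshold:
        human_queue.append(case)              # new or ambiguous scenario: escalate to a human
        return None                           # no automated decision is made
    return int(probs.argmax())                # confident enough to decide automatically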
But until we get there, a practical way for organizations to address these explainability challenges is to adopt a hybrid human-in-the-loop approach. Such an approach relies on subject matter experts (SMEs), such as ethicists, data scientists, regulators, and domain experts, to:
In this approach, instead of relying on a large training data set to build the model, the machine learning system is built iteratively with regular inputs from experts.
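A minimal sketch of that iterative loop might look like the following, assuming a scikit-learn-style classifier and a stand-in expert_label() function that simulates SME input; the data pools and round count are hypothetical.

# Human-in-the-loop sketch: the model is built iteratively, asking experts to label
# the cases it is least sure about, rather than training once on a large dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_label(x):
    # Stand-in for an SME (ethicist, regulator, domain expert) reviewing a case;
    # a simple rule simulates the expert's judgment for this sketch.
    return int(x[0] > 0.5)

labeled_X, labeled_y = [], []                    # cases labeled by experts so far
unlabeled_pool = list(np.random.rand(200, 5))    # cases awaiting review

model = LogisticRegression()
for step in range(20):
    if len(set(labeled_y)) >= 2:                 # need at least two classes to fit
        model.fit(labeled_X, labeled_y)
        # Ask the experts about the case the current model is least confident on
        confidences = [model.predict_proba([x])[0].max() for x in unlabeled_pool]
        idx = int(np.argmin(confidences))
    else:
        idx = 0                                  # bootstrap: start with the first cases
    case = unlabeled_pool.pop(idx)
    labeled_X.append(case)
    labeled_y.append(expert_label(case))         # regular, targeted input from the experts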
In the long run, enterprises need to build a comprehensive governance structure for AI adoption and data leverage. Such a structure will have to institute explainability norms that factor in the criticality of machine decisions, the expertise required, and checks throughout the lifecycle of any AI implementation. Humane intelligence, not just artificial intelligence, is what the world of the future requires.
We would be happy to hear your thoughts on approaches to AI and XAI. Please reach out to [email protected] for a discussion.