Digital Levers for Successful Category Management: AI, Automation and Analytics | Webinar

Join Shirley Hung, Vice President at Everest Group, and Paul Blake, Director of Product Marketing at GEP, as they discuss the advantages of digital category management adoption. They will explore:

  • Adoption rates for AI/ML, automation, analytics — and how they are being used
  • The “nice-to-have” capabilities that are now “must-have” tools and methods
  • Strategies to obtain deeper, more detailed supplier intelligence, instantly
  • Ways to easily isolate and track non-compliant maverick spend
  • How to improve pricing strategies and boost your negotiation leverage with smarter data
  • The advantages of low- or no-code digital platforms for business users and IT teams

When

Thursday, March 11, 2021, at 10 am CST, 11 am EST, 4 pm GMT, 9:30 pm IST

Where

Live, virtual event

Presenters

Shirley Hung
Vice President
Everest Group

Paul Blake
Director of Product Marketing
GEP

 

Leap Towards General AI with Generative Adversarial Networks | Blog

AI adoption is on the rise among enterprises. In fact, the research we conducted for our AI Services State of the Market Report 2021 found that as of 2019, 72% of enterprises had embarked on their AI journey. And they’re investing in key AI domains, including computer vision, conversational intelligence, content intelligence, and decision support systems.

However, the machine intelligence that surrounds us today belongs to the Narrow AI domain. That means it’s equipped to tackle only a specified task. For example, Google Assistant is trained to respond to queries, while a facial recognition system is trained to recognize faces. Even seemingly complex applications of AI – like self-driving cars – fall under the Narrow AI domain.

Where Narrow AI falters

Narrow AI can process a vast array of data and complete the given task efficiently; however, it can’t replicate human intelligence – the ability to reason, make judgments, and stay aware of context.

This is where General AI steps in. General AI takes the quest to replicate human intelligence meaningfully forward by equipping machines with the ability to understand their surroundings and context.

Exhibit 1: Evolution of AI


The pursuit of General AI

Researchers came up with Deep Neural Networks (DNNs), a popular AI architecture that tries to mimic the human brain. DNNs learn from large labeled datasets. For example, if you want a DNN to identify apples in an image, you need to provide it with enough apple images for it to glean the pattern that defines the general characteristics of an apple. It can then identify apples in any image. But can DNNs – or, more appropriately, General AI – be imaginative?
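
To make the idea concrete, here is a minimal sketch of that supervised-learning loop in PyTorch; the random tensors standing in for a labeled apple/not-apple dataset and the tiny network are assumptions made for brevity, not a real training recipe:

```python
# Minimal supervised DNN training sketch (PyTorch). The random images and
# labels are stand-ins for a real labeled "apple / not apple" dataset.
import torch
import torch.nn as nn

images = torch.randn(32, 3, 64, 64)            # pretend batch of 64x64 RGB images
labels = torch.randint(0, 2, (32,)).float()    # 1 = apple, 0 = not apple

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 1),                # one logit: "apple or not"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)             # compare predictions with labels
    optimizer.zero_grad()
    loss.backward()                            # learn the pattern from labeled examples
    optimizer.step()
```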

Enter GANs

Generative Adversarial Networks (GAN) bring us close to the concept of General AI by equipping machines to be “seemingly” creative and imaginative. Let’s look at how this concept works.

Exhibit 2: GAN working block diagram


GANs pit two neural networks against each other to create and refine data. The first network, the generator, maps random input back to the data space to produce a synthetic sample, such as an image. The second network sits in the discriminator and is a classifier: it scores each sample between 0 and 1, where a score of 0.4 means the probability that the generated image resembles a real one is 0.4. If the score is close to zero, the sample goes back to the generator to create a new image, and the cycle continues until a satisfactory result is obtained.

The goal of the generator is to fool the discriminator into believing that the image it sends is authentic, while the discriminator is the authority equipped to catch whether that image is fake or real. In effect, the discriminator acts as a teacher, guiding the generator to create ever more realistic images that can pass as the real thing.
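
To illustrate this adversarial cycle, here is a minimal training-loop sketch in PyTorch; the network sizes, learning rates, and the random batch standing in for real data are illustrative assumptions, not a production implementation:

```python
# Minimal GAN training sketch (PyTorch). Shapes, hyperparameters, and the
# random "real" batch are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to a synthetic sample
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
# Discriminator: classifier scoring a sample between 0 (fake) and 1 (real)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)              # stand-in for real samples
    fake = generator(torch.randn(batch, latent_dim))

    # Train the discriminator: push real scores toward 1, fakes toward 0
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to fool the discriminator into scoring fakes as real
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```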

Applications around GAN

The GAN concept is being touted as one of the most advanced AI/ML developments in the last 30 years. What can it help your business do, other than create an image of an apple?

  1. Creating synthetic data for scaled AI deployments: Obtaining quality data to train AI algorithms has been a key concern for AI deployments across enterprises. Even BigTech vendors such as Google, considered the home of data, struggle with it; Google’s “Project Nightingale” partnership with Ascension created concerns around the misuse of medical data. Data privacy regulations safeguard people’s interests but create a major constraint for AI: the data needed to train models dries up. This is where a GAN shines – it can create synthetic data for training AI models
  2. Translations: Another area where GANs are finding applications is translation, including image-to-image, text-to-image, and semantic-image-to-photo translation
  3. Content generation: GANs are also being used in the gaming industry to create cartoon characters and creatures. In fact, Google launched a pilot that utilizes a GAN to create images, which will help game developers be more creative and productive

Two sides to a coin

But, GANs do come with their own set of problems:

  • A significant problem in productionizing a GAN is striking a balance between the generator and the discriminator. Too strong or too weak a discriminator leads to undesirable results: if it is too weak, it will pass all generated images as authentic, which defeats the purpose of the GAN; if it is too strong, no generated image will be able to fool it
  • The computing power required to run a GAN is far greater than that of a generic AI model, limiting its use by enterprises
  • GANs, specifically the cyclic and pix2pix types, are known for their capabilities in face synthesis, face swaps, and manipulating facial attributes and expressions. These can be used to create doctored images and videos (deepfakes) that pass as authentic, making them attractive to malicious actors. For example, footage of a politician expressing grief over pandemic victims could be doctored with a GAN to show a sinister smile on the politician’s face during the press briefing. Just imagine the backlash and public uproar that would generate. And that is just a simple example of the destructive power of GANs

Despite these problems, enterprises should be keen to adopt GANs, as they have the potential to disrupt the business landscape and create immense opportunities for competitive advantage across various verticals. For example:

  • A GAN can help the fashion and design industry create new and unique designs for high-end luxury items. It can also create imaginary fashion models, reducing the need to hire photographers and models
  • Self-driving cars need millions of miles of road data to test their computer vision detection capabilities. Much of the time spent gathering that road data can be cut short with synthetic data generated by a GAN, which in turn enables faster time to market

If you’ve utilized GANs in your enterprise or know about more use cases where GANs can be advantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!

Advancing from Artificial Intelligence to Humane Intelligence | Blog

I recently came across a news article that said doctors will NOT be held responsible for a wrong decision made based on the recommendations of an artificial intelligence (AI) system. That’s shocking and disturbing on so many levels! Think of the multitude of AI-based decisions possible in banking and financial services, the public sector, and many other industries, and the worrying implications wrong decisions could have for the lives of people and society.

One of the never-ending debates around AI adoption continues to be the ethicality and explainability concerns about these systems’ black-box decision making. There are multiple dimensions to this issue:

  1. Definitional ambiguity – Trustworthy, fair and ethical, repeatable: these are the different characteristics expected of AI systems in the context of explainability. Most enterprises cite explainability as a concern, but few really know what it means or the degree to which it is required.
  2. Misplaced ownership – AI systems can be trained, re-trained, tested, and course-corrected, but no developer can guarantee bias-free or accurate decision making. So, in case of a conflict, who should be held responsible: the enterprise, the technology providers, the solution developers, or another group?
  3. Rising expectations – AI systems are being increasingly trusted with highly complex, multi-stakeholder decision-making scenarios which are contextual, subjective, open to interpretation, and require emotional intelligence.

 

Enterprises, particularly highly regulated ones, have hit a roadblock in their AI adoption journeys and scalability plans considering the consequences of wrong decisions made with AI. In fact, one in every three AI use cases fails to reach substantial scale due to explainability concerns.

While the issue may not be a concern for all AI-based use cases, it is usually a roadblock for scenarios with high complexity and high criticality that lead to irrevocable decisions.


In fact, Hanna Wallach, a senior principal researcher at Microsoft Research in New York City, stated, “We cannot treat these systems as infallible and impartial black boxes. We need to understand what is going on inside of them and how they are being used.”

Progress so far

Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for an internal audit of AI systems. There are many other similar efforts by opponents and proponents of AI alike; however, a feasible solution is still out of sight.

Technology majors and service providers have also made meaningful investments to address the issue, including Accenture (AI fairness Toolkit), HCL (Enterprise XAI Framework), PwC (Responsible AI), and Wipro (ETHICA). There are also many niche XAI-centric firms that focus solely on the explainability conundrum, particularly for highly regulated industries like healthcare and the public sector; Ayasdi, Darwin AI, KenSci, and Kyndi deserve a mention.

The solution focus varies from enabling enterprises to compare the fairness and performance of multiple models to enabling users to set their own ethicality bars. Interestingly, all of these are bolt-on solutions that explain a decision in a human-interpretable format, rather than AI products with embedded explainability.

The missing link  

Considering this is an artificial form of intelligence, let’s take a step back and analyze how humans make such complex decisions:

  • Bias-free does not exist in the real world: The first thing to appreciate is that humans are not free from biases, and biases by their nature are subjective and open to interpretation.
  • Progressive decision-making approach: A key difference between humans and machines making such decisions is that, even with all processes in place, humans seek help, pursue guidance when confused, and discuss edge cases that are more prone to wrong decisions. Complex decision making is seldom left to one individual alone; rather, a hierarchy of decision makers is in play, each adding knowledge on top of previous insights to build a decision tree.
  • Emotional Quotient (EQ): Humans have emotions, and even though most decisions require pragmatism, it’s the EQ in human decisions that explains the outcomes in many situations.


These are behaviors that today’s AI systems are not trained to adopt. A disproportionate focus on speed and cost has led to neglecting the human element that ensures accuracy and acceptance. And instead of addressing accuracy as a characteristic, we add another layer of complexity to AI systems with explainability.

And even if the AI system is able to explain how and why it made a wrong decision, what good does that do anyway? Who is willing to put money in an AI system that makes wrong decisions but explains them really well? What we need is an AI system that makes the right decisions, so it does not need to explain them.

AI systems of the future need to be designed with these humane elements embedded in their nature and functionality. This may include pointing out edge cases, “discussing” and “debating” complex cases with other experts (humans or other AI systems), embedding EQ in decision making, and at times even handing a decision back to humans when the system encounters a new scenario where the probability of a wrong decision is higher.

But until we get there, a practical way for organizations to address these explainability challenges is to adopt a hybrid, human-in-the-loop approach. Such an approach relies on subject matter experts (SMEs) such as ethicists, data scientists, regulators, and domain experts to:

  • Improve learning models’ outcomes over time
  • Check for biases and discrepancies
  • Ensure compliance

In this approach, instead of relying on a large training data set to build the model, the machine learning system is built iteratively with regular inputs from experts.
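
As a rough illustration, the sketch below shows what one iteration of such a human-in-the-loop cycle could look like in Python; the confidence threshold, the stand-in model, and the expert-routing function are all hypothetical names introduced here, not any specific product’s API:

```python
# Hypothetical human-in-the-loop cycle: the model decides confident cases
# itself, defers uncertain ones to subject matter experts, and keeps the
# expert labels as input for the next retraining round.
import random

CONFIDENCE_THRESHOLD = 0.8   # assumed bar below which an SME reviews the case
expert_labels = []           # accumulates SME decisions for iterative retraining

def model_predict(case):
    """Stand-in for a trained model: returns (decision, confidence)."""
    confidence = random.random()
    return ("approve" if confidence > 0.5 else "reject"), confidence

def ask_expert(case):
    """Stand-in for routing a case to an ethicist/regulator/domain expert."""
    return "approve"  # in practice, a human reviews and labels the case

for case in ["case-001", "case-002", "case-003"]:
    decision, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        decision = ask_expert(case)             # human makes the final call
        expert_labels.append((case, decision))  # feeds the next model update
    print(case, decision)
```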


In the long run, enterprises need to build a comprehensive governance structure for AI adoption and data leverage. Such a structure will have to institute explainability norms that factor in the criticality of machine decisions, the required expertise, and checks throughout the lifecycle of any AI implementation. The world of the future requires humane intelligence, not just artificial intelligence.

We would be happy to hear your thoughts on approaches to AI and XAI. Please reach out to [email protected] for a discussion.
