In AI We Trust, Thanks to AI Checking Software

The increasing popularity and uptake of Artificial Intelligence (AI) are giving rise to concerns about its risks, its explainability, and the fairness of the decisions it makes. One big area of concern is bias in the algorithms used in AI for decision making. Another risk is the probabilistic approach to handling decisions, and the potential for unpredictable outcomes as AI self-learns. These concerns are justified, given the inherent ethical and business risks: for example, the impact on people’s lives and livelihoods, or bad business decisions based on AI recommendations founded on partial data.

The good news is that the software industry is starting to address these concerns. For example, last year vendors including Google, IBM, and Microsoft announced tools, either released or in development, for detecting bias in AI, and more announcements have followed recently.

IBM

Last year IBM brought out:

  • Adversarial Robustness 360 Toolbox (ART), a Python library available on GitHub, to make machine learning models more robust against adversarial threats, such as inputs manipulated to trick a model into producing an attacker’s desired outputs
  • AI Fairness 360, an open-source toolkit with metrics that identify bias in datasets and machine learning models, and algorithms to mitigate it (see the sketch after this list)
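
To make this concrete, here is a minimal sketch of how AI Fairness 360 can flag bias in a dataset. The toy hiring table below is invented purely for illustration; the sketch uses the toolkit’s BinaryLabelDataset and BinaryLabelDatasetMetric classes, and disparate impact and statistical parity difference are two of the bias metrics the toolkit provides.

    # pip install aif360 pandas
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy hiring data (invented for illustration): 'sex' is the protected
    # attribute (1 = male, 0 = female), 'hired' is the favourable outcome.
    df = pd.DataFrame({
        'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
        'score': [0.9, 0.7, 0.6, 0.8, 0.9, 0.7, 0.6, 0.8],
        'hired': [1,   1,   1,   0,   1,   0,   0,   0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=['hired'],
        protected_attribute_names=['sex'],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{'sex': 1}],
        unprivileged_groups=[{'sex': 0}],
    )

    # Disparate impact: ratio of favourable-outcome rates
    # (unprivileged / privileged). Values well below 1.0 flag potential
    # bias; 0.8 is a common rule-of-thumb threshold.
    print('Disparate impact:', metric.disparate_impact())
    print('Statistical parity difference:', metric.statistical_parity_difference())

In this toy data the male group is hired at a rate of 3/4 and the female group at 1/4, so the disparate impact comes out at 1/3, well below the 0.8 rule-of-thumb threshold.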

Last month, IBM further augmented its offerings with the release of AI Explainability 360, an open-source toolkit of algorithms to support the understanding and explainability of machine learning models. It is a companion to the other two toolkits.

Cognitive Scale

Cognitive Scale recently unveiled the beta of Cortex Certifai, software that automatically detects and scores vulnerabilities in black-box AI models without needing access to the model’s internals. Certifai is a Kubernetes application and runs as a native cloud service on Amazon, Azure, Google, and Red Hat clouds. Cognitive Scale also unveiled the AI Trust Index. Developed in collaboration with AI Global, it will provide composite risk scores for automated black-box decision-making models. This is an interesting development that could grow to become a badge of honour for AI software, and a differentiator for those with the most trusted rating.

The Reality of Bias

While these announcements and those made last year are good news, some aspects of AI training will be difficult to address, because bias is all around us in real life. For example, public data would show an AI that there are many more male CEOs and board members than female ones, possibly leading it to conclude that male candidates are more suitable than women to shortlist for a non-executive director vacancy. Or public data could lead an AI to raise mortgage or auto loan risk factors to unreasonably high levels for individuals living in a particular zip code or postcode.
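
To illustrate the mechanism (with entirely synthetic data, not a real hiring dataset), here is a sketch of how a model trained on historical outcomes that correlate with a protected attribute will reproduce that correlation. All variable names and numbers are invented for the example.

    # pip install scikit-learn numpy
    # Synthetic illustration: historical shortlisting data in which men were
    # shortlisted far more often than equally qualified women.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    sex = rng.integers(0, 2, n)        # 1 = male, 0 = female
    skill = rng.normal(0, 1, n)        # qualification, same distribution for both
    # Historical decisions rewarded skill but also favoured men outright.
    shortlisted = (skill + 1.5 * sex + rng.normal(0, 0.5, n) > 1).astype(int)

    model = LogisticRegression().fit(np.column_stack([sex, skill]), shortlisted)

    # Two candidates with identical skill, differing only in sex:
    print(model.predict_proba([[1, 0.5], [0, 0.5]])[:, 1])
    # The male candidate gets a markedly higher predicted probability,
    # even though skill is identical: the model has learned the bias.

Note that simply dropping the sex column does not necessarily fix this, because other features (such as zip code or postcode) can act as proxies for it, which is exactly why detection tooling like the products described above matters.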

It is the encoding and application of these kinds of biases, automatically and at scale, that is worrying. Regulations in some countries address some of these issues, but not all countries have them. Moreover, the potential for new threats and risks is high.

There is still a great deal to understand when it comes to making AI fair and explainable; this is a complex and growing field. As demand for AI grows, demand for solutions that check AI will grow with it.

How can we engage?

Please let us know how we can help you on your journey.
