Experts and enterprises around the world have debated the disturbing concept of AI being used to build and test AI systems, and even to challenge decisions made by those systems. I wrote a blog on this topic a while back.
Disquieting as it is, our AI research makes it clear that AI for AI, with increasingly minimal human intervention, has moved from concept to reality.
Here are four key reasons this is the case.
Software Is Becoming Non-Deterministic and Intelligent
Before AI emerged, organizations focused on production support to optimize the environment after the software was released. Those days will soon be over, if they aren't already. The reality is that today's increasingly dynamic software and Agile/DevOps-oriented environments require tremendous automation and continuous feedback loops from the trenches. Developers and operations teams simply cannot capture and analyze the enormous volume of insights needed; they must leverage AI to do so, and to maintain an ongoing interaction channel with the operating environment.
Testing AI Biases and Outcomes Is Not Easy
Unlike traditional software with defined boundary conditions, AI systems present very different edge scenarios, and they must navigate, and be tested against, all of those scenarios to make sense of their environment. Because there can be millions of permutations and combinations, it is extremely difficult to assure AI systems manually, or to test them for data biases and outcomes with traditional automation. Uncomfortable as it may be, AI-layered systems must be used to test AI systems.
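To make the combinatorial explosion concrete, here is a minimal, hypothetical Python sketch. It enumerates the test-input space for an imaginary decision model across just five categorical features, then uses a placeholder scoring function (a stand-in for a learned, AI-driven test selector) to prioritize a small subset of cases. All feature names and the scoring heuristic are illustrative assumptions, not taken from any real system.

```python
# Illustrative sketch: why exhaustive bias testing explodes combinatorially,
# and how an AI-style scorer could prioritize a manageable test subset.
# All feature names and the scoring heuristic are hypothetical examples.
import itertools
import random

# A few categorical input dimensions for a hypothetical decision model.
features = {
    "age_band":   ["18-25", "26-40", "41-60", "60+"],
    "income":     ["low", "medium", "high"],
    "region":     ["north", "south", "east", "west"],
    "employment": ["salaried", "self-employed", "unemployed"],
    "credit_len": ["<1y", "1-5y", ">5y"],
}

# Exhaustive test space: the product of all option counts.
total = 1
for opts in features.values():
    total *= len(opts)
print(f"Exhaustive combinations for just 5 features: {total}")  # 4*3*4*3*3 = 432

# With dozens of features (or continuous ones), exhaustive testing is hopeless.
# A real test-selection model would score candidate cases by predicted risk of
# biased outcomes; here a seeded random value stands in for that model.
def surrogate_risk_score(case):
    # Placeholder for a learned model that predicts where bias is likely.
    rng = random.Random(hash(case) % (2 ** 32))
    return rng.random()

all_cases = list(itertools.product(*features.values()))
prioritized = sorted(all_cases, key=surrogate_risk_score, reverse=True)[:20]
print(f"Selected {len(prioritized)} high-priority cases out of {total}")
```

Even this toy example yields 432 combinations from five small features; real systems with dozens of continuous and categorical inputs are far beyond manual or scripted coverage, which is the argument for AI-assisted test selection.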
The Autonomous Vehicle Framework Is Being Mirrored in Technology Systems
The L0-L5 autonomous vehicle framework proposed by SAE International has become an inspiration for technology developers. Not surprisingly, they want to leverage AI to build intelligent applications with autonomous operating environments and release processes. Some are even pushing AI to build the software itself. While this is still in its infancy, our research suggests that developer productivity will improve by 40 percent if AI systems are meaningfully leveraged to build software.
The Open Source Ecosystem Is Becoming Indispensable
Although enterprises used to take pride in building boundary walls to protect their IP and in using best-of-breed tools, open source has changed all that. Most enterprises realize that their developers cannot build an AI system on their own, and they now rely on open source repositories. Our research shows that 20-30 percent of an AI system can be developed by leveraging already available code. However, given the massive size of these repositories, scanning them and zeroing in on the needed pieces of code are not tasks for the faint-hearted. Indeed, even the smartest developers need help from an intelligent AI system.
There’s little question that using AI systems to build, test, and fight AI systems is disconcerting. That’s one of the key reasons that enterprises that have already adopted AI systems haven’t yet adopted AI to build, test, and secure them. But it’s an inevitability that’s already knocking at their doors. And they will quickly realize that reliance on a “human in the loop” model, though well intentioned, has severe limitations not only around the cost of governance, but also around the sheer intelligence, bandwidth, and foresight required by humans to govern AI systems.
Rather than debating its merits or becoming overwhelmed by the associated risks, enterprises need to build a governance framework for this new reality. They must work closely with technology vendors, cloud providers, and AI companies to ensure their business does not suffer in this new, albeit uncomfortable, environment.
Has your enterprise started leveraging AI to build, test, or fight AI systems? If so, please share your experiences with me at [email protected].