I recently came across a news article stating that doctors will NOT be held responsible for a wrong decision made based on the recommendations of an artificial intelligence (AI) system. That’s shocking and disturbing on so many levels! Think of the multitude of AI-based decisions possible in banking and financial services, the public sector, and many other industries, and the worrying implications wrong decisions could have for the lives of people and society.
One of the never-ending debates around AI adoption continues to be the ethics and explainability concerns raised by these systems’ black-box decision making. There are multiple dimensions to this issue:
Enterprises, particularly highly regulated ones, have hit a roadblock in their AI adoption journeys and scaling plans given the consequences of wrong decisions made with AI. In fact, one in every three AI use cases fails to reach substantial scale due to explainability concerns.
While the issue may not be a concern for all AI-based use cases, it is usually a roadblock for scenarios with high complexity and high criticality, which lead to irrevocable decisions.
In fact, Hanna Wallach, a senior principal researcher at Microsoft Research in New York City, stated, “We cannot treat these systems as infallible and impartial black boxes. We need to understand what is going on inside of them and how they are being used.”
Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for an internal audit of AI systems. There are many other similar efforts by opponents and proponents of AI alike; however, a feasible solution is still out of sight.
Technology majors and service providers have also made meaningful investments to address the issue, including Accenture (AI Fairness Toolkit), HCL (Enterprise XAI Framework), PwC (Responsible AI), and Wipro (ETHICA). Several niche XAI-centric firms also focus solely on addressing the explainability conundrum, particularly for highly regulated industries such as healthcare and the public sector; Ayasdi, Darwin AI, KenSci, and Kyndi deserve a mention.
The solution focus varies from enabling enterprises to compare the fairness and performance of multiple models to enabling users to set their ethicality bars. It’s interesting to note that all of these offer bolt-on solutions that enable an explanation of the decision in a human interpretable format, but they’re not embedded explainability-based AI products.
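To make the first of these solution patterns concrete, here is a minimal, hypothetical sketch of comparing two candidate models on both accuracy and a simple fairness metric (demographic parity difference, the gap in positive-prediction rates between two groups). The data, model names, and metric choice are illustrative and not tied to any vendor toolkit mentioned above.

```python
# Hypothetical sketch: comparing two models on a simple fairness metric
# (demographic parity difference) alongside accuracy.

def demographic_parity_diff(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(preds_g) / len(preds_g)
    values = list(rate.values())
    return abs(values[0] - values[1])

def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Toy data: binary predictions from two candidate models, a protected
# attribute ("A"/"B"), and ground-truth labels.
groups  = ["A", "A", "A", "B", "B", "B"]
labels  = [1, 0, 1, 1, 0, 0]
model_x = [1, 0, 1, 0, 0, 0]   # more accurate, but favors group A
model_y = [1, 1, 0, 1, 0, 1]   # more balanced, but less accurate

for name, preds in [("model_x", model_x), ("model_y", model_y)]:
    print(name,
          "accuracy:", round(accuracy(preds, labels), 2),
          "parity gap:", round(demographic_parity_diff(preds, groups), 2))
```

A real deployment would use a fairness library and multiple metrics, but the essential trade-off surfaced here, between raw accuracy and group-level fairness, is exactly what these comparison tools expose to decision makers.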
Considering this is an artificial form of intelligence, let’s take a step back and analyze how humans make such complex decisions: they flag edge cases, consult and debate with peers, weigh the emotional and social stakes, and defer judgment when uncertain.
These are behaviors that today’s AI systems are not trained to adopt. A disproportionate focus on speed and cost has led to neglect of the human element that ensures accuracy and acceptance. And instead of addressing accuracy as a core characteristic, we add another layer of complexity to AI systems with explainability.
And even if the AI system is able to explain how and why it made a wrong decision, what good does that do anyway? Who is willing to put money in an AI system that makes wrong decisions but explains them really well? What we need is an AI system that makes the right decisions, so it does not need to explain them.
AI systems of the future need to be designed with these humane elements embedded in their nature and functionality. This may include pointing out edge cases, “discussing” and “debating” complex cases with other experts (humans or other AI systems), embedding EQ in decision making, and at times even handing a decision back to humans when the system encounters a new scenario where the probability of a wrong decision is higher.
But until we get there, a practical way for organizations to address these explainability challenges is to adopt a hybrid, human-in-the-loop approach. Such an approach relies on subject matter experts (SMEs), such as ethicists, data scientists, regulators, and domain experts, to guide and validate the system’s decisions.
In this approach, instead of relying on a large training data set to build the model, the machine learning system is built iteratively with regular inputs from experts.
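This iterative, expert-guided loop can be sketched as a simple active-learning workflow: the model trains on a small seed set, routes its most uncertain cases to a human expert for labeling, and retrains. The classifier (a one-dimensional threshold model) and the "expert" function below are deliberately toy stand-ins, not a production design.

```python
# Minimal sketch of a human-in-the-loop (active learning) workflow:
# instead of training once on a large labeled set, the model is built
# iteratively, routing its most uncertain cases to an SME for labeling.

def train_threshold(X, y):
    """Fit a 1-D threshold classifier: midpoint between class means."""
    pos = [x for x, label in zip(X, y) if label == 1]
    neg = [x for x, label in zip(X, y) if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def expert_label(x):
    """Stand-in for an SME: the true rule is 'positive above 5.0'."""
    return 1 if x > 5.0 else 0

labeled = [(1.0, 0), (9.0, 1)]           # tiny seed set labeled by experts
pool = [2.0, 4.6, 5.2, 7.5, 3.1, 6.0]    # unlabeled cases seen in production

for _ in range(3):
    threshold = train_threshold(*zip(*labeled))
    # Route the case the model is least sure about (closest to the
    # decision boundary) to the human expert, then retrain.
    uncertain = min(pool, key=lambda x: abs(x - threshold))
    pool.remove(uncertain)
    labeled.append((uncertain, expert_label(uncertain)))

threshold = train_threshold(*zip(*labeled))
print(f"decision threshold after expert feedback: {threshold:.2f}")
```

The design choice worth noting is *which* cases reach the human: only the ambiguous ones near the decision boundary, so scarce expert time is spent exactly where the probability of a wrong machine decision is highest.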
In the long run, enterprises need to build a comprehensive governance structure for AI adoption and data use. Such a structure will have to institute explainability norms that factor in the criticality of machine decisions, the expertise required, and checks throughout the lifecycle of any AI implementation. The world of the future requires humane intelligence, not merely artificial intelligence.
We would be happy to hear your thoughts on approaches to AI and XAI. Please reach out to [email protected] for a discussion.
A sustained focus on digital, agility, and advanced technologies is likely to prepare enterprises for the future, especially following COVID-19. Many enterprise leaders consider IT infrastructure to be the bedrock of business transformation at a time when the service delivery model has become more virtual and cloud based. This reality presents an opportunity for GBS organizations that deliver IT infrastructure services to rethink their long-term strategies to enhance their capabilities, thereby strengthening their value propositions for their enterprises.
GBS setups with strong IT infra capabilities can lead enterprise transformation
Over the past few years, several GBS organizations have built and strengthened capabilities across a wide range of IT infrastructure services. Best-in-class GBS setups have achieved significant scale and penetration for IT infrastructure delivery and now support a wide range of functions – such as cloud migration and transformation, desktop support and virtualization, and service desk – with high maturity. In fact, some centers have scaled as high as 250-300 Full Time Equivalents (FTEs) and 35-45% penetration.
At the same time, these organizations are fraught with legacy issues that need to be addressed to unlock full value. Our research reveals that most enterprises believe that their GBS’ current IT infrastructure services model is not ready to cater to the digital capabilities necessary for targeted transformation. Only GBS organizations that evolve and strengthen their IT infrastructure capabilities will be well positioned to extend their support to newer or more enhanced IT infrastructure services delivery.
The need for an IT infrastructure revolution and what it will take
The push to transform IT infrastructure in GBS setups should be driven by a business-centric approach to global business services. To enable this shift, GBS organizations should consider a new model for IT infrastructure that focuses on improving business metrics instead of pre-defined IT Service-Level Agreements (SLAs) and Total Cost of Operations (TCO) management. IT infrastructure must be able to support changes ushered in by rapid device proliferation, technology disruptions, business expansions, and escalating cost pressures post-COVID-19 to showcase sustained value.
To transition to this IT infrastructure state, GBS organizations must proactively identify skills that have a high likelihood of being replaced or becoming obsolete, as well as emerging skills. They must also prioritize emerging skills with higher reskilling/upskilling potential. These goals can be achieved through a comprehensive program that proactively builds capabilities in IT services delivery.
In the exhibit below, we highlight the shelf life of basic IT services skills by comparing the upskilling/reskilling potential of IT services skills with their expected extent of replacement.
Exhibit: Analysis of the shelf life of basic IT services skills
In the near future, GBS organizations should leverage Artificial Intelligence (AI), analytics, and automation to further revolutionize their IT capabilities. The end goal is to transition to a self-healing, self-configuring system that can dynamically and autonomously adapt to changing business needs, thereby creating an invisible IT infrastructure model. This invisible IT infrastructure will be highly secure, require minimal oversight, function across stacks, and continuously evolve with changing business needs. By leveraging an automation-, analytics-, and AI-led delivery of infrastructure, operations, and services management, GBS organizations can truly enable enterprises to make decisions based on business imperatives.
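The "self-healing, self-configuring" idea described above is, at its core, a reconciliation loop: observe the current state, diff it against the desired state, and remediate autonomously. The sketch below illustrates that loop with hypothetical service names; real systems (e.g., orchestration platforms) implement the same pattern with far richer probes and remediation actions.

```python
# Illustrative sketch (hypothetical service names) of the control loop
# behind a "self-healing" infrastructure model: observe current state,
# diff against desired state, and remediate autonomously.

desired_state = {"web": "running", "db": "running", "cache": "running"}

# Observed state as a monitoring probe might report it.
observed_state = {"web": "running", "db": "stopped", "cache": "degraded"}

def remediate(service, status):
    """Stand-in for an automated remediation action (e.g., a restart)."""
    print(f"remediating {service}: {status} -> running")
    return "running"

def reconcile(desired, observed):
    """One pass of the observe-diff-remediate loop."""
    for service, want in desired.items():
        have = observed.get(service, "missing")
        if have != want:
            observed[service] = remediate(service, have)
    return observed

healed = reconcile(desired_state, observed_state)
print(healed)
```

Run continuously, with analytics deciding *what* counts as a deviation and AI choosing *how* to remediate, this loop is what makes the infrastructure "invisible": it converges on the desired state with minimal human oversight.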
If you’d like to know more about the key business transformation trends for enterprises in IT infrastructure, do read our report Exploring the Enterprise Journey Towards “Invisible” IT Infrastructure or reach out to us at [email protected] or [email protected].