
Artificial Intelligence is Democratizing Mental Health | Sherpas in Blue Shirts

If I had a penny for every time Artificial Intelligence was mentioned during the recent NASSCOM India Leadership Forum, I could buy a lot of Bitcoins. Both hype and hope abound around AI and its impact on different industries’ business models.

Let’s take a look at AI in the healthcare industry. Adoption is increasing, helping solve a number of problems for patients, doctors, and the industry overall. AI engines are helping doctors identify patterns in patient symptoms through data and analytics, improve diagnoses, select the right treatments, and monitor care.

For instance, physicians can now plug diagnoses into IBM’s Watson for Oncology and receive treatment suggestions based on historical patient data and information from medical journals. Face2Gene combines facial recognition software with machine learning to identify facial dysmorphic features, helping clinicians diagnose rare genetic diseases.

Mental health treatment: Can AI be the cure?

Using AI to treat mental health issues is particularly fascinating. So far, AI has largely been viewed as a means to help healthcare professionals provide better care. But can it eliminate a patient’s need to consult a doctor altogether for mental health counseling and empathetic support?

Consider this: AI engines today have the ability to listen, interpret, learn, plan, and problem-solve. Early identification of mental health issues is possible through the analysis of a person’s facial features, writing patterns, tone of voice, word choice, and phrase length.

These are all decisive cues to what’s going on in a person’s mind, and they can be used to predict, detect, and monitor mental conditions such as psychosis, schizophrenia, mania, and depression.
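
As a rough illustration of how such linguistic cues might feed a screening model, here is a minimal sketch: word-choice features (TF-IDF) driving a simple classifier. The tiny dataset and labels are invented for illustration; this is a generic technique, not any vendor’s actual system, and such a score could only triage, never diagnose.

```python
# Minimal sketch: flagging text for possible depression signals from word
# choice. Illustrative only -- the toy texts and labels below are invented,
# not clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and nothing feels worth doing anymore",
    "Had a great run this morning, feeling energized",
    "Everything is my fault, I'm so tired all the time",
    "Looking forward to the weekend trip with friends",
]
labels = [1, 0, 1, 0]  # 1 = flag for follow-up, 0 = no flag

# Word-choice features (unigrams and bigrams) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A probability like this would only prioritize outreach; diagnosis and
# treatment decisions stay with a clinician.
print(model.predict_proba(["I feel exhausted and hopeless"])[0][1])
```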

AI as a panacea for mental health

The idea of end-to-end mental health treatment through AI with no human intervention is quite viable, and the prospect becomes even more enticing when you consider the factors that could drive acceptance among patients.

Thus, it’s not surprising that a few players have already begun to delve into this space. Woebot is a software chatbot that delivers a mood management program based on Cognitive Behavioral Therapy (CBT); AI luminary Andrew Ng is on the company’s board of directors. A randomized controlled trial at Stanford University showed that Woebot can help reduce symptoms of depression and anxiety in two weeks.

Another example is Tess, a psychological AI that communicates via text and delivers highly personalized psychotherapy, psycho-education, and on-demand health-related reminders when and where a mental health professional isn’t available. It can hold conversations with patients over a variety of existing channels, including SMS, WhatsApp, and web browsers. More recently, Facebook started using AI to help predict when users may be suicidal.

There are even cases of highly specialized products:

  • An app called Karim counsels Syrian refugee children
  • Emma helps Dutch speakers with mild anxiety
  • MindBloom allows users to support and motivate each other

Are robo-doctors just around the corner?

While the hype crowd might have you believe that your next appointment will be with a droid, several open questions warrant healthy skepticism of mainstream AI adoption in mental healthcare:

  • There are privacy issues, with the possibility of user data being shared with various parties seeking to profit from it
  • Could training AI systems on biased data lead them to make biased decisions?
  • Will users even take advice from software as seriously as they would from a qualified professional?
  • Can the technology successfully cater to a universal population?

The ecosystem is working to solve these and other questions. While it might be too early to say that AI-based mental health treatment options will become mainstream, they clearly create significant value. As healthcare organizations and patients experiment with these use cases, there’s a sizable opportunity to reimagine the workflow and treatment paradigm.

Insurer of the Future Will Use Technology to Shift from ‘Insuring Loss’ to ‘Ensuring Protection’ | Press Release

Also, bundling of insurance with products and services across industries is on the rise, creating a new breed of ‘invisible insurer’

Everest Group reports that two major trends are shaping the future of insurance, driving insurers to redefine their business strategies and IT outsourcing engagements. First, insurers are moving from passively “insuring loss” and managing claims to proactively “ensuring protection” for customers. Second, insurers are bundling insurance with products and services across a broad spectrum of industries, with insurers themselves increasingly becoming invisible to the end customer.

To navigate these shifts, the insurer of the future will leverage next-generation technologies such as analytics, artificial intelligence (AI), blockchain and IoT.

For example:

  • Insurers are already using IoT to “ensure protection” by monitoring homes (smoke and carbon monoxide alarms), cars (mileage, driving patterns), persons (exercise, lifestyle patterns), and businesses (boilers and water systems).
  • Insurers will use digital channels to sell hyper-personalized products; for instance, an insurer’s mobile application might use geo-location data to offer travel or airline insurance to a customer who is at the airport (see the sketch after this list).
  • Insurance will be bundled with products and services. Market leaders have already begun bundling insurance with remittance transfers and real estate purchases. As this trend expands across all industries, insurers will become “invisible” to the end user.
  • Insurers will tap the connected ecosystem to underwrite risk in near actual time. Analytics and AI will be used on real-time as well as historical data to assess risk, generate quotes, negotiate premiums and execute smart contracts.
  • Policy administration processes are already shifting from being highly manual and paper based to automated and digitized, and this will rise to another level as connected systems, blockchain and machine learning applications mature.
  • Machine learning will transform a highly manual, reactive claims management process to an automated, proactive one, and blockchain technology is showing transformative potential for use cases such as claims validation, fraud detection and prevention, and automated claims payments.
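
To make the geo-location idea above concrete, here is a minimal sketch of a geofence check that surfaces a travel-insurance offer when a customer’s device reports coordinates near an airport. The airport coordinates, radius, and offer text are illustrative assumptions, not any insurer’s actual product.

```python
# Minimal sketch: offer travel insurance when a customer is inside an
# airport geofence. Coordinates, radius, and offer text are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

AIRPORT = (28.5562, 77.1000)  # e.g., Delhi IGI Airport (illustrative)
GEOFENCE_KM = 2.0

def maybe_offer_travel_insurance(lat, lon):
    if haversine_km(lat, lon, *AIRPORT) <= GEOFENCE_KM:
        return "Offer: single-trip travel insurance, tap to add to your policy"
    return None  # outside the geofence: no offer

print(maybe_offer_travel_insurance(28.557, 77.102))  # inside the fence
```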

These are just a few of the ways technology will play a major role in the transformation of the insurance industry over the next decade, as insurers shift their focus from “insuring loss” to “ensuring protection” and as the bundling of insurance with products and services becomes more prevalent. Today, insurers have begun collaborating with IT service providers to build and evaluate proofs of concept, develop customized solution offerings, and test use cases in innovation labs and centers of excellence.

“We’re already seeing evidence of insurers beginning to embrace their future reality,” said Jimit Arora, partner and leader of the IT Services research practice at Everest Group. “For instance, insurers are moving from long-term, traditional IT outsourcing projects to short-term, digital projects. These projects allow insurers to adopt emerging technologies, reduce time-to-market, and improve customer experience. We’re also seeing a steep rise in demand for artificial intelligence, blockchain, IoT, and automation in the scope of insurance IT outsourcing contracts.”

Insurance accounts for 30-35 percent of the overall US$142 billion BFSI (banking, financial services and insurance) IT outsourcing industry. Over the past four years, the insurance ITO market has grown at a CAGR of 5.3 percent, and going forward it is expected to grow at a steady rate of 4-6 percent. Digital services components were included in 46 percent of the 348 insurance ITO deals signed in 2016 and analyzed by Everest Group. Deals with automation in scope increased by nearly 250 percent, while deals involving IoT, blockchain, and AI nearly tripled.

These findings and more are explored in detail in Everest Group’s recently published report, “Insurer of the Future: Insurance ITO Annual Report 2018.” The report explores key trends in the insurance industry and their implications for application services outsourcing.


Why Invest in Artificial Intelligence (AI)? | Sherpas in Blue Shirts

“Facebook shuts down robots after they invent their own language.” This headline was splashed across myriad news outlets just a few weeks ago. And although the story itself made the event seem like just a normal science experiment, this type of alarming tone in media reports is becoming the norm and is sowing seeds of doubt, fear, and uncertainty among consumers and even some businesses.

However, behind the vendor hype and the media fear mongering, there are real, bona fide reasons for organizations to invest in artificial intelligence (AI).

Humans can perform various expert tasks given relevant training and experience. For example, a research analyst trained in and experienced with market research can predict future market size and growth with considerable accuracy. Using machine learning, a system can be trained to perform the same task. And with their enormous computational power, such expert systems can beat humans on speed, accuracy, and efficiency in this and many other tasks. This is why many organizations are investing heavily in developing AI-enabled systems.
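
As a toy illustration of “training a system to perform the same task,” here is a minimal sketch that fits a model to historical market-size figures and extrapolates a year forward. The yearly numbers are invented for illustration; a real analyst model would use far richer features than a simple trend line.

```python
# Minimal sketch: learn a market-size trend from history and extrapolate.
# The figures below are invented for illustration, not a real forecast.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2012], [2013], [2014], [2015], [2016], [2017]])
market_size_usd_bn = np.array([102, 110, 117, 126, 134, 142])

model = LinearRegression().fit(years, market_size_usd_bn)
forecast_2018 = model.predict([[2018]])[0]
print(f"Projected 2018 market size: ${forecast_2018:.0f}B")
```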

Narrow AI

Have you ever encountered a situation where you’re talking to a customer service executive over chat, and wondered if you’re actually talking to a real human agent or a virtual agent/computer program?

I recently attended IPsoft’s Amelia 3.0 launch event. Amelia is an AI-powered virtual agent platform that uses advanced machine learning and deep learning techniques to get progressively better at performing tasks. In one of the more interesting demonstrations, Amelia went head-to-head with a real person in answering questions posed in natural language, processing unstructured data from documents such as Wikipedia pages in real time. It was fascinating to see Amelia answer questions with considerable accuracy.
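
One generic way to approximate this kind of question answering over unstructured text is sentence retrieval: vectorize the document’s sentences and return the one most similar to the question. The sketch below uses TF-IDF and cosine similarity purely for illustration; it is not how Amelia actually works, and the sample document text is invented.

```python
# Rough sketch: answer a natural-language question by retrieving the most
# relevant sentence from unstructured text. Generic TF-IDF illustration,
# not IPsoft's actual approach; the document text is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = (
    "Amelia is a virtual agent platform. "
    "It processes unstructured natural language documents in real time. "
    "It uses machine learning to improve at tasks over time."
)
sentences = [s.strip() for s in document.split(".") if s.strip()]

vectorizer = TfidfVectorizer()
sentence_vectors = vectorizer.fit_transform(sentences)

def answer(question):
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, sentence_vectors)[0]
    return sentences[scores.argmax()]  # best-matching sentence

print(answer("How does Amelia get better at tasks?"))
```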

Such domain-specific expert systems that can simulate human-like capacities and even outperform human expertise in specific domains are called Narrow AI.

While most AI vendors typically focus on building Narrow AI systems for a specific purpose, such as virtual agent capabilities, some large vendors, such as IBM under its Watson brand, offer multiple individual Narrow AI systems to cover a wide range of use cases. For example, Watson is being used at several top cancer hospitals in the U.S. to help with cancer research by speeding up DNA analysis in cancer patients. In the finance sector, DBS Bank in Singapore uses Watson to ensure proper advice and experience for customers of its wealth management business. And in retail, an online travel company has created a Discovery Engine that uses Watson to ingest and analyze data to better target additional offers and customize preferences for individual consumers.

True, or General, AI

Artificial intelligence with multiple, broader capabilities is called True, or General, AI. When it comes to developing General AI, which has the ability to generalize and apply learnings to unlimited new domains or unexpected situations (something humans do routinely), I think we are just scratching the surface. The primary barriers to achieving General AI are our limited understanding of everything happening inside the human brain and the technical feasibility of creating a system as sophisticated, complex, and vast as the brain. A 2017 survey of 352 researchers put a 50 percent probability on General AI arriving by around 2060.

Current lay of the land – A world of opportunities

Despite the many evolutionary, ethical, and developmental challenges researchers and technology developers continue to face in making artificial intelligence more capable and powerful, I believe that even existing AI technology presents unique opportunities for organizations. It enables them to improve customer experience and operational efficiency, enhance employee productivity, cut costs, accelerate speed-to-market, and develop more sophisticated products.

To help its clients better understand the AI technology market, Everest Group is researching this field with a lens on global services. Although we are early in our research, one fascinating use case is how AI is automating decision making, with a complete audit trail, in the heavily regulated financial services industry. The research will be published in October 2017 as part of our Service Optimization Technologies (SOT) research program, which focuses on technologies that are disrupting the global services space.

Explainable AI? Why Should Humans Decide Robots are Right or Wrong? | Sherpas in Blue Shirts

I have been a vocal proponent of making artificial intelligence (AI) systems more white box – able to “explain” why and how they came to a particular decision – than they are today. Therefore, I am heartened to see increasing debate around this. For example, Erik Brynjolfsson and Andrew McAfee, the well-known authors of the book “The Second Machine Age,” have increasingly spoken about it.
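
One common route to such “white box” explanations is to approximate a black-box model with an interpretable surrogate and read the explanation off the surrogate. The sketch below shows this generic technique on toy data; it is not tied to any particular vendor or regulation, and the shallow decision tree is only one possible choice of surrogate.

```python
# Minimal sketch of one "white box" technique: fit an interpretable
# surrogate (a shallow decision tree) to mimic a black-box model, then
# read human-readable rules off the surrogate. Toy data for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so the tree approximates how the black box decides.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's rules serve as an approximate, readable account of decisions.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```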

However, sometimes I debate with myself on various aspects of explainable AI.

What are we explaining? If we have a white box AI system, a human can understand “why” the system made a certain decision. But who decides whether or not the decision was correct? For example, in the movie “I, Robot,” the lead character (played by actor Will Smith) thought the humanoid robot should have saved a child instead of him. Was the robot wrong? The robot certainly did not think so. If humans make the “right or wrong” decision for robots, doesn’t that defeat the self-learning purpose of AI?

Why are we explaining? Humans seek explanations when they lack understanding of something or confidence in the capabilities of other humans. Similarly, we seek explanations from artificial intelligence because, at some level, we aren’t sure about the capabilities of these systems. (Of course, there’s also humans’ penchant for control.) Why? Because these are “systems,” and systems have problems. But humans have “problems” too, and what each person considers “right” is defined by their own value system, surroundings, and situation. What’s right for one person may be wrong for another. Should this contextual “rightness or wrongness” also apply to AI systems?

Who are we explaining to? Should an AI system’s outcome be analyzed by humans or by another AI system? Why do we believe we have what it takes to assess the outcome, and who gave us the right to do so? Could we become more trusting, and accept that one AI system can assess another? Perhaps, but that defeats the very purpose of the explainable AI debate.

Who do we hold responsible for the decisions made by Artificial Intelligence?

The mother lode of complexity here is responsibility. Because humans believe AI systems won’t be “responsible” in their decisions – based on individuals’ biased definitions of right and wrong – regulations are being developed to hold humans responsible for the decisions of AI systems. Of course, those humans will want an explanation from the AI system.

Regardless of which side of the argument you are on, or even if you pivot daily, there’s one thing I believe we can agree on: if we end up making better machines rather than better humans through artificial intelligence, we have defeated ourselves.

Software Eats World, AI Eats Software … Ethics Eats AI? | Sherpas in Blue Shirts

Marc Andreessen’s famous quote about software eating the world has popped up often over the last couple of years. Now, however, the fashionable and fickle technology industry is relying on artificial intelligence to drive similar interest. Most people following AI would agree that society can derive tremendous value from the technology; AI will touch most of our lives in more ways than we can imagine today. In fact, it is often hard to argue against the value AI can potentially create for society. Indeed, with the increasing noise and real development around AI, there are murmurs that AI may replace software as the default engagement model.

Artificial intelligence may replace software

Think about it: when we use our phone or Amazon Alexa to do a voice search, we simply speak, hardly using an app or software in the traditional sense. A chatbot can become a single interface to the multiple software programs that let us pay our electric, phone, and credit card bills.
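
A minimal sketch of that “single interface” idea follows: one chat entry point routing utterances to several bill-payment backends. The keyword matching is deliberately naive, and the pay_* functions are placeholders standing in for the separate software systems mentioned above.

```python
# Minimal sketch: one chatbot fronting multiple bill-payment backends.
# Intent matching is naive keyword lookup; the pay_* functions are
# placeholders for the separate software programs the text mentions.
def pay_electric_bill():
    return "Electric bill paid."

def pay_phone_bill():
    return "Phone bill paid."

def pay_credit_card():
    return "Credit card bill paid."

INTENTS = {
    "electric": pay_electric_bill,
    "phone": pay_phone_bill,
    "credit": pay_credit_card,
}

def chatbot(utterance):
    for keyword, handler in INTENTS.items():
        if keyword in utterance.lower():
            return handler()
    return "Sorry, which bill would you like to pay?"

print(chatbot("Please pay my electric bill"))
```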

Therefore, artificial intelligence replacing software as the next technology shift is quite possible. However, can we rely on AI? Or, more precisely, can we always rely upon it? A particularly concerning issue is that of bias. Indeed, there have been multiple debates around the bias an AI system can introduce.

But can AI be unbiased?

It’s true that humans have biases. As a result, we’ve established checks and balances, such as supervision and laws, to discover and mitigate them. But how would an AI system determine whether the answer it provides is neutral and free of bias? It can’t. And because of these systems’ extreme complexity, it’s almost impossible to explain why and how one arrived at a particular decision or conclusion. For example, a couple of years ago Google’s algorithms classified people of a certain demographic in a derogatory manner.

It is certainly possible that the people who design AI systems will introduce their own biases into them. Worse, AI systems may develop their own biases over time. And worse still, they cannot easily be questioned or “retaught” the correct way to arrive at a conclusion.

AI and ethics

There have already been instances of AI systems producing results they weren’t designed for. Now think about this in a business environment. For example, many enterprises will use an AI system to screen the resumes of potential candidates. How can those businesses be sure the system isn’t rejecting good candidates because of some machine bias?
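
One basic check a business could run is a disparate-impact test on the screener’s outputs. The sketch below applies the well-known “four-fifths” heuristic; the selection counts are invented for illustration, and a real audit would use the system’s actual decisions broken out by group.

```python
# Minimal sketch: disparate-impact check on a resume screener's outputs
# using the "four-fifths" rule of thumb. Counts below are invented.
def selection_rate(selected, total):
    return selected / total

group_a_rate = selection_rate(selected=60, total=100)
group_b_rate = selection_rate(selected=30, total=100)

# Ratio of the lower selection rate to the higher one.
impact_ratio = min(group_a_rate, group_b_rate) / max(group_a_rate, group_b_rate)
print(f"Impact ratio: {impact_ratio:.2f}")

if impact_ratio < 0.8:  # four-fifths heuristic threshold
    print("Warning: possible machine bias; audit the screening model.")
```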

Such a case could be considered an acceptable, genuine mistake; it could be argued that the system isn’t doing it deliberately. But what happens if these mistakes eventually harden into unethical behavior? We can pardon mistakes, but we shouldn’t pardon unethical decisions. Taking it one step further: given that these systems ideally learn on their own, will such unethical behavior multiply as time progresses?

How far-fetched is it that AI systems become so habitually unethical that users grow frustrated and abandon them? What are the chances that humanity stops developing AI further once it realizes that bias-free AI systems may be impossible? Every technology brings some evil with the good, but AI’s negative aspects could multiply very fast, and mostly without explanation. If these apprehensions scare developers away, society and business could lose AI’s tremendous potential for positive improvement, which would be even more unfortunate.

As the adoption of AI systems increases, we will likely witness more cases of wrong or unethical behavior, which will push regulators and developers to put boundaries around these systems. But therein lies a paradox: developing systems that learn on their own while putting boundaries around that learning is quite a contradiction. Still, we must overcome these challenges to exploit AI’s true potential.

What do you think?
