Tag: artificial intelligence

Insurer of the Future Will Use Technology to Shift from ‘Insuring Loss’ to ‘Ensuring Protection’ | Press Release

Also, bundling of insurance with products and services across industries is on the rise, creating a new breed of ‘invisible insurer’

Everest Group reports that two major trends are shaping the future of insurance, driving insurers to redefine their business strategies and IT outsourcing engagements. First, insurers are moving from passively “insuring loss” and managing claims to proactively “ensuring protection” for customers. Second, insurers are bundling insurance with products and services across a broad spectrum of industries, with insurers themselves increasingly becoming invisible to the end customer.

To navigate these shifts, the insurer of the future will leverage next-generation technologies such as analytics, artificial intelligence (AI), blockchain and IoT.

For example:

  • Insurers are already using IoT to “ensure protection” by monitoring homes (smoke and carbon monoxide alarms), cars (mileage, driving patterns), persons (exercise, lifestyle patterns), and businesses (boilers and water systems).
  • Insurers will use digital channels to sell hyper-personalized products; for instance, an insurer’s mobile application might use geo-location data to offer travel or airline insurance to a customer who is at the airport.
  • Insurance will be bundled with products and services. Market leaders have already begun bundling insurance with remittance transfers and real estate purchases. As this trend expands across all industries, insurers will become “invisible” to the end user.
  • Insurers will tap the connected ecosystem to underwrite risk in near real time. Analytics and AI will be used on real-time as well as historical data to assess risk, generate quotes, negotiate premiums and execute smart contracts (see the sketch after this list).
  • Policy administration processes are already shifting from being highly manual and paper based to automated and digitized, and this will rise to another level as connected systems, blockchain and machine learning applications mature.
  • Machine learning will transform a highly manual, reactive claims management process to an automated, proactive one, and blockchain technology is showing transformative potential for use cases such as claims validation, fraud detection and prevention, and automated claims payments.
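
To make the near-real-time underwriting idea above concrete, here is a minimal, illustrative sketch: a model trained on historical telematics and loss data scores an incoming IoT reading and converts the expected loss into a quote. The feature names, synthetic data, and pricing loading are hypothetical assumptions, not any insurer’s actual method.

```python
# Minimal sketch of near-real-time, risk-based quoting (illustrative only).
# The feature names, synthetic data, and pricing loading are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical history: [avg_speed, hard_brakes_per_100km, night_driving_pct]
X_hist = rng.uniform([40, 0, 0], [130, 20, 60], size=(1000, 3))
# Synthetic "observed annual loss" that rises with riskier behavior
y_loss = (200 + 3 * X_hist[:, 0] + 40 * X_hist[:, 1]
          + 5 * X_hist[:, 2] + rng.normal(0, 50, 1000))

model = GradientBoostingRegressor().fit(X_hist, y_loss)

def quote_premium(telematics_reading, loading=1.25):
    """Score one incoming IoT reading and convert expected loss to a premium."""
    expected_loss = model.predict([telematics_reading])[0]
    return round(expected_loss * loading, 2)

print(quote_premium([95, 8, 30]))  # quote for a moderately risky driver
```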

These are just a few ways technology will play a major role in the transformation of the insurance industry over the next decade as insurers shift focus from “insuring loss” to “ensuring protection” and as the bundling of insurance with products and services becomes more prevalent. Today, insurers have begun collaborating with IT service providers to build and evaluate proof-of-concepts, develop customized solution offerings, and test use cases in innovation labs and centers of excellence.

“We’re already seeing evidence of insurers beginning to embrace their future reality,” said Jimit Arora, partner and leader of the IT Services research practice at Everest Group. “For instance, insurers are moving from long-term, traditional IT outsourcing projects to short-term, digital projects. These projects allow insurers to adopt emerging technologies, reduce time-to-market, and improve customer experience. We’re also seeing a steep rise in demand for artificial intelligence, blockchain, IoT, and automation in the scope of insurance IT outsourcing contracts.”

Insurance constitutes a 30-35 percent share of the overall US$142 billion BFSI (banking, financial services and insurance) IT outsourcing industry. Over the past four years, the insurance ITO market has grown at a CAGR of 5.3 percent, and going forward it is expected to grow at a steady rate of 4-6 percent. Digital services components were included in 46 percent of the 348 insurance ITO deals signed in 2016 and analyzed by Everest Group. Deals with automation in scope increased by nearly 250 percent, while deals involving IoT, blockchain and AI nearly tripled.
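
As a quick sanity check on those growth figures, the CAGR arithmetic can be worked through in a few lines. The starting market size below is an illustrative index, not an Everest Group figure.

```python
# Working through the standard CAGR formula with the 5.3% figure above.
start_size = 100.0                # indexed starting market size (illustrative)
cagr = 0.053
years = 4
end_size = start_size * (1 + cagr) ** years
print(round(end_size, 1))         # ~122.9, i.e., ~23% cumulative growth

# Conversely, recovering the CAGR from start and end values:
implied_cagr = (end_size / start_size) ** (1 / years) - 1
print(round(implied_cagr, 3))     # 0.053
```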

These findings and more are explored in detail in Everest Group’s recently published report, “Insurer of the Future: Insurance ITO Annual Report 2018.” The report explores key trends in the insurance industry and their implications for application services outsourcing.


Why Invest in Artificial Intelligence (AI)? | Sherpas in Blue Shirts

“Facebook shuts down robots after they invent their own language.” This headline was splashed across myriad news outlets just a few weeks ago. And although the story itself revealed the event to be little more than a routine research experiment, this type of alarming tone in media reports is becoming the norm and is sowing seeds of doubt, fear, and uncertainty among consumers and even some businesses.

However, behind the vendor hype and the media fear mongering, there are real, bona fide reasons for organizations to invest in artificial intelligence (AI).

Humans can perform various expert tasks with relevant training and experience. For example, a research analyst with training and experience in market research can predict future market size and growth with considerable accuracy. Using machine learning, a system can be trained to perform the same task. And with their enormous computational power, such expert systems can beat humans’ speed, accuracy, and efficiency in this and many other tasks. This is why many organizations are investing heavily in developing AI-enabled systems.
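
As a toy illustration of that point, the sketch below trains a simple regression model on historical market observations and forecasts the next period, much as a trained analyst would. The features, data, and figures are entirely hypothetical.

```python
# Minimal sketch of machine learning performing an "expert task":
# forecasting market size from historical observations (hypothetical data).
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per year: [year_index, gdp_growth_pct, it_spend_index]
X = np.array([[0, 2.1, 100], [1, 2.4, 104], [2, 2.0, 109],
              [3, 2.6, 115], [4, 2.3, 121]])
y = np.array([130.0, 134.5, 138.2, 143.9, 149.1])  # market size, US$ billion

model = LinearRegression().fit(X, y)
print(round(model.predict([[5, 2.5, 127]])[0], 1))  # next-period forecast
```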

Narrow AI

Have you ever encountered a situation where you’re talking to a customer service executive over chat, and wondered if you’re actually talking to a real human agent or a virtual agent/computer program?

I recently attended IPsoft’s Amelia 3.0 launch event. Amelia is an AI-powered virtual agent platform that uses advanced machine learning and deep learning techniques to get progressively better at performing tasks. In one of the more interesting demonstrations, Amelia went head-to-head with a real person, answering questions posed in natural language by processing unstructured data from sources such as Wikipedia pages in real time. It was fascinating to see Amelia answer questions with considerable accuracy.

Such domain-specific expert systems that can simulate human-like capacities and even outperform human expertise in specific domains are called Narrow AI.

While most AI vendors typically focus on building Narrow AI systems for a specific purpose, such as virtual agent capabilities, some large vendors offer multiple individual Narrow AI systems to cover a wide range of use cases. IBM’s Watson brand is a prime example: Watson is being used at several top cancer hospitals in the U.S. to help with cancer research by speeding up DNA analysis in cancer patients. In the finance sector, DBS Bank in Singapore uses Watson to ensure proper advice and experience for customers of its wealth management business. And in retail, an online travel company has created a Discovery Engine that uses Watson to take in and analyze data to better link additional offers and customize preferences for individual consumers.

True, or General, AI

Artificial intelligence with multiple and broader capabilities is called True, or General, AI. When it comes to developing General AI – which has the ability to generalize and apply learnings to unlimited new domains or unexpected situations, something humans do routinely – I think we are just scratching the surface. The primary barriers to achieving General AI are our incomplete understanding of everything happening inside the human brain and the technical feasibility of creating a system as sophisticated, complex, and vast as the brain. According to a survey of 352 researchers published in 2017, there is a 50 percent probability that General AI will arrive by around the year 2060.

Current lay of the land – A world of opportunities

Despite the many evolutionary, ethical, and developmental challenges researchers and technology developers continue to face in making artificial intelligence more capable and powerful, I believe that even existing AI technology presents unique opportunities for organizations. It enables them to improve customer experience and operational efficiency, enhance employee productivity, cut costs, accelerate speed-to-market, and develop more sophisticated products.

To help its clients better understand the AI technology market, Everest Group is researching this field with a lens on global services. Although we are early in our research, one fascinating use case is how AI is automating decision making with a complete audit trail in the heavily regulated financial services industry. The research will be published in October 2017 as part of our research program, Service Optimization Technologies (SOT), which focuses on technologies that are disrupting the global services space.

Explainable AI? Why Should Humans Decide Robots Are Right or Wrong? | Sherpas in Blue Shirts

I have been a vocal proponent of making artificial intelligence (AI) systems more white box – able to “explain” why and how they came to a particular decision – than they are today. Therefore, I am heartened to see increasing debate around this. For example, Erik Brynjolfsson and Andrew McAfee, the well-known authors of the book “The Second Machine Age,” have increasingly spoken about this.

However, sometimes I debate with myself on various aspects of explainable AI.

What are we explaining? If we have a white box AI system, a human can understand “why” the system made a certain decision. But who decides whether or not the decision was correct? For example, in the movie “I, Robot,” the lead character (played by actor Will Smith) thought the humanoid robot should have saved a child instead of him. Was the robot wrong? The robot certainly did not think so. If humans make the “right or wrong” decision for robots, doesn’t that defeat the self-learning purpose of AI?

Why are we explaining? Humans seek explanation when they lack understanding of something or confidence in the capabilities of other humans. Similarly, we seek explanation from artificial intelligence because, at some level, we aren’t sure about the capabilities of these systems. (Of course, there’s also the human tendency to want control.) Why? Because these are “systems” and systems have problems. But humans also have “problems,” and what each individual person considers “right” is defined by their own value system, surroundings, and situations. What’s right for one person may be wrong for another. Should this contextual “rightness or wrongness” also apply to AI systems?

To whom are we explaining? Should an AI system’s outcome be analyzed by humans or by another AI system? Why do we believe we have what it takes to assess the outcome, and who gave us the right to do so? Could we become more trusting and let one AI system assess another? Perhaps, but that defeats the very premise of the explainable AI debate.

Who do we hold responsible for the decisions made by Artificial Intelligence?

The mother lode of complexity here is responsibility. Because humans believe that AI systems won’t be “responsible” in their decisions – a belief rooted in each individual’s own definition of what is right or wrong – regulations are being developed to hold humans responsible for the decisions of AI systems. Of course, these humans will want an explanation from the AI system.

Regardless of which side of the argument you are on, or even if you pivot daily, there’s one thing I believe we can agree on: if we end up making better machines rather than better humans through artificial intelligence, we have defeated ourselves.

Software Eats World, AI Eats Software … Ethics Eats AI? | Sherpas in Blue Shirts

Marc Andreessen’s famous quote about software eating the world has popped up often in the last couple of years. However, the fashionable and fickle technology industry is now relying on artificial intelligence to drive similar interest. Most people following AI would agree that society can derive tremendous value from the technology. AI will impact most of our lives in more ways than we can imagine today. In fact, it is often hard to argue against the value AI can potentially create for society. Indeed, with the increasing noise and real development around AI, there are murmurs that AI may replace software as the default engagement model.

Artificial intelligence may replace software

Think about it. When we use our phone or Amazon Alexa to do a voice search, we simply speak, hardly using the app or software in the traditional sense. A chatbot can become a single interface for multiple software programs that allow us to pay our electric, phone, and credit card bills.
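
A minimal sketch of that idea follows: a single conversational entry point routing a natural-language request to one of several bill-payment back ends. The keyword matching stands in for a real NLU model, and the payment functions are hypothetical placeholders for provider APIs.

```python
# Minimal sketch of a chatbot as a single interface over several back ends.
# Keyword matching stands in for a real NLU model; functions are placeholders.
def pay_electric_bill(amount): return f"Paid ${amount} to the electric utility"
def pay_phone_bill(amount):    return f"Paid ${amount} to the phone carrier"
def pay_credit_card(amount):   return f"Paid ${amount} toward the credit card"

INTENT_ROUTES = {
    "electric": pay_electric_bill,
    "phone": pay_phone_bill,
    "credit": pay_credit_card,
}

def handle_utterance(utterance, amount):
    """Route a natural-language request to the matching back-end service."""
    for keyword, action in INTENT_ROUTES.items():
        if keyword in utterance.lower():
            return action(amount)
    return "Sorry, I couldn't tell which bill you want to pay."

print(handle_utterance("Please pay my phone bill", 42.50))
```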

Therefore, it is quite possible that artificial intelligence will be the next technology shift, replacing software. However, can we rely on AI? Or, more precisely, can we always rely on it? A particularly concerning issue is bias. Indeed, there have been multiple debates around the bias an AI system can introduce.

But can AI be unbiased?

It’s true that humans have biases. As a result, we’ve established checks and balances, such as superiors and laws, to discover and mitigate them. But how would an AI system determine whether the answer it is providing is neutral and bereft of bias? It can’t, and because of their extreme complexity, it’s almost impossible to explain why and how an AI system arrived at a particular decision or conclusion. For example, a couple of years ago Google’s algorithms classified people of a certain demographic in a derogatory manner.

It is certainly possible that the people who design AI systems may introduce their own biases into them. Worse, AI systems may, over time, start developing their own biases. And worse still, they cannot even be questioned or “retaught” the correct way to arrive at a conclusion.

AI and ethics

There have already been instances in which AI systems produced results they weren’t even designed for. Now think about this in a business environment. For example, many enterprises will leverage an AI system to screen the resumes of potential candidates. How can those businesses be sure the system isn’t rejecting good candidates due to some machine bias?
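
One way a business might probe for such bias is to compare the model’s selection rates across candidate groups, as in the “four-fifths rule” used in US hiring guidance. A minimal sketch follows; the outcome data is hypothetical.

```python
# Minimal bias check for a resume-screening model: compare selection rates
# across groups (four-fifths rule). The outcome data below is hypothetical.
from collections import Counter

# (group, screened_in) pairs produced by a hypothetical screening model
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", False), ("B", True), ("B", False), ("B", False)]

totals, selected = Counter(), Counter()
for group, screened_in in outcomes:
    totals[group] += 1
    selected[group] += screened_in

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))  # below 0.8 flags potential disparate impact
```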

A case of this type could be considered an acceptable, genuine mistake, and it could be argued that the system isn’t erring deliberately. However, what happens if these mistakes eventually turn into unethical behavior? We can pardon mistakes, but we shouldn’t do the same for unethical decisions. Taking it one step further, given that these systems ideally learn on their own, will their unethical behavior multiply as time progresses?

How far-fetched is it that AI systems become so habitually unethical that users abandon them in frustration? What are the chances that humanity stops developing AI systems further when it realizes that it’s not possible to create AI systems without biases? While every technology brings a level of evil with the good, AI’s negative aspects could multiply very fast, and mostly without explanation. If these apprehensions scare developers away, society and business could lose out on AI’s tremendous potential for positive improvement. That would be even more unfortunate.

As the adoption of AI systems increases, we will likely witness more cases of wrong or unethical behavior. This will push regulators and developers to fundamentally question these systems and put boundaries around them. But therein lies a paradox: developing systems that learn on their own while putting boundaries around that learning. Nevertheless, we must overcome these challenges to exploit the true potential of AI.

What do you think?

AI: Democratization or Dumbing of Creativity? Pick your Cause | Sherpas in Blue Shirts

The earlier assumption that artificial intelligence (AI) would impact “routine” jobs first no longer holds. Indeed, we might be deluding ourselves in thinking that the time when AI becomes the most used interface for engaging with technology systems is far in the future. But I’m getting ahead of myself. Let’s look at what’s happening today in the creative sector.

IBM Watson was used last year to create a 20th Century Fox movie trailer. Adobe Sensei is putting the digital experience in the hands of non-professionals. Google is leveraging AI with AutoDraw to “help everyone create anything visual, fast.” Think about any one of us taking a picture and then simply telling photo editing software what to do – no need to work with complex brushes, paints, or an understanding of color patterns.

AI and creative talent

This is scary for creative people such as graphic designers, digital artists, and others who may consider artificial intelligence a job-killing replacement for their skills, despite pundits’ claims that it will “augment” human capabilities. Skeptics counter that AI systems will, at best, reduce the overhead creative people deal with. But removing overhead is just the first step; the next step is surely the creative work itself. Moreover, AI systems can ingest vast amounts of data and will increasingly rely on unsupervised learning to correlate behavioral traits on a scale no human artist can possibly fathom, producing compelling creative content. Thus, these artists will begin leveraging AI systems to create exceptional, “unthinkable” user experiences that have no “baseline” of reference – but may soon be replaced by these very systems.

AI and businesses

Businesses have always struggled to hire highly skilled creative professionals, and have paid through the nose to secure and retain them. They will be extremely pleased to leverage artificial intelligence to take charge of creativity – driving messages and communication to their end users rather than relying on creative experts. As the intent of most AI systems is to enable non-specialists to perform many tasks by “hiding what is under the hood,” businesses might not need as many specialist human creative skills.

However, despite this seeming upside of AI systems, their “under the hood” nature will create problems of accountability. The complexity of their deep learning models and neural networks will become such that even the teams developing the systems won’t be able to explain the specific decisions those systems make. Thus, if something goes wrong, where should the blame be placed? With the creation team, or with the AI system itself? The system won’t care – after all, it’s just technology – and the team members will argue that the system learned by itself, far beyond what was coded, and that they cannot be held accountable for its misdeeds. This is scary for businesses.

AI and the impact beyond business

Imagine the impact this will have on society. Although you can trace back how any other technological system arrives at an answer, AI systems that are now supposed to run not only social infrastructure but also much of our entire lives won’t be accountable to anyone! This is scary for everyone.

I don’t want to add to the alarmist scare going on in the industry, but I cannot be blind to the successful use cases I witness daily. People will argue that AI has a long way to go before it becomes a credible alternative to human-based creativity. But the reality is that the road may not be as long as is generally perceived.
