Category: Automation

Leap Towards General AI with Generative Adversarial Networks | Blog

AI adoption is on the rise among enterprises. In fact, the research we conducted for our AI Services State of the Market Report 2021 found that as of 2019, 72% of enterprises had embarked on their AI journey. And they’re investing in various key AI domains, including computer vision, conversational intelligence, content intelligence, and various decision support systems.

However, the machine intelligence that surrounds us today belongs to the Narrow AI domain. That means it’s equipped to tackle only a specified task. For example, Google Assistant is trained to respond to queries, while a facial recognition system is trained to recognize faces. Even seemingly complex applications of AI – like self-driving cars – fall under the Narrow AI domain.

Where Narrow AI falters

Narrow AI can process a vast array of data and complete a given task efficiently; however, it can’t replicate human intelligence: the ability to reason, make judgments, or remain aware of context.

This is where General AI steps in. General AI takes the quest to replicate human intelligence meaningfully ahead by equipping machines with the ability to understand their surroundings and context.

Exhibit 1: Evolution of AI


The pursuit of General AI

Researchers developed Deep Neural Networks (DNNs), a popular AI architecture that tries to mimic the human brain. DNNs rely on large labeled datasets to perform their function. For example, if you want a DNN to identify apples in an image, you need to provide it with enough apple images for it to learn the pattern and derive the general characteristics of an apple. It can then identify apples in any image. But can DNNs, or AI more broadly, be imaginative?

Enter GANs

Generative Adversarial Networks (GAN) bring us close to the concept of General AI by equipping machines to be “seemingly” creative and imaginative. Let’s look at how this concept works.

Exhibit 2: GAN working block diagram


GANs pit two neural networks against each other to create and refine data. The first network, the generator, creates candidate data, such as an image, from random input. The second network, the discriminator, is a classifier that scores each candidate between 0 and 1; a score of 0.4 means the probability that the generated image resembles a real image is 0.4. If the score is close to zero, the generator creates a new image, and the cycle continues until a satisfactory result is obtained.

The goal of the generator is to fool the discriminator into believing that the image being sent is indeed authentic, and the discriminator is the authority equipped to catch whether the image sent is fake or real. The discriminator acts as a teacher and guides the generator to create a more realistic generated image to pass as the real one.
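The generate-score-regenerate cycle described above can be sketched in miniature. The toy below is not a neural network: the “discriminator” is a fixed scoring function and the “generator” refines a single number by hill climbing, but the feedback loop mirrors how a discriminator’s score guides a GAN’s generator toward more realistic output. All names and values are illustrative.

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data the generator tries to imitate

def discriminator(x):
    """Score in (0, 1]: how closely x resembles the real data.
    A real neural discriminator would be learned; here it is fixed."""
    return math.exp(-abs(x - REAL_MEAN))

def train_generator(steps=2000, threshold=0.9):
    """Hill-climb the generator's output using the discriminator's score,
    mimicking the generate -> score -> regenerate cycle of a GAN."""
    sample = 0.0                      # generator's first (poor) attempt
    best = discriminator(sample)
    for _ in range(steps):
        if best >= threshold:         # "satisfactory result obtained"
            break
        candidate = sample + random.gauss(0, 0.5)
        score = discriminator(candidate)
        if score > best:              # the discriminator's feedback guides the generator
            sample, best = candidate, score
    return sample, best

sample, score = train_generator()
print(round(sample, 2), round(score, 2))
```

In a real GAN, both networks are trained jointly by gradient descent and the discriminator itself improves as the generator does; here it stays fixed purely to keep the sketch short.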

Applications around GAN

The GAN concept is being touted as one of the most advanced AI/ML developments in the last 30 years. What can it help your business do, other than create an image of an apple?

  1. Creating synthetic data for scaled AI deployments: Obtaining quality data to train AI algorithms has been a key concern for enterprise AI deployments. Even BigTech vendors such as Google, considered the home of data, struggle with it; Google’s “Project Nightingale” partnership with Ascension, for instance, created concerns around misuse of medical data. Regulations that ensure data privacy safeguard people’s interests but create a major challenge for AI: the data needed to train AI models dries up. This is where a GAN shines, as it can create synthetic data to train AI models
  2. Translations: Another use case where GANs are finding applications is in translations. This includes image to image translation, text to image, and semantic image to photo translation
  3. Content generation: GANs are also being used in the gaming industry to create cartoon characters and creatures. In fact, Google launched a pilot to utilize a GAN to create images; this will help gaming developers be more creative and productive
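As a toy illustration of point 1: a real GAN would learn the data distribution adversarially with two competing networks, while the sketch below cheats by fitting a simple normal distribution instead. It still shows the end product, synthetic records that mimic the statistics of sensitive data without reproducing any individual record. The dataset and all numbers are invented for illustration.

```python
import random
import statistics

random.seed(2)

# A tiny "real" dataset that privacy rules might prevent us from sharing:
real_ages = [34, 45, 29, 51, 38, 42, 47, 33, 40, 36]

# A GAN would *learn* this distribution adversarially; as a drastically
# simplified stand-in, we fit a normal distribution and sample from it
# to produce synthetic, shareable training records.
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# The synthetic sample preserves the broad statistics of the real data
# without containing any original record.
print(round(statistics.mean(synthetic_ages), 1))
```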

Two sides to a coin

But, GANs do come with their own set of problems:

  • A significant problem in productionizing a GAN is striking a balance between the generator and the discriminator. Too strong or too weak a discriminator leads to undesirable results: if it is too weak, it will pass all generated images as authentic, which defeats the purpose of the GAN; if it is too strong, no generated image will be able to fool it
  • The amount of computing power required to run a GAN is far greater than that needed for a generic AI model, thus limiting its use by enterprises
  • GANs, specifically the cyclic and pix2pix types, are known for their capabilities in face synthesis, face swaps, and manipulating facial attributes and expressions. These capabilities can be used to create doctored images and videos (deepfakes) that pass as authentic, making them attractive to malicious actors. For example, footage of a politician expressing grief over pandemic victims could be doctored with a GAN to show a sinister smile during the press briefing. Just imagine the backlash and public uproar that would generate. And that is just a simple example of the destructive power of GANs

Despite these problems, enterprises should be keen to adopt GANs, as they have the potential to disrupt the business landscape and create immense competitive opportunities across various verticals. For example:

  • A GAN can help the fashion and design industry create new and unique designs for high-end luxury items. It can also create imaginary fashion models, thus making it unnecessary to hire photographers and fashion models
  • Self-driving cars need data from millions of miles of driving to test their detection capabilities using computer vision. The time spent gathering that road data can be cut short with synthetic data generated through a GAN, which in turn can enable faster time to market

If you’ve utilized GANs in your enterprise or know about more use cases where GANs can be advantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!

Advancing from Artificial Intelligence to Humane Intelligence | Blog

I recently came across a news article that said doctors will NOT be held responsible for a wrong decision made based on the recommendations of an artificial intelligence (AI) system. That’s shocking and disturbing on so many levels! Think of the multitude of AI-based decision making possible in banking and financial services, the public sector, and many other industries, and the worrying implications wrong decisions could have on the lives of people and society.

One of the never-ending debates for AI adoption continues to be the ethicality and explainability concerns with the systems’ black box decision making. There are multiple dimensions to this issue:

  1. Definitional ambiguity – Trustworthy, fair and ethical, repeatable – these are all characteristics attributed to AI systems in the context of explainability. Most enterprises cite explainability as a concern, but few really know what it means or the degree to which it is required.
  2. Misplaced ownership – While AI systems can be trained, re-trained, tested, and course-corrected, no developer can guarantee bias-free or accurate decision making. So, in case of a conflict, who should be held responsible? The enterprise, the technology providers, the solution developers, or another group?
  3. Rising expectations – AI systems are being increasingly trusted with highly complex, multi-stakeholder decision-making scenarios which are contextual, subjective, open to interpretation, and require emotional intelligence.

 

Enterprises, particularly highly regulated ones, have hit a roadblock in their AI adoption journeys and scalability plans, considering the consequences of wrong decisions made with AI. In fact, one in every three AI use cases fails to reach substantial scale due to explainability concerns.

While the issue may not be a concern for all AI-based use cases, it is usually a roadblock for scenarios with high complexity and high criticality, which lead to irrevocable decisions.


In fact, Hanna Wallach, a senior principal researcher at Microsoft Research in New York City, stated, “We cannot treat these systems as infallible and impartial black boxes. We need to understand what is going on inside of them and how they are being used.”

Progress so far

Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for an internal audit of AI systems. There are many other similar efforts by opponents and proponents of AI alike; however, a feasible solution is still out of sight.

Technology majors and service providers have also made meaningful investments to address the issue, including Accenture (AI fairness Toolkit), HCL (Enterprise XAI Framework), PwC (Responsible AI), and Wipro (ETHICA). Many XAI-centric niche firms that focus only on addressing the explainability conundrum, particularly for the highly regulated industries like healthcare and public sector, also exist. Ayasdi, Darwin AI, KenSci, and Kyndi deserve a mention.

The solution focus varies from enabling enterprises to compare the fairness and performance of multiple models to enabling users to set their ethicality bars. It’s interesting to note that all of these offer bolt-on solutions that enable an explanation of the decision in a human interpretable format, but they’re not embedded explainability-based AI products.

The missing link  

Considering this is an artificial form of intelligence, let’s take a step back and analyze how humans make such complex decisions:

  • Bias-free does not exist in the real world: The first thing to appreciate is that humans are not free from biases, and biases are by nature subjective and open to interpretation.
  • Progressive decision-making approach: A key difference between humans and the machines making such decisions is the fact that even with all processes in place, humans seek help, pursue guidance in case of confusion, and discuss edge cases that are more prone to wrong decision making. Complex decision making is seldom left to one individual alone; rather, it’s a hierarchy of decision makers in play, adding knowledge on top of previous insights to build a decision tree.
  • Emotional Quotient (EQ): Humans have emotions, and even though most decisions require pragmatism, it’s the EQ in human decisions that explains the outcomes in many situations.


These are behaviors that today’s AI systems are not trained to adopt. A disproportionate focus on speed and cost has led to neglecting the human element that ensures accuracy and acceptance. And instead of addressing accuracy as a characteristic, we add another layer of complexity to AI systems with explainability.

And even if the AI system is able to explain how and why it made a wrong decision, what good does that do anyway? Who is willing to put money in an AI system that makes wrong decisions but explains them really well? What we need is an AI system that makes the right decisions, so it does not need to explain them.

AI systems of the future need to be designed with these humane elements embedded in their nature and functionality. This may include pointing out edge cases, “discussing” and “debating” complex cases with other experts (humans or other AI systems), embedding EQ in decision making, and at times even handing a decision back to humans upon encountering a new scenario where the probability of a wrong decision is higher.

But until we get there, a practical way for organizations to address these explainability challenges is to adopt a hybrid human-in-the-loop approach. Such an approach relies on subject matter experts (SMEs), such as ethicists, data scientists, regulators, domain experts, etc. to

  • Improve learning models’ outcomes over time
  • Check for biases and discrepancies
  • Ensure compliance

In this approach, instead of relying on a large training data set to build the model, the machine learning system is built iteratively with regular inputs from experts.
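A minimal sketch of this iterative, expert-in-the-loop approach, using invented data and a deliberately simple “model”: at each step the system routes the case it is least certain about (the one closest to its decision boundary) to a simulated subject matter expert, in the spirit of handing edge cases back to humans.

```python
import random

random.seed(1)

TRUE_BOUNDARY = 0.6   # the decision rule only the human expert knows

def expert_label(x):
    """Stand-in for the SME reviewing a case the model is unsure about."""
    return 1 if x >= TRUE_BOUNDARY else 0

def fit_threshold(labeled):
    """Simple 'model': midpoint between highest negative and lowest positive."""
    neg = [x for x, y in labeled if y == 0]
    pos = [x for x, y in labeled if y == 1]
    return (max(neg) + min(pos)) / 2

pool = [random.random() for _ in range(200)]   # unlabeled cases
labeled = [(0.0, 0), (1.0, 1)]                 # tiny seed set, no big training corpus

for _ in range(15):                            # iterative expert input
    threshold = fit_threshold(labeled)
    # pick the case the model is least certain about (closest to the boundary)
    uncertain = min(pool, key=lambda x: abs(x - threshold))
    pool.remove(uncertain)
    labeled.append((uncertain, expert_label(uncertain)))  # expert resolves it

threshold = fit_threshold(labeled)
print(round(threshold, 3))
```

After a handful of expert queries, the learned threshold sits close to the boundary only the expert knew, even though the model never saw a large labeled dataset up front.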


In the long run, enterprises need to build a comprehensive governance structure for AI adoption and data leverage. Such a structure will have to institute explainability norms that factor in the criticality of machine decisions, required expertise, and checks throughout the lifecycle of any AI implementation. The world of the future requires humane intelligence, not just artificial intelligence.

We would be happy to hear your thoughts on approaches to AI and XAI. Please reach out to [email protected] for a discussion.

Is It Open Season for RPA Acquisitions? | Blog

Robotic Process Automation (RPA) is a key component of the automation ecosystem and has been a rapidly growing software product category, making it an interesting space for potential acquisitions for a while now. While acquisitions in the RPA market have been happening over the last several years, three major RPA acquisitions have taken place in quick succession over the past few months: Microsoft’s acquisition of Softomotive in May, IBM’s acquisition of WDG Automation in July, and Hyland’s acquisition of Another Monday in August.

These acquisitions highlight a broader trend in which smaller RPA vendors are being acquired by different categories of larger technology market players:

  • Big enterprise tech product vendors like Microsoft and SAP
  • Service providers such as IBM
  • Larger automation vendors like Appian, Blue Prism, and Hyland.

Recent RPA acquisitions timeline:


Why is this happening?

The RPA product market has grown rapidly over the past few years, rising to about US$ 1.2 billion in software license revenues in 2019. The market seems to be consolidating, with some of the larger players continuing to gain market share. As in any such maturing market, mergers and acquisitions are a natural outcome. However, we see multiple factors in the current environment leading to this frenetic uptick in RPA acquisitions:

Acquirers’ perspective – In addition to RPA being a fast-growing market, new category acquirers – meaning big tech product vendors, service providers, and larger automation vendors – see potential in merging RPA capabilities with their own core products to provide more unified automation solutions. These new entrants will be able to build pre-packaged solutions combining RPA with other existing capabilities at lower cost. COVID-19 has created an urgency for broader automation in enterprises, and the ability to offer packaged solutions that provide a quick ROI can be a game-changer in this scenario. Additionally, the adverse impact of the pandemic on the RPA vendors’ revenues, which may have dropped their valuations down to more realistic levels, is making them more attractive for the acquiring parties.

Sellers’ perspective – There is now a general realization in the market that RPA alone is not going to cut it. RPA is the connective tissue, but you still need the larger services, big tech/Systems-of-Record and/or intelligent automation ecosystem to complete the picture. RPA vendors that don’t have the ability to invest in building this ecosystem will be looking to be acquired by larger players that offer some of these complementary capabilities. In addition, investor money may no longer be flowing as freely in the current environment, meaning that some RPA vendors will be looking for an exit.

What can we expect going forward?

The RPA and broader intelligent automation space will continue to evolve quickly, accelerated by the predictable rise in demand for automation and the changes brought on by the new entrants in the space. We expect to see the following trends in the short term:

  • More acquisitions – With the ongoing market consolidation, we expect more acquisitions of smaller automation players – including RPA, Intelligent Document Processing (IDP), process orchestration, Intelligent Virtual Agents (IVA), and process mining players – by the above-mentioned bigger categories as they seek to build more complete transformational solutions.
  • Services imperative – Scaling up automation initiatives is an ongoing challenge for enterprises, with questions lingering around bot license utilization and the ability to fill an automation pipeline. Services that can help overcome these challenges will become more critical and possibly even differentiating in the RPA space, whether the product vendors themselves or their partners provide them.
  • Evolution of the competitive landscape – We expect the market landscape to undergo considerable transformation:
    • In the attended RPA space, while there will be co-opetition among RPA vendors and the bigger tech players, the balance may end up being slightly tilted in favor of the big tech players. Consider, for instance, the potential impact if Microsoft were to provide attended RPA capabilities embedded with its Office products suite. Pure-play RPA vendors, however, will continue to encourage citizen development, as this can unearth low-hanging fruit that can serve as an entry point into the wider enterprise organization.
    • In the unattended RPA space, pure-play RPA vendors will likely have an advantage as they do not compete directly with big tech players and so can invest in solutions across different systems of record. Pure-play RPA vendors might focus their efforts here and form an ecosystem to link in missing components of intelligent automation to provide integrated offerings.

There are several open questions on how some of these dynamics will play out over time. You can expect a battle for the soul (and control) of automation, with implications for all stakeholders in the automation ecosystem. Questions remain:

  • How will enterprises approach automation evolution, by building internal expertise or utilizing external services?
  • How will the different approaches automation vendors are currently following play out – system of record-led versus platform versus best of breed versus packaged solutions?
  • Where will the balance between citizen-led versus centralized automation lie?

Only time will tell how this all plays out.

But in the meantime, we’d love to hear your thoughts. Please share them with us at [email protected], [email protected], and [email protected].

GPT-3 Accelerates AI Progress, but the Path to AGI is Going to Be Bumpy | Blog

OpenAI recently released the third generation of its Generative Pretrained Transformer, GPT-3, the largest natural language processing (NLP) model ever built. It’s fundamentally a language model: a machine learning model that can look at part of a sentence and predict the next word. It has 175 billion parameters, was pre-trained on a massive text corpus in an unsupervised manner, and can be further fine-tuned to perform specific tasks. OpenAI is an AI research organization founded in 2015 by Elon Musk, Sam Altman, and other luminaries. It describes its mission as: to discover and enact the path to safe Artificial General Intelligence (AGI).
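To make “a model that can look at part of a sentence and predict the next word” concrete, here is a deliberately tiny language model: it counts bigrams in a three-sentence corpus and predicts the most frequent successor of a word. GPT-3 does conceptually the same thing, only with 175 billion learned parameters instead of a count table. The corpus and function names are illustrative.

```python
from collections import Counter, defaultdict

# A toy corpus; a real model trains on a large slice of the internet.
corpus = (
    "the model predicts the next word . "
    "the model completes the sentence . "
    "the model predicts the next token ."
).split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))    # the most frequent successor of "the" in the corpus
```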

GPT-3 is breaking the internet

There’s been a lot of talk around the power, capabilities, and potential use cases of GPT-3 in the AI community. As the largest language model developed to date, it has the potential to advance AI as a domain. People have developed all sorts of uses – from mimicking Shakespeare, to writing prose, to designing web pages. It primarily stood out due to:

  1. Foraying into AGI. The language model isn’t trained to perform a specific task such as sentence completion or translation, which is normally the case with Artificial Narrow Intelligence (ANI), the most prevalent form of AI we have seen. Rather, GPT-3 can perform multiple tasks – answering trivia questions, translating common languages, and solving anagrams, to name a few – in a manner that is indistinguishable from a human.
  2. Advancing the zero-shot/few-shot learning mechanism in model training. In this machine learning setup, the model predicts the answer from just a task description in natural language and perhaps a few examples, meaning the algorithm can achieve accuracy without being extensively trained for a particular task. This capability opens the possibility of building lean AI models that aren’t data-intensive and don’t require humongous task-specific datasets for training.
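A few-shot setup of the kind described above largely boils down to how the input is formatted: a natural-language task description plus a handful of solved examples, with the model left to complete the final line. The sketch below assembles such a prompt as plain text; the `Input:`/`Output:` convention and the helper name are illustrative, not a specific OpenAI API.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: a natural-language task description,
    a handful of solved examples, and the new input to complete."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")   # the model is asked to predict what follows
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Translate English to French.",
    examples=[("cheese", "fromage"), ("house", "maison")],
    query="cat",
)
print(prompt)
```

The model is never fine-tuned on translation here; the two examples inside the prompt are the entire “training set,” which is what makes the mechanism few-shot.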


So, this seems nifty – what next?

GPT-3 has drastically advanced the flurry of standard NLP use cases that have been in existence for a while. Beyond these, it also has the potential to reach into more technical and creative domains, democratizing those skills by making the capabilities available to non-technical people and putting business users in control, primarily by:

  • Furthering no-code/low-code development by making code generation possible from natural language input. This is a step toward the eventual democratization of AI, making it accessible to a broader group of business users, and it has the potential to redefine job roles and the skill sets required to perform them.
  • Generating everything from simple layouts and web templates to full-blown UI designs using simple natural language input, potentially disrupting the design sphere.
  • Shortening AI timelines to market. Automated Machine Learning (AutoML) creates machine learning architectures with limited human input. The confluence of GPT-3 and AutoML has the potential to drastically reduce the time and human intervention needed to train a system, build a solution, and deploy it to the market.

GPT-3 is great, but we’re not in Space Odyssey yet

The massive language model is not without pitfalls. Its principal shortcoming is that, while it’s good at natural language tasks, it has no semantic understanding of the text. By virtue of its training, it is simply trying to complete a given sentence, no matter what the sentence means.

The second roadblock to mainstream adoption of the model is the fact that it’s riddled with societal biases in gender, race, and religion. This is because the model is trained on the internet, which brings its own set of challenges given the discourse around fake news and the post-truth world. Even OpenAI admits that its API models exhibit biases, and those can often be seen in the generated text. These biases need to be corrected before the model can be deployed in any real-world scenario.

These challenges certainly must be addressed before GPT-3 can be deployed for actual, enterprise-grade use. That said, it will potentially traverse the same trajectory that computer vision did at the start of the last decade and eventually become ubiquitous in our lives.

What are your thoughts about GPT-3? Please share with us at [email protected] and [email protected].

Integrating Customer Support Call Centers With Artificial Intelligence | Blog

Companies currently invest a lot of money in target markets to generate potential customers’ interest in their products and services. But after they achieve a sale, they often frustrate customers by not providing effective customer service support. A poor customer experience can erode the company’s brand and reputation and destroy its opportunities to increase revenue through new purchases by existing customers. These are significant problems, especially in today’s highly competitive environment, where customers make buying decisions quickly. Let us now explore the solution.

Read more in my blog on Forbes

Is Your GBS Organization Ready for IT Infrastructure Evolution to Enable Business Transformation? | Blog

A sustained focus on digital, agility, and advanced technologies is likely to prepare enterprises for the future, especially following COVID-19. Many enterprise leaders consider IT infrastructure to be the bedrock of business transformation at a time when the service delivery model has become more virtual and cloud based. This reality presents an opportunity for GBS organizations that deliver IT infrastructure services to rethink their long-term strategies to enhance their capabilities, thereby strengthening their value propositions for their enterprises.

GBS setups with strong IT infra capabilities can lead enterprise transformation

Over the past few years, several GBS organizations have built and strengthened capabilities across a wide range of IT infrastructure services. Best-in-class GBS setups have achieved significant scale and penetration for IT infrastructure delivery and now support a wide range of functions – such as cloud migration and transformation, desktop support and virtualization, and service desk – with high maturity. In fact, some centers have scaled as high as 250-300 Full Time Equivalents (FTEs) and 35-45% penetration.

At the same time, these organizations are fraught with legacy issues that need to be addressed to unlock full value. Our research reveals that most enterprises believe that their GBS’ current IT infrastructure services model is not ready to cater to the digital capabilities necessary for targeted transformation. Only GBS organizations that evolve and strengthen their IT infrastructure capabilities will be well positioned to extend their support to newer or more enhanced IT infrastructure services delivery.

The need for an IT infrastructure revolution and what it will take

The push to transform IT infrastructure in GBS setups should be driven by a business-centric approach to global business services. To enable this shift, GBS organizations should consider a new model for IT infrastructure that focuses on improving business metrics instead of pre-defined IT Service Level Agreements (SLAs) and Total Cost of Operations (TCO) management. IT infrastructure must be able to support changes ushered in by rapid device proliferation, technology disruptions, business expansions, and escalating cost pressures post-COVID-19 to showcase sustained value.

To transition to this IT infrastructure state, GBS organizations must proactively start to identify skills that have a high likelihood of being replaced / becoming obsolete, as well as emerging skills. They must also prioritize emerging skills that have a higher reskilling/upskilling potential. These goals can be achieved through a comprehensive program that proactively builds capabilities in IT services delivery.

In the exhibit below, we highlight the shelf life of basic IT services skills by comparing the upskilling/reskilling potential of IT services skills with their expected extent of replacement.

Exhibit: Analysis of the shelf life of basic IT services skills


In the near future, GBS organizations should leverage Artificial Intelligence (AI), analytics, and automation to further revolutionize their IT capabilities. The end goal is to transition to a self-healing, self-configuring system that can dynamically and autonomously adapt to changing business needs, thereby creating an invisible IT infrastructure model. This invisible IT infrastructure will be highly secure, require minimal oversight, function across stacks, and continuously evolve with changing business needs. By leveraging an automation-, analytics-, and AI-led delivery of infrastructure, operations, and services management, GBS organizations can truly enable enterprises to make decisions based on business imperatives.

If you’d like to know more about the key business transformation trends for enterprises in IT infrastructure, do read our report Exploring the Enterprise Journey Towards “Invisible” IT Infrastructure or reach out to us at [email protected] or [email protected].

Digital Experience Platform: A Key Lever for Insurer Differentiation and Growth | Blog

The challenge: insurers need an effective digital experience for their customers

As insurers cope with the impact of COVID-19 on multiple fronts, effective digital communication with their customers has become more important than ever. Transparent, relevant, and crisp customer communications are key differentiators. While insurers have historically relied on intermediaries to communicate with their customers, they have made significant movement toward direct-to-consumer communications; however, the pandemic has highlighted just how far they still have to go.

Insurers have to balance a wide variety of public-facing and back-office demands – a challenge at any time, but especially during a pandemic. On top of the ongoing work of claims intake and management, sales and distribution, new policy onboarding, and business continuity, insurers need to efficiently and effectively answer customer questions and solve problems, and proactively communicate about pandemic-related initiatives (such as premium relaxations and rebates, relief programs, and flexibility around policy renewal) – essentially, support and service their policyholders in a time of great stress.

Insurers need to arm their agents with information and content, as well as a complete view of each customer, including content from a variety of sources and analytics to pull it all together and make sense of it. To do so, insurers need to collect and consolidate customer data from multiple sources, run AI-enabled analytics, and curate digital content assets for honest, empathetic, and relevant communication with customers. From there, they need to determine how to disseminate personalized content with an omnichannel approach to reach the customers in the ways best suited to their needs.

The potential answer: the Digital Experience Platform (DXP)

Digital experience is the key lever for insurers to pull to effectively communicate with clients and prospects, to offer the desired customer experience, and to meet the challenge posed by digital-native companies and InsurTechs. A successful digital experience strategy, driven by a fit-for-purpose product that can meet customer needs, impacts and enhances the entire customer journey, and is a strategic imperative for insurers. An effective digital experience platform can provide an omnichannel personalized experience to the end customer, offer a single view of the customer, innovate service delivery, and provide a smoother experience for agents.

At the center of the digital experience solution is the Digital Experience Platform (DXP). See Exhibit 1 for Everest Group’s vision of a DXP for insurers.

Everest Group’s vision of a DXP for Insurance

 

To orchestrate insurers’ digital content management needs, the DXP should offer

  • A configurable and structured content value chain with centralized content / digital asset management and templatized content creation
  • Responsive web design with an intuitive UI/UX for consistent omnichannel content delivery
  • Effective Search Engine Optimization (SEO) and Search Engine Marketing (SEM) to identify prospective customers

To meet customer expectations and maintain high levels of customer experience, the DXP needs to

  • Provide consistent and seamless omnichannel interactions
  • Map the customer journey and offer personalized experiences leveraging meaningful customer data analytics
  • Offer self-service portals to enable policy, billing, and claims management
  • Monitor customer feedback through internal metrics and external sources and identify and rectify pain points in the customer journey

To enable agents, brokers, and advisors to have meaningful customer interactions, the DXP must

  • Generate a unified, 360-degree view of the customer through internal and external data sources
  • Optimize workflow with a digital portal that provides self-service capabilities, simplified document management, a holistic view of the customer, relevant product information, and channels to communicate with underwriters to provide quick quotes
  • Provide sales support and leverage customer data to identify effective interaction levers, cross-/ up-sell opportunities, and win probability scores to help prioritize sales
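The unified, 360-degree customer view described above is, at its core, a merge of records keyed by customer ID across systems. A minimal Python sketch of the idea, assuming hypothetical in-memory sources (`crm_system`, `policy_system`, `claims_system`) standing in for real data feeds:

```python
# Hypothetical in-memory stand-ins for an insurer's data sources;
# all system names and field names are illustrative, not a real API.
crm_system = {"C123": {"name": "Jane Doe", "preferred_channel": "email"}}
policy_system = {"C123": {"policies": ["AUTO-1", "HOME-2"]}}
claims_system = {"C123": {"open_claims": 1}}

def unified_customer_view(customer_id: str) -> dict:
    """Merge each source's record for this customer into one profile."""
    view = {"customer_id": customer_id}
    for source in (crm_system, policy_system, claims_system):
        view.update(source.get(customer_id, {}))
    return view
```

In practice, each source would be an API call or database query, and conflicting fields would need reconciliation rules rather than the simple last-writer-wins `update` shown here.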

To meet insurers’ technology needs, a DXP should offer

  • A robust Application Programmable Interface (API) framework for integration with the insurer’s systems
  • Internal and external data aggregation with AI-driven analytics to provide personalized experiences and enable agents
  • An app marketplace with insurance-focused apps and integration frameworks to enable product enhancements
  • Low-/no-code development features for minimal re/upskilling of the insurer’s workforce and enable automation to enhance operational efficiencies

The DXPs that can curate superior customer experiences, enable both customers and agents, and provide digital enablers to drive business impact lead the pack in Everest Group’s assessment of the DXP vendor landscape for insurance, illustrated in Exhibit 2.

Everest Group’s assessment of DXP vendor landscape for insurance


In our recently released report, Assessing Digital Experience Platforms in Insurance and Vendor Profiles 2020 – Building SUPER Insurance Experiences to Drive Differentiation and Growth, we take a closer look at digital experience trends in insurance and explore the impact of an effective digital experience strategy. The report includes a detailed analysis of 13 leading technology vendors on their DXPs’ capabilities and abilities to meet insurer needs.

Please feel free to reach out to [email protected] to share your experiences.

COVID-19 Business Crisis Proves Automation Matters | Blog

Consider what’s now happening at companies that made investments in automation and moving work to the cloud. They’re doing better than others in the COVID-19 pandemic. They’re more flexible under trying conditions. They’re more resilient to challenges. They are a bright spot in this awful crisis. The pandemic showed what companies invested in as preparation for challenges. Unfortunately, it also exposed companies that were less prepared. As I mentioned in my prior blog, the pandemic was like what Warren Buffet described as the tide going out, exposing naked swimmers. One fact that the COVID-19 crisis exposed is that automation matters.

Read my blog on Forbes

The Evolution of the Automation CoE Model – Why Many GBS Centers Are Adopting the Federated CoE Model | Blog

Automation CoEs in Global Business Services (GBS) centers or Shared Services Centers (SSCs) have evolved over time. Mature GBS adopters of automation have made conscious decisions around the structure and governance of their CoEs, evolving to extract maximum value from their automation initiatives. Some of the benefits they have hoped to gain from this evolution include:

  • Faster scaling
  • More efficient use of automation assets and components, such as licenses and reusable modules
  • Better talent leverage
  • Greater business impact

The typical CoE model evolution

CoE models generally evolve from a siloed model to a centralized one, and then to a federated one:

Siloed model – kick starting the journey

Most GBS centers start their automation initiatives in silos or within specific functions. In the early stages of their automation journeys, this approach enables them to gain a stronger understanding of the capabilities and benefits of automation and to achieve quick results.

However, this model has its limits, including suboptimal bot usage, low bargaining power with the vendor, lower reusability of modules and other IP, limited automation capabilities, and limited scale and scope.

The centralized model – building synergies

As automation initiatives evolve, enterprises and GBS organizations recognize the need to integrate these siloed efforts to realize more benefits, leading to the centralized model. This model enables benefits such as standard operating procedures (SOPs), better governance, higher reusability of automation assets and components, optimized usage of licenses and resources, and enforcement of best practices. It also places greater emphasis on a GBS-/enterprise-wide automation strategy, which the siloed model lacks.

However, this model, too, has limitations: it suffers from slow growth and low coverage across business units, because centralization sacrifices the flexibility, process knowledge, and ownership that individual business units bring to the bot development process.

The federated model – enabling faster scaling

The federated model addresses both of the other models’ limitations, enabling many best-in-class GBS centers to scale their automation initiatives rapidly. In this model, the CoE (the hub) handles support activities such as training resources, technology infrastructure, and governance. Individual business units or functions (the spokes) are responsible for identifying and assessing opportunities and for developing and maintaining bots. The model combines the benefits of decentralized bot development with those of centralized governance.

The federated model has some limitations, such as reduced control for the CoE hub over the bot development and testing process, and, hence, over standardization, bot quality and module reusability. However, many believe the benefits outweigh the drawbacks.

The three CoE models are described in the figure below.

Automation Adoption in GBS centers and the Rise of the Federated CoE Model

The table below compares the three models on various parameters.

Comparison of the salient features, benefits, and limitations of each CoE model

Why GBS organizations are migrating to the federated model

There are several reasons why GBS centers are moving to the federated model, as outlined below.

  • The federated model helps to better leverage subject matter expertise within a business unit. With bot development activity taking place within the BU, the federated model ensures better identification of automation opportunities, agile development, and reduced bot failures
  • The federated model leads to efficient resource usage. Centralizing support activities ensures efficient use of resources (human, technology, reusable modules, licenses, and so on), standardization, and clear guidance to individual business units
  • The federated model facilitates development and sharing of automation capabilities and best practices, which helps in the amassing of standardized IP and tacit knowledge important for rapid automation scaling

Federated model case study

A leading global hardware and technology firm’s GBS center adopted the federated CoE model in 2017, with the center housing the CoE hub. In the three years since, the program has grown to over 400 bots across more than 20 business units in a wide variety of locations and has saved more than $25 million through automation initiatives. The CoE hub has also successfully trained over 1,000 FTEs from technical and business backgrounds on bot development. As a result, firm-wide enthusiasm for and involvement in the GBS center’s automation journey is high.

Transitioning to a federated CoE model has helped many GBS programs scale their automation initiatives rapidly. For more details, see our report, Scaling Up the Adoption of Automation Solutions – The Evolving Role of Global In-house Centers or reach out to Bharath M  or Param Dhar for more information on this topic.

The UK’s Perfect Storm: Brexit, EU Workers Returning Home, IR35 Changes, and Coronavirus | Blog

Businesses in the UK are facing a spate of challenges; there’s the specter of new Brexit-driven red tape on trade, a staffing shortage as some EU workers are returning to their home countries, and UK changes to the IR35 contract worker tax legislation, which is making it very difficult for companies to hire contractors. A Coronavirus pandemic could be the final straw that breaks businesses’ backs. Let’s face it – there is a perfect storm ahead.

With Brexit and the EU trade negotiations still going on, there is little certainty about the red tape that businesses will face in order to trade with each other across the English Channel. Yet, with the transition period set to end on 31 December 2020, there is little time for businesses to prepare for whatever the new trade requirements may ultimately be.

Because adherence to the as-yet-unclear regulations will increase businesses’ workloads, a natural response would be to hire more staff. But unemployment is at a record low, and many skilled EU workers are leaving the UK and returning to their home countries. Furthermore, the UK Office of National Statistics (ONS) reports that EU immigration to the UK is at an all-time low.

The HMRC’s new IR35 rules, which come into effect in April 2020, are exacerbating the problem. Many companies have had to adopt no-contractor hiring policies and cannot fill temporary vacancies. They are already feeling the impact of the regulation. If they can’t hire staff or contractors, where are companies going to find resources to handle the extra workload of trade red tape?

Additionally, widespread cases of the Coronavirus could lead to prolonged periods of sick leave, further reducing the number of staff available to help with the increased workload of trading with the EU. While cases are still few and far between in the UK, the impact of the disease’s spread in China has been significant. Empty offices and factories in Chinese cities and manufacturing heartlands are already leading to a shortage of parts for cars and other products that are much in demand in the UK.

Clearly, UK businesses are facing a perfect storm.

Investing in digital and Intelligent Automation (IA) technologies can help them tackle some red tape issues, particularly if they use IA for what I call Red Tape Automation (RTA). This could mean automating compliance form-filling and reporting requirements, weights and measures conversions, or changes to transaction or product-related data that must be synchronized across multiple systems, such as those used for sales and revenue, to record value-added taxes and other duties. Companies that trade with both EU and non-EU countries could automate the red tape for all of them, using rules engines to fill in the right forms and apply the correct rates.
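The rules-engine idea can be sketched in a few lines of Python. Everything in this example is hypothetical: the form codes, duty rates, and the country list are invented placeholders, not real customs requirements.

```python
# First-match rules engine: pick the customs form and duty rate for a
# shipment by destination. Form names and rates are invented examples.
EU_COUNTRIES = {"FR", "DE", "NL", "IE"}

RULES = [
    {"when": lambda d: d["destination"] in EU_COUNTRIES,
     "form": "EU-DECL", "duty_rate": 0.00},
    {"when": lambda d: True,  # default rule: rest of world
     "form": "ROW-DECL", "duty_rate": 0.05},
]

def route_shipment(declaration: dict) -> dict:
    """Apply the first matching rule, filling in the form and duty rate."""
    for rule in RULES:
        if rule["when"](declaration):
            return {**declaration,
                    "form": rule["form"],
                    "duty_rate": rule["duty_rate"]}
    raise ValueError("no rule matched")
```

A real RTA deployment would load such rules from a maintained regulatory source rather than hard-coding them, so that updated trade requirements flow through without redeploying the automation.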

IA is not a perfect solution, as people will be needed to implement technology, and there is a growing talent shortage. Nonetheless, UK businesses will be well served by investing in learning the art of the possible with IA. While the final details of any trade deals with the EU, or new deals with the rest of the world, will not be known for a while, knowing how to implement the requirements quickly using IA can help them weather the impending storm.

For more information about IA, please check out Everest Group’s Service Optimization Technologies research.

Visit our COVID-19 resource center to access all our COVID-19-related insights.
