IBM announced this week that it is spinning off its legacy Managed Infrastructure business into a new public company, thus creating two independent companies. I highly endorse this move and, in fact, advocated it for years. IBM is a big, successful, proud organization. But it has been apparent for years that it faced significant challenges in trying to manage two very different businesses and operate within two very different operating models.
Robotic Process Automation (RPA) is a key component of the automation ecosystem and has been a rapidly growing software product category, making it an interesting space for potential acquisitions for a while now. While acquisitions in the RPA market have been happening over the last several years, three major RPA acquisitions have taken place in quick succession over the past few months: Microsoft’s acquisition of Softomotive in May, IBM’s acquisition of WDG Automation in July, and Hyland’s acquisition of Another Monday in August.
These acquisitions highlight a broader trend in which smaller RPA vendors are being acquired by different categories of larger technology market players:
Big enterprise tech product vendors like Microsoft and SAP
Service providers such as IBM
Larger automation vendors like Appian, Blue Prism, and Hyland
Recent RPA acquisitions timeline:
Why is this happening?
The RPA product market has grown rapidly over the past few years, rising to about US$ 1.2 billion in software license revenues in 2019. The market seems to be consolidating, with some of the larger players continuing to gain market share. As in any such maturing market, mergers and acquisitions are a natural outcome. However, we see multiple factors in the current environment leading to this frenetic uptick in RPA acquisitions:
Acquirers’ perspective – In addition to RPA being a fast-growing market, new-category acquirers – meaning big tech product vendors, service providers, and larger automation vendors – see potential in merging RPA capabilities with their own core products to provide more unified automation solutions. These new entrants will be able to build pre-packaged solutions combining RPA with other existing capabilities at lower cost. COVID-19 has created an urgency for broader automation in enterprises, and the ability to offer packaged solutions that provide a quick ROI can be a game-changer in this scenario. Additionally, the pandemic’s adverse impact on RPA vendors’ revenues, which may have brought their valuations down to more realistic levels, makes them more attractive to acquirers.
Sellers’ perspective – There is now a general realization in the market that RPA alone is not going to cut it. RPA is the connective tissue, but you still need the larger services, big tech/Systems-of-Record and/or intelligent automation ecosystem to complete the picture. RPA vendors that don’t have the ability to invest in building this ecosystem will be looking to be acquired by larger players that offer some of these complementary capabilities. In addition, investor money may no longer be flowing as freely in the current environment, meaning that some RPA vendors will be looking for an exit.
What can we expect going forward?
The RPA and broader intelligent automation space will continue to evolve quickly, accelerated by the predictable rise in demand for automation and the changes brought on by the new entrants in the space. We expect to see the following trends in the short term:
Services imperative – Scaling up automation initiatives is an ongoing challenge for enterprises, with questions lingering around bot license utilization and the ability to fill an automation pipeline. Services that can help overcome these challenges will become more critical and possibly even differentiating in the RPA space, whether the product vendors themselves or their partners provide them.
Evolution of the competitive landscape – We expect the market landscape to undergo considerable transformation:
In the attended RPA space, while there will be co-opetition among RPA vendors and the bigger tech players, the balance may end up being slightly tilted in favor of the big tech players. Consider, for instance, the potential impact if Microsoft were to provide attended RPA capabilities embedded with its Office products suite. Pure-play RPA vendors, however, will continue to encourage citizen development, as this can unearth low-hanging fruit that can serve as an entry point into the wider enterprise organization.
In the unattended RPA space, pure-play RPA vendors will likely have an advantage as they do not compete directly with big tech players and so can invest in solutions across different systems of record. Pure-play RPA vendors might focus their efforts here and form an ecosystem to link in missing components of intelligent automation to provide integrated offerings.
There are several open questions on how some of these dynamics will play out over time. You can expect a battle for the soul (and control) of automation, with implications for all stakeholders in the automation ecosystem. Questions remain:
How will enterprises approach automation evolution, by building internal expertise or utilizing external services?
How will the different approaches automation vendors are currently following play out – system of record-led versus platform versus best of breed versus packaged solutions?
Where will the balance between citizen-led versus centralized automation lie?
OpenAI recently released the third generation of the Generative Pretrained Transformer, or GPT-3, the largest natural language processing (NLP) model ever built. It’s fundamentally a language model, a machine learning model that can look at part of a sentence and predict the next word. It has 175 billion parameters, was pre-trained in an unsupervised manner, and can be further fine-tuned to perform specific tasks. OpenAI is an AI research organization founded in 2015 by Elon Musk, Sam Altman, and other luminaries. It describes its mission as: to discover and enact the path to safe Artificial General Intelligence (AGI).
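To make the core idea concrete, the sketch below is a toy next-word predictor, nothing like GPT-3’s architecture or scale: a simple bigram model over a hand-made corpus (the corpus and counts are invented for illustration) that returns the most frequent word seen following a given word.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus -- a real language model trains on billions of tokens
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model)
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most frequent follower of "the"
```

GPT-3 does something conceptually similar, but with a transformer network conditioning on a long context rather than a single preceding word.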
GPT-3 is breaking the internet
There’s been a lot of talk around the power, capabilities, and potential use cases of GPT-3 in the AI community. As the largest language model developed to date, it has the potential to advance AI as a domain. People have developed all sorts of uses – from mimicking Shakespeare, to writing prose, to designing web pages. It primarily stood out due to:
Foraying into AGI. The language model isn’t trained to perform a specific task such as sentence completion or translation, which is normally the case with Artificial Narrow Intelligence (ANI), the most prevalent form of AI we have seen. Rather, GPT-3 can perform multiple tasks such as answering trivia questions, translating common languages, and solving anagrams, to name a few, in a manner that is indistinguishable from a human.
Advancing the zero-shot/few-shot learning mechanism in model training. This mechanism is a setup in machine learning wherein the model predicts the answer from just a task description in natural language and/or a few examples, implying that the algorithm can achieve accuracy without being extensively trained for a particular task. This capability opens up the possibility of building lean AI models that aren’t as data-intensive and don’t require humongous task-specific datasets for training.
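The few-shot setup is easiest to see as a prompt: the task description and a handful of worked examples are simply concatenated into the input text, with no retraining or gradient updates. The sketch below is a minimal illustration of that prompt structure; the `Input:`/`Output:` format and the translation examples are our own assumptions, not OpenAI’s API.

```python
def build_few_shot_prompt(task, examples, query):
    """Concatenate a task description, worked examples, and the new query
    into a single prompt string -- the model itself is never retrained."""
    lines = [task, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
    # The model is expected to continue the text after the final "Output:"
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

Zero-shot is the same idea with an empty example list: only the task description and the query are supplied.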
So, this seems nifty – what next?
In addition to drastically advancing the standard NLP use cases that have existed for a while, GPT-3 has the potential to extend into more technical and creative domains. This would democratize such skills by making these capabilities available to non-technical people and putting business users in control, primarily by:
Furthering no-code/low-code by making code generation possible from natural language input. This is a step toward the eventual democratization of AI, making it accessible to a broader group of business users, and it has the potential to redefine job roles and the skill sets required to perform them.
Generating simple layouts and web templates to full-blown UI designs, using simple natural language input, potentially creating disruption in the design sphere.
Shortening AI timelines to market. Automated Machine Learning (AutoML) creates machine learning architectures with limited human input. The confluence of GPT-3 and AutoML has the potential to drastically reduce the time it takes to bring AI solutions to production. It will take significantly less time and human intervention to train a system and build a solution, thereby reducing the amount of time needed to deploy an AI solution in the market.
The massive language model is not without pitfalls. Its principal shortcoming is that, while it’s good at natural language tasks, it has no semantic understanding of the text. It is, by virtue of its training, just trying to complete a given sentence, no matter what the sentence means.
The second roadblock to mainstream adoption of the model is the fact that it’s riddled with societal biases in gender, race, and religion. This is because the model is trained on the internet, which brings its own set of challenges given the discourse around fake news and the post-truth world. Even OpenAI admits that its API models exhibit biases, and those can often be seen in the generated text. These biases need to be corrected before the model can be deployed in any real-world scenario.
These challenges certainly must be addressed before it can be deployed for actual, enterprise-grade use. That said, GPT-3 will potentially follow the same trajectory that computer vision traced at the start of the decade to eventually become ubiquitous in our lives.
Companies currently invest a lot of money in target markets to generate potential customers’ interest in products and services. But after they achieve a sale, they often frustrate customers by not providing effective customer service support. A poor customer experience can erode the company’s brand and reputation and destroy its opportunities to increase revenue through new purchases by those existing customers. These are significant problems, especially in today’s highly competitive environment, where customers make buying decisions quickly. Let us now explore the solution.
A sustained focus on digital, agility, and advanced technologies is likely to prepare enterprises for the future, especially following COVID-19. Many enterprise leaders consider IT infrastructure to be the bedrock of business transformation at a time when the service delivery model has become more virtual and cloud based. This reality presents an opportunity for GBS organizations that deliver IT infrastructure services to rethink their long-term strategies to enhance their capabilities, thereby strengthening their value propositions for their enterprises.
GBS setups with strong IT infra capabilities can lead enterprise transformation
Over the past few years, several GBS organizations have built and strengthened capabilities across a wide range of IT infrastructure services. Best-in-class GBS setups have achieved significant scale and penetration for IT infrastructure delivery and now support a wide range of functions – such as cloud migration and transformation, desktop support and virtualization, and service desk – with high maturity. In fact, some centers have scaled as high as 250-300 Full Time Equivalents (FTEs) and 35-45% penetration.
At the same time, these organizations are fraught with legacy issues that need to be addressed to unlock full value. Our research reveals that most enterprises believe that their GBS’ current IT infrastructure services model is not ready to cater to the digital capabilities necessary for targeted transformation. Only GBS organizations that evolve and strengthen their IT infrastructure capabilities will be well positioned to extend their support to newer or more enhanced IT infrastructure services delivery.
The need for an IT infrastructure revolution and what it will take
The push to transform IT infrastructure in GBS setups should be driven by a business-centric approach to global business services. To enable this shift, GBS organizations should consider a new model for IT infrastructure that focuses on improving business metrics instead of pre-defined IT Service-Level Agreements (SLAs) and Total Cost of Operations (TCO) management. IT infrastructure must be able to support changes ushered in by rapid device proliferation, technology disruptions, business expansions, and escalating cost pressures post-COVID-19 to showcase sustained value.
To transition to this IT infrastructure state, GBS organizations must proactively start to identify skills that have a high likelihood of being replaced / becoming obsolete, as well as emerging skills. They must also prioritize emerging skills that have a higher reskilling/upskilling potential. These goals can be achieved through a comprehensive program that proactively builds capabilities in IT services delivery.
In the exhibit below, we highlight the shelf life of basic IT services skills by comparing the upskilling/reskilling potential of IT services skills with their expected extent of replacement.
Exhibit: Analysis of the shelf life of basic IT services skills
In the near future, GBS organizations should leverage Artificial Intelligence (AI), analytics, and automation to further revolutionize their IT capabilities. The end goal is to transition to a self-healing, self-configuring system that can dynamically and autonomously adapt to changing business needs, thereby creating an invisible IT infrastructure model. This invisible IT infrastructure will be highly secure, require minimal oversight, function across stacks, and continuously evolve with changing business needs. By leveraging an automation-, analytics-, and AI-led delivery of infrastructure, operations, and services management, GBS organizations can truly enable enterprises to make decisions based on business imperatives.
The challenge: insurers need an effective digital experience for their customers
As insurers cope with the impact of COVID-19 on multiple fronts, effective digital communication with their customers has become more important than ever. Transparent, relevant, and crisp customer communications are key differentiators. While insurers have historically relied on intermediaries to communicate with their customers, they have made significant movement toward direct-to-consumer communications; however, the pandemic has highlighted just how far they still have to go.
Insurers have to balance a wide variety of public-facing and back-office demands, challenging at any time, but especially so during a pandemic. On top of the ongoing work of claims intake and management, sales and distribution, new policy onboarding, business continuity, etc., insurers need to efficiently and effectively answer customer questions and solve problems, proactively communicate about pandemic-related initiatives (such as premium relaxations and rebates, relief programs, and flexibility around policy renewal) – essentially, support and service their policyholders in a time of great stress.
Insurers need to arm their agents with information and content, as well as a complete view of each customer, including content from a variety of sources and analytics to pull it all together and make sense of it. To do so, insurers need to collect and consolidate customer data from multiple sources, run AI-enabled analytics, and curate digital content assets for honest, empathetic, and relevant communication with customers. From there, they need to determine how to disseminate personalized content with an omnichannel approach to reach the customers in the ways best suited to their needs.
The potential answer: the Digital Experience Platform (DXP)
Digital experience is the key lever for insurers to pull to effectively communicate with clients and prospects, to offer the desired customer experience, and to meet the challenge posed by digital-native companies and InsurTechs. A successful digital experience strategy, driven by a fit-for-purpose product that can meet customer needs, impacts and enhances the entire customer journey, and is a strategic imperative for insurers. An effective digital experience platform can provide an omnichannel personalized experience to the end customer, offer a single view of the customer, innovate service delivery, and provide a smoother experience for agents.
At the center of the digital experience solution is the Digital Experience Platform (DXP). See Exhibit 1 for Everest Group’s vision of a DXP for insurers.
Everest Group’s vision of a DXP for Insurance
To orchestrate insurers’ digital content management needs, the DXP should offer
A configurable and structured content value chain with centralized content / digital asset management and templatized content creation
Responsive web design with an intuitive UI/UX for consistent omnichannel content delivery
Effective Search Engine Optimization (SEO) and Search Engine Marketing (SEM) to identify prospective customers
To meet customer expectations and maintain high levels of customer experience, the DXP needs to
Provide consistent and seamless omnichannel interactions
Map the customer journey and offer personalized experiences leveraging meaningful customer data analytics
Offer self-service portals to enable policy, billing, and claims management
Monitor customer feedback through internal metrics and external sources and identify and rectify pain points in the customer journey
To enable agents, brokers, and advisors to have meaningful customer interactions, the DXP must
Generate a unified, 360-degree view of the customer through internal and external data sources
Optimize workflow with a digital portal that provides self-service capabilities, simplified document management, a holistic view of the customer, relevant product information, and channels to communicate with underwriters to provide quick quotes
Provide sales support and leverage customer data to identify effective interaction levers, cross-/ up-sell opportunities, and win probability scores to help prioritize sales
To meet insurers’ technology needs, a DXP should offer
A robust Application Programming Interface (API) framework for integration with the insurer’s systems
Internal and external data aggregation with AI-driven analytics to provide personalized experiences and enable agents
An app marketplace with insurance-focused apps and integration frameworks to enable product enhancements
Low-/no-code development features that minimize the re-/upskilling required of the insurer’s workforce and enable automation to enhance operational efficiencies
The DXPs that can curate superior customer experiences, enable both customers and agents, and provide digital enablers to drive business impact lead the pack in Everest Group’s assessment of the DXP vendor landscape for insurance, illustrated in Exhibit 2.
Everest Group’s assessment of DXP vendor landscape for insurance
Consider what’s now happening at companies that made investments in automation and moving work to the cloud. They’re doing better than others in the COVID-19 pandemic. They’re more flexible under trying conditions. They’re more resilient to challenges. They are a bright spot in this awful crisis. The pandemic showed what companies invested in as preparation for challenges. Unfortunately, it also exposed companies that were less prepared. As I mentioned in my prior blog, the pandemic was like what Warren Buffett described as the tide going out, exposing naked swimmers. One fact that the COVID-19 crisis exposed is that automation matters.
In the eighth episode of Digital Reality podcasts, Cecilia Edwards and Jimit Arora examine the impact of automation on an organization’s ability to continue operating during times of crisis, especially as it pertains to mitigating risk to human life and enabling safety. This blog is a summarized transcript of the podcast.
Cecilia Edwards: Welcome to the eighth episode of Digital Reality, Everest Group’s monthly podcast that moves beyond theory and beyond technology to discuss the realities of doing business in a digital-first world. I’m Cecilia Edwards…
Jimit Arora: …and I’m Jimit Arora. Each month we bring you a discussion that digs into the details of what it means, fundamentally, to execute a digital transformation that creates real business results.
This month, we are going to talk about the impact automation has on an organization’s ability to continue operating during times of crisis. We are all familiar with some of the top-of-mind benefits of automation, such as reducing costs, increasing productivity, ensuring high availability, increasing reliability, and optimizing performance. Another use case that is frequently deployed – but may not have been as top of mind before the COVID-19 crisis as it is today – is the protection of human life. When human life is at risk, availability naturally takes a back seat. Yet we have examples, both historic and current, where automation is being used to simultaneously address both of these objectives.
Cecilia, why don’t we start with an experience that is personal for you – and that is the form of automation deployed after the Challenger exploded back in 1986.
CE: You’re right, Jimit. This accident was an integral part of the start of my professional career. After graduating from college, I was commissioned as an officer in the US Air Force and worked on the Titan Space Launch Program at Vandenberg AFB in California. So, what does that have to do with the shuttle?
Back in the ‘80s we had grown accustomed to successful launches of space shuttles, and deeming them safe, we sent politicians, royalty, and civilians on missions. The 25th mission, the 10th for the Challenger, ended in disaster, blowing up just 73 seconds into flight and killing all on board, including a school teacher. This incident rightfully grounded the shuttle program for almost three years as the investigation and remediation procedures were put in place to ensure there would not be a repeat.
One of the functions the shuttle performed was to launch satellites, in particular large satellites, for the US Air Force. With the shuttle grounded, the Air Force was left with no means of completing its mission of putting several large satellites into orbit. Instead of waiting it out, the Air Force made a determination that injecting a satellite into orbit really didn’t require a human, and the Titan IV program was born.
The Titan IV was an unmanned space launch vehicle with flexibility to configure the payload to match that of the space shuttle. In other words, anything that could be launched on the shuttle could also be launched on the Titan IV.
The Air Force chose to automate the launch of the remaining satellites to resume the mission more quickly and to reduce the risk to human life going forward. People worked to prepare the satellite and rocket and then launch remotely, from a control room about 20 miles away. Future disasters would be costly only in dollars, not lives.
Jimit, what do you see as the lesson companies can take away from this story as they contemplate their automation strategies?
JA: Just because you have always done something a certain way doesn’t mean you shouldn’t challenge your assumptions. What might have been the only option at one point might be outdated; you should consider other technology solutions.
Assume that at some point something will go wrong. How long will your operations be shut down if the loss of human life is a part of an incident? Can you identify ways to eliminate or minimize the risk to people so your business can quickly recover?
Given the massive loss of jobs we are witnessing right now, it may seem a bit thoughtless to consider ways of navigating a crisis of this type that would result in the loss of additional jobs. But it is a misconception that automation necessarily disadvantages the displaced workers. That wasn’t the experience at Rio Tinto.
Rio Tinto is the world’s second-largest metals and mining corporation, producing iron ore, copper, diamonds, gold, coal, and uranium. I don’t know if you have ever seen pictures, but these mines are humongous holes in the ground; trucks are required to move the mined resources. It is a very dangerous job to drive these trucks and the mines are located in very remote locations. Not the ideal working conditions.
In 2008, Rio Tinto began deploying automation, in particular, a fleet of self-driving trucks to perform this dangerous task. This allowed them to run safer operations that were more efficient – self-driving trucks don’t get sleepy – and lower production costs.
But what was the impact on the people? Rio Tinto was able to reskill, upskill, and redeploy its people to safer and higher-value tasks. The technology has enabled the company to create new career pathways, and with investments and innovation in training, it believes this adaptability will be a key factor in its longer-term success.
CE: As other companies are thinking about increasing automation, here are some of the people considerations they should take into account:
Ensure you have people to work on the technology
Consider the implications of a blended workforce – you don’t necessarily need to get rid of people. Use the technology to make them more productive if you plan on having a blended automated and human workforce
Side note: when Henry Ford introduced the assembly line – a major new innovation of the time – instead of people losing their jobs, with the increased productivity, they were able to produce cars at a lower price point that led to an expanded market for cars. He ended up hiring more people to keep up with the demand.
Train people for their new, higher-value jobs
Digital Reality Check Points
JA: Humans ought to matter to business as much as costs do. We can take a few lessons from the past that provide a guidepost on how protecting your people is protecting your business. As we do every month, we’ll share three of these lessons, our Digital Reality Check Points, that you can apply to your business.
Understand the tasks in your business that are dangerous for humans to perform and determine whether there is an automation or other technology solution to protect them.
Plan for humans to work alongside any technology solutions you deploy.
Seek opportunities for your more efficient automated and human processes to be leveraged as an advantage in the marketplace.
Automation Centers of Excellence (CoEs) in Global Business Services (GBS) centers or Shared Services Centers (SSCs) have evolved over time. Mature GBS adopters of automation have made conscious decisions around the structure and governance of their CoEs, evolving to extract maximum value from their automation initiatives. Some of the benefits they have hoped to gain from this evolution include:
More efficient use of automation assets and components, such as licenses and reusable modules
Better talent leverage
Greater business impact
The typical CoE model evolution
CoE models generally evolve from a siloed model to a centralized one and then to a federated one:
Siloed model – kick starting the journey
Most GBS centers start their automation initiatives in silos or specific functions. In the early stages of their automation journeys, this approach enables them to gain a stronger understanding of the capabilities and benefits of automation and to achieve quick results.
However, this model has its limits, including suboptimal bot usage, low bargaining power with the vendor, lower reusability of modules and other IP, limited automation capabilities, and limited scale and scope.
The centralized model – building synergies
As automation initiatives evolved, enterprises and GBS organizations recognized the need to integrate these siloed efforts to realize more benefits, leading to the centralized model. This model enables benefits such as introducing standard operating procedures (SOPs), better governance, higher reusability of automation assets and components, optimized usage of licenses and resources, and enforcement of best practices. This model also places a greater emphasis on a GBS-/enterprise-wide automation strategy, which is lacking in the siloed model.
However, this model, too, has limitations: growth and coverage across business units are slow because the centralized model loses the flexibility, process knowledge, and ownership that individual business units bring to the bot development process.
The federated model – enabling faster scaling
The federated model addresses both of the other models’ limitations, enabling many best-in-class GBS centers to scale their automation initiatives rapidly. In this model, the CoE (the hub) handles support activities such as training resources, technology infrastructure, and governance. Individual business units or functions (the spokes) are responsible for identifying and assessing opportunities and for developing and maintaining bots. The model combines the benefits of decentralized bot development with centralized governance.
The federated model has some limitations, such as reduced control for the CoE hub over the bot development and testing process and, hence, over standardization, bot quality, and module reusability. However, many believe the benefits outweigh the drawbacks.
The three CoE models are described in the figure below.
The table shown below shows how the three models compare on various parameters.
Why GBS organizations are migrating to the federated model
There are several reasons why GBS centers are moving to the federated model, as outlined below.
The federated model helps to better leverage subject matter expertise within a business unit. With bot development activity taking place within the BU, the federated model ensures better identification of automation opportunities, agile development, and reduced bot failures
The federated model leads to efficient resource usage. Centralizing support activities ensures efficient use of resources (human, technology, reusable modules, licenses, etc.), standardization, and clear guidance to individual business units
The federated model facilitates development and sharing of automation capabilities and best practices, which helps in the amassing of standardized IP and tacit knowledge important for rapid automation scaling
Federated model case study
A leading global hardware and technology firm’s GBS center, which houses the CoE hub, adopted the federated model in 2017. In the three years since, the firm has grown to over 400 bots across more than 20 business units in a wide variety of locations and has saved more than $25 million from automation initiatives. The CoE hub has also successfully trained over 1,000 FTEs from technical and business backgrounds on bot development. As a result, firm-wide enthusiasm for and involvement in the GBS center’s automation journey is high.
Businesses in the UK are facing a spate of challenges: the specter of new Brexit-driven red tape on trade, a staffing shortage as some EU workers return to their home countries, and changes to the UK’s IR35 contract worker tax legislation, which are making it very difficult for companies to hire contractors. A Coronavirus pandemic could be the final straw that breaks businesses’ backs. Let’s face it – there is a perfect storm ahead.
With Brexit and the EU trade negotiations still going on, there is little certainty about the red tape that businesses will face in order to trade with each other across the English Channel. Yet, with the transition period set to end on 31 December 2020, there is little time for businesses to prepare for whatever the new trade requirements may ultimately be.
Because adherence to the as-yet-unclear regulations will increase businesses’ workloads, a natural response would be to hire more staff. But unemployment is at a record low, and many skilled EU workers are leaving the UK and returning to their home countries. Furthermore, the UK Office for National Statistics (ONS) reports that EU immigration to the UK is at an all-time low.
HMRC’s new IR35 rules, which come into effect in April 2020, are exacerbating the problem. Many companies have had to adopt no-contractor hiring policies and cannot fill temporary vacancies; they are already feeling the impact of the regulation. If they can’t hire staff or contractors, where will companies find the resources to handle the extra workload of trade red tape?
Additionally, widespread cases of the Coronavirus could lead to prolonged periods of sick leave, further reducing the number of staff available to help with the increased workload of trading with the EU. While cases are still few and far between in the UK, the impact of the disease’s spread in China has been significant. Empty offices and factories in Chinese cities and manufacturing heartlands are already leading to a shortage of parts for cars and other products that are much in demand in the UK.
Clearly, UK businesses are facing a perfect storm.
Investing in digital and Intelligent Automation (IA) technologies can help them tackle some red tape issues, particularly if they use IA for what I call Red Tape Automation (RTA). This could be automation of compliance form-filling and reporting requirements, weights and measure conversions, or making changes to transaction or product-related data and synchronizing them across multiple systems such as those used for sales and revenue to record value added taxes and other duties. Companies that trade with both EU and non-EU countries could automate the red tape for all of those, using rules engines to fill in the right forms and apply the correct rates.
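To give a flavor of what Red Tape Automation could look like in practice, the sketch below is a minimal rules engine that picks a compliance form and duty rate from a transaction’s attributes. Every rule, rate, threshold, and form name here is invented for illustration; they are not actual HMRC or EU requirements.

```python
# Hypothetical rules: (predicate over the transaction, form to file, duty rate).
# The first matching rule wins, so order the rules from most to least specific.
RULES = [
    (lambda t: t["destination"] == "EU" and t["value"] > 135, "FORM-EU-IMPORT", 0.20),
    (lambda t: t["destination"] == "EU", "FORM-EU-LOWVALUE", 0.00),
    (lambda t: t["destination"] != "EU", "FORM-ROW-EXPORT", 0.05),
]

def process(transaction):
    """Return the first matching form and the duty owed for a transaction."""
    for predicate, form, rate in RULES:
        if predicate(transaction):
            return {"form": form, "duty": round(transaction["value"] * rate, 2)}
    raise ValueError("no rule matched")

print(process({"destination": "EU", "value": 500.0}))
# -> {'form': 'FORM-EU-IMPORT', 'duty': 100.0}
```

The point of the design is that when the final trade requirements do become known, they are encoded once in the rules table and applied consistently to every transaction, rather than being worked through manually by scarce staff.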
IA is not a perfect solution, as people will be needed to implement technology, and there is a growing talent shortage. Nonetheless, UK businesses will be well served by investing in learning the art of the possible with IA. While the final details of any trade deals with the EU, or new deals with the rest of the world, will not be known for a while, knowing how to implement the requirements quickly using IA can help them weather the impending storm.