
Federated Learning: Privacy by Design for Machine Learning | Blog

With cyberattacks and data breaches at all-time highs, consumers are increasingly skeptical about sharing their data with enterprises, creating a dilemma for artificial intelligence (AI), which needs massive amounts of data to thrive. The nascent technology of federated learning offers a promising alternative for healthcare, life sciences, banking, finance, manufacturing, advertising, and other industries to unleash the full potential of AI without compromising the privacy of individuals. To learn how you can have all the data you need while protecting consumers, read on.

Privacy preservation with federated learning

The seemingly endless stream of massive data breaches that have stripped individuals of their privacy has made the public more aware of the need to protect their data. In the absence of strong governance and guidelines, people are more skeptical than ever about sharing their personal data with enterprises.

This new data-conscious paradigm poses a problem for artificial intelligence (AI), which thrives on huge amounts of data. Unless we can figure out a way to train machines on significantly smaller data sets, protecting the privacy and data of users will remain a key obstacle to intelligent automation.

Federated learning (FL) is emerging as a solution to this problem. Broadly speaking, federated learning is a method of training machine learning models in such a way that user data never leaves its location, keeping it safe and private. This differs from traditional centralized machine learning methods that require the data to be aggregated in a central location.

Federated learning is a machine learning mechanism in which the learning process takes place in a decentralized manner across a network of nodes/edge devices, and the results are aggregated on a central server to create a unified model. It essentially decouples the activity of model training from centralized data storage.

The Mechanism of Federated Learning

By training the same model across an array of devices, each with its own set of data, we get multiple versions of the model, which, when combined, create a more powerful and accurate global version for deployment and use.
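To make the aggregation step concrete, here is a minimal sketch of federated averaging, assuming each node sends back only its locally trained model parameters and its sample count; the function name and numbers are illustrative, not a specific vendor's implementation.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Combine locally trained model weights into one global model (FedAvg-style).

    local_weights: list of weight vectors, one per node; raw data never leaves the node.
    sample_counts: number of training samples each node used, for weighting.
    """
    total = sum(sample_counts)
    stacked = np.stack(local_weights)                  # shape: (num_nodes, num_params)
    weights = np.array(sample_counts) / total          # proportion of data held by each node
    return (stacked * weights[:, None]).sum(axis=0)    # weighted average of parameters

# Illustrative round: three edge devices send only model parameters to the central server
node_models = [np.array([0.2, 1.1]), np.array([0.3, 0.9]), np.array([0.1, 1.3])]
node_sizes = [1000, 4000, 500]
global_model = federated_average(node_models, node_sizes)
print(global_model)  # broadcast back to the nodes for the next round of local training
```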

In addition to training algorithms in a private and secure manner, Federated learning provides an array of other benefits such as:

  • Training across data silos
  • Training on heterogeneous data
  • Lower communication costs
  • Reduced liability

Federated learning applicability and use cases

Based on an Everest Group framework, we have found that federated learning is most suitable, and being adopted at higher rates, in industries where data is an extremely critical asset, is distributed across different locations, and privacy is paramount.

Federated learning is especially beneficial for industries that have strict data residency requirements, which makes healthcare and life sciences perfect candidates for its adoption. Federated learning can facilitate multi-institution collaborations between medical organizations by helping them overcome the regulatory hurdles that prevent them from pooling patient data in a common location.

The banking and financial sectors are also ripe for the adoption of federated learning. For instance, it can be used to develop a more comprehensive and accurate fraud analytics solution based on data from multiple financial entities.

Another industry where we see high applicability of federated learning is manufacturing. By enabling collaboration between different entities across the supply chain, federated learning techniques make it possible to build a more powerful model that can increase overall supply chain efficiency.

Federated learning also might find increased use in interest-based advertising. With the decision to disable third-party cookies by major internet browsers, marketers are at a loss for targeted advertising and engagement. With Federated Learning, marketers can replace individual identifiers with cohorts or group-based identifiers. These cohorts are created by identifying people with common interests based on individual user data such as browsing habits stored on local machines.

An ecosystem on the rise

Since Google introduced the concept of Federated learning in 2016, there has been a flurry of activity. Given that this is a nascent technology, the ecosystem is currently dominated by big tech and open-source players. We see hyperscalers taking the lead with Microsoft and Amazon Web Services (AWS) making investments to activate Federated learning, followed by Nvidia and Lenovo who are looking at the market from a hardware perspective.

Another segment of players working in this arena are startups that are using Federated learning to build industry-specific solutions. AI companies such as Owkin and Sherpa.ai are pioneering this technology and have developed Federated learning frameworks that are currently operational at a few enterprises’ locations.

The adoption of and need for federated learning depend on the industry and vary with the use case. Everest Group has developed a comprehensive framework to help you assess and understand the suitability of federated learning for your use case in our latest primer on federated learning. The framework is built on four key parameters: data criticality, privacy requirements, regulatory constraints, and data silos/diversity.

Federated learning provides an alternative way to make AI work without compromising the privacy of individuals.

If you are interested in understanding the suitability of federated learning for your enterprise, please share your thoughts with us at [email protected].

Recharge Your AI initiatives with MLOps: Start Experimenting Now | Blog

In this era of industrialization for Artificial Intelligence (AI), enterprises are scrambling to embed AI across a plethora of use cases in hopes of achieving higher productivity and enhanced experiences. However, as AI permeates through different functions of an enterprise, managing the entire charter gets tough. Working with multiple Machine Learning (ML) models in both pilot and production can lead to chaos, stretched timelines to market, and stale models. As a result, we see enterprises hamstrung to successfully scale AI enterprise-wide.

MLOps to the rescue

To overcome the challenges enterprises face in their ML journeys and ensure successful industrialization of AI, enterprises need to shift from the current method of model management to a faster and more agile format. An ideal solution that is emerging is MLOps – a confluence of ML and information technology operations based on the concept of DevOps.

According to our recently published primer on MLOps, Successfully Scaling Artificial Intelligence – Machine Learning Operations (MLOps), these sets of practices are aimed at streamlining the ML lifecycle management with enhanced collaboration between data scientists and operations teams. This close partnering accelerates the pace of model development and deployment and helps in managing the entire ML lifecycle.


MLOps is modeled on the principles and practices of DevOps. While continuous integration (CI) and continuous delivery (CD) are common to both, MLOps introduces the following two unique concepts:

  • Continuous Training (CT): Seeks to automatically and continuously retrain ML models based on incoming data
  • Continuous Monitoring (CM): Aims to monitor the performance of the model in terms of its accuracy and drift (a minimal sketch of these two loops follows below)
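As an illustration of how CT and CM might fit together, here is a minimal sketch in Python; the thresholds, the drift measure, and the scikit-learn model are illustrative assumptions rather than a prescribed MLOps toolchain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85   # illustrative baseline agreed during model validation
DRIFT_THRESHOLD = 0.5   # illustrative limit on feature-mean shift vs. the training data

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 3)), rng.integers(0, 2, 500)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_means = X_train.mean(axis=0)

# Continuous Monitoring (CM): score accuracy and a simple drift signal on fresh data
X_new, y_new = rng.normal(loc=0.8, size=(200, 3)), rng.integers(0, 2, 200)
accuracy = accuracy_score(y_new, model.predict(X_new))
drift = np.abs(X_new.mean(axis=0) - train_means).max()

# Continuous Training (CT): retrain automatically when a monitored threshold is breached
if accuracy < ACCURACY_FLOOR or drift > DRIFT_THRESHOLD:
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_train, X_new]), np.hstack([y_train, y_new])
    )
```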

We are witnessing MLOps gaining momentum in the ecosystem, with hyperscalers developing dedicated solutions for comprehensive machine learning management to fast-track and simplify the entire process. Just recently, Google launched Vertex AI, a managed AI platform, which aims to solve these precise problems in the form of an end-to-end MLOps solution.

Advantages of using MLOps

MLOps bolsters the scaling of ML models by using a centralized system that assists in logging and tracking the metrics required to maintain thousands of models. Additionally, it helps create repeatable workflows to easily deploy these models.

Below are a few additional benefits of employing MLOps within your enterprise:


  • Repeatable workflows: Saves time and allows data scientists to focus on model building because of the automated workflows for training, testing, and deployment that MLOps provides. It also aids in creating reproducible ML workflows that accelerate productionization of the model
  • Better governance and regulatory compliance: Simplifies the process of tracking changes made to the model to ensure compliance with regulatory norms for particular industries or geographies
  • Improved model health: Helps continuously monitor ML models across different parameters such as accuracy, fairness, bias, and drift to keep the models in check and ensure they meet thresholds
  • Sustained model relevance and RoI: Keeps the model relevant and up to date through regular retraining on new incoming data, helping provide a sustained Return on Investment (RoI)
  • Increased experimentation: Spurs experimentation by tracking multiple versions of models trained with different configurations, leading to improved variations
  • Trigger-based automated re-training: Helps set up automated re-training of the model based on fresh batches of data or certain triggers such as performance degradation, plateauing or significant drift

Starting your journey with MLOps

Implementing MLOps is complex because it requires a multi-functional and cross-team effort across the key elements of people, process, tools/platforms, and strategy underpinned by rigorous change management.

As enterprises embark on their MLOps journey, here are a few key best practices to pave the way for a smooth transition:

  • Build a cross-functional team – Engage team members from the data science, operations, and business front with clearly defined roles to work collaboratively towards a single goal
  • Establish common objectives – Set common goals for the cross-functional team to cohesively work toward, realizing that each of the teams that form an MLOps pod may have different and competing objectives
  • Construct a modular pipeline – Take a modular approach instead of a monolithic one when building MLOps pipelines since the components built need to be reusable, composable, and shareable across multiple ML pipelines
  • Select the right tools and platform – Choose from a plethora of tools that cater to one or more functions (management, modeling, deployment, and monitoring) or from platforms that cater to the end-to-end MLOps value chain
  • Set baselines for monitoring – Establish baselines for automated execution of particular actions to increase efficiency and ensure model health in addition to monitoring ML systems

When embarking on the MLOps journey, there is no one-size-fits-all approach. Enterprises need to assess their goals, examine their current ML tooling and talent, and also factor in the available time and resources to arrive at an MLOps strategy that best suits their needs.

For ML to keep pace with the agility of modern business, enterprises need to start experimenting with MLOps now.

Are you looking to scale AI within your enterprise with the help of MLOps? Please share your thoughts with us at [email protected].

Internet of Things Will Connect the Supply Chain in the “Next Normal” | Blog

Imagine a utopia where minimum human intervention is needed to run an entire shop floor. In this world, manufacturers have total control and visibility of all products, machines predict equipment failures and correct them, shelves count inventory, and customers check themselves out. While such a supply chain model seems improbable and far into the future, the likes of Amazon, Walmart, and Toyota are already on their way to achieving this vision. At the center of the supply chain initiatives making this possible is the Internet of Things (IoT).

The supply chain is considered the backbone of a successful enterprise. However, firms find it increasingly challenging to establish a robust supply chain model. The disruptions caused by COVID-19 have made matters worse as 'disconnected enterprises' struggle to gain complete supply chain visibility. The pandemic has established that supply chain disruptions and uncertainties will become more frequent going forward.

Supply chain challenges

The current supply chain landscape faces numerous challenges that need to be addressed.  These issues are illustrated below:

Challenges in Current Supply chain

Future-proofing the supply chain using IoT

As enterprises strive to develop a resilient supply chain, IoT will occupy center stage. An interconnected supply chain will bring together suppliers/vendors, logistics providers, manufacturers, wholesalers and retailers, and customers dispersed by geography. The technology enables improved efficiency, better risk management, end-to-end visibility, and enhanced stakeholder experience.

A seamlessly connected supply chain provides advantages at every stage of the value chain for each of the stakeholders. The exhibit below showcases a connected supply chain ecosystem:

Connected ecosystem for supply chain

Let’s look at how some companies are capturing the benefits of IoT:

  • Real-time location tracking

Using real-time data (captured from GPS coordinates) tracking the movement of raw materials/finished goods, IoT technology aids firms in determining where and when products get delayed. This helps managers ensure route optimization and better plan the delivery schedule. IoT, in combination with blockchain, helps secure the products against fraud. For example, Novo Surgical leverages IoT for optimally tracking and tracing its ‘smart surgical instruments.’ This has reduced errors, decreased surgical instrument loss, increased visibility and efficiency, and improved forecasting of demand for the firm.

  • Equipment monitoring

Sensors on machines constantly collect information around the functioning of the machine, enabling managers to monitor them in real time. By analyzing parameters such as machine temperature, vibration, etc., manufacturers can better predict machine downtime and take necessary actions to mitigate this. For instance, Toyota partnered with Hitachi to leverage the vendor’s IoT platform and use the data collected to reduce unexpected machine failures and improve the reliability and efficiency of equipment.

  • Smart inventory management

IoT sensors in the warehouse assist in tracking the movement of individual items, providing an efficient way to monitor inventory levels and prevent pilferage. Smart shelves contain weight sensors that monitor the product weight to determine when products are out of stock. Walmart has been leveraging smart shelves in its retail stores to manage its products more efficiently and improve the shopping experience.

  • Warehouse management

IoT sensors can monitor and adjust warehouse parameters such as humidity, temperature, and pressure, helping avoid spoilage of items (a minimal sketch of this kind of threshold-based monitoring follows below). Leading e-commerce players like Amazon and Alibaba have been pioneers in leveraging IoT to optimize warehouse management.
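To illustrate the threshold-based monitoring pattern behind several of these use cases, here is a minimal sketch; the safe temperature range, zone names, and alerting action are illustrative assumptions, not a particular IoT platform's API.

```python
# Illustrative temperature readings (degrees C) streamed from warehouse IoT sensors
SAFE_RANGE = (2.0, 8.0)   # assumed band for perishable goods; thresholds are illustrative
readings = {"zone_a": 4.3, "zone_b": 9.1, "zone_c": 3.8}

for zone, temp in readings.items():
    low, high = SAFE_RANGE
    if temp < low or temp > high:
        # In a real deployment this would trigger the HVAC controller and alert the warehouse team
        print(f"{zone}: {temp} C outside safe range {SAFE_RANGE} - adjusting cooling and raising alert")
```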

Charting the journey for a connected supply chain

As enterprises aim to future-proof their supply chain, they will need a structured path following these five steps below:

  1. Develop a business case: Enterprises need to determine the current gaps in their supply chain and identify the extent of digitization of their supply chain to develop the business case for a connected supply chain.
  2. Secure buy-in from supply partners: Successful implementation of IoT in the supply chain requires the various partners to collaborate and adopt the technology together. Securing a buy-in from each member of the value chain – vendors/suppliers, OEM players, logistics operators, and retailers – is imperative for firms to realize the complete benefits. Compatibility of the technology platforms leveraged by the various supply partners is essential to develop a seamless supply chain.
  3. Invest in security: Invest in security and data protection initiatives early on to avoid supply chain breaches. Performing regular security and vulnerability assessments across the value chain and investing in next-generation technology-based security solutions is essential.
  4. Leverage other technologies: While IoT has a plethora of benefits across the supply chain, consider leveraging next-generation technologies such as blockchain, artificial intelligence, and edge computing in confluence with IoT to further enhance the capabilities of the use cases.
  5. Partner for implementation: To overcome concerns around skills and address data reconciliation challenges, consider partnering with IoT providers with expertise in the supply chain arena. Service/solution providers also are instrumental in bringing a security layer that can aid in addressing data security concerns and governance issues.

Since IoT is an interplay of multiple devices and machines, a successful IoT implementation requires firms to invest in sensors, cloud/edge infrastructure, IoT connectivity networks, data management and analytics solutions, and application development and management. Enterprises can accelerate their IoT supply chain journeys by partnering with solution providers with strong expertise in IoT products and services capabilities in the supply chain arena.

Are you embarking on your connected supply chain journey? Please share your thoughts and experiences with us at [email protected] and [email protected].

Democratization: Artificial Intelligence for the People and by the People | Blog

Enterprises have identified Artificial Intelligence (AI) as a quintessential enabling technology in the success of digital transformation initiatives to further increase operational efficiency, improve employee productivity, and deliver enhanced stakeholder experience. According to our recent research, Artificial Intelligence (AI) Services – State of the Market Report 2021 | Scale the AI Summit Through Democratization, more than 72 percent of enterprises have embarked on their AI journey. This increased AI adoption is leading us into an era of industrialization of AI.

Talent is the biggest impediment in scaling AI

Enterprises face many challenges in their AI adoption journey, including rising concerns of privacy, lack of proven Return on Investment (RoI) for AI initiatives, increasing need for explainability, and an extreme crunch for skilled AI talent. According to Everest Group’s recent assessment of the AI services market, 43 percent of the enterprises identified limited availability of skilled, mature, and niche AI talent as one of the biggest challenges they face in scaling their AI initiatives.

Lack of skilled AI talent

Enterprises face this talent crunch in using both the open-source ecosystem and hyperscalers’ AI platforms for the reasons below:

  • High demand for open-source machine learning libraries such as TensorFlow, scikit-learn, and Keras due to their ability to let users leverage transfer learning
  • Low project readiness of certified talent across platforms such as SAP Leonardo, Salesforce Einstein, Amazon SageMaker, Azure Machine Learning, and Microsoft Cognitive Services due to lack of domain knowledge and industry contextualization

As per our research in the talent readiness assessment, a large demand-supply gap exists for AI technologies (approximately 25 to 30 percent), hindering an enterprise’s ability to attract and retain talent.

In addition to this technical talent crunch, another aspect where enterprises struggle to find the right talent is the non-technical facet of AI that includes roles such as AI ethicists, behavioral scientists, and cognitive linguists.

As more and more enterprises adopt AI, this talent challenge is further exacerbated at the same time demand for AI technologies is skyrocketing. There is an ongoing tussle for AI talent between academia, big tech, and enterprises, and, so far, big tech is coming out on top. These companies have been able to recruit vast amounts of AI talent, leaving a drying pool for the rest of the enterprises to fish in.

Democratization to overcome the talent problem

We see democratization as a potential solution to this expanding talent gap. As we define it, democratization is about making AI accessible to a wider set of users, targeted specifically at non-technical business users. The principle behind the concept of democratization is “AI for all.”

Democratization has to do with educating business users in the basic concepts of data and AI and giving them access to the data and tools that can help build a larger database of AI use-cases, develop insights, and find AI-based solutions to their problems.

Enterprises can leverage Everest Group’s four-step democratization framework to help address talent gaps within the enterprise and empower its employees. Here are the steps to guide a successful democratization initiative:

  • Data democratization: The first step of AI democratization is enabling data access to business users throughout the organization. This will help familiarize them with the data structures and interpret and analyze the data
  • Data and AI literacy: The next step is embracing initiatives to help business users build general knowledge of AI, understand the implications of AI systems, and successfully interact with them
  • Self-service low-code/no-code tools: Organizations should also invest in tools that provide pre-built components and building blocks in a drag and drop fashion to help business users deploy ML models without having to write extensive code
  • Automation-enabled machine learning (ML): Lastly, enterprises should use automated machine learning (AutoML) to automate ML workflows covering some or all of the steps in the model training process, such as feature engineering, feature selection, algorithm selection, and hyperparameter optimization (a minimal sketch follows after this list)
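For a flavor of what the AutoML step can look like in practice, here is a minimal sketch that automates only the hyperparameter-optimization slice using scikit-learn; the dataset, model, and search space are illustrative assumptions rather than a specific AutoML product.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Automated search over model hyperparameters - one slice of what AutoML tools handle end to end
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [3, 5, 10, None],
        "min_samples_leaf": [1, 2, 5],
    },
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))  # best configuration found automatically
```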

Following these steps, democratization can, among its benefits, help reduce the barriers to entry for AI experimentation, accelerate enterprise-wide adoption, and speed up in-house innovation.

Current state of democratization

The industry as a whole is now in the initial stages of AI democratization, which is heavily focused on data and AI literacy initiatives. Some of the more technologically advanced or well-versed enterprises have been early adopters. The exhibit below presents the current market adoption of the four key elements of democratization and a few industry examples:

current AI adoption

Democratization is essential

As part of their democratization efforts, enterprises must also focus on contextualization, change management, and governance to ensure responsible and successful democratization.

By doing this, companies will not only help solve the persistent AI talent crunch but also ensure faster time to market, empower business users, and increase employee productivity. Hence, democratization is an essential step to ensuring the sustainable, inclusive, and responsible adoption of AI.

What have you experienced in your democratization journey? Please share your thoughts with us at [email protected] or at [email protected].

Microsoft Goes All in on Industry Cloud and AI with $20 Billion Nuance Deal | Blog

Yesterday’s announcement of Microsoft’s acquisition of Nuance Communications signifies the big tech company’s serious intentions in the US healthcare market.

We’ve been writing about industry cloud and verticalization plays of big technology companies (nicknamed BigTech) for a while now. With the planned acquisition of Nuance Communications for US$19.7 billion, Microsoft has made its most definitive step in the healthcare and verticalization journey.

At a base level, what matters to Microsoft is that Nuance focuses on conversational AI. Over the years, it has become quite the phenomenon among physicians and healthcare providers – 77 percent of US hospitals are Nuance clients. Also, it is not just a healthcare standout – Nuance counts 85 percent of Fortune 100 organizations as customers. Among Nuance’s claims to fame in conversational AI is the fact that it powered the speech recognition engine in Apple’s Siri.

Why Did Microsoft Acquire Nuance?

The acquisition is attractive to Microsoft for the following reasons:

  1. Buy versus build: If Microsoft (under Satya Nadella) can trust itself to build a capability swiftly, it will never buy. Last year, when we wrote about Salesforce’s acquisition of Slack, we highlighted how Microsoft pulled out of its intent to acquire Slack in 2016 and launched Teams within a year. Could Microsoft have built and scaled a speech recognition AI offering?
  2. Conversational AI: Microsoft’s big three competitors – Amazon, Apple, and Google – have a significant head start in speech recognition, the only form of AI that has gone mainstream and is likely to be a US$30 billion market by 2025. Clearly, with mature competition, this was not going to be as easy as “Alexa! Cut slack, build Teams” for Nadella
  3. Healthcare: This is another battleground for which Microsoft has been building up an arsenal. As the US continues to expand on its $3 trillion spend on healthcare, Microsoft wants a share of this sizeable market. That is why it makes sense to peel the healthcare onion a bit more

 

What Role Does Microsoft Want to Play in Healthcare?

While other competitors (read Amazon, Salesforce, and Google) were busy launching healthcare-focused offerings in 2020, Microsoft was already helping healthcare providers use Microsoft Teams for virtual physician visits. Also, Microsoft and Nuance are not strangers, having partnered in 2019 to enable ambient listening capabilities for physician-to-EHR record keeping. Microsoft sees a clear opportunity in the US healthcare industry.

  • Everest Group estimates that technology services spending in US healthcare will grow at a CAGR of 7.5% for the next five years, adding an incremental US$25 billion to an already whopping $56 billion
  • The focus of Microsoft and its competitors is to disrupt the multi-billion ($40 billion by 2025) healthcare data (Electronic Medical Record) industry
  • Legacy EMR systems have been a major cause of physician burnout, which the likes of Nuance aim to solve
  • Cloud-driven offerings such as Canvas Medical and Amazon Comprehend Medical are already making Epic Systems and Cerner sit up and take notice

It is not without reason that Microsoft launched its cloud for healthcare last year and has followed it up by acquiring Nuance.

What Does it Mean for Healthcare Enterprises?

Under Nadella, Microsoft has developed a sophisticated sales model that takes a portfolio approach to clients. This has helped Microsoft build a strong positioning beyond its Office and Windows offerings even in healthcare. Most clients in healthcare are already exposed to its Power Apps portfolio and Intelligent Cloud (including Azure and cloud for healthcare) in some form. It is only a question of time (if the acquisition closes without issues) until Nuance becomes part of its suite of offerings for healthcare.

What Does it Mean for Service Providers?

As a rejoinder to our earlier point about head starts, this is where Microsoft has a lead over competitors. Our recent research on the System Integrator (SI) ecosystem indicates that Microsoft is head and shoulders above its nearest competitors when it comes to leveraging the SI partnership channel to bring its offerings to enterprises. This can act as a significant differentiator when it comes to taking Nuance to healthcare customers, as SI partners can expect favorable terms of engagement.

Partners' Perceptions

Lastly, this is not just about healthcare

While augmenting healthcare capabilities and clients is the primary trigger for this purchase, we believe Microsoft aims to go beyond healthcare to achieve the following objectives:

  • Take conversational AI to other industries: Clearly, healthcare is not the only industry warming up to conversational AI. Retail, financial services, and many other industries have scaled usage. Hence, it is not without reason that Mark Benjamin (Nuance’s CEO) will report to Scott Guthrie (Executive Vice President of Cloud & AI at Microsoft) and not Gregory Moore (Microsoft’s Corporate Vice President, Microsoft Health), indicating a broader push
  • Make cloud more intelligent: As mentioned above, Microsoft will pursue full-stack opportunities by combining Nuance’s offerings with its Power Apps and Intelligent Cloud suites. As a matter of fact, it plans to report Nuance’s performance as part of its Intelligent Cloud segment

Microsoft: $2 Trillion and Beyond

This announcement comes against the background of BigTech and platform companies making significant moves to industry-specific use cases, which will drive the next wave of client adoption and competitive differentiation. Microsoft’s turnaround and acceleration since Nadella took over as CEO in 2014 are commendable (see the image below). It is on the verge of becoming only the second company to achieve $2 trillion in market capitalization. This move is a bet on its journey beyond the $2 trillion.


What do you make of its move? Please feel free to reach out to [email protected] and [email protected] to share your opinion.

Synthetic Data – Catalyst for AI innovation | Blog

With a connected world and connected humans, we are on track for an unprecedented uptick in new data creation. IoT, digitization, and cloud have brought on the generation and storage of zettabytes (ZBs) of data each day. Data has become the new oil, but with some caveats: the tap of this oil is controlled by a few organizations globally, making this data asset scarce and expensive. However, enterprises in their pursuit of digital transformation require this data to get insights for better decision-making.

Shortcut to access data

The next logical question is how we can get hold of this data, which, if utilized to its full potential, has the power to transform enterprises. This is where synthetic data comes into play. It is data that is created artificially rather than generated through actual interactions or events. It is usually formed by studying the characteristics of and relations between different variables. Three types of synthetic data exist, as shown below.


Exhibit 1: Types of synthetic data

Why is it required now?

With the cultural shift from gut-based to insights-based decision making and the onset of data literacy initiatives, enterprises require apt insights, which in turn require huge amounts of data. A few instances highlighted below make a strong case for synthetic data.

  1. GDPR mandates stringent regulations for data access, stipulating that a company can utilize personal data only with user consent. This makes it extremely difficult to share data, creating hurdles in solving business problems
  2. AI models and algorithms require extensive labeled data for training purposes. In the case of self-driving cars, they need to clock millions of miles to test computer vision algorithms, delaying the go-to-market for such products
  3. New product development usually requires a lot of data testing before a product is introduced to the market. Innovation becomes scarce if quality data from the field is not available

Techniques to generate synthetic data

There are usually three strategies to generate synthetic data. These include some simplistic techniques as well as methods infused heavily with AI.


Exhibit 2: Techniques to generate synthetic data

Sampling from a distribution simply means drawing random numbers from a distribution (such as a normal distribution) fitted to the real data. Agent-based modeling captures the behavior of the original data; once the characteristics are defined, it creates new data within those behavioral constraints. Generative Adversarial Network (GAN) models are synthetic data generation techniques usually used for creating image data. These networks comprise two deep learning models: a generator and a discriminator. The GAN takes random noise as input, the generator produces output images, and the discriminator tries to determine whether each output is fake or real. The closer a generated image is to a real one, the more likely it is to be judged real.
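For a sense of the simplest technique, here is a minimal sketch of sampling from a fitted distribution, assuming a single numeric variable; real pipelines would also need to preserve correlations between variables, and the numbers here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are real, sensitive transaction amounts that cannot be shared directly
real_amounts = rng.lognormal(mean=3.0, sigma=0.6, size=1000)

# Study the characteristics of the real data...
log_mean, log_std = np.log(real_amounts).mean(), np.log(real_amounts).std()

# ...then draw new, artificial records from the fitted distribution
synthetic_amounts = rng.lognormal(mean=log_mean, sigma=log_std, size=1000)

# Similar statistical properties, but no real record is reused
print(real_amounts.mean().round(2), synthetic_amounts.mean().round(2))
```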

Applications across enterprises

An infinite source of data that mimics the real dataset can provide innumerable opportunities to create test scenarios during development.

Synthetic data acts as a beneficiary for enterprises across domains and industries, with some examples shown below.

  1. “Customer is king”: a tag line commonly used in the current environment, wherein organizations strive to provide hyper-personalization to customers for better retention and to create upsell and cross-sell opportunities. Synthetic data helps enterprises get detailed analysis of each customer without running afoul of GDPR consent requirements. This data has the properties of real data and can be used for simulations
  2. Agile development and DevOps: Software testing and quality assurance often involve a long waiting period to get access to ‘real’ data. Artificially generated data can eliminate this waiting period, reducing testing time and increasing agility during development
  3. Research and product development: Synthetic data can be used to create an understanding of the format of real data that does not exist yet and to build algorithms and preliminary models on top of it. It can also be used as a baseline for product development and reduce time to market
  4. Robotics: Companies often struggle to obtain quality real-life data sets to execute testing. Synthetic data helps in running thousands of simulations, thereby improving the robots and complementing expensive real-life testing
  5. Financial services: Important elements for any financial services enterprise are fraud protection and detection methods, which can be tested and evaluated for their effectiveness using synthetic data

Limitations of synthetic data

However, the use of synthetic data does not come without its own set of limitations.

  1. At its best, synthetic data imitates the real-life data sets but is not an exact replica. This can result in certain data points that are deviations or exceptions to the overall set, leading to skewed modeling outputs
  2. It is also not an easy task to assess the quality of the synthetic data set generated as it often depends on the complexity of the original data. As a result, the quality assessment parameters need to change in accordance with the variation in the original data point, meaning there can’t be a standard framework to be followed for each synthetic data set
  3. It is difficult for business users to trust the credibility of synthetic data due to a lack of technological understanding, leading to slow uptake. This is especially true in industries such as healthcare and food, where there are direct repercussions for human life

Way ahead

Despite these limitations, enterprises should be keen to adopt synthetic data, as it offers an opportunity to disrupt the business landscape by utilizing data and its benefits to full potential. It could prove to be the push AI/ML needed to penetrate enterprises and gain more traction.

If you’ve utilized synthetic data in your enterprise or know about more areas where synthetic data can be advantageous and disadvantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!

The Evolution from Robotic Surgery to Digital Surgery | Blog

The robotic surgery market has surged over the last decade. According to an article published in JAMA Network Open in early January 2020, robot-assisted surgical procedures accounted for 15.1 percent of all general surgeries in 2018, up from 1.8 percent in 2012. And the market has grown even more since 2018. For example, the utilization rate of Intuitive Surgical’s da Vinci robot in US hospitals has grown more than 400 percent in the last three years.

To capture their piece of the robotic surgery market pie, other MedTech giants, including Johnson & Johnson (J&J), Medtronic, Stryker, and Zimmer Biomet have turned to acquisitions and strategic partnerships. Stryker paid a whopping US$1.65 billion in 2013 to acquire Mako Surgical Corp. Zimmer Biomet bought Medtech for its Rosa Surgical Robot in 2016 for US$132 million. J&J and Medtronic acquired Orthotaxy and Mazor Robotics, respectively, in 2018. And J&J subsequently bought Auris Health and Verb Surgical in 2019.

With all this money being spent on robotic surgery company acquisitions, it is clear that the MedTech giants intended to fight head-on with one another to build the best surgical robot.

But building the best surgical robot does not assure market leadership.  Indeed, robotics is only one aspect of the digital surgery ecosystem. In order to excel in the robotic surgery space, companies need to build solutions that complement their surgical robots with digital technology tools and capabilities.

Transforming from robotic surgery to digital surgery

As you see in the following image, the digital surgery ecosystem consists of imaging, visualization, analytics, and interoperability technologies that enhance the capabilities of surgical robots, enabling companies to unlock the full array of potential benefits robotic surgery has to offer – better precision and control, enhanced surgical visibility, remote surgery, better clinician and patient experiences, etc.

Let’s take a quick look at the value each of the digital technologies can bring to robotic surgery.

  • AI/ML and data analytics will help the robots learn from past procedures and ensure better surgery planning and reasoning. They will also help support cognitive functions such as decision making, problem solving, and speech recognition. One real-world example of AI/ML is Stryker’s Mako robot, which learns from past procedures to ensure better positioning of surgical implants for stability
  • Strong network and connectivity will enable real-time sharing of patient outcomes and best practices, and support remote surgery at a global level
  • Augmented Reality/Virtual Reality (AR/VR) and advanced visualization technologies enhance surgical visibility beyond the naked eye and improve anatomical education and surgeon training modalities through interactive simulations
  • Remote monitoring, sensors, and wearables can assist in intra-operative and post-operative surgical care through real-time data exchange for better clinical outcomes and reduced care costs


Realizing the benefits of digital technologies, MedTech companies are starting to make investments in them to augment their surgical robots. For example, Medtronic in 2020 acquired Digital Surgery, a leader in surgical AI, data and analytics, and digital education and training to strengthen its robotic-assisted surgery platform. Similarly, in 2021, Stryker acquired Orthosensor to enhance its Mako surgical robotics systems with smart sensor technologies and wearables, and Zimmer partnered with Canary Medical to develop smart knee implants. MedTech companies are also starting to change their branding to reflect their move to digital. For example, J&J is positioning its new offerings as digital surgery platforms instead of robotic surgery platforms.

Building a single, connected next-generation digital surgery platform

Building specialized robots for different surgical procedures requires either a huge capital investment to acquire such individualized capabilities or extensive resources and time to develop them in-house. So, it’s neither feasible nor cost-effective to do so. Therefore, it would be ideal for MedTech organizations to focus on developing one robot that supports the entire breadth of surgical procedures.

With their history of robotic acquisitions over the last three years, MedTech giants should be looking at integrating multiple point solutions to build a single, connected next-generation digital surgery platform. The following image depicts our vision of a truly connected digital surgery ecosystem built around a digital surgery platform. It ensures interoperability among all types of surgical robots so they can continually learn and evolve by sharing best practices, surgical procedures, and associated patient data.


J&J has already shared its vision and roadmap for building a next-generation digital surgery platform. It brings together robotics, visualization, advanced instrumentation, connectivity, and data analytics to enable its digital surgery platform to improve outcomes across a broad range of disease states. It has announced its plans to integrate its recently unveiled Ottava platform with the Monarch platform it gained from its 2019 acquisition of Auris Health to build a strong position in the digital surgery market.

With MedTech giants in the initial phase of building their next-generation connected digital surgery ecosystems, they will need the right fit of complementary digital technologies to truly scale their impact – alleviating surgeon workloads, driving productivity, enabling personalization, and delivering better clinical outcomes. Service providers that bring niche talent and a balanced portfolio of engineering and digital services will be partners of choice for MedTech giants in this journey.

Please share your views on robotic surgery and the digital surgery ecosystem with us at [email protected] and [email protected].

Leap Towards General AI with Generative Adversarial Networks | Blog

AI adoption is on the rise among enterprises. In fact, the research we conducted for our AI Services State of the Market Report 2021 found that as of 2019, 72% of enterprises had embarked on their AI journey. And they’re investing in various key AI domains, including computer vision, conversational intelligence, content intelligence, and various decision support systems.

However, the machine intelligence that surrounds us today belongs to the Narrow AI domain. That means it’s equipped to tackle only a specified task. For example, Google Assistant is trained to respond to queries, while a facial recognition system is trained to recognize faces. Even seemingly complex applications of AI – like self-driving cars – fall under the Narrow AI domain.

Where Narrow AI falters

Narrow AI can process a vast array of data and complete a given task efficiently; however, it can’t replicate human intelligence – the ability to reason, make judgments, or be aware of context.

This is where General AI steps in. General AI takes the quest to replicate human intelligence meaningfully ahead by equipping machines with the ability to understand their surroundings and context.

Exhibit 1: Evolution of AI


The pursuit of General AI

Researchers came up with Deep Neural Networks (DNNs), a popular AI structure that tries to mimic the human brain. DNNs require many labeled datasets to perform their function. For example, if you want a DNN to identify apples in an image, you need to provide it with enough apple images for it to glean the pattern that defines the general characteristics of an apple. It can then identify apples in any image. But can DNNs – or, more appropriately, General AI – be imaginative?

Enter GANs

Generative Adversarial Networks (GAN) bring us close to the concept of General AI by equipping machines to be “seemingly” creative and imaginative. Let’s look at how this concept works.

Exhibit 2: GAN working block diagram


GANs pair two neural networks to create and refine data. The first network, the generator, produces candidate data from random input. The second network, the discriminator, is a classifier that provides a score between 0 and 1; a score of 0.4 means the probability of the generated image being like a real image is 0.4. If the score is close to zero, the result goes back to the generator, which creates a new image, and the cycle continues until a satisfactory result is obtained.

The goal of the generator is to fool the discriminator into believing that the image being sent is indeed authentic, and the discriminator is the authority equipped to catch whether the image sent is fake or real. The discriminator acts as a teacher and guides the generator to create a more realistic generated image to pass as the real one.
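Below is a minimal PyTorch sketch of this generator-discriminator loop, trained on toy two-dimensional "real" data rather than images; the architecture, noise size, and hyperparameters are illustrative assumptions, not a production GAN.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = torch.randn(1000, 2) * 0.5 + 3.0  # toy "real" dataset the GAN must imitate

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator: learn to score real samples near 1 and generated samples near 0
    fake = generator(torch.randn(64, 8)).detach()
    real = real_data[torch.randint(0, 1000, (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce samples the discriminator scores as real
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # synthetic samples resembling the real data
```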

Applications around GAN

The GAN concept is being touted as one of the most advanced AI/ML developments in the last 30 years. What can it help your business do, other than create an image of an apple?

  1. Creating synthetic data for scaled AI deployments: Obtaining quality data to train AI algorithms has been a key concern for AI deployments across enterprises. Even BigTech vendors such as Google, which is considered the home of data, struggle with it. Google launched “Project Nightingale” in partnership with Ascension, which created concerns around misuse of medical data. Regulations to ensure data privacy safeguard people’s interests but create a major concern for AI: the data available to train AI models shrinks. This is where a GAN shines. It can create synthetic data, which helps in training AI models
  2. Translations: Another use case where GANs are finding applications is in translations. This includes image to image translation, text to image, and semantic image to photo translation
  3. Content generation: GANs are also being used in the gaming industry to create cartoon characters and creatures. In fact, Google launched a pilot to utilize a GAN to create images; this will help gaming developers be more creative and productive

Two sides to a coin

But, GANs do come with their own set of problems:

  • A significant problem in productionizing a GAN is attaining a balance between the generator and the discriminator. Too strong or too weak a discriminator leads to undesirable results. If it is too weak, it will pass all generated images as authentic, which defeats the purpose of the GAN; if it is too strong, no generated image will be able to fool the discriminator
  • The amount of computing power required to run a GAN is significantly greater than for a generic AI model, limiting its use by enterprises
  • GANs, specifically the cyclic and pix2pix types, are known for their capabilities in face synthesis, face swap, and facial attributes and expressions. These can be used to create doctored images and videos (deepfakes) that usually pass as authentic, making GANs attractive to malicious actors. For example, footage of a politician expressing grief over pandemic victims could be doctored using a GAN to show a sinister smile on the politician’s face during the press briefing. Just imagine the backlash and public uproar that would generate. And that is just a simple example of the destructive power of GANs

Despite these problems, enterprises should be keen to adopt GANs, as they have the potential to disrupt the business landscape and create immense competitive advantages across various verticals. For example:

  • A GAN can help the fashion and design industry create new and unique designs for high-end luxury items. It can also create imaginary fashion models, thus making it unnecessary to hire photographers and fashion models
  • Self-driving cars need millions of miles of road to gather data to test their detection capabilities using computer vision. All the time spent gathering the road data can be cut short through synthetic data generated through GAN. That, in turn, can enable faster time to market

If you’ve utilized GANs in your enterprise or know about more use cases where GANs can be advantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!

Advancing from Artificial Intelligence to Humane Intelligence | Blog

I recently came across a news article that said doctors will NOT be held responsible for a wrong decision made based on the recommendations of an artificial intelligence (AI) system. That’s shocking and disturbing on so many levels! Think of the multitude of AI-based decision making possible in banking and financial services, the public sector, and many other industries, and the worrying implications wrong decisions could have on the lives of people and society.

One of the never-ending debates for AI adoption continues to be the ethicality and explainability concerns with the systems’ black box decision making. There are multiple dimensions to this issue:

  1. Definitional ambiguity – Trustworthy, fair and ethical, and repeatable – these are the different characteristics of AI systems in the context of explainability. Most enterprises cite explainability as a concern, but most don’t really know what it means or the degree to which it is required.
  2. Misplaced ownership – While they can be trained, re-trained, tested, and course corrected, no developer can guarantee bias-free or accurate decision making. So, in case of a conflict, who should be held responsible? The enterprise, the technology providers, the solution developers, or another group?
  3. Rising expectations – AI systems are being increasingly trusted with highly complex, multi-stakeholder decision-making scenarios which are contextual, subjective, open to interpretation, and require emotional intelligence.

 

Enterprises, particularly the highly regulated ones, have hit a roadblock in their AI adoption journeys and scalability plans considering the consequences of wrong decisions made with AI. In fact, one in every three AI use cases fails to reach a substantial, scalable level due to explainability concerns.

While the issue may not be a concern for all AI-based use cases, it is usually a roadblock for scenarios with high complexity and high criticality, which lead to irrevocable decisions.


In fact, Hanna Wallach, a senior principal researcher at Microsoft Research in New York City, stated, “We cannot treat these systems as infallible and impartial black boxes. We need to understand what is going on inside of them and how they are being used.”

Progress so far

Last year, Singapore released its Model AI Governance Framework, which provides readily implementable guidance to private sector organizations seeking to deploy AI responsibly. More recently, Google released an end-to-end framework for an internal audit of AI systems. There are many other similar efforts by opponents and proponents of AI alike; however, a feasible solution is still out of sight.

Technology majors and service providers have also made meaningful investments to address the issue, including Accenture (AI fairness Toolkit), HCL (Enterprise XAI Framework), PwC (Responsible AI), and Wipro (ETHICA). Many XAI-centric niche firms that focus only on addressing the explainability conundrum, particularly for the highly regulated industries like healthcare and public sector, also exist. Ayasdi, Darwin AI, KenSci, and Kyndi deserve a mention.

The solution focus varies from enabling enterprises to compare the fairness and performance of multiple models to enabling users to set their ethicality bars. It’s interesting to note that all of these offer bolt-on solutions that enable an explanation of the decision in a human interpretable format, but they’re not embedded explainability-based AI products.

The missing link  

Considering this is an artificial form of intelligence, let’s take a step back and analyze how humans make such complex decisions:

  • Bias-free does not exist in the real world: The first thing to appreciate is that humans are not free from biases, and biases by their nature are subjective and open to interpretation.
  • Progressive decision-making approach: A key difference between humans and the machines making such decisions is the fact that even with all processes in place, humans seek help, pursue guidance in case of confusion, and discuss edge cases that are more prone to wrong decision making. Complex decision making is seldom left to one individual alone; rather, it’s a hierarchy of decision makers in play, adding knowledge on top of previous insights to build a decision tree.
  • Emotional Quotient (EQ): Humans have emotions, and even though most decisions require pragmatism, it’s the EQ in human decisions that explains the outcomes in many situations.


These are behaviors that today’s AI systems are not trained to adopt. A disproportionate focus on speed and cost has led to neglecting the human element that ensures accuracy and acceptance. And instead of addressing accuracy as a characteristic, we add another layer of complexity in the AI systems with explainability.

And even if the AI system is able to explain how and why it made a wrong decision, what good does that do anyway? Who is willing to put money in an AI system that makes wrong decisions but explains them really well? What we need is an AI system that makes the right decisions, so it does not need to explain them.

AI systems of the future need to be designed with these humane elements embedded in their nature and functionality. This may include pointing out edge cases, “discussing” and “debating” complex cases with other experts (humans or other AI systems), embedding the element of EQ in decision making, and at times even handing a decision back to humans when the system encounters a new scenario where the probability of a wrong decision is higher.

But until we get there, a practical way for organizations to address these explainability challenges is to adopt a hybrid human-in-the-loop approach. Such an approach relies on subject matter experts (SMEs), such as ethicists, data scientists, regulators, and domain experts, to

  • Improve learning models’ outcomes over time
  • Check for biases and discrepancies
  • Ensure compliance

In this approach, instead of relying on a large training data set to build the model, the machine learning system is built iteratively with regular inputs from experts.
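A minimal sketch of this iterative, expert-in-the-loop pattern follows; the `expert_review` function is a hypothetical stand-in for the SME step, and uncertainty-based selection is just one way to surface the edge cases described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X_pool, y_true = make_classification(n_samples=2000, n_features=8, random_state=0)

def expert_review(indices):
    """Placeholder for the SME step - ethicists/regulators/domain experts supply labels."""
    return y_true[indices]

# Start from a small expert-labelled seed set instead of a large training corpus
labelled = list(range(20))
model = LogisticRegression(max_iter=1000).fit(X_pool[labelled], expert_review(labelled))

for _ in range(5):
    # Find the model's least-confident predictions - the edge cases to hand back to humans
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    edge_cases = [i for i in np.argsort(uncertainty) if i not in labelled][:20]

    # Experts label the edge cases; the model is refit with their input each round
    labelled.extend(edge_cases)
    model = LogisticRegression(max_iter=1000).fit(X_pool[labelled], expert_review(labelled))
```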


In the long run, enterprises need to build a comprehensive governance structure for AI adoption and data leverage. Such a structure will have to institute explainability norms that factor in criticality of machine decisions, required expertise, and checks throughout the lifecycle of any AI implementation. Humane intelligence and not artificial intelligence systems are required in the world of the future.

We would be happy to hear your thoughts on approaches to AI and XAI. Please reach out to [email protected] for a discussion.

Is It Open Season for RPA Acquisitions? | Blog

Robotic Process Automation (RPA) is a key component of the automation ecosystem and has been a rapidly growing software product category, making it an interesting space for potential acquisitions for a while now. While acquisitions in the RPA market have been happening over the last several years, three major RPA acquisitions have taken place in quick succession over the past few months: Microsoft’s acquisition of Softomotive in May, IBM’s acquisition of WDG Automation in July, and Hyland’s acquisition of Another Monday in August.

These acquisitions highlight a broader trend in which smaller RPA vendors are being acquired by different categories of larger technology market players:

  • Big enterprise tech product vendors like Microsoft and SAP
  • Service providers such as IBM
  • Larger automation vendors like Appian, Blue Prism, and Hyland.

Recent RPA acquisitions timeline:


Why is this happening?

The RPA product market has grown rapidly over the past few years, rising to about US$ 1.2 billion in software license revenues in 2019. The market seems to be consolidating, with some of the larger players continuing to gain market share. As in any such maturing market, mergers and acquisitions are a natural outcome. However, we see multiple factors in the current environment leading to this frenetic uptick in RPA acquisitions:

Acquirers’ perspective – In addition to RPA being a fast-growing market, new category acquirers – meaning big tech product vendors, service providers, and larger automation vendors – see potential in merging RPA capabilities with their own core products to provide more unified automation solutions. These new entrants will be able to build pre-packaged solutions combining RPA with other existing capabilities at lower cost. COVID-19 has created an urgency for broader automation in enterprises, and the ability to offer packaged solutions that provide a quick ROI can be a game-changer in this scenario. Additionally, the adverse impact of the pandemic on the RPA vendors’ revenues, which may have dropped their valuations down to more realistic levels, is making them more attractive for the acquiring parties.

Sellers’ perspective – There is now a general realization in the market that RPA alone is not going to cut it. RPA is the connective tissue, but you still need the larger services, big tech/Systems-of-Record and/or intelligent automation ecosystem to complete the picture. RPA vendors that don’t have the ability to invest in building this ecosystem will be looking to be acquired by larger players that offer some of these complementary capabilities. In addition, investor money may no longer be flowing as freely in the current environment, meaning that some RPA vendors will be looking for an exit.

What can we expect going forward?

The RPA and broader intelligent automation space will continue to evolve quickly, accelerated by the predictable rise in demand for automation and the changes brought on by the new entrants in the space. We expect to see the following trends in the short term:

  • More acquisitions – With the ongoing market consolidation, we expect more acquisitions of smaller automation players – including RPA, Intelligent Document Processing (IDP), process orchestration, Intelligent Virtual Agents (IVA), and process mining players – by the above-mentioned bigger categories as they seek to build more complete transformational solutions.
  • Services imperative – Scaling up automation initiatives is an ongoing challenge for enterprises, with questions lingering around bot license utilization and the ability to fill an automation pipeline. Services that can help overcome these challenges will become more critical and possibly even differentiating in the RPA space, whether the product vendors themselves or their partners provide them.
  • Evolution of the competitive landscape – We expect the market landscape to undergo considerable transformation:
    • In the attended RPA space, while there will be co-opetition among RPA vendors and the bigger tech players, the balance may end up being slightly tilted in favor of the big tech players. Consider, for instance, the potential impact if Microsoft were to provide attended RPA capabilities embedded with its Office products suite. Pure-play RPA vendors, however, will continue to encourage citizen development, as this can unearth low-hanging fruit that can serve as an entry point into the wider enterprise organization.
    • In the unattended RPA space, pure-play RPA vendors will likely have an advantage as they do not compete directly with big tech players and so can invest in solutions across different systems of record. Pure-play RPA vendors might focus their efforts here and form an ecosystem to link in missing components of intelligent automation to provide integrated offerings.

There are several open questions on how some of these dynamics will play out over time. You can expect a battle for the soul (and control) of automation, with implications for all stakeholders in the automation ecosystem. Questions remain:

  • How will enterprises approach automation evolution, by building internal expertise or utilizing external services?
  • How will the different approaches automation vendors are currently following play out – system of record-led versus platform versus best of breed versus packaged solutions?
  • Where will the balance between citizen-led versus centralized automation lie?

Only time will tell how this all plays out.

But in the meantime, we’d love to hear your thoughts. Please share them with us at [email protected], [email protected], and  [email protected].
