With cyberattacks and data breaches at all-time highs, consumers are increasingly skeptical about sharing their data with enterprises, creating a dilemma for artificial intelligence (AI), which needs massive amounts of data to thrive. The nascent technology of federated learning offers healthcare, life sciences, banking, finance, manufacturing, advertising, and other industries an alternative way to unleash the full potential of AI without compromising the privacy of individuals. Read on to learn how you can have all the data you need while protecting consumers.
The seemingly endless series of massive data breaches stripping individuals of their privacy has made the public more aware of the need to protect their data. In the absence of strong governance and guidelines, people are more skeptical than ever about sharing their personal data with enterprises.
This new data-conscious paradigm poses a problem for artificial intelligence (AI), which thrives on huge amounts of data. Unless we can figure out a way to train machines on significantly smaller data sets, protecting the privacy and data of users will remain a key obstacle to intelligent automation.
Federated learning (FL) is emerging as a solution to this problem. Broadly speaking, federated learning is a method of training machine learning models in such a way that user data never leaves its location, keeping it safe and private. This differs from traditional centralized machine learning methods, which require the data to be aggregated in a central location.
Federated learning is a mechanism of machine learning wherein the learning process takes place in a decentralized manner across a network of nodes or edge devices, and the results are aggregated on a central server to create a unified model. It essentially decouples model training from centralized data storage.
By training the same model across an array of devices, each with its own set of data, we get multiple versions of the model, which, when combined, create a more powerful and accurate global version for deployment and use.
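The combining step described above is often implemented as federated averaging (FedAvg): each client trains on its own data, and the server computes a data-size-weighted average of the clients' model parameters. The minimal sketch below simulates this in plain Python; the linear model, client data, and hyperparameters are illustrative assumptions, not part of any particular framework.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data.
    Plain linear regression keeps the sketch self-contained."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: weight each client's model by its local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients whose raw data never leaves "their device"
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # approaches [2.0, -1.0] without pooling raw data
```

Only model parameters cross the network in each round; the per-client datasets stay where they were generated, which is the core privacy property of the approach.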
In addition to training algorithms in a private and secure manner, federated learning provides an array of other benefits.
Based on an Everest Group framework, we have found that federated learning is most suitable, and being adopted at higher rates, in industries where data is an extremely critical asset, is distributed across different locations, and privacy is paramount.
Federated learning is especially beneficial for industries with strict data residency requirements, making healthcare and life sciences perfect candidates for adoption. It can facilitate multi-institution collaborations by helping medical institutions overcome the regulatory hurdles that prevent them from sharing patient data, since models are trained collaboratively without ever pooling the data in a common location.
The next industries ripe for the adoption of federated learning are banking and financial services. For instance, it can be used to develop a more comprehensive and accurate fraud analytics solution based on data from multiple financial entities.
Another industry where we see high applicability of federated learning is manufacturing. By enabling collaboration between different entities across the supply chain, federated learning techniques can help build a more powerful model that increases overall supply chain efficiency.
Federated learning also might find increased use in interest-based advertising. With major internet browsers deciding to disable third-party cookies, marketers are at a loss for targeted advertising and engagement. With federated learning, marketers can replace individual identifiers with cohorts, or group-based identifiers. These cohorts are created by identifying people with common interests based on individual user data, such as browsing habits, stored on local machines.
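One simplified way to form such cohorts on the local machine is a locality-sensitive hash over a user's visited domains (similar in spirit to the SimHash used in Google's FLoC proposal), so that users with similar browsing habits tend to map to the same cohort ID. The sketch below is a toy illustration; the domain names, hash width, and helper functions are hypothetical assumptions.

```python
import hashlib

def domain_vector(domain, bits=16):
    """Deterministically map a domain to a vector of +1/-1 values via its hash."""
    h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
    return [1 if (h >> i) & 1 else -1 for i in range(bits)]

def cohort_id(browsing_history, bits=16):
    """SimHash-style cohort assignment: sum the per-domain sign vectors,
    then threshold each bit. Similar histories yield similar cohort IDs,
    and the raw history never leaves the device."""
    sums = [0] * bits
    for domain in browsing_history:
        for i, v in enumerate(domain_vector(domain, bits)):
            sums[i] += v
    return sum((1 << i) for i, s in enumerate(sums) if s > 0)

# Two users with overlapping interests; each ID is computed locally
user_a = ["cooking.example", "recipes.example", "food.example"]
user_b = ["cooking.example", "recipes.example", "baking.example"]
print(cohort_id(user_a), cohort_id(user_b))
```

Only the short cohort ID would be exposed to advertisers, which is the group-based identifier the paragraph above describes; individual domain visits stay on the local machine.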
Since Google introduced the concept of Federated learning in 2016, there has been a flurry of activity. Given that this is a nascent technology, the ecosystem is currently dominated by big tech and open-source players. We see hyperscalers taking the lead with Microsoft and Amazon Web Services (AWS) making investments to activate Federated learning, followed by Nvidia and Lenovo who are looking at the market from a hardware perspective.
Another segment of players working in this arena are startups that are using Federated learning to build industry-specific solutions. AI companies such as Owkin and Sherpa.ai are pioneering this technology and have developed Federated learning frameworks that are currently operational at a few enterprises’ locations.
The adoption of and need for federated learning depend on the industry and vary with the use case. Everest Group has developed a comprehensive framework to help you assess and understand the suitability of federated learning for your use case in our latest primer on federated learning. The framework is built on four key parameters: data criticality, privacy requirements, regulatory constraints, and data silos/diversity.
Federated learning provides an alternative way to make AI work without compromising the privacy of individuals.
If you are interested in understanding the suitability of federated learning for your enterprise, please share your thoughts with us at [email protected].
In this era of industrialization of Artificial Intelligence (AI), enterprises are scrambling to embed AI across a plethora of use cases in hopes of achieving higher productivity and enhanced experiences. However, as AI permeates different functions of an enterprise, managing the entire charter gets tough. Working with multiple Machine Learning (ML) models in both pilot and production can lead to chaos, stretched time to market, and stale models. As a result, we see enterprises struggling to successfully scale AI enterprise-wide.
To overcome the challenges enterprises face in their ML journeys and ensure successful industrialization of AI, enterprises need to shift from the current method of model management to a faster and more agile format. An ideal solution that is emerging is MLOps – a confluence of ML and information technology operations based on the concept of DevOps.
According to our recently published primer on MLOps, Successfully Scaling Artificial Intelligence – Machine Learning Operations (MLOps), these sets of practices are aimed at streamlining the ML lifecycle management with enhanced collaboration between data scientists and operations teams. This close partnering accelerates the pace of model development and deployment and helps in managing the entire ML lifecycle.
MLOps is modeled on the principles and practices of DevOps. While continuous integration (CI) and continuous delivery (CD) are common to both, MLOps introduces two unique concepts of its own: continuous training (CT), which automatically retrains models as new data arrives, and continuous monitoring (CM), which tracks production models for data drift and performance degradation.
We are witnessing MLOps gaining momentum in the ecosystem, with hyperscalers developing dedicated solutions for comprehensive machine learning management to fast-track and simplify the entire process. Just recently, Google launched Vertex AI, a managed AI platform, which aims to solve these precise problems in the form of an end-to-end MLOps solution.
MLOps bolsters the scaling of ML models by using a centralized system that assists in logging and tracking the metrics required to maintain thousands of models. Additionally, it helps create repeatable workflows to easily deploy these models.
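The centralized logging and tracking described above can be pictured with a toy model registry. This is a hedged sketch of the bookkeeping an MLOps platform automates, not any vendor's API; the class, method, and model names are invented for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: int
    metrics: dict
    stage: str = "staging"  # lifecycle: staging -> production -> archived
    registered_at: float = field(default_factory=time.time)

class ModelRegistry:
    """Central log of model versions and their metrics, the kind of
    record-keeping needed to maintain thousands of models at once."""
    def __init__(self):
        self._records = {}

    def register(self, name, metrics):
        """Log a new version of a model along with its evaluation metrics."""
        version = len(self._records.get(name, [])) + 1
        rec = ModelRecord(name, version, metrics)
        self._records.setdefault(name, []).append(rec)
        return rec

    def promote(self, name, version):
        """Move one version to production, archiving the previous one,
        so deployment becomes a repeatable, auditable workflow."""
        for rec in self._records[name]:
            if rec.stage == "production":
                rec.stage = "archived"
        self._records[name][version - 1].stage = "production"

    def production_model(self, name):
        return next(r for r in self._records[name] if r.stage == "production")

registry = ModelRegistry()
registry.register("fraud-detector", {"auc": 0.91})
registry.register("fraud-detector", {"auc": 0.94})
registry.promote("fraud-detector", 2)
print(registry.production_model("fraud-detector").version)  # 2
```

Real MLOps platforms layer automated retraining, drift alerts, and access control on top of exactly this kind of versioned record.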
Employing MLOps within your enterprise brings a number of additional benefits as well.
Implementing MLOps is complex because it requires a multi-functional and cross-team effort across the key elements of people, process, tools/platforms, and strategy underpinned by rigorous change management.
As enterprises embark on their MLOps journey, a few key best practices can pave the way for a smooth transition.
When embarking on the MLOps journey, there is no one-size-fits-all approach. Enterprises need to assess their goals, examine their current ML tooling and talent, and also factor in the available time and resources to arrive at an MLOps strategy that best suits their needs.
For ML to keep pace with the agility of modern business, enterprises need to start experimenting with MLOps now.
Are you looking to scale AI within your enterprise with the help of MLOps? Please share your thoughts with us at [email protected].
Yesterday’s announcement of Microsoft’s acquisition of Nuance Communications signifies the big tech company’s serious intentions in the US healthcare market.
We’ve been writing about industry cloud and verticalization plays of big technology companies (nicknamed BigTech) for a while now. With the planned acquisition of Nuance Communications for US$19.7 billion, Microsoft has made its most definitive step in the healthcare and verticalization journey.
At a base level, what matters to Microsoft is that Nuance focuses on conversational AI. Over the years, it has become quite the phenomenon among physicians and healthcare providers – 77 percent of US hospitals are Nuance clients. Also, it is not just a healthcare standout – Nuance counts 85 percent of Fortune 100 organizations as customers. Among Nuance’s claims to fame in conversational AI is the fact that it powered the speech recognition engine in Apple’s Siri.
The acquisition is attractive to Microsoft for several reasons.
While other competitors (read Amazon, Salesforce, and Google) were busy launching healthcare-focused offerings in 2020, Microsoft was already helping healthcare providers use Microsoft Teams for virtual physician visits. Also, Microsoft and Nuance are not strangers, having partnered in 2019 to enable ambient listening capabilities for physician-to-EHR record keeping. Microsoft sees a clear opportunity in the US healthcare industry.
It is not without reason that Microsoft launched its cloud for healthcare last year and has followed it up by acquiring Nuance.
Under Nadella, Microsoft has developed a sophisticated sales model that takes a portfolio approach to clients. This has helped Microsoft build a strong positioning beyond its Office and Windows offerings even in healthcare. Most clients in healthcare are already exposed to its Power Apps portfolio and Intelligent Cloud (including Azure and cloud for healthcare) in some form. It is only a question of time (if the acquisition closes without issues) until Nuance becomes part of its suite of offerings for healthcare.
As a rejoinder to our earlier point about head starts, this is where Microsoft has a lead over competitors. Our recent research on the system integrator (SI) ecosystem indicates that Microsoft is head and shoulders above its nearest competitors when it comes to leveraging the SI partnership channel to bring its offerings to enterprises. This can act as a significant differentiator in taking Nuance to healthcare customers, as SI partners can expect favorable terms of engagement.
While augmenting healthcare capabilities and clients is the primary trigger for this purchase, we believe Microsoft also aims to achieve objectives beyond healthcare.
This announcement comes against the background of BigTech and platform companies making significant moves toward industry-specific use cases, which will drive the next wave of client adoption and competitive differentiation. Microsoft’s turnaround and acceleration since Nadella took over as CEO in 2014 are commendable (see the image below). It is on the verge of becoming only the second company to achieve $2 trillion in market capitalization. This move is a bet on its journey beyond the $2 trillion.
With a connected world and connected humans, we are on track for an unprecedented uptick in new data creation. IoT, digitization, and cloud have driven the generation and storage of zettabytes (ZBs) of data each day. Data has become the new oil, but with some caveats: the tap of this oil is controlled by a few organizations globally, making this data asset scarce and expensive. Yet enterprises pursuing digital transformation require this data to derive insights for better decision-making.
The next logical question is how to get hold of this data, which, if utilized to its full potential, has the power to transform enterprises. This is where synthetic data comes into play. Synthetic data is created inorganically rather than generated through actual interactions or events, usually by studying the characteristics of, and relations between, different variables. Three types of synthetic data exist, as shown below.
Exhibit 1: Types of synthetic data
With the cultural shift from gut-based to insights-based decision making and the onset of data literacy initiatives, enterprises require apt insights, which in turn require the generation of huge amounts of data. This makes a strong case for synthetic data.
There are usually three strategies for generating synthetic data, ranging from simplistic techniques to methods heavily infused with AI.
Exhibit 2: Techniques to generate synthetic data
Sampling from a distribution simply means fitting a distribution, such as a normal distribution, and drawing random numbers from it. Agent-based modeling first learns the behavior of the original data; once the characteristics are defined, it creates new data that respects those behavioral constraints. Generative Adversarial Networks (GANs) are synthetic data generation techniques usually used for creating image data. A GAN consists of two deep learning models: a generator and a discriminator. The generator takes random noise as input and produces output images, while the discriminator tries to determine whether each output is fake or real. The closer the generated images come to real ones, the harder they become for the discriminator to distinguish, until the generator’s output is effectively indistinguishable from real data.
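The first of these techniques, sampling from a distribution, is easy to sketch: estimate the parameters of a simple distribution from the real data, then draw as many synthetic rows as needed. The Gaussian assumption and the toy two-feature dataset below are illustrative only.

```python
import numpy as np

def fit_and_sample(real_data, n_samples, seed=0):
    """Sampling-from-distribution synthesis: estimate the mean and
    covariance of the real dataset, then draw synthetic rows from the
    fitted multivariate Gaussian."""
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy "real" dataset: two correlated numeric features
rng = np.random.default_rng(42)
real = rng.multivariate_normal([10.0, 5.0], [[2.0, 1.2], [1.2, 1.5]], size=1000)

# Generate five times as much synthetic data as we have real data
synthetic = fit_and_sample(real, n_samples=5000)
print(synthetic.mean(axis=0))                       # close to the real means
print(np.corrcoef(synthetic, rowvar=False)[0, 1])   # close to the real correlation
```

The synthetic rows preserve the marginal means and the cross-feature correlation of the original data without containing any actual record from it, which is exactly the trade-off this technique offers: statistical fidelity at the aggregate level, not fidelity to individual observations.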
An infinite source of data that mimics the real dataset can provide innumerable opportunities to create test scenarios during development.
Synthetic data can benefit enterprises across domains and industries.
However, the use of synthetic data does not come without its own set of limitations.
Despite these limitations, enterprises should be keen to adopt synthetic data, as it offers an opportunity to disrupt the business landscape by utilizing data and its benefits to full potential. It could prove to be the push AI/ML needed to penetrate enterprises and gain more traction.
If you’ve utilized synthetic data in your enterprise or know about more areas where synthetic data can be advantageous and disadvantageous, please write to us at [email protected] and [email protected]. We’d love to hear your experiences and ideas!
The robotic surgery market has surged over the last decade. According to an article published by the JAMA Network Open in early January 2020, robot-assisted surgical procedures accounted for 15.1 percent of all general surgeries in 2018, up from 1.8 percent in 2012. And the market has grown even more since 2018. For example, the utilization rate of Intuitive Surgical’s da Vinci robot in US hospitals has grown more than 400 percent in the last three years.
To capture their piece of the robotic surgery market pie, other MedTech giants, including Johnson & Johnson (J&J), Medtronic, Stryker, and Zimmer Biomet have turned to acquisitions and strategic partnerships. Stryker paid a whopping US$1.65 billion in 2013 to acquire Mako Surgical Corp. Zimmer Biomet bought Medtech for its Rosa Surgical Robot in 2016 for US$132 million. J&J and Medtronic acquired Orthotaxy and Mazor Robotics, respectively, in 2018. And J&J subsequently bought Auris Health and Verb Surgical in 2019.
With all this money being spent on robotic surgery company acquisitions, it is clear that the MedTech giants intend to fight head-on with one another to build the best surgical robot.
But building the best surgical robot does not assure market leadership. Indeed, robotics is only one aspect of the digital surgery ecosystem. In order to excel in the robotic surgery space, companies need to build solutions that complement their surgical robots with digital technology tools and capabilities.
As you see in the following image, the digital surgery ecosystem consists of imaging, visualization, analytics, and interoperability technologies that enhance the capabilities of surgical robots, enabling companies to unlock the full array of potential benefits robotic surgery has to offer – better precision and control, enhanced surgical visibility, remote surgery, better clinician and patient experiences, etc.
Each of these digital technologies brings its own distinct value to robotic surgery.
Realizing the benefits of digital technologies, MedTech companies are starting to make investments in them to augment their surgical robots. For example, Medtronic in 2020 acquired Digital Surgery, a leader in surgical AI, data and analytics, and digital education and training to strengthen its robotic-assisted surgery platform. Similarly, in 2021, Stryker acquired Orthosensor to enhance its Mako surgical robotics systems with smart sensor technologies and wearables, and Zimmer partnered with Canary Medical to develop smart knee implants. MedTech companies are also starting to change their branding to reflect their move to digital. For example, J&J is positioning its new offerings as digital surgery platforms instead of robotic surgery platforms.
Building specialized robots for different surgical procedures requires either a huge capital investment to acquire such individualized capabilities or extensive resources and time to develop them in-house, making the approach neither feasible nor cost-effective. It would therefore be ideal for MedTech organizations to focus on developing one robot that supports the entire breadth of surgical procedures.
With their history of robotic acquisitions over the last three years, MedTech giants should be looking at integrating multiple point solutions to build a single, connected next-generation digital surgery platform. The following image depicts our vision of a truly connected digital surgery ecosystem built around a digital surgery platform. It ensures interoperability among all types of surgical robots so they can continually learn and evolve by sharing best practices, surgical procedures, and associated patient data.
J&J has already shared its vision and roadmap for building a next-generation digital surgery platform. It brings together robotics, visualization, advanced instrumentation, connectivity, and data analytics to enable its digital surgery platform to improve outcomes across a broad range of disease states. It has announced its plans to integrate its recently unveiled Ottava platform with the Monarch platform it gained from its 2019 acquisition of Auris Health to build a strong position in the digital surgery market.
With MedTech giants in the initial phase of building their next-generation connected digital surgery ecosystems, they will need the right mix of complementary digital technologies to truly scale their impact – alleviating surgeon workloads, driving productivity, enabling personalization, and improving clinical outcomes. Service providers that bring niche talent and a balanced portfolio of engineering and digital services will be partners of choice for MedTech giants on this journey.