Metaverse and ScienceTech: Will These Virtual and Real-world Markets Compete?

The Metaverse is the buzz these days. While it provides an embodied virtual-reality experience, ScienceTech fuses technology and science to solve real problems of humanity. Who will win the battle for relevance, investments, and talent? To learn more about these virtual and real-world market opportunities and what actions technology and service providers should take, read on.

While they once seemed far out, the Metaverse and ScienceTech are here now. As part of our continued Metaverse research, let’s explore these emerging technologies and whether they will collide or coexist.

ScienceTech brings together technology and science to improve the real world by enhancing living standards and improving equality. It combines technology with physical sciences, life sciences, earth sciences, anthropology, geography, history, mathematics, systems, logic, etc.

Meanwhile, the Metaverse is an emerging concept that uses next-generation advanced technologies such as Augmented Reality (AR)/Virtual Reality (VR), digital assets, spatial computing, and commerce to build an immersive, seamless experience.

Over the past few months, the Metaverse has become a hot topic not only in technology circles but also among enterprises. As providers pump billions of dollars into creating the landscape and value realization becomes clearer, the Metaverse will grab increasing attention from enterprises, providers, and market influencers.

Its serious market potential can be seen in industry participants collaborating to define standards for interoperability across Metaverse platforms and ecosystems. Everest Group is witnessing great interest in our Metaverse research, and our recent webinar, Web 3.0 and the Metaverse: Implications for Sourcing and Technology Leaders, generated unprecedented client inquiries.

ScienceTech has been around for many years but has been mostly experimental with limited revenue and growth. Technology and service providers have been reluctant to meaningfully scale this business because of its complexity, significant investment requirements, and high risk of failure.

However, the pandemic has changed priorities for enterprises and individuals, making ScienceTech more critical to solving real-life problems. The cloud, an abundance of data, better manufacturing processes, and a plethora of affordable technologies have lowered the cost of enabling and building these offerings.

Competition between Metaverse and ScienceTech

Below are some of the areas where these two emerging fields could conflict:

  • Relevance

Many cynics have decried the Metaverse as one more fantasy of BigTech trying to take people further away from reality. This cynicism has gained pace in light of the disruptive global pandemic. A make-believe happy world driven by a heavy dose of virtual reality, critics argue, takes humanity’s focus away from the pressing needs of our time.

While not well defined, ScienceTech is generally perceived as distinct from pure-play technology. Some of its ideas have been around for many years, such as device miniaturization, autonomous systems, regenerative medicine, and biosimulation. The core defining principle of ScienceTech is that themes hypothesized, researched, and validated by science are built into solutions through technology. To many, the relevance of ScienceTech may appear far more pressing than the make-believe virtual world of the Metaverse.

  • Investment

The interesting competition will be for investments. Last year, venture capitalists invested over US$30 billion in crypto-related start-ups. As the Web 3.0 and Metaverse tech landscape becomes more fragmented and crowded, investors may not want to put their money into sub-scale businesses. This can help the ScienceTech space, which is not yet well understood by investors but offers a compelling value proposition.

  • Talent

Technology talent is scarce, and ScienceTech talent is even scarcer. Although Metaverse vendors will continue to attract talent because they can pay top dollar, ScienceTech vendors can offer niche talent more purpose and exciting technologies. In the internet’s heyday, people bemoaned that bright minds were busy getting people to click links instead of solving world problems. The Metaverse may face that challenge, and ScienceTech can benefit from this perception. Gen Z job seekers want to work in areas where they can make an impact and change the world, and ScienceTech can provide that forum.

What should technology and service providers do?

Both Metaverse providers and ScienceTech companies will thrive, and they share quite a few technology building blocks, namely edge, cloud, Artificial Intelligence (AI), and data. The two trends will not battle each other; because these markets serve different purposes, the Metaverse and ScienceTech will coexist. Technology and service providers will need to invest in both segments to capture and shape market demand.

Providers need to prioritize where to focus efforts, investments, partnerships, and leadership commitment. A different people strategy will be needed because skilling technology resources on science, and vice versa, will not work. Providers will need to select specific focus areas and hire people from multiple science domains. The R&D group will have to change its composition and focus on science-aligned technology rather than just Information and Communications Technology.

To be successful, providers also will have to find anchor clients to underwrite some offerings, collaborate to gain real-life industry knowledge, and engage with broader ecosystems such as academia, government, and industry bodies to build market-enabling forums.

To learn more about our Metaverse research and discuss your experiences in these emerging areas, contact [email protected] or contact us.

Visit our upcoming webinars and blogs to learn more about upcoming technologies and trends.

Low-code Market Realities: Understanding Common Myths to Avoid Costly Mistakes

Despite their growth, low-code platforms are still surrounded by much confusion. Many enterprises incorrectly believe that real developers don’t need low code, anyone can do it, and it’s only for simple problems. To debunk three common myths in the low-code market, read on.  

With their increasing importance, low-code platforms are also subject to several myths and misunderstandings. As with every evolving technology, enterprises have many questions about using these platforms optimally.

Based on our conversations with multiple enterprises confirming the lack of understanding about the low-code market, we tackle the common misperceptions below:

Myth #1: Low-code platforms are meant for use by citizen developers

The term low code generally evokes the impression of an HR manager who, tired of following up with the IT team multiple times, decides to create a leave approval workflow application. While this impression is not incorrect, professional developers and enterprise IT teams are key stakeholders in the low-code ecosystem as well.

Professional developers increasingly use low-code platforms to improve their efficiency. Some of these platforms can provide code quality alerts and Artificial Intelligence (AI)-powered recommendations, not to mention custom solutions that require minimal tuning.

The built-in DevOps capabilities in these platforms also encourage a culture shift away from the commonly used waterfall model. For example, supply chain management software provider Nimbi significantly reduced its development team from 40 to 24 developers when it switched from traditional platforms to OutSystems.

We strongly believe central IT teams have a meaningful role in the ecosystem in providing effective oversight and governance, in addition to strategizing the use of the best low-code platforms at the enterprise level. In the absence of centralized governance, low-code platforms may proliferate across the organization, aggravating shadow IT issues and increasing spend.

Myth #2: Low-code development does not require technical skills

As much as we may want to believe otherwise, low-code platforms are not a panacea for the ongoing talent crisis. Misleading promises by certain technology vendors have created a common impression that any user can develop any application using low-code platforms. However, low-code development does not imply zero technical skill requirement.

Most low-code platforms enable the extension of their capabilities through traditional programming languages like Java and C#. Off-the-shelf solutions have their limitations, and most applications need custom logic at some point. Typical job descriptions for low-code developer profiles outline technical qualifications like JavaScript, HTML5, and CSS3, alongside Continuous Integration (CI) and Continuous Delivery (CD) pipeline tools like Jenkins.

Thus, it is unrealistic to expect an army of business users to step in and take over all application development-related needs from the IT organization. Low-code development remains a role with a highly demanding skillset across various technologies.

Myth #3: Low code cannot be used for enterprise-grade development

Many enterprise leaders and service providers believe that low-code platforms are only suitable for small-scale department-level needs. However, our conversations indicate that low-code platforms are being rapidly adopted for critical applications used by millions of users. Here are some examples of how low code is solving complex IT problems around the world:

  • A large US commercial insurer built its end-to-end, multi-country, business-critical application that manages claims, billing, and collections on Appian
  • One of the largest consumer goods companies in the world built a huge global application for financial management on Microsoft Power Platform

As adoption of low-code platforms gathers pace, a lot of myths and misunderstandings about low code versus traditional development need to be cleared up. Technology providers and service partners play a key role in helping their clients navigate the abundant options, orchestrate a carefully crafted low-code strategy, and select the best low-code platforms.

At Everest Group, we are closely tracking the low-code market. For more insights, see our compendium report on various platform providers, the state of the low-code market report shedding light on the enterprise adoption journey, and a PEAK Matrix assessment comparing 14 leading players in the low-code market.

To share your thoughts and discuss our low-code market research, please reach out to [email protected], [email protected] or [email protected].

You can also attend our webinar, Building Successful Digital Product Engineering Businesses, to explore how enterprises are investing in next-generation technologies and talent and the most relevant skillsets for digital product engineering initiatives.

Is AI Emotion Detection Ready for Prime Time?

Artificial Intelligence (AI) solutions that aim to recognize human emotions can provide useful insights for hiring, marketing, and other purposes. But their use also raises serious questions about accuracy, bias, and privacy. To learn about three common barriers that need to be overcome for AI emotion detection to become more mainstream, read on.

By using machine learning to mimic human intelligence, AI can execute everything from minimal and repetitive tasks to those requiring more “human” cognition. Now, AI solutions are popping up that go as far as to interpret human emotion. In solutions where AI and human emotion intersect, does the technology help, or deliver more trouble than value?

While we are starting to see emotion detection using AI in various technologies, several barriers to adoption exist, and serious questions arise as to whether the technology is ready to be widely used. AI that aims to interpret or replace human interactions can be flawed because of underlying assumptions made when the machine was trained. Another concern is the broader question of why anyone would want to have this technology used on them. Is the relationship equal between the organization using the technology and the individual on whom the technology is being used? Concerns like these need to be addressed for this type of AI to take off.

Let’s explore three common barriers to emotion detection using AI:

Barrier #1: Is AI emotion detection ethical for all involved?

Newly launched AI-based solutions that track human sentiment for sales, human resources, instruction, and telehealth can help provide useful insights by understanding people’s reactions during virtual conversations.

While talking through the screens, the AI tracks the sentiment of the person, or people, who are taking the information in, including their reactions and feedback. The person being tracked could be a prospective customer, employee, student, patient, etc., where it’s beneficial for the person leading the virtual interaction to better understand how the individual receiving the information is feeling and what they could be thinking.

This kind of AI could be viewed as ethical in human resources, telehealth, or educational use cases because tracking reactions such as fear, concern, or boredom could benefit both the person delivering the information and those receiving it. In this situation, the software could help deliver a better outcome for the person being assessed. However, in few other use cases is it advantageous for everyone involved when one person gains a “competitive advantage” over another in a virtual conversation by using AI technology.

Barrier #2:  Can discomfort and feelings of intrusion with AI emotion detection be overcome?  

This brings us to the next barrier – why should anyone agree to have this software turned on during a virtual conversation? If someone knows the balance of control in a virtual conversation is tilted against them, the AI software comes across as incredibly intrusive. If people must agree to be judged by the AI software in some form or another, many could decline simply because of its invasive nature.

People are becoming more comfortable with technology and what it can do for us; however, people still want to feel like they have control of their decisions and emotions.

Barrier #3: How do we know if the results of emotion detection using AI are accurate?

We put a lot of trust in the accuracy of technology today, yet we don’t always consider how technology develops its abilities. The results of emotion-detecting AI depend heavily on the quality of the inputs used to train it. For example, the technology must account not only for how human emotion varies from person to person but also for the vast differences in body language and non-verbal communication from one culture to another. Users also will want to consider the value and impact of the recommendations that come out of the analysis and whether they drive the intended behaviors.

Getting accurate data from using this kind of AI software could help businesses better meet the needs of customers and employees, and health and education institutions deliver better services. AI can pick up on small nuances that may otherwise be missed entirely and be useful in job hiring and other decision making.

But inaccurate data could alter what would otherwise have been a genuine conversation. Until accuracy improves, users should focus on whether the analytics determine the messages correctly and if overall patterns exist that can be used for future interactions. While potentially promising, AI emotion detection may still have some learning to do before it’s ready for prime time.

Contact us for questions or to discuss this topic further.

Learn more about recent advances in technology in our webinar, Building Successful Digital Product Engineering Businesses. Everest Group experts will discuss the massive digital wave in the engineering world as smart, connected, autonomous, and intelligent physical and hardware products take center stage.

Resilience and Stability in the Workers’ Compensation Industry – This is the Right Time for Claims Transformation to Secure Future Growth

As the workers’ compensation industry emerges from the pandemic, leveraging digital technologies to transform claims handling and taking a customer-centric approach will help carriers maintain profitability. By using automation, analytics, and digitalization, players can differentiate themselves. To learn about the key workers’ compensation trends to pay attention to, read on.

The workers’ compensation industry has remained profitable through the pandemic, with claims severity remaining consistent and frequency continuing to decrease. But reduced net written premiums, low interest rates, and the economic slowdown are creating top-line pressures. Moreover, the sustainability of profits is not guaranteed.

As COVID-19 subsides and most industries return to normal ways of working, the workers’ compensation industry could find it difficult to hold on to gains if it doesn’t chart a dedicated plan to improve productivity, employee experience, and employer mandates to create market differentiation.

Process standardization and simplification are the need of the hour. The workers’ compensation industry must move from the “repent and repair” model to “prevent and prepare” by leveraging business intelligence through an end-to-end real-time data flow across processes to enable a more customer-centric approach toward claims handling.

Currently, efficiency is impacted by a lack of information that results in back-and-forth requests across multiple claims touchpoints. By integrating processes, carriers can obtain real-time data to design standard workflows for Straight-through Processing (STP), build exception handling, fraud detection, and claim reserve calculation capabilities, and reduce the overall claims cycle time.

Challenges to overcome for claims transformation 

In addition to concerns and uncertainty about the long-term effects of the pandemic, the workers’ compensation industry faces the challenge of outdated workflows with heritage issues such as:

  • Lack of information at each node in the claims management process that increases cycle time and leads to poor end-user experience
  • Paper-based processes that are roadblocks to enabling a virtual ecosystem consisting of digital payments, paperless documents, e-signature, e-inspection, and sundry processes
  • Cumbersome manual functions that should be automated
  • Lack of a framework for standardizing processes and segregating functions, forcing items that require different handling, such as objectionable and non-objectionable items, through the same workflow
  • Claims not being linked to risk assessment and reporting, which impacts new business and renewals and the mapping of profitable and loss-making segments
  • Inability to benchmark claims, assess claims performance, and understand market impact

To continue growing, workers’ compensation industry players should look to implement digital transformation and optimize processes to reduce claims turnaround time. Carriers that focus on digital solutions and leverage data through automation and analytics will successfully pivot for the future.

Traditional claims versus digital claims

Exhibit 1: Everest Group

Three workers’ compensation industry trends to watch

1 – The role of automation

Workers’ compensation claims involve workflows requiring minimal manual intervention, where automation can work as an enabler, providing numerous benefits such as:

  • Enhanced user experience through conversational Artificial Intelligence (AI) for First Report of Injury (FROI)
  • Improved data validation and elimination of human error, enabling early-fraud detection and reduction in leakages
  • Automated claims routing for risk assessment through SOPs for STP, exceptions, and large claims
  • Auto approval of bills based on claim and treatment parameters, shortening handling time (a simple sketch of this logic follows this list)
  • Better claims capacity, a reduced backlog of open claims, faster adjudication, and faster return-to-work solutions
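
To make the auto-approval idea concrete, here is a minimal, hypothetical sketch of parameter-based bill approval. The fields, treatment whitelist, and threshold are illustrative assumptions, not any carrier’s actual rules:

```python
# Hypothetical parameter-based auto-approval of a workers' compensation bill.
# Field names, whitelist, and threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Bill:
    amount: float
    treatment_code: str
    claim_is_open: bool
    within_fee_schedule: bool

APPROVED_TREATMENTS = {"PT-BASIC", "XRAY", "OFFICE-VISIT"}  # assumed whitelist
AUTO_APPROVE_LIMIT = 1500.00                                # assumed threshold

def auto_approve(bill: Bill) -> bool:
    """Return True when a bill can skip manual adjudication."""
    return (
        bill.claim_is_open
        and bill.within_fee_schedule
        and bill.treatment_code in APPROVED_TREATMENTS
        and bill.amount <= AUTO_APPROVE_LIMIT
    )

# Example: a routine physiotherapy bill goes straight through
print(auto_approve(Bill(320.0, "PT-BASIC", True, True)))  # True
```

In practice, bills that fail these checks would fall into the exception-handling queue rather than being rejected outright.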

Digital intake can remove friction and deliver a captivating experience for all stakeholders. By focusing on automation, workers’ compensation carriers will not only improve operational efficiency but also reduce operational costs – resulting in bottom-line benefits.

2 – Advancement in analytics

By adopting enhanced analytical capabilities, workers’ compensation carriers can increase their focus on the end-user experience and take a more proactive, prevention-based approach. Here are some ways this can be done:

  • Predictive and prescriptive analytics for insights on safety parameters will help prevent accidents and injuries
  • Predicting risk upon claim intake will smoothen the claim cycle for all stakeholders
  • Auto assignment of claims to an adjuster with relevant experience for backend issues
  • Individual and aggregated claims-based rules with persona-specific dashboards for different injury types
  • Implementing Key Performance Indicators (KPIs) to assess analytical potential

Advancement in analytics with proper predictive modeling techniques will reduce claims costs and severity, which, in turn, will deliver enhanced profitability.

3 – Digital ecosystem and a safe workplace

The evolution of workers’ compensation claims in the future will depend largely on the assessment of intake efficiency, moving away from redundant processes, and instituting digital and data-led workflows. Technology usage will not only depend on the number of datasets with a carrier but also on the value generated to create new models for transforming the entire claims solution.

The cornerstone for transformation should be prevention and preparedness. With many organizations choosing to operate in a hybrid working model post-pandemic, it is imperative to assess the long-term impact of such changes. Internet of Things (IoT) and telematics can help achieve this through initiatives such as:

  • Smart workplaces with sensors, cameras, and other intelligent devices for continuous supervision
  • Digital collaborations for safety training and mechanisms with loss controllers
  • Wearable devices for loss prevention and control
  • Creating digital safety communities

With the pandemic pushing workforces to stay at home, telemedicine has gained popularity, providing employees with medical consultations and reducing away-from-work time. It offers immediate care and a faster inquiry process by making expert medical professionals available. Telemedicine has also helped promote claims advocacy and assisted in intake and triage through digital authorizations and workflow development for prioritizing claim categories.

In the end, employers want stability and predictability in final claim costs. Regardless of how these macro trends affect the workers’ compensation industry, the focus should be on creating a safe workplace and taking all measures to prevent injuries and hazards.

To discuss workers’ compensation industry trends, reach out to Abhimanyu Awasthi at [email protected], Somya Bhadola at [email protected], or contact us.

Workplace leaders are also now able to focus more on creating an experience-centric workplace through digital technologies, delivering superior user performance and job satisfaction. Learn more in our webinar, Top Strategies for Creating an Employee-focused Digital Workplace.

The Rise of Machine Learning Operations: How MLOps Can Become the Backbone of AI-enabled Enterprises

We’ve seen enterprises developing and employing multiple disparate AI use cases. But to become a truly AI-enabled enterprise, many standalone use cases need to be developed, deployed, and maintained to solve different challenges across the organization. Machine Learning Operations (MLOps) promises to let enterprises seamlessly leverage the power of AI without the hassle.

Everest Group is launching its MLOps Products PEAK® Matrix Assessment 2022 to gain a better understanding of the competitive service provider landscape. Discover how you can be part of the assessment.

Learn how to participate

With the rise in digitization, cloud, and internet of things (IoT) adoption, our world generates petabytes of data every day that enterprises want to mine to gain business insights, drive decisions, and transform operations.

Artificial Intelligence (AI) and Machine Learning (ML) insights can help enterprises gain a competitive edge but come with developmental and operational challenges. Machine Learning Operations (MLOps) can provide a solution. Let’s explore this more.

While tools for analyzing historical data to gain business insights have become well-adopted and easier to use, using this information to make predictions or judgment calls is a different ball game altogether.

Tools that deliver these capabilities, built on programming languages such as Python, SAS, and R, are known as data science or Machine Learning (ML) tools. Popular frameworks and environments include TensorFlow, PyTorch, and Jupyter.

Over the past decade, these tools have gained traction and have emerged as attractive options to develop predictive use cases by leveraging vast amounts of data to assist employees in making decisions and delivering consistent outcomes. As a result, enterprises can scale processes without proportionately increasing employee headcount.

Machine Learning differs from traditional IT initiatives in that it does not take a one-size-fits-all approach. Earlier, data-science implementation teams operated in silos, worked on different business processes, and leveraged disparate development tools and deployment techniques with limited adherence to IT policies.

While the promised benefits are real, replicating them across geographies, functions, customer segments, and distribution channels, each with its own nuances, called for a customized approach across these categories.

This led to a plethora of specialized models that individual business teams had to keep track of, as well as significant infrastructure and deployment costs.

Advances in ML have since driven software providers to offer approaches to democratize model development, making it possible to now create custom ML models for different processes and contexts.

MLOps to the rescue

In today’s world, developing multiple models that serve different purposes is less challenging. Enterprises that want to successfully become AI-enabled and deploy AI at scale need to equip individual business teams with model deployment and monitoring capabilities.

As a result, software vendors have started offering a DevOps-style approach to centralize and support the deployment requirements of a vast number of ML models, with individual teams focusing only on developing models best suited to their requirements.

This rising methodology, called MLOps, is a structured approach to scaling ML across organizations that brings together the skills, techniques, and tools used in data engineering and machine learning.

What’s needed to make it work

Technical Capabilities Required for MLOps

MLOps assists enterprises in decoupling the development and operational aspects in an ML model’s lifecycle by bringing DevOps-like capabilities into operationalizing ML models. Technology vendors are offering MLOps to enterprises in the form of licensable software with the following capabilities:

  • Model deployment: In this stage, the ability to deploy models on any infrastructure is key. Other features include storing an ML model in a containerized environment and options to scale compute resources
  • Model monitoring: Tracking the performance of models in production is complex and requires a carefully designed set of performance metrics. As soon as models start showing signs of declining prediction accuracy, they are sent to the development team for review/retraining (a simple sketch of this idea follows this list)
  • Collaboration and platform management: MLOps solutions offer platform-related features such as security, access control, version control, and performance measurement to enhance reusability and collaboration among various stakeholders, including data engineers, data scientists, ML engineers, and central IT functions
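
As a minimal sketch of that monitoring idea, assuming a simple rolling-accuracy check (commercial MLOps products track far richer metrics and automate the hand-off to the development team):

```python
# Rolling-accuracy monitor that flags a deployed model for review/retraining.
# Baseline, window size, and tolerance are illustrative assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline              # accuracy measured at deployment
        self.tolerance = tolerance            # allowed drop before flagging
        self.outcomes = deque(maxlen=window)  # rolling record of hits/misses

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self) -> bool:
        # Flag once the window is full and live accuracy has decayed past tolerance
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.accuracy() < self.baseline - self.tolerance
```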

Additionally, MLOps vendors offer support for multiple Integrated Development Environments (IDEs) to promote the democratization of the model development process.

While various vendors offer built-in ML development capabilities within their solutions, connectors are being developed and integrated to support a wide array of ML model file formats.

Additionally, the overall ML lifecycle management ecosystem is rapidly converging, with vendors developing end-to-end ML lifecycle capabilities, either in-house or through partner integrations.

MLOps can promote rapid innovation through robust machine learning lifecycle management, increase productivity, speed, and reliability, and reduce risk – making it a methodology to pay attention to.

Everest Group is launching its MLOps Products PEAK® Matrix Assessment 2022 to gain a better understanding of the competitive landscape. Technology providers can now participate and receive a platform assessment.

Learn how to participate

To share your thoughts on this topic, contact [email protected] and [email protected].

Federated Learning: Privacy by Design for Machine Learning

With cyberattacks and data breaches at all-time highs, consumers are increasingly skeptical about sharing their data with enterprises, creating a dilemma for artificial intelligence (AI), which needs massive data to thrive. The nascent technology of federated learning offers a promising alternative for healthcare, life sciences, banking, finance, manufacturing, advertising, and other industries to unleash the full potential of AI without compromising the privacy of individuals. To learn how you can have all the data you need while protecting consumers, read on.

Privacy preservation with federated learning

The endless stream of massive data breaches that have stripped individuals of their privacy has made the public more aware of the need to protect their data. In the absence of strong governance and guidelines, people are more skeptical than ever about sharing their personal data with enterprises.

This new data-conscious paradigm poses a problem for artificial intelligence (AI), which thrives on huge amounts of data. Unless we can figure out a way to train machines on significantly smaller data sets, protecting the privacy and data of users will remain a key obstacle to intelligent automation.

Federated learning (FL) is emerging as a solution to this problem. Broadly speaking, federated learning is a method of training machine learning models in such a way that user data never leaves its location, keeping it safe and private. This differs from traditional centralized machine learning methods, which require the data to be aggregated in a central location.

Federated learning is a mechanism of machine learning wherein training takes place in a decentralized manner across a network of nodes/edge devices, and the results are aggregated on a central server to create a unified model. It essentially decouples the activity of model training from centralized data storage.

The Mechanism of Federated Learning

By training the same model across an array of devices, each with its own set of data, we get multiple versions of the model, which, when combined, create a more powerful and accurate global version for deployment and use.
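
A minimal sketch of that aggregation step, loosely based on the federated averaging (FedAvg) idea; production frameworks add client sampling, secure aggregation, and communication handling on top of this:

```python
# Combine locally trained model weights into one global model, weighting each
# client by how many examples it trained on (a FedAvg-style average).
import numpy as np

def federated_average(client_weights, client_sample_counts):
    """client_weights: one list of np.ndarray layers per client."""
    global_weights = []
    for layers in zip(*client_weights):   # iterate layer by layer across clients
        stacked = np.stack(layers)        # shape: (n_clients, ...)
        global_weights.append(
            np.average(stacked, axis=0, weights=client_sample_counts)
        )
    return global_weights

# Example: two clients, one layer each; the data-rich client dominates
w = federated_average(
    [[np.array([1.0, 1.0])], [np.array([3.0, 3.0])]],
    client_sample_counts=[100, 300],
)
print(w)  # [array([2.5, 2.5])]
```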

In addition to training algorithms in a private and secure manner, Federated learning provides an array of other benefits such as:

  • Training across data silos
  • Training on heterogeneous data
  • Lower communication costs
  • Reduced liability

Federated learning applicability and use cases

Based on an Everest Group framework, we have found federated learning is most suitable, and being adopted at higher rates, in industries where data is an extremely critical asset distributed across different locations and privacy is paramount.

Federated learning is especially beneficial for industries that have strict data residency requirements, making the healthcare and life sciences industries perfect candidates for its adoption. Federated learning can facilitate multi-institution collaborations by helping medical institutions overcome the regulatory hurdles that prevent them from pooling patient data in a common location.

The next industry ripe for the adoption of federated learning is banking and financial services. For instance, it can be used to develop a more comprehensive and accurate fraud analytics solution based on data from multiple financial entities.

Another industry where we see high applicability of federated learning is manufacturing. By enabling collaboration between different entities across the supply chain, federated learning techniques make it possible to build a more powerful model that can increase overall supply chain efficiency.

Federated learning also might find increased use in interest-based advertising. With major internet browsers deciding to disable third-party cookies, marketers are at a loss for targeted advertising and engagement. With federated learning, marketers can replace individual identifiers with cohorts, or group-based identifiers. These cohorts are created by identifying people with common interests based on individual user data, such as browsing habits, stored on local machines.
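
As a hypothetical illustration of the cohort idea (not Google’s actual FLoC algorithm), users could be represented by locally computed topic-interest vectors and grouped by clustering, so advertisers see only a coarse cohort ID, never the raw browsing history:

```python
# Assign interest cohorts from per-user topic-weight vectors via clustering.
# The data here is synthetic; in a federated setting the vectors would be
# computed on-device and only the cohort ID would be exposed.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
topic_weights = rng.random((1000, 8))     # 1,000 users x 8 topic scores
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
cohorts = kmeans.fit_predict(topic_weights)
print(cohorts[:5])                        # each user exposes only a cohort ID
```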

An ecosystem on the rise

Since Google introduced the concept of federated learning in 2016, there has been a flurry of activity. Given that this is a nascent technology, the ecosystem is currently dominated by big tech and open-source players. We see hyperscalers taking the lead, with Microsoft and Amazon Web Services (AWS) making investments to activate federated learning, followed by Nvidia and Lenovo, which are looking at the market from a hardware perspective.

Another segment of players working in this arena is startups using federated learning to build industry-specific solutions. AI companies such as Owkin and Sherpa.ai are pioneering this technology and have developed federated learning frameworks that are currently operational at a few enterprise locations.

The adoption of and need for federated learning depend on the industry and vary with the use case. Everest Group has developed a comprehensive framework to help you assess and understand the suitability of federated learning for your use case in our Latest Primer for Federated Learning. The framework is built on four key parameters: data criticality, privacy requirement, regulatory constraint, and data silo/diversity.

Federated learning provides an alternative way to make AI work without compromising the privacy of individuals.

If you are interested in understanding the suitability of federated learning for your enterprise, please share your thoughts with us at [email protected].

Recharge Your AI Initiatives with MLOps: Start Experimenting Now

In this era of industrialization for Artificial Intelligence (AI), enterprises are scrambling to embed AI across a plethora of use cases in hopes of achieving higher productivity and enhanced experiences. However, as AI permeates different functions of an enterprise, managing the entire charter gets tough. Working with multiple Machine Learning (ML) models in both pilot and production can lead to chaos, stretched timelines to market, and stale models. As a result, we see enterprises hamstrung in their efforts to scale AI enterprise-wide.

MLOps to the rescue

To overcome the challenges they face in their ML journeys and ensure successful industrialization of AI, enterprises need to shift from the current method of model management to a faster, more agile format. An ideal solution that is emerging is MLOps – a confluence of ML and information technology operations based on the concept of DevOps.

According to our recently published primer on MLOps, Successfully Scaling Artificial Intelligence – Machine Learning Operations (MLOps), this set of practices is aimed at streamlining ML lifecycle management through enhanced collaboration between data scientists and operations teams. This close partnering accelerates the pace of model development and deployment and helps in managing the entire ML lifecycle.

MLOps is modeled on the principles and practices of DevOps. While continuous integration (CI) and continuous delivery (CD) are common to both, MLOps introduces the following two unique concepts:

  • Continuous Training (CT): Seeks to automatically and continuously retrain ML models based on incoming data
  • Continuous Monitoring (CM): Aims to monitor the performance of the model in terms of its accuracy and drift (a simple sketch of CM triggering CT follows this list)
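
As a minimal sketch of how CM can feed CT, assuming a simple population stability index (PSI) check on one feature; real platforms wire this into automated evaluation and training pipelines rather than a standalone function:

```python
# Compare the live feature distribution against the training distribution and
# trigger retraining when drift exceeds a threshold. Thresholds are assumptions.
import numpy as np

def psi(expected, actual, bins: int = 10) -> float:
    """Population stability index between training and live samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def monitor_and_retrain(train_sample, live_sample, retrain_fn, threshold=0.2):
    # PSI above ~0.2 is a common rule of thumb for significant drift
    if psi(train_sample, live_sample) > threshold:
        retrain_fn()  # Continuous Training: retrain on fresh data
```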

We are witnessing MLOps gaining momentum in the ecosystem, with hyperscalers developing dedicated solutions for comprehensive machine learning management to fast-track and simplify the entire process. Just recently, Google launched Vertex AI, a managed AI platform, which aims to solve these precise problems in the form of an end-to-end MLOps solution.

Advantages of using MLOps

MLOps bolsters the scaling of ML models by using a centralized system that assists in logging and tracking the metrics required to maintain thousands of models. Additionally, it helps create repeatable workflows to easily deploy these models.
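
As one illustrative option for such centralized logging and tracking (the blog does not name a specific tool), the open-source MLflow tracking API shows what this looks like in practice:

```python
# Log a model's configuration and performance to a central tracking server so
# thousands of runs and models can be compared and audited in one place.
import mlflow

with mlflow.start_run(run_name="churn-model-v3"):
    mlflow.log_param("algorithm", "xgboost")   # record configuration
    mlflow.log_metric("val_auc", 0.87)         # record performance
    # mlflow.sklearn.log_model(model, "model") # optionally register the artifact
```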

Below are a few additional benefits of employing MLOps within your enterprise:

  • Repeatable workflows: Saves time and allows data scientists to focus on model building, thanks to the automated workflows for training, testing, and deployment that MLOps provides. It also aids in creating reproducible ML workflows that accelerate the operationalization of models
  • Better governance and regulatory compliance: Simplifies the process of tracking changes made to the model to ensure compliance with regulatory norms for particular industries or geographies
  • Improved model health: Helps continuously monitor ML models across parameters such as accuracy, fairness, bias, and drift to keep the models in check and ensure they meet thresholds
  • Sustained model relevance and RoI: Keeps the model relevant through regular retraining on new incoming data, keeping it up to date and providing a sustained Return on Investment (RoI)
  • Increased experimentation: Spurs experimentation by tracking multiple versions of models trained with different configurations, leading to improved variations
  • Trigger-based automated re-training: Helps set up automated re-training of the model based on fresh batches of data or certain triggers such as performance degradation, plateauing, or significant drift

Starting your journey with MLOps

Implementing MLOps is complex because it requires a multi-functional and cross-team effort across the key elements of people, process, tools/platforms, and strategy underpinned by rigorous change management.

As enterprises embark on their MLOps journey, here are a few key best practices to pave the way for a smooth transition:

  • Build a cross-functional team – Engage team members from the data science, operations, and business front with clearly defined roles to work collaboratively towards a single goal
  • Establish common objectives – Set common goals for the cross-functional team to cohesively work toward, realizing that each of the teams that form an MLOps pod may have different and competing objectives
  • Construct a modular pipeline – Take a modular approach instead of a monolithic one when building MLOps pipelines since the components built need to be reusable, composable, and shareable across multiple ML pipelines
  • Select the right tools and platform – Choose from a plethora of tools that cater to one or more functions (management, modeling, deployment, and monitoring) or from platforms that cater to the end-to-end MLOps value chain
  • Set baselines for monitoring – Establish baselines for automated execution of particular actions to increase efficiency and ensure model health in addition to monitoring ML systems

When embarking on the MLOps journey, there is no one-size-fits-all approach. Enterprises need to assess their goals, examine their current ML tooling and talent, and also factor in the available time and resources to arrive at an MLOps strategy that best suits their needs.

For ML to keep pace with the agility of modern business, enterprises need to start experimenting with MLOps now.

Are you looking to scale AI within your enterprise with the help of MLOps? Please share your thoughts with us at [email protected].

Internet of Things Will Connect the Supply Chain in the “Next Normal”

Imagine a utopia where minimum human intervention is needed to run an entire shop floor. In this world, manufacturers have total control and visibility of all products, machines predict equipment failures and correct them, shelves count inventory, and customers check themselves out. While such a supply chain model seems improbable and far into the future, the likes of Amazon, Walmart, and Toyota are already on their way to achieving this vision. At the center of the supply chain initiatives making this possible is the Internet of Things (IoT).

The supply chain is considered the backbone of a successful enterprise. However, firms find it increasingly challenging to establish a robust supply chain model. The disruptions caused by COVID-19 have made matters worse as ‘disconnected enterprises’ struggle to gain complete supply chain visibility. The pandemic has established that supply chain disruptions and uncertainties will become more frequent going forward.

Supply chain challenges

The current supply chain landscape faces numerous challenges that need to be addressed.  These issues are illustrated below:

Exhibit: Challenges in the current supply chain

Future-proofing the supply chain using IoT

As enterprises strive to develop a resilient supply chain, IoT will occupy center stage. An interconnected supply chain will bring together suppliers/vendors, logistics providers, manufacturers, wholesalers and retailers, and customers dispersed by geography. The technology ensures improved efficiency, better risk management, end-to-end visibility, and enhanced stakeholder experience.

A seamlessly connected supply chain provides advantages at every stage of the value chain for each of the stakeholders. The exhibit below showcases a connected supply chain ecosystem:

Exhibit: Connected ecosystem for the supply chain

Let’s look at how some companies are capturing the benefits of IoT:

  • Real-time location tracking

Using real-time data (captured from GPS coordinates) tracking the movement of raw materials/finished goods, IoT technology aids firms in determining where and when products get delayed. This helps managers ensure route optimization and better plan the delivery schedule. IoT, in combination with blockchain, helps secure the products against fraud. For example, Novo Surgical leverages IoT for optimally tracking and tracing its ‘smart surgical instruments.’ This has reduced errors, decreased surgical instrument loss, increased visibility and efficiency, and improved forecasting of demand for the firm.

  • Equipment monitoring

Sensors on machines constantly collect information around the functioning of the machine, enabling managers to monitor them in real time. By analyzing parameters such as machine temperature, vibration, etc., manufacturers can better predict machine downtime and take necessary actions to mitigate this. For instance, Toyota partnered with Hitachi to leverage the vendor’s IoT platform and use the data collected to reduce unexpected machine failures and improve the reliability and efficiency of equipment.

  • Smart inventory management

IoT sensors in the warehouse assist in tracking the movement of individual items, providing an efficient way to monitor inventory levels and prevent pilferage. Smart shelves contain weight sensors that monitor the product weight to determine when products are out of stock. Walmart has been leveraging smart shelves in its retail stores to manage its products more efficiently and improve the shopping experience.
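
A minimal, hypothetical sketch of the smart-shelf logic described above: estimate units on a shelf from a weight-sensor reading and flag restocking. The sensor values, unit weights, and thresholds are illustrative assumptions:

```python
# Infer shelf stock from a weight sensor and flag when restocking is needed.
# Unit weight and minimum-stock threshold are illustrative assumptions.
def units_on_shelf(measured_weight_g: float, unit_weight_g: float) -> int:
    return max(0, round(measured_weight_g / unit_weight_g))

def needs_restock(measured_weight_g, unit_weight_g, min_units: int = 3) -> bool:
    return units_on_shelf(measured_weight_g, unit_weight_g) < min_units

# Example: a shelf of 450 g items reporting 980 g holds ~2 units -> restock
assert needs_restock(980, 450)  # 980/450 ~= 2.2 -> 2 units < 3
```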

  • Warehouse management

IoT sensors can monitor and adjust warehouse parameters such as humidity, temperature, and pressure to avoid spoilage of items. Leading e-commerce players like Amazon and Alibaba have been pioneers in leveraging IoT to optimize warehouse management.

Charting the journey for a connected supply chain

As enterprises aim to future-proof their supply chains, they will need a structured path following the five steps below:

  1. Develop a business case: Enterprises need to determine the current gaps in their supply chain and assess its extent of digitization to develop the business case for a connected supply chain.
  2. Secure buy-in from supply partners: Successful implementation of IoT in the supply chain requires the various partners to collaborate and adopt the technology together. Securing a buy-in from each member of the value chain – vendors/suppliers, OEM players, logistics operators, and retailers – is imperative for firms to realize the complete benefits. Compatibility of the technology platforms leveraged by the various supply partners is essential to develop a seamless supply chain.
  3. Invest in security: Invest in security and data protection initiatives early on to avoid supply chain breaches. Performing regular security and vulnerability assessments across the value chain and investing in next-generation technology-based security solutions is essential.
  4. Leverage other technologies: While IoT has a plethora of benefits across the supply chain, consider leveraging next-generation technologies such as blockchain, artificial intelligence, and edge computing in confluence with IoT to further enhance the capabilities of the use cases.
  5. Partner for implementation: To overcome concerns around skills and address data reconciliation challenges, consider partnering with IoT providers with expertise in the supply chain arena. Service/solution providers also are instrumental in bringing a security layer that can aid in addressing data security concerns and governance issues.

Since IoT is an interplay of multiple devices and machines, a successful IoT implementation requires firms to invest in sensors, cloud/edge infrastructure, IoT connectivity networks, data management and analytics solutions, and application development and management. Enterprises can accelerate their IoT supply chain journeys by partnering with solution providers with strong expertise in IoT products and services capabilities in the supply chain arena.

Are you embarking on your connected supply chain journey? Please share your thoughts and experiences with us at [email protected] and [email protected].

Democratization: Artificial Intelligence for the People and by the People

Enterprises have identified Artificial Intelligence (AI) as a quintessential enabling technology in the success of digital transformation initiatives to further increase operational efficiency, improve employee productivity, and deliver enhanced stakeholder experience. According to our recent research, Artificial Intelligence (AI) Services – State of the Market Report 2021 | Scale the AI Summit Through Democratization, more than 72 percent of enterprises have embarked on their AI journey. This increased AI adoption is leading us into an era of industrialization of AI.

Talent is the biggest impediment in scaling AI

Enterprises face many challenges in their AI adoption journey, including rising concerns of privacy, lack of proven Return on Investment (RoI) for AI initiatives, increasing need for explainability, and an extreme crunch for skilled AI talent. According to Everest Group’s recent assessment of the AI services market, 43 percent of the enterprises identified limited availability of skilled, mature, and niche AI talent as one of the biggest challenges they face in scaling their AI initiatives.

Lack of skilled AI talent

Enterprises face this talent crunch in using both the open-source ecosystem and hyperscalers’ AI platforms for the reasons below:

  • High demand for open-source machine learning libraries such as TensorFlow, scikit-learn, and Keras due to their ability to let users leverage transfer learning
  • Low project readiness of certified talent across platforms such as SAP Leonardo, Salesforce Einstein, Amazon SageMaker, Azure Machine Learning, and Microsoft Cognitive Services due to lack of domain knowledge and industry contextualization

As per our research in the talent readiness assessment, a large demand-supply gap exists for AI technologies (approximately 25 to 30 percent), hindering an enterprise’s ability to attract and retain talent.

In addition to this technical talent crunch, another aspect where enterprises struggle to find the right talent is the non-technical facet of AI that includes roles such as AI ethicists, behavioral scientists, and cognitive linguists.

As more and more enterprises adopt AI, this talent challenge is exacerbated just as demand for AI technologies skyrockets. There is an ongoing tussle for AI talent among academia, big tech, and enterprises, and, so far, the big techs are coming out on top. They have successfully recruited huge amounts of AI talent, leaving a drying pool for the rest of the enterprises to fish in.

Democratization to overcome the talent problem

We see democratization as a potential solution to overcome this expanding talent gap. As we define it, democratization is primarily concerned with making AI accessible to a wider set of users, specifically non-technical business users. The principle behind the concept of democratization is “AI for all.”

Democratization has to do with educating business users in the basic concepts of data and AI and giving them access to the data and tools that can help build a larger database of AI use-cases, develop insights, and find AI-based solutions to their problems.

Enterprises can leverage Everest Group’s four-step democratization framework to help address talent gaps within the enterprise and empower its employees. Here are the steps to guide a successful democratization initiative:

  • Data democratization: The first step of AI democratization is enabling data access to business users throughout the organization. This will help familiarize them with the data structures and interpret and analyze the data
  • Data and AI literacy: The next step is embracing initiatives to help business users build general knowledge of AI, understand the implications of AI systems, and successfully interact with them
  • Self-service low-code/no-code tools: Organizations should also invest in tools that provide pre-built components and building blocks in a drag-and-drop fashion to help business users deploy ML models without having to write extensive code
  • Automation-enabled machine learning (ML): Lastly, enterprises should use automated machine learning (AutoML) to automate ML workflows involving some or all of the steps of the model training process, such as feature engineering, feature selection, algorithm selection, and hyperparameter optimization (a simple illustration follows this list)
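
As a simple illustration of the AutoML idea in the last step, the sketch below uses scikit-learn’s GridSearchCV to search over candidate algorithms and hyperparameters automatically; commercial AutoML products automate far more, including feature engineering and ensembling:

```python
# Automatically select an algorithm and its hyperparameters instead of
# hand-tuning, using a small grid search as a stand-in for AutoML.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)
pipe = Pipeline([("model", LogisticRegression(max_iter=1000))])
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1, 10]},
    {"model": [RandomForestClassifier()], "model__n_estimators": [50, 200]},
]
search = GridSearchCV(pipe, search_space, cv=5)
search.fit(X, y)
print(search.best_params_)  # the automatically selected algorithm + settings
```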

By following these steps, democratization can help reduce the barriers to entry for AI experimentation, accelerate enterprise-wide adoption, and speed up in-house innovation, among other benefits.

Current state of democratization

The industry as a whole is now in the initial stages of AI democratization, which is heavily focused on data and AI literacy initiatives. Some of the more technologically advanced or well-versed enterprises have been early adopters. The exhibit below presents the current market adoption of the four key elements of democratization and a few industry examples:

Exhibit: Current market adoption of the four key elements of democratization

Democratization is essential

As part of their democratization efforts, enterprises must also focus on contextualization, change management, and governance to ensure responsible and successful democratization.

By doing this, companies will not only help solve the persistent AI talent crunch but also ensure faster time to market, empower business users, and increase employee productivity. Hence, democratization is an essential step to ensuring the sustainable, inclusive, and responsible adoption of AI.

What have you experienced in your democratization journey? Please share your thoughts with us at [email protected] or at [email protected].

Microsoft Goes All in on Industry Cloud and AI with $20 Billion Nuance Deal

Yesterday’s announcement of Microsoft’s acquisition of Nuance Communications signifies the big tech company’s serious intentions in the US healthcare market.

We’ve been writing about industry cloud and verticalization plays of big technology companies (nicknamed BigTech) for a while now. With the planned acquisition of Nuance Communications for US$19.7 billion, Microsoft has made its most definitive step in the healthcare and verticalization journey.

At a base level, what matters to Microsoft is that Nuance focuses on conversational AI. Over the years, it has become quite the phenomenon among physicians and healthcare providers – 77 percent of US hospitals are Nuance clients. Also, it is not just a healthcare standout – Nuance counts 85 percent of Fortune 100 organizations as customers. Among Nuance’s claims to fame in conversational AI is the fact that it powered the speech recognition engine in Apple’s Siri.

Why Did Microsoft Acquire Nuance?

The acquisition is attractive to Microsoft for the following reasons:

  1. Buy versus build: If Microsoft (under Satya Nadella) can trust itself to build a capability swiftly, it will never buy. Last year, when we wrote about Salesforce’s acquisition of Slack, we highlighted how Microsoft pulled out of its intent to acquire Slack in 2016 and launched Teams within a year. Could Microsoft have built and scaled a speech recognition AI offering?
  2. Conversational AI: Microsoft’s big three competitors – Amazon, Apple, and Google – have a significant head start in speech recognition, the only form of AI that has gone mainstream and is likely to be a US$30 billion market by 2025. Clearly, with mature competition, this was not going to be as easy as “Alexa! Cut slack, build Teams” for Nadella
  3. Healthcare: This is another battleground for which Microsoft has been building up an arsenal. As the US continues to expand on its $3 trillion spend on healthcare, Microsoft wants a share of this sizeable market. That is why it makes sense to peel the healthcare onion a bit more

 

What Role Does Microsoft Want to Play in Healthcare?

While other competitors (read: Amazon, Salesforce, and Google) were busy launching healthcare-focused offerings in 2020, Microsoft was already helping healthcare providers use Microsoft Teams for virtual physician visits. Also, Microsoft and Nuance are not strangers, having partnered in 2019 to enable ambient listening capabilities for physician-to-EHR record keeping. Microsoft sees a clear opportunity in the US healthcare industry.

  • Everest Group estimates that technology services spending in US healthcare will grow at a CAGR of 7.5% for the next five years, adding an incremental US$25 billion to an already whopping $56 billion
  • The focus of Microsoft and its competitors is to disrupt the multi-billion ($40 billion by 2025) healthcare data (Electronic Medical Record) industry
  • Legacy EMR systems have been a major reason for physician burnout, which the likes of Nuance aim to solve
  • Cloud-driven offerings such as Canvas Medical and Amazon Comprehend Medical are already making Epic Systems and Cerner sit up and take notice

It is not without reason that Microsoft launched its cloud for healthcare last year and has followed it up by acquiring Nuance.

What Does it Mean for Healthcare Enterprises?

Under Nadella, Microsoft has developed a sophisticated sales model that takes a portfolio approach to clients. This has helped Microsoft build a strong positioning beyond its Office and Windows offerings even in healthcare. Most clients in healthcare are already exposed to its Power Apps portfolio and Intelligent Cloud (including Azure and cloud for healthcare) in some form. It is only a question of time (if the acquisition closes without issues) until Nuance becomes part of its suite of offerings for healthcare.

What Does it Mean for Service Providers?

As a rejoinder to our earlier point about head starts, this is where Microsoft has a lead over competitors. Our recent research with the System Integrator (SI) ecosystem indicates that Microsoft is head and shoulders above its nearest competitors when it comes to leveraging the SI partnership channel to bring its offerings to enterprises. This can act as a significant differentiator in taking Nuance to healthcare customers, as SI partners can expect favorable terms of engagement.

Exhibit: Partners’ perceptions

Lastly, this is not just about healthcare

While augmenting healthcare capabilities and clients is the primary trigger for this purchase, we believe Microsoft aims to go beyond healthcare to achieve the following objectives:

  • Take conversational AI to other industries: Clearly, healthcare is not the only industry warming up to conversational AI. Retail, financial services, and many other industries have scaled usage. Hence, it is not without reason that Mark Benjamin (Nuance’s CEO) will report to Scott Guthrie (Executive Vice President of Cloud & AI at Microsoft) and not Gregory Moore (Microsoft’s Corporate Vice President, Microsoft Health), indicating a broader push
  • Make cloud more intelligent: As mentioned above, Microsoft will pursue full-stack opportunities by combining Nuance’s offerings with its Power Apps and Intelligent Cloud suites. As a matter of fact, it plans to report Nuance’s performance as part of its Intelligent Cloud segment

Microsoft: $2 Trillion and Beyond

This announcement comes against the background of BigTech and platform companies making significant moves to industry-specific use cases, which will drive the next wave of client adoption and competitive differentiation. Microsoft’s turnaround and acceleration since Nadella took over as CEO in 2014 are commendable (see the image below). It is on the verge of becoming only the second company to achieve $2 trillion in market capitalization. This move is a bet on its journey beyond the $2 trillion.

Exhibit: Microsoft’s turnaround and acceleration since Nadella became CEO

What do you make of its move? Please feel free to reach out to [email protected] and [email protected] to share your opinion.
