
Federated Learning: Privacy by Design for Machine Learning | Blog

With cyberattacks and data breaches at all-time highs, consumers are increasingly skeptical about sharing their data with enterprises, creating a dilemma for artificial intelligence (AI), which needs massive amounts of data to thrive. The nascent technology of federated learning offers healthcare, life sciences, banking, finance, manufacturing, advertising, and other industries an alternative that unleashes the full potential of AI without compromising the privacy of individuals. To learn how you can have all the data you need while protecting consumers, read on.

Privacy preservation with federated learning

The steady stream of massive data breaches that have stripped individuals of their privacy has made the public more aware of the need to protect their data. In the absence of strong governance and guidelines, people are more skeptical than ever about sharing their personal data with enterprises.

This new data-conscious paradigm poses a problem for artificial intelligence (AI), which thrives on huge amounts of data. Unless we can figure out a way to train machines on significantly smaller data sets, protecting the privacy and data of users will remain a key obstacle to intelligent automation.

Federated learning (FL) is emerging as a solution to this problem. Broadly speaking, Federated learning is a method of training machine learning models in which the user data never leaves its location, keeping it safe and private. This differs from traditional centralized machine learning methods, which require the data to be aggregated in a central location.

Federated learning is a machine learning mechanism wherein the learning process takes place in a decentralized manner across a network of nodes/edge devices, and the results are aggregated on a central server to create a unified model. It essentially decouples the activity of model training from centralized data storage.

The Mechanism of Federated Learning

By training the same model across an array of devices, each with its own set of data, we get multiple versions of the model, which, when combined, create a more powerful and accurate global version for deployment and use.
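To make the mechanism concrete, here is a minimal, purely illustrative sketch of the federated averaging idea in Python: each client trains a copy of the model on its own data, and the server only ever sees model weights, never raw data. The linear model, learning rate, and toy client data sets below are hypothetical choices for the sketch, not any vendor's implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally with gradient descent; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    """One round: every client trains locally, then the server averages
    the returned weights, weighted by each client's data size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Hypothetical demo: three clients whose local data share the relationship y ≈ 2x
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):
    w = federated_average(w, clients)
print(w)  # converges toward [2.0] without any raw data leaving a client
```

The combined model recovers the shared relationship even though the server only ever aggregates weights, which is the essence of the "multiple local versions, one global model" description above.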

In addition to training algorithms in a private and secure manner, Federated learning provides an array of other benefits such as:

  • Training across data silos
  • Training on heterogeneous data
  • Lower communication costs
  • Reduced liability

Federated learning applicability and use cases

Based on an Everest Group framework, we have found that Federated learning is most suitable, and is being adopted at higher rates, in industries where data is an extremely critical asset distributed across different locations and privacy is paramount.

Federated learning is especially beneficial for industries that have strict data residency requirements. This makes the healthcare and life-sciences industries perfect candidates for its adoption. Federated learning can facilitate multi-institution collaborations by letting medical institutions jointly train models without pooling patient data in a common location, overcoming the regulatory hurdles that prevent them from sharing patient data.

The next industry ripe for the adoption of Federated learning is the banking and financial sectors. For instance, it can be used to develop a more comprehensive and accurate fraud analytics solution that is based on data from multiple financial entities.

Another industry where we see high applicability of Federated learning is manufacturing. By enabling collaboration between different entities across the supply chain, Federated learning techniques make it possible to build a more powerful model that can increase efficiency across the entire supply chain.

Federated learning also might find increased use in interest-based advertising. With major internet browsers deciding to disable third-party cookies, marketers are losing their primary mechanism for targeted advertising and engagement. With Federated learning, marketers can replace individual identifiers with cohorts, or group-based identifiers. These cohorts are created by identifying people with common interests based on individual user data, such as browsing habits, stored on local machines.
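A minimal sketch of the cohort idea follows. The interest categories and the cohort-assignment rule are hypothetical toy choices for illustration; real browser cohort systems are far more sophisticated. The key property the sketch preserves is that only a coarse cohort ID, never the browsing history itself, would leave the device:

```python
import numpy as np

# Hypothetical interest categories tracked locally on each device
CATEGORIES = ["sports", "travel", "cooking", "tech", "fashion"]

def local_interest_vector(visited_categories):
    """Build a normalized interest profile from browsing history stored on the device."""
    v = np.zeros(len(CATEGORIES))
    for category in visited_categories:
        v[CATEGORIES.index(category)] += 1
    return v / max(v.sum(), 1)

def cohort_id(interest_vector, buckets=4):
    """Map the private interest vector to a coarse cohort ID.
    Only this small integer -- not the browsing history -- is shared."""
    top = int(np.argmax(interest_vector))               # dominant interest
    strength = int(min(interest_vector[top] * buckets,  # coarsened intensity
                       buckets - 1))
    return top * buckets + strength

history = ["tech", "tech", "sports", "tech"]
print(cohort_id(local_interest_vector(history)))  # prints 15 (a "tech-heavy" cohort)
```

Marketers would then target the shared cohort ID rather than the individual, which is what makes the approach privacy-preserving.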

An ecosystem on the rise

Since Google introduced the concept of Federated learning in 2016, there has been a flurry of activity. Given that this is a nascent technology, the ecosystem is currently dominated by big tech and open-source players. Hyperscalers are taking the lead, with Microsoft and Amazon Web Services (AWS) making investments to activate Federated learning, followed by Nvidia and Lenovo, which are approaching the market from a hardware perspective.

Another segment of players working in this arena are startups that are using Federated learning to build industry-specific solutions. AI companies such as Owkin and Sherpa.ai are pioneering this technology and have developed Federated learning frameworks that are currently operational at a few enterprises’ locations.

The adoption of and need for Federated learning depend on the industry and vary with the use case. Everest Group has developed a comprehensive framework to help you assess and understand the suitability of Federated learning for your use case in our Latest Primer for Federated Learning. The framework is built on four key parameters: data criticality, privacy requirements, regulatory constraints, and data silos/diversity.

Federated learning provides an alternative way to make AI work without compromising the privacy of individuals.

If you are interested in understanding the suitability of federated learning for your enterprise, please share your thoughts with us at [email protected].

Recharge Your AI initiatives with MLOps: Start Experimenting Now | Blog

In this era of industrialization for Artificial Intelligence (AI), enterprises are scrambling to embed AI across a plethora of use cases in hopes of achieving higher productivity and enhanced experiences. However, as AI permeates different functions of an enterprise, managing the entire charter gets tough. Working with multiple Machine Learning (ML) models in both pilot and production can lead to chaos, stretched time to market, and stale models. As a result, we see enterprises struggling to successfully scale AI enterprise-wide.

MLOps to the rescue

To overcome the challenges enterprises face in their ML journeys and ensure successful industrialization of AI, enterprises need to shift from the current method of model management to a faster and more agile format. An ideal solution that is emerging is MLOps – a confluence of ML and information technology operations based on the concept of DevOps.

According to our recently published primer on MLOps, Successfully Scaling Artificial Intelligence – Machine Learning Operations (MLOps), these sets of practices are aimed at streamlining the ML lifecycle management with enhanced collaboration between data scientists and operations teams. This close partnering accelerates the pace of model development and deployment and helps in managing the entire ML lifecycle.


MLOps is modeled on the principles and practices of DevOps. While continuous integration (CI) and continuous delivery (CD) are common to both, MLOps introduces the following two unique concepts:

  • Continuous Training (CT): Seeks to automatically and continuously retrain the ML models based on incoming data
  • Continuous Monitoring (CM): Aims to monitor the performance of the model in terms of its accuracy and drift
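As a rough illustration of the Continuous Monitoring concept, the sketch below tracks a model's rolling accuracy against its training baseline and flags drift when performance degrades. The window size, baseline, and tolerance values are hypothetical; production systems would monitor many more signals (latency, data distribution shift, fairness metrics):

```python
from collections import deque

class ModelMonitor:
    """Minimal continuous-monitoring sketch: track rolling accuracy and
    flag drift when it falls below a fraction of the training baseline."""
    def __init__(self, baseline_accuracy, window=100, tolerance=0.9):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling window of hit/miss

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes)

    def drift_detected(self):
        return self.accuracy() < self.baseline * self.tolerance

monitor = ModelMonitor(baseline_accuracy=0.95)
# Simulate live traffic where accuracy drops to 0.80
for pred, actual in [(1, 1)] * 80 + [(1, 0)] * 20:
    monitor.record(pred, actual)
print(monitor.drift_detected())  # True: 0.80 < 0.95 * 0.9 = 0.855
```

When the monitor fires, the Continuous Training pipeline would typically kick in to retrain the model on fresh data.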

We are witnessing MLOps gaining momentum in the ecosystem, with hyperscalers developing dedicated solutions for comprehensive machine learning management to fast-track and simplify the entire process. Just recently, Google launched Vertex AI, a managed AI platform, which aims to solve these precise problems in the form of an end-to-end MLOps solution.

Advantages of using MLOps

MLOps bolsters the scaling of ML models by using a centralized system that assists in logging and tracking the metrics required to maintain thousands of models. Additionally, it helps create repeatable workflows to easily deploy these models.

Below are a few additional benefits of employing MLOps within your enterprise:


  • Repeatable workflows: Saves time and allows data scientists to focus on model building, thanks to the automated workflows for training, testing, and deployment that MLOps provides. It also aids in creating reproducible ML workflows that accelerate productionization of the model
  • Better governance and regulatory compliance: Simplifies the process of tracking changes made to the model to ensure compliance with regulatory norms for particular industries or geographies
  • Improved model health: Helps continuously monitor ML models across different parameters such as accuracy, fairness, bias, and drift to keep the models in check and ensure they meet thresholds
  • Sustained model relevance and RoI: Keeps the model relevant and up to date through regular retraining on new incoming data, providing a sustained Return on Investment (RoI)
  • Increased experimentation: Spurs experimentation by tracking multiple versions of models trained with different configurations, leading to improved variations
  • Trigger-based automated re-training: Helps set up automated re-training of the model based on fresh batches of data or certain triggers such as performance degradation, plateauing or significant drift
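The trigger-based re-training benefit above can be sketched as a simple decision function. The trigger names mirror those in the bullet, while the threshold values and the metrics dictionary are hypothetical illustrations of what a pipeline might check:

```python
def retraining_triggers(metrics, new_rows, *, accuracy_floor=0.90,
                        drift_threshold=0.10, batch_size=10_000):
    """Return the list of fired re-training triggers; an empty list means
    the deployed model can keep serving as-is."""
    triggers = []
    if metrics["accuracy"] < accuracy_floor:
        triggers.append("performance degradation")
    if metrics["drift_score"] > drift_threshold:
        triggers.append("significant drift")
    if new_rows >= batch_size:
        triggers.append("fresh data batch")
    return triggers

fired = retraining_triggers({"accuracy": 0.87, "drift_score": 0.04},
                            new_rows=2_500)
print(fired)  # ['performance degradation'] -- only the accuracy floor was breached
```

In a real pipeline this check would run on a schedule or on every monitoring event, and any non-empty result would kick off an automated retraining job.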

Starting your journey with MLOps

Implementing MLOps is complex because it requires a multi-functional and cross-team effort across the key elements of people, process, tools/platforms, and strategy underpinned by rigorous change management.

As enterprises embark on their MLOps journey, here are a few key best practices to pave the way for a smooth transition:

  • Build a cross-functional team – Engage team members from the data science, operations, and business front with clearly defined roles to work collaboratively towards a single goal
  • Establish common objectives – Set common goals for the cross-functional team to cohesively work toward, realizing that each of the teams that form an MLOps pod may have different and competing objectives
  • Construct a modular pipeline – Take a modular approach instead of a monolithic one when building MLOps pipelines since the components built need to be reusable, composable, and shareable across multiple ML pipelines
  • Select the right tools and platform – Choose from a plethora of tools that cater to one or more functions (management, modeling, deployment, and monitoring) or from platforms that cater to the end-to-end MLOps value chain
  • Set baselines for monitoring – Establish baselines for automated execution of particular actions to increase efficiency and ensure model health in addition to monitoring ML systems

When embarking on the MLOps journey, there is no one-size-fits-all approach. Enterprises need to assess their goals, examine their current ML tooling and talent, and also factor in the available time and resources to arrive at an MLOps strategy that best suits their needs.

For ML to keep pace with the agility of modern business, enterprises need to start experimenting with MLOps now.

Are you looking to scale AI within your enterprise with the help of MLOps? Please share your thoughts with us at [email protected].

Democratization: Artificial Intelligence for the People and by the People | Blog

Enterprises have identified Artificial Intelligence (AI) as a quintessential enabling technology in the success of digital transformation initiatives to further increase operational efficiency, improve employee productivity, and deliver enhanced stakeholder experience. According to our recent research, Artificial Intelligence (AI) Services – State of the Market Report 2021 | Scale the AI Summit Through Democratization, more than 72 percent of enterprises have embarked on their AI journey. This increased AI adoption is leading us into an era of industrialization of AI.

Talent is the biggest impediment in scaling AI

Enterprises face many challenges in their AI adoption journey, including rising concerns of privacy, lack of proven Return on Investment (RoI) for AI initiatives, increasing need for explainability, and an extreme crunch for skilled AI talent. According to Everest Group’s recent assessment of the AI services market, 43 percent of the enterprises identified limited availability of skilled, mature, and niche AI talent as one of the biggest challenges they face in scaling their AI initiatives.

Lack of skilled AI talent

Enterprises face this talent crunch in using both the open-source ecosystem and hyperscalers’ AI platforms for the reasons below:

  • High demand for open-source machine learning libraries such as TensorFlow, scikit-learn, and Keras due to their ability to let users leverage transfer learning
  • Low project readiness of certified talent across platforms such as SAP Leonardo, Salesforce Einstein, Amazon SageMaker, Azure Machine Learning, and Microsoft Cognitive Services due to lack of domain knowledge and industry contextualization

As per our research in the talent readiness assessment, a large demand-supply gap (approximately 25 to 30 percent) exists for AI technologies, hindering enterprises' ability to attract and retain talent.

In addition to this technical talent crunch, another aspect where enterprises struggle to find the right talent is the non-technical facet of AI that includes roles such as AI ethicists, behavioral scientists, and cognitive linguists.

As more and more enterprises adopt AI, this talent challenge becomes further exacerbated at the very time demand for AI technologies is skyrocketing. There is an ongoing tussle for AI talent among academia, big tech, and enterprises, and, so far, big tech is coming out on top. These companies have successfully recruited large numbers of AI professionals, leaving a drying pool for the rest of the enterprises to fish in.

Democratization to overcome the talent problem

We see democratization as a potential solution to this expanding talent gap. As we define it, democratization is primarily concerned with making AI accessible to a wider set of users, specifically non-technical business users. The principle behind the concept of democratization is "AI for all."

Democratization involves educating business users in the basic concepts of data and AI and giving them access to the data and tools that can help them build a larger base of AI use cases, develop insights, and find AI-based solutions to their problems.

Enterprises can leverage Everest Group’s four-step democratization framework to help address talent gaps within the enterprise and empower its employees. Here are the steps to guide a successful democratization initiative:

  • Data democratization: The first step of AI democratization is enabling data access for business users throughout the organization. This will familiarize them with the data structures and enable them to interpret and analyze the data
  • Data and AI literacy: The next step is embracing initiatives to help business users build general knowledge of AI, understand the implications of AI systems, and successfully interact with them
  • Self-service low-code/no-code tools: Organizations should also invest in tools that provide pre-built components and building blocks in a drag and drop fashion to help business users deploy ML models without having to write extensive code
  • Automation-enabled machine learning (ML): Lastly, enterprises should use automated machine learning (AutoML) for automating ML workflows that involve some or all of the steps involved in the model training process, such as feature engineering, feature selection, algorithm selection, and hyperparameter optimization
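To illustrate the AutoML step above, here is a toy search loop over hypothetical hyperparameter configurations. Everything here (the grid, the training stub, the scoring function) is a made-up sketch of the algorithm-selection and hyperparameter-optimization steps, not a real AutoML product; production tools automate far more of the pipeline:

```python
import itertools
import random

def automl_search(train_fn, param_grid, evaluate_fn, budget=10):
    """Toy AutoML loop: sample configurations from the grid, train each,
    and keep the highest-scoring one."""
    configs = [dict(zip(param_grid, vals))
               for vals in itertools.product(*param_grid.values())]
    random.shuffle(configs)          # random search order
    best_score, best_cfg = float("-inf"), None
    for cfg in configs[:budget]:     # respect the compute budget
        score = evaluate_fn(train_fn(**cfg))
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score

# Hypothetical usage: "training" just returns the config, and the evaluator
# scores how close it is to an optimum the search does not know in advance.
grid = {"lr": [0.001, 0.01, 0.1], "depth": [2, 4, 8]}
best_cfg, _ = automl_search(
    train_fn=lambda **cfg: cfg,
    param_grid=grid,
    evaluate_fn=lambda m: -abs(m["lr"] - 0.01) - abs(m["depth"] - 4),
    budget=9,  # budget covers the full grid here, so the result is exact
)
print(best_cfg)  # {'lr': 0.01, 'depth': 4}
```

The point for business users is that the loop, not the human, explores the configurations; low-code/no-code tools wrap exactly this kind of automation behind a drag-and-drop interface.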

By following these steps, democratization can help reduce the barriers to entry for AI experimentation, accelerate enterprise-wide adoption, and speed up in-house innovation.

Current state of democratization

The industry as a whole is now in the initial stages of AI democratization, which is heavily focused on data and AI literacy initiatives. Some of the more technologically advanced or well-versed enterprises have been early adopters. The exhibit below presents the current market adoption of the four key elements of democratization and a few industry examples:

[Exhibit: current market adoption of the four key elements of democratization]

Democratization is essential

As part of their democratization efforts, enterprises must also focus on contextualization, change management, and governance to ensure responsible and successful democratization.

By doing this, companies will not only help solve the persistent AI talent crunch but also ensure faster time to market, empower business users, and increase employee productivity. Hence, democratization is an essential step to ensuring the sustainable, inclusive, and responsible adoption of AI.

What have you experienced in your democratization journey? Please share your thoughts with us at [email protected] or at [email protected].
