Tag: Digital Transformation

3 Tips for Managing Perpetual Change from Software-defined Operating Platforms

Over the past seven years, almost all large companies have made substantial progress in implementing digital transformation across a wide variety of functions. At the core of those enormous investments and efforts was building software-defined operating platforms, which put companies on a trajectory to fundamentally change how they operate their businesses. However, studies show that roughly 70% of companies failed or underperformed against their digital transformation objectives. In this blog, I’ll discuss three tips for how to avoid that outcome and, instead, reap the significant benefits of software-defined operating platforms.

Read on in Forbes

Why Metaverse Growth Will Put Trust and Safety (T&S) Center Stage

With metaverse growth expected to surge to US$679 billion by 2030, its influence and possibilities seem endless. But with great power comes great responsibility. Read on to learn why and how organizations must ensure Trust and Safety (T&S) for the metaverse to realize its potential.

The metaverse’s arrival is inevitable and will, for better or worse, be part of our future lives. The metaverse could rival massive shifts in history like the telephone and the internet and, in the next few decades, bring people together in ways we never imagined.

Metaverse growth is expected to increase internet data use by 20 times, driven by the sharing of personal and financial data, social interactions enhanced by Augmented Reality (AR)/Virtual Reality (VR), and the evolution of video and live-streaming content.

But jumping on the metaverse bandwagon won’t be the difficult part; keeping it secure will be.

Why now is the time to think through the risks of metaverse growth

Organizations that use or provide metaverse services will need to think hard about the implications and work to align with T&S policies, laws, and regulations in parallel with their metaverse initiatives to foster a safe, privacy-sensitive, and regulated environment.

The metaverse promises opportunities for innovation and growth. It could allow companies to reinvent the user experience through an immersive environment and create deeper engagement.

But, if the metaverse is a place where users are meant to communicate, collaborate, co-create, and share ideas, then shouldn’t we expect it to be safe? Incidents are already emerging of users being put at risk of security breaches, increased abuse, exposure to objectionable content, and financial fraud.

It will take a village to regulate the metaverse

Organizations will need to align with T&S providers and stakeholders, such as governments, academia, civil society, and possibly others, to identify loopholes and take measures to address gaps before any wrongdoing occurs. Organizations will have to put T&S policies, technologies, and processes in place and think through how they will moderate the metaverse at scale, and how that can be done in real time to keep up with complex forms of interactive live streaming. They will also need to consider how to ensure the well-being of their human moderators, who can be exposed to egregious content over long periods of time. This could mean standing up full teams that work in parallel with the development, deployment, and enrichment of the metaverse.

What does this mean for the T&S services industry?

Enterprises across industries are already relying on third-party T&S services to make their current engagement platforms safe for their users. Over the next decade, the demand for T&S services to help maintain metaverse growth and safety will be immense and will likely produce an ecosystem of T&S providers and partnerships from various entities. Utilizing T&S service providers, and even specialized service providers, is one way enterprises can access expertise in risk mitigation and gain guidance and resources for safer metaverse deployments.

The T&S services market is already growing rapidly at 35-38% annually and is estimated to reach US$15-20 billion by 2024. It could see an additional 25-30% growth as the metaverse scales.
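As a rough, back-of-the-envelope illustration of how those growth rates compound, the sketch below works backward from the US$15-20 billion 2024 estimate. The two-year horizon and the assumption that the quoted 35-38% rates hold over it are ours, not figures from the research.

```python
def compound(size: float, rate: float, years: int) -> float:
    """Market size after growing at a compound annual rate for `years` years."""
    return size * (1 + rate) ** years

# Implied base sizes (US$ billions) two years before the 2024 estimate,
# assuming the quoted 35-38% annual growth held over that period
base_low = 15 / (1 + 0.38) ** 2   # lower estimate at the higher rate, ~7.9
base_high = 20 / (1 + 0.35) ** 2  # higher estimate at the lower rate, ~11.0

# Sanity check: growing those bases forward reproduces the 2024 estimates
assert abs(compound(base_low, 0.38, 2) - 15) < 1e-9
assert abs(compound(base_high, 0.35, 2) - 20) < 1e-9
```

In other words, sustaining those rates roughly doubles the market every two to three years, which is why even a few additional points of growth from metaverse adoption would be material.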


Learn more about the current market surrounding the metaverse and how partnering with third parties can keep the public safe and keep organizations aligned with legal and regulatory T&S requirements in our report, Taming the Hydra: Trust and Safety (T&S) in the Metaverse. And discover how organizations are addressing the possibilities and challenges of metaverse growth in our upcoming LinkedIn Live session, Trust and Safety (T&S) in the Metaverse – With Great Power Comes Great Responsibility.

Value Stream Management: A Progression to Agile and DevOps | Blog

During the digital transformation journey, a Global 2000 enterprise uses more than 150 software solutions and tools on average to support its product or service delivery. Despite this huge investment, the value realized from this technology remains unclear to most enterprises. Read on to understand the importance of measuring the value delivered by software applications or products with value stream management (VSM) and our 4D framework for implementing it.

More than 90% of enterprises have adopted agile development methods in some shape or form, and DevOps adoption is on the rise. But surprisingly, less than 20% consider themselves highly mature agile enterprises. These few have adopted a Scaled Agile Framework (SAFe) to implement agile and DevOps practices across the enterprise, with some adding DevSecOps and BizDevOps processes for product development. Even then, enterprises are unable to track and measure organization-wide technology value. Most of the remaining 80% have adopted agile and DevOps only in pockets, making it tougher to align outcomes and realize meaningful benefits.

With mounting investments and an absence of tangible outcomes, the critical importance of delivering and realizing value is gaining enterprise attention. Concerted efforts to define, measure, and enable value are needed.

Agile and DevOps adoption is considered by many as the ultimate step toward digital transformation. However, enterprises must realize it is just a starting stage for continuously tracking, measuring, realigning, and improving digital solutions’ outcomes and value. This is where the concept of value stream management (VSM) becomes pertinent.

Before we delve deeper into what VSM is, let’s understand what VSM is NOT.

People often use value stream management and Value Stream Mapping interchangeably, which is misleading. The two are related but not the same.

Exhibit 1: Differences between value stream management and value stream mapping


As the exhibit above illustrates, Value Stream Mapping is an activity within, or subset of, VSM. Value streams and processes are defined during Value Stream Mapping, which acts as the initial step toward effective outcomes from value stream management. VSM takes a data-centric approach to decision making and promotes a culture of innovation and improvement through continuous feedback loops and collaboration.

Now that we are clear on what VSM is not, let’s delve deeper into what VSM is, why enterprises need it, and how to adopt it.

Value stream management – the next step in the enterprise agile journey

In an enterprise setup, there are broadly two sets of value streams – operational and development.

Operational value streams, or business value streams, comprise the processes and people that deliver value to the end user by leveraging systems or solutions created by the development value streams. Operational value streams are defined by the nature of the business and its business units. Some examples of operational value streams are product manufacturing, software product sales and support, order fulfillment, and support functions.

Development value streams, or IT value streams, consist of the systems and the software developers, product managers, and other IT practitioners who design, build, deploy, and maintain systems and solutions. These systems and solutions are used by either internal customers (members of the operational value stream) or external customers who are direct buyers and users. The processes or steps in the development value stream are standardized and run in parallel with the phases of the Software Development Lifecycle (SDLC): plan, build, release, and operate.

For example, order fulfillment in a software product company is one operational value stream involving different teams and processes – from sales enablement, licensing, and provisioning to customer support and renewals. These teams require software systems like a Customer Relationship Management (CRM) portal, service management platform, license management systems, etc., to support their processes. The development value streams will be aligned to build and support each of these software systems, enabling the operational teams to deliver the product effectively.

The SAFe principles apply to the development value streams. However, enterprises currently focus SAFe implementation efforts on delivering good products or solutions with agility rather than on delivering customer value. To deliver value along with agility, adopting VSM in the development value streams should be the next step. This acts as a management layer enabling more data-driven decision making at the SDLC level. VSM can also be extended to operational value streams.

Today, the focus of VSM has expanded to the enterprise level, bringing the development and operational value streams closer together.

Making the business case for value stream management adoption

Aligning development value streams to the objectives of operational value streams is key to delivering optimum value to the end customer. VSM platforms connect people, processes, and technology across the SDLC and can be extended to integrate heterogeneous value streams across the enterprise.

Some of the key benefits enterprises stand to achieve with a successful VSM approach are:

  • Identify value streams, organize people, and perform cost-benefit analysis during product discovery
  • Make data-driven investment decisions and prioritize product delivery based on end-to-end visibility
  • Improve the value delivery strategy based on real-time metrics

While we understand the need for VSM in enterprises, implementing it in a structured manner to gain maximum value delivery is equally important.

Implementing VSM effectively using the 4D framework

Below is a recommended starting approach:

Determine the current state of value flow and define value streams at an enterprise level, starting by identifying the operational value streams and their respective development value streams. Align each pair of operational and development value streams to the final value the stream is meant to deliver, and identify current system behaviors and interdependencies.

Design value stream maps to achieve the future state of value flow, from ideation to value delivery. Organize teams around value streams by bringing together the stakeholders accountable for each step, and run a mapping exercise to decide steps, handoffs, and metrics.

Deploy VSM tools to connect all value stream parts to measure the flow of value in real time using metrics that track the time, velocity, load, and workflow efficiency. Expand to integrate with other value streams as necessary. Providers like Digital.ai, ConnectALL, Micro Focus, Plutora, and Tasktop offer VSM tools to consider.

Demonstrate continuous improvement in value delivery by using real-time insights from flow metrics as feedback to realign the strategy and increase throughput, efficiency, and value stream productivity. Continuously measuring flow metrics gives all stakeholders end-to-end visibility to make informed decisions on investments and prioritization.

This 4D framework is a starting point to implement VSM. Additional factors like talent, governance, organizational culture, etc., can further optimize the value delivery through VSM. Adopting a performance-oriented, highly cooperative, and risk-sharing-based environment will enable smooth VSM implementation.
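As an illustrative sketch of the flow metrics named in the Deploy and Demonstrate steps, the snippet below computes flow velocity, average flow time, and flow load from simple work-item records. The item names, fields, and reporting period are hypothetical, for illustration only; real VSM platforms derive these metrics automatically from integrated toolchain data.

```python
from datetime import date

# Hypothetical work items: (id, start date, completion date or None if in progress)
work_items = [
    ("FEAT-1", date(2022, 3, 1), date(2022, 3, 11)),
    ("DEF-2",  date(2022, 3, 5), date(2022, 3, 9)),
    ("FEAT-3", date(2022, 3, 8), None),
]

def flow_metrics(items, period_start, period_end):
    """Compute basic flow metrics for a reporting period."""
    done = [i for i in items if i[2] is not None and period_start <= i[2] <= period_end]
    in_progress = [i for i in items if i[2] is None]
    velocity = len(done)  # items completed in the period
    avg_flow_time = sum((d - s).days for _, s, d in done) / max(velocity, 1)
    load = len(in_progress)  # current work in progress (WIP)
    return {"velocity": velocity, "avg_flow_time_days": avg_flow_time, "load": load}

print(flow_metrics(work_items, date(2022, 3, 1), date(2022, 3, 31)))
# → {'velocity': 2, 'avg_flow_time_days': 7.0, 'load': 1}
```

Tracking these numbers per value stream over time is what turns the mapping exercise into the continuous feedback loop VSM promises.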

With growing enterprise investments in agile and DevOps adoption for software development, it will be interesting to see how adding a VSM layer will change the value measurement and value delivery game in upcoming years. Stay tuned for our upcoming blog further exploring the enterprise VSM adoption roadmap.

To discuss value stream management, contact [email protected] and [email protected].

Read more of our blogs for more fact-based insights into transformative business processes.

Digital Transformations: 5 Emerging Trends in the Intelligent Process Automation Market

The pandemic’s effects on the digital landscape are long-lasting. Businesses are increasingly relying on intelligent process automation (IPA) to promote growth and keep up with competitors. Read on to learn more about five growing IPA trends.

In a world becoming increasingly reliant on technology, financial services organizations are digitizing and automating more processes to keep up with the competition. The intelligent process automation market, growing by about 20% across all fields, is now becoming ubiquitous.

IPA is defined as automation in business processes that uses a combination of next-generation automation technologies, such as robotic process automation (RPA) and cognitive or artificial intelligence (AI)-based automation, including intelligent document processing and conversational AI. Solution providers are offering solutions across RPA, Intelligent Document Processing (IDP), and workflow/orchestration, as well as crafting innovative offerings such as digital Centers of Excellence (CoE) and investing more in as-a-Service models.

In our recent Intelligent Process Automation (IPA) – Solution Provider Landscape with PEAK Matrix® Assessment 2022 report, our analysts ranked IPA technology vendors and examined the market for IPA solutions. Based on this research, IPA market growth is expected to accelerate to around 25% over the next three years.

Five intelligent process automation market trends enterprises should know

The question of how to become faster, more efficient, and more resilient is the focus for just about any organization undergoing digital transformation. Very often, the answer to this question is, at least in part, intelligent process automation. We see five emerging IPA trends:

  1. IPA will get smarter

A greater proportion of cognitive elements is finding its way into the intelligent process automation market. About 60% of new automation projects involve more advanced cognitive tools such as IDP, conversational AI and anomaly detection. As the maturity of AI-based solutions increases, cognitive automation will be in greater demand. All-round adoption of IPA will be fueled by providers entering new geographies and organizations starting IA initiatives.

  2. IPA will be more scalable

Although many organizations are trying to adopt intelligent process automation, the real question is whether it can be scaled up, that is, rolled out across the entire organization. To help enterprises scale automation, solution providers are investing in expanding their partner ecosystems, strengthening technology capabilities, and enhancing their services portfolios.

Providers are also expected to help enterprises scale up through more effective change management and CoE set-up strategies. Aided by the prevalence of process intelligence solutions to form robust pipelines and orchestration tools to facilitate holistic automation, enterprises are better equipped now to move away from siloed applications of IA to scaled-up automation implementations.

  3. Citizen development will grow

Many organizations are experimenting with citizen development, especially given the current talent shortage. Citizen-led development holds the power to disrupt how automation is built today and addresses the issue of talent availability. Solution providers are expected to invest in citizen development and low-code/no-code technologies that enable business users to build automation, consequently also addressing the talent shortage in the market.

Solution and technology providers are also expected to invest substantially in developing the low-code/no-code capabilities of their platforms to enable business users with limited technical exposure to build automation solutions on their own. A few solution providers are implementing citizen development programs in their own organizations and are planning to leverage the learnings to develop effective governance programs for enterprises.

  4. IPA service providers will bring IPA solutions packages to the market

Packaged solutions are gaining traction in the IPA market due to their ease of implementation and quick Return on Investment (RoI). Solutions for Finance and Accounting (F&A) are the most prevalent in the market. These solutions still need training on particular data sets to make them functional for a specific process, but they speed up implementation. Providers are also expected to take conscious steps toward promoting sustainable AI by developing solutions that comply with environmental, social, and governance (ESG) parameters, and by investing in AI solutions that are transparent about how they work and use data.

  5. IPA service providers will pre-build connectors to legacy and other systems

The IA ecosystem comprises a host of technologies, including RPA, conversational AI, process mining, and process orchestration. Very often, these IA solutions need to talk to various other systems. Many IPA service providers are driving innovation and crafting new solutions to keep pace with the fast-moving IPA market and create a more holistic integration process. One such method is offering enabling capabilities like pre-built connectors for faster, less complex implementations.

If you would like to learn more or discuss the intelligent process automation market and IPA trends, reach out to [email protected].

Learn how the healthcare industry is utilizing intelligent automation, digitalization, and telehealth as fundamental driving forces to transform and evolve in the webinar, How Intelligent Document Processing Is Transforming the Healthcare Industry.

10 Steps to Better Evaluating a Cloud Service Agreement | Blog

Comprehending a Cloud Service Agreement (CSA) can be difficult. With the increasing clout of hyperscalers, buyers need to fully understand a CSA to negotiate effectively with cloud service providers. Learn how to better evaluate these contracts in this blog.

With the increased adoption of cloud services, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure have come to dominate the public cloud space in recent years. The negotiating power of these hyperscalers has significantly increased, changing the dynamics of the CSA.

As the influence of cloud providers grows, customers need to carefully evaluate the proper terms and conditions in the CSA. First, let’s better understand the key terms:

  • Cloud service agreement (CSA) – a service level agreement (SLA) for cloud computing services between the cloud service consumer and cloud service provider
  • Cloud service consumer – an individual or a corporate enterprise end user accessing cloud computing resources and services from the cloud service provider
  • Cloud service provider (CSP) – third-party suppliers of cloud-based platforms, infrastructure, application, or storage services
  • Customer agreement – the relationship between the provider and the customer, including roles, responsibilities, and processes used by the CSP

The contract may be written according to the service delivery model selected, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). CSPs can modify their contract terms at any given time.

Based on our observations, many customers have difficulty understanding these contracts. With the growing portfolio of cloud services in every organization, understanding the nuances to better negotiate contracts with service providers is crucial.

Below is a practical reference to safeguard customers’ interests.

Ten Steps to Evaluate a Cloud Service Agreement

  1. Understand the roles and responsibilities properly
  2. Evaluate business-level policies thoroughly
  3. Understand service and deployment model differences
  4. Identify critical performance objectives
  5. Evaluate security and privacy requirements of the environment
  6. Identify service management requirements
  7. Ensure proper backup for service failure management
  8. Understand the disaster recovery plan
  9. Ensure an effective governance process
  10. Evaluate the exit process fully

For a detailed analysis of your contracts, please reach out to [email protected]. To discuss the cloud service agreement, contact Rohan Pant, [email protected], and Vaibhav Jain, [email protected].

Seven Best Practices to Follow During a VDI Implementation | Blog

Driven by the increasing numbers of mobile workers during the pandemic, VDI implementation has rapidly grown as a secure solution that provides flexibility and cost savings. While it’s a good fit with today’s steadily growing remote workforce, VDI must be implemented properly to avoid pitfalls. Read on to learn the challenges and benefits of implementing a virtual desktop infrastructure.

Workplace infrastructure is quickly evolving. While Virtual Desktop Infrastructure (VDI) transformation has been in the industry for some time, COVID-19 has spurred its increased use to manage IT consumerization and control costs.

The benefits of implementing a virtual desktop infrastructure for enterprises can be remarkable and include easier accessibility for users, device flexibility, increased security, and lower costs. However, if not implemented correctly, VDI can create organizational challenges. Many projects fail due to improper design that leads to performance issues.

Based on our experiences helping organizations understand and optimize VDI implementation to achieve the right model for their budgets and timelines, we identified the following seven best practices:

  1. Understand end-user requirements – Boot storms can be avoided by being cognizant of such details as the number of VDI users, end-user applications, and the times of day users will log in and access their virtual desktops
  2. Consider end-user location – VDI architecture and resources may vary for users at different locations. Bandwidth and latency also have a big impact on the end-user experience
  3. Choose the ratio of persistent or non-persistent desktops – The virtual desktop type can sometimes be determined by the user type, such as task workers, power users, kiosk workers, etc. Persistent desktops retain a user’s personal settings when they log off, while non-persistent virtual desktops do not
  4. Consider client device options – A desktop virtualization benefit is that nearly any device can have a virtual desktop client. Deciding the best mix of thin client devices, converting old personal computers into thin clients, and having bring your own device (BYOD) clients are key factors in VDI deployment. Maintenance requirements and ownership will differ for each case
  5. Design for high availability – While a problem with one physical desktop affects just a single user, an overall VDI failure has the potential to impact all employees. Design the underlying architecture to be highly available to avoid this
  6. Craft a BYOD policy – VDI lets organizations deliver a desktop experience to many types of endpoints and devices – even those owned by end users. Carefully design and distribute a BYOD policy indicating what users can and cannot do on their personal devices
  7. Factor in security – Do not overlook infrastructure security. All security best practices that apply to physical desktops/laptops also pertain to virtual desktops. Administrators should make sure to extend patch management operations to cover virtual desktops

For a detailed analysis of your VDI implementation, please reach out to [email protected]. To discuss further, contact Vaibhav Jain at [email protected].

Potential Value for Your Company in the Metaverse | Blog

A deep interest in the metaverse is emerging. It comes at a time when companies increasingly want to fund value-creation initiatives that enable them to compete better and engage clients and employees in new and better ways. To help your company consider how it could benefit from the metaverse, I’ll discuss in this blog what companies are doing today – those that already have a first-mover advantage with a presence in the metaverse, as well as companies currently investigating its potential for creating business value.

Read more in my blog on Forbes

Selecting the Right Low-code Platform: An Enterprise Guide to Investment Decision Making | Blog

Enterprise adoption of low-code platforms has been invigorated in recent years by their potential to drive digital transformation. These fast-rising platforms promise to democratize programming amid today’s talent shortage and help companies develop applications and enhance functionality faster. While the opportunities are clear, charting a path to successful adoption is less straightforward. Learn the 4Cs approach best-in-class enterprises use to select and adopt right-fit low-code platforms in this blog.

As many as 60% of new application development engagements consider low-code platforms, according to Everest Group’s recent market study. Driven by the pandemic, the sudden surge in demand for digital transformation accelerated low-code annual market growth to about 25%. Considering its potential, low code is appropriately being called the “Next Cloud.”

Investor interest has also accelerated, further driving R&D spend for new product development. Funding in 2022 to companies featuring low code in their profiles already amounts to $560 million across 40 rounds.

Platform providers are responding to these elevated expectations with equal fervor, with some building platforms with deep domain-specific expertise and others providing process-specific solutions for enterprises’ customization requirements.

While these market forces have resulted in a proliferation of low-code platforms to choose from, they have also led to confusion and inefficiencies for enterprises. As more and more enterprises explore the potential of these platforms, IT leaders face numerous questions and concerns, such as:

“How do I select the platform that can address my current and future requirements?”

“Which platform will work best in my specific enterprise IT landscape?”

“How can we optimize the investment in this technology?”

“How do I compare the pricing structures of different low-code platforms?”

“How do we ensure governance and security of the IT estate with these new tech assets?”

Adoption journey and evaluation parameters for low-code platforms

In addition to the high-priority use cases that initiate the adoption, enterprises should consider the platform’s scalability potential, talent availability for support and enhancement, and integration with the broader IT landscape to make the right selection.

Additionally, low-code platforms are intended to address the requirements of the IT function as well as business stakeholders. Considering the drivers, expectations, and requirements of both when making the selection is essential. A collaborative decision-making set-up with the central IT team and key Line-of-Business (LoB) leaders is critical for a successful platform selection. Let’s explore the 4Cs to low code success.

4Cs to low code success

The key steps to ensure successful low-code platform selection and adoption are:

  • Contemplate: Initiate platform adoption by a set of high-priority use cases but plan for scalability at the enterprise level during platform selection
  • Collaborate: Bring together the central IT group to lead the selection and adoption effort and meaningfully involve the LoB stakeholders
  • Compare: Start with business and tech drivers, expectations, and requirements from both IT and business to prioritize and rank platforms and select the best-fit platform
  • Customize: Make small and incremental enhancements post-adoption to broaden the platform’s scope without disrupting daily operations

This approach can provide a roadmap for enterprises, with distinct possible outcomes. We have witnessed enterprises either adopt a best-fit approach resulting in a platform portfolio or leverage a single platform as the foundation for an enterprise-grade innovation engine.

For instance, the Chief Technology Officer (CTO) of a leading bank in the US invested in establishing a low-code Center of Excellence (CoE) that uses different platforms for process automation, IT Service Management (ITSM), and enabling point solutions for business users.

On the other hand, a large US commercial insurer built its entire end-to-end multi-country app on a single low-code platform. This comprehensive, business-critical application managing claims, billing, and collection is accessible by all underwriters and service personnel.

Next, we explore how to best compare platforms based on their offerings and capabilities. The tables below illustrate the top five business and technology-oriented parameters to consider when evaluating platforms, along with their relevance and enterprise expectations.

Technology parameters for low-code platform selection

Factors associated with a platform’s technical robustness are of key importance to IT decision-makers. Integration and UI/UX capabilities top enterprises’ technology priorities when comparing multiple platforms.

For instance, Appian ships with 150-plus Out-of-the-Box (OOTB) connectors. Appian SAIL, a patented UI architecture, takes declarative UI definitions to generate dynamic, interactive, and multi-platform user experiences. It also makes the applications more secure, easy to change, future-proofed, and native on the latest devices.


Business parameters for low-code platform selection

Assessing these parameters is important to understand whether low code can be sustained and scaled long-term and if it addresses the business users’ expectations. Pricing and security constructs are at the top of the list for businesses looking to adopt a low-code platform.


Let’s consider Salesforce as a case in point. Salesforce has security built into every layer of the platform. The infrastructure layer comes with replication, backup, and disaster recovery planning. Network services have encryption in transit and advanced threat detection. The application services layer implements identity, authentication, and user permissions. In addition, frequent product updates that align its offering with changing market demands make Salesforce one of the go-to platforms for enterprises’ CRM needs.

Low-code platform outlook

The plethora of options makes it difficult for enterprises to zero in on a particular low-code platform. Enterprises should also leverage their network of service partners for guidance in this decision-making process.

Talent availability for implementation and enhancement support is critical to keep in mind during the platform selection. For the same reason, multiple system integrators are now taking the route of inorganic growth to bolster their low-code capabilities.

This is the time to hop on the low-code bandwagon and establish low code as the basis for enterprise digital transformation.

Everest Group’s Low-Code Application Development Platforms PEAK Matrix® Assessment 2022 provides an overview of the top 14 platforms based on vision, strategy, and market impact.

To share your thoughts and discuss our research related to low-code platforms, please reach out to [email protected] and [email protected].
