Category: IT Services

Exploring the Importance of Post-quantum Cryptography: An Unbreakable Vault to Protect Enterprises Against Advanced Cyberattacks, Part 2 | Blog

Post-quantum cryptography (PQC) has become essential for enterprises to protect against future quantum-enabled attacks and secure digital assets and sensitive data. Read on to discover providers’ crucial role in preparing enterprises for PQC. Reach out to explore this topic further.

As discussed in our previous blog, the emergence of quantum computing poses a significant threat to current public key cryptographic methods. When run on quantum computers – or more specifically, Cryptographically Relevant Quantum Computers (CRQCs) – some algorithms such as Shor’s can potentially break widely used methods like RSA, DSA, ECDSA, EdDSA, and DHKE, among others.

The advancement of quantum computers can seriously threaten data security and privacy for various enterprises, affecting fundamental principles such as confidentiality, integrity, and authentication. This makes it essential to reassess the security of these cryptographic methods.

The early and widespread use of quantum computers could wreak havoc, enabling advanced new cyberattacks that are infeasible with classical computers. Post-quantum cryptography (PQC) is the solution to this problem. Let’s explore this further.

What is post-quantum cryptography?

In the quantum computing era, PQC is vital to ensuring the long-term security of digital communication and data protection. PQC focuses on researching and adopting cryptographic algorithms designed to withstand attacks by quantum computers.

These algorithms are designed to be secure against both quantum and classical computers. Furthermore, they are expected to be deployable and integrable without significant modifications to current protocols and networks.

With extensive ongoing research in this field, researchers have proposed several mathematical schemes that meet the requirements for being potential candidates for quantum-safe cryptographic algorithms. These include lattice-based, multivariate polynomial, code-based, hash-based, and isogeny-based cryptography.

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) launched a program in 2016 to create standardized quantum-safe cryptographic algorithms.

After a rigorous six-year evaluation involving global experts, it announced the first four algorithms selected for standardization. The following algorithms selected by NIST address general encryption and digital signatures, which are crucial for securing data exchanges and identity authentication:

| PQC algorithm | Cryptographic scheme | Purpose |
| --- | --- | --- |
| CRYSTALS-Kyber | Lattice-based cryptography | Key encapsulation mechanism (KEM) |
| CRYSTALS-Dilithium | Lattice-based cryptography | Digital signature |
| FALCON | Lattice-based cryptography | Small digital signature |
| SPHINCS+ | Hash-based cryptography | Digital signature |
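During migration, a common deployment pattern is "hybrid" key establishment: a classical key exchange (such as ECDH) is combined with a PQC KEM (such as CRYSTALS-Kyber) so the session stays secure as long as either component holds. The sketch below illustrates only that combination step, using Python's standard library; the two secrets are random placeholders standing in for real ECDH/Kyber outputs, and the HKDF is a bare-bones RFC 5869 construction.

```python
import hashlib
import hmac
import os

def hkdf_extract_expand(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) built from HMAC-SHA-256: extract, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets standing in for real key-exchange outputs:
classical_secret = os.urandom(32)   # e.g., from an ECDH exchange
pqc_secret = os.urandom(32)         # e.g., from a CRYSTALS-Kyber encapsulation

# Concatenating both secrets means an attacker must break BOTH the
# classical and the post-quantum component to recover the session key.
session_key = hkdf_extract_expand(
    salt=b"", ikm=classical_secret + pqc_secret, info=b"hybrid-demo"
)
print(len(session_key))  # 32-byte session key
```

Real deployments (for example, hybrid key exchange drafts for TLS 1.3) define the exact concatenation order and context labels; this sketch only conveys the shape of the idea.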

Several other developments related to PQC have occurred recently. The notable ones are highlighted below:

[Timeline of recent PQC developments]

Common cryptographic pitfalls

The complexity of the cryptographic field makes it difficult for enterprises to navigate data security. With numerous algorithms, protocols, and standards, enterprises often struggle to understand and implement robust cryptographic solutions.

Enterprises may encounter several common cryptographic pitfalls, including:

  • Lack of awareness about cryptographic algorithms used for data protection
  • Dependency on long-life data secured by cryptographic schemes not suitable for the quantum computing era
  • High costs and efforts required to update cryptography across systems and applications manually
  • Use of outdated cryptographic algorithms
  • Challenges in ensuring interoperability between different cryptographic systems and protocols, especially in hybrid IT environments
  • Limited resources, including security budget and expertise, hindering effective cryptography implementation and management
  • Risk of vulnerabilities and security breaches due to incorrect implementation of cryptographic protocols or algorithms

Enterprise considerations for embracing PQC

Considering the current challenges with cryptography, enterprises would face far more significant difficulties if they do not strategically plan for PQC. To prevent this, cybersecurity leaders globally must proactively prepare and initiate early plans to migrate to post-quantum cryptographic standards.

Taking a proactive stance is crucial since transitioning to new quantum-safe algorithms will not be a drop-in replacement, considering the inherent disparities in key sizes, error-handling properties, and other complexities.

Hence, enterprises should give themselves enough time to start small, experiment, learn from positive impacts and challenges, and explore ways to reduce technology transition costs.

Steps to establishing a quantum readiness roadmap

Staying abreast of advancements in quantum computing and quantum-safe solutions is paramount. Enterprises must establish a comprehensive quantum readiness roadmap following these five steps:

  • Inventory quantum-vulnerable systems: To kickstart readiness efforts, enterprises should conduct a thorough inventory of quantum-vulnerable systems across both information technology (IT) and operational technology (OT) environments, covering all cryptographic assets, including keys, certificates, protocols, libraries, and algorithms. Understanding cryptographic assets and algorithms, locations, and purposes is a fundamental best practice, especially when preparing for post-quantum cryptography. It is also crucial to identify where long-life data resides, comprehend data flows, and understand the types of cryptography used to protect it.
  • Conduct an internal risk assessment: This can help identify and prioritize the assets most exposed to a cryptographically relevant quantum computer and, therefore, posing the greatest risk to the organization. Chief Information Security Officers (CISOs) and Chief Risk Officers (CROs) must ensure that quantum risk mitigation is integrated into existing risk management strategies.
  • Engage with technology vendors: Partner with supply chain providers to understand their quantum readiness roadmaps and migration strategies to facilitate a smooth transition that aligns with enterprise goals and timelines.

  • Streamline the current cryptographic infrastructure: Enterprises can initiate modernization efforts by streamlining their current cryptographic infrastructure, including consolidating or replacing vendors to enable a managed migration process. The CFO should collaborate with other executives to prioritize PQC investments based on the risk appetite and strategic objectives and adopt a fully crypto-agile approach. Establishing a governance structure with clearly defined roles and responsibilities to adopt PQC effectively is also recommended.

  • Adopt PQC algorithms: Enterprises eventually should integrate PQC algorithms into browsers, applications, public key infrastructure (PKI), files, and data systems, wherever quantum-vulnerable cryptography is employed. CIOs must collaborate closely with CISOs and other stakeholders to assess the compatibility of current systems with PQC solutions.
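The first two roadmap steps, inventorying cryptographic assets and triaging them by quantum risk, can be sketched as a toy classifier. The asset names and triage rules below are purely illustrative (a real assessment would follow guidance such as NIST's), but they show the shape of the output such an exercise produces.

```python
# Asymmetric algorithms broken outright by Shor's algorithm on a CRQC,
# versus primitives merely weakened by Grover's quadratic speedup
# (illustrative rules, not an authoritative policy).
SHOR_BROKEN = {"RSA", "DSA", "ECDSA", "ECDH", "EdDSA", "DHKE"}
GROVER_WEAKENED = {"AES-128", "SHA-256", "3DES"}

def classify(algorithm: str) -> str:
    """Return a migration recommendation for a single algorithm."""
    if algorithm in SHOR_BROKEN:
        return "quantum-vulnerable: migrate to a PQC replacement"
    if algorithm in GROVER_WEAKENED:
        return "weakened: increase key/output size"
    return "review manually"

# Hypothetical inventory of cryptographic assets (asset, algorithm):
inventory = [
    ("vpn-gateway TLS certificate", "RSA"),
    ("code-signing key", "ECDSA"),
    ("database encryption", "AES-128"),
]
for asset, algorithm in inventory:
    print(f"{asset}: {classify(algorithm)}")
```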

There is an ongoing debate over some adversaries already gathering encrypted foreign communications, anticipating the future ability of quantum computers to decrypt such systems, and aiming to extract valuable secrets from the data collected. This threat, known as “harvest now, decrypt later,” highlights the urgency of making cryptographic changes rather than waiting.

How can service providers help enterprises navigate the PQC era effectively and efficiently?

As quantum computing advances, the demand for comprehensive quantum-resistant cryptographic solutions will only increase, creating a ripe market for cybersecurity service providers to capitalize on.

PQC offers a significant opportunity for providers to position themselves as vital partners in ensuring the security and resilience of enterprises’ digital assets against the evolving quantum computing threats.

Leaders may need help understanding the advanced mathematical concepts and algorithms involved in PQC. The complexity of these cryptographic methods can be daunting for enterprises trying to grasp the intricacies of quantum-resistant solutions.

With all the latest discussions about quantum computers, service providers should take this time to develop a perspective on how PQC would impact enterprises from various industry verticals.

Providers should play an educational role, creating awareness about the risks posed by quantum computing and guiding enterprises on the importance of proactively transitioning to quantum-resistant solutions.

Service providers should develop strategies to hire, train, and upskill talent in PQC and quantum computing concepts. Additionally, they can invest in R&D initiatives to explore new approaches and solutions in the PQC field. By collaborating with relevant technology vendors, research institutions, and other organizations paving the way for PQC, service providers can foster innovation and help their clients stay at the forefront of technological advancements.

Cybersecurity service providers can offer specialized consultation and assessment services to help enterprises evaluate and inventory their current cryptographic infrastructure, prioritize components based on risk, identify vulnerabilities to quantum attacks, and recommend appropriate post-quantum cryptographic solutions.

Moreover, they can engage with enterprises on initial levels to develop comprehensive strategies for implementing and managing these solutions effectively, ensuring seamless integration with existing security frameworks and compatibility with legacy systems.

Unlocking potential: Exploring use cases with PQC

Service providers should prioritize PQC to address the threat quantum computing poses to traditional cryptographic systems. By embracing PQC, service providers can safeguard their clients’ data and infrastructure against potential quantum attacks.

Additionally, they can explore new use cases for PQC to unlock innovative solutions and stay ahead of the curve in the rapidly evolving quantum landscape. These new use cases may include:

  • Quantum-safe communication (use cases for cloud computing, data centers, 5G networks, secure private communication links, etc.)
  • Security in the banking sector, securing ATM and online credit card transactions, as well as customer data stored in bank data centers
  • Quantum-safe VPN and SD-WAN
  • Quantum-safe cybersecurity for automotive systems
  • PQC in Internet of Things (IoT) and Mobile Edge Computing (MEC) domains for protection of data transmitted between connected devices and central data processor/edge servers
  • Quantum-safe blockchain
  • Safeguarding the storage, transmission, and processing of sensitive patient data in healthcare (including that collected by biosensors in wearable devices)
  • Quantum-safe PKI for OT environments
  • PQC in Zero Trust Architecture (ZTA)

Envisioning the future

PQC is no longer a theoretical concept but a reality, and multiple applications have already emerged. In a recent release, OpenSSL enabled PQC for digital signatures and key establishment mechanisms. The Signal Protocol, an essential constituent of the Signal, Google RCS, and WhatsApp messengers, has announced support for the PQXDH protocol, becoming the first to introduce PQC for initial key establishment. Apple has introduced a new encryption protocol named PQ3 for iMessage, offering advanced post-quantum security measures for instant messaging.

PQC is rapidly gaining traction for quantum-safe digital signatures, encryption, and key exchange mechanisms. Its widespread adoption seems inevitable as quantum computing capabilities advance and the associated risks grow.

The standardized algorithms aren’t battle-tested yet, and exploitable weaknesses could be uncovered, leading to adjustments in their functioning or the development of entirely new algorithms.

We anticipate PQC becoming the cornerstone of cybersecurity strategies in the coming years. Moreover, the security standards are expected to recommend or mandate PQC.

PQC has become a crucial element of enterprise security, safeguarding against quantum-enabled attacks and ensuring the integrity and confidentiality of sensitive data.

Enterprises must start planning to migrate from a secure lock to an unbreakable vault: post-quantum cryptography! Service providers play a crucial role in guiding and supporting enterprises every step of the way.

To discuss post-quantum cryptography further, please contact Prabhjyot Kaur, Kumar Avijit, and Ronak Doshi.

Driving Factors for IT Services Recovery in 2024: Insights from Everest Group’s Forces & Foresight™ Research | Blog

Our inaugural Forces & Foresight research uncovers three factors we predict will drive a gradual recovery in tech services this calendar year. Continue reading to understand the projected path for the IT services industry’s rebound and implications for providers.

Learn more about Forces & Foresight™.

After the global financial crisis in 2008 and the COVID-19 pandemic in 2020, the IT services industry took five to eight quarters to recover. Both recessions had specific catalysts, and the IT services growth rebound after each downturn was fueled by weak comparisons and a release of significant pent-up demand (as illustrated in Exhibit 1).

Let’s explore whether the current economic decline will follow a similar trajectory based on Forces & Foresight research that dives deep into comprehensive dynamics to understand the industry implications.

[Exhibit 1]

Over the past year, the contrasts between past recessions and ongoing macro challenges have been well-researched and discussed. More importantly, the industry is now facing multiple forecasts – pulling in different directions – on how the broader macroeconomic outlook will impact IT services growth.

The following three factors are fueling our foresight:

  1. Stabilizing base

Despite the uncertainty that has impacted enterprise decision-making for the past several quarters, we now see signs of spending stability. The fundamental need to leverage technology for growth or cost is keeping enterprise IT budgets green.

Some examples of various factors that give us confidence the demand downtrend has at least stabilized include:

    • Enterprises: Everest Group’s Key Issues Study of enterprise priorities found that over 60% of enterprises (from a sample of 200-plus) plan to increase technology budgets in 2024
    • IT Providers: A notable trend for technology platform players’ performance and outlook is that SaaS/PaaS players are not only reporting double-digit growth and new clients but also foresee growth continuing in 2024
    • Services: We see strong resilience for IT services spending in specific segments such as public sector, healthcare, and energy
  2. Fixing revenue leakage

Over several past quarters, service providers have reported contradictory numbers for revenue growth versus bookings (Exhibit 2).

Our research uncovered some key factors creating this dichotomy, including shifting profiles of existing deals to longer terms and lower annual contract values (ACVs), delayed/canceled commitments, and slow ramp-ups of large deals.

As a result, signings did not translate into revenue growth even after expected ramp-up periods. However, we see signs of stabilization in the latest quarter, especially for mid-tier players, and some stalled projects moving forward. This signals that the net negative effect is reversing and that net positive revenue contribution from bookings may begin.

[Exhibit 2]

  3. Pockets of additional demand

We see some specific growth areas in 2024. Rising cybersecurity demand is promising (as is the case in challenging periods) and is seeing more takers, such as governments expanding cybersecurity budgets. Engineering, Research, and Development (ER&D) demand in asset-heavy industries like automotive is getting stronger as enterprises strategically invest in next-generation concepts to modernize, such as smart factories. Data and analytics will continue its growth trajectory, with the rise in Artificial Intelligence (AI) fueling more robust use cases.

Beyond these promising areas, we also see signs of a turnaround in segments and sub-segments that were hit hard in recent quarters. Prominent ones include investment banking and high-tech industries, and North America.

Note: We are not factoring in benefits from AI and generative AI (gen AI) in our foresight for 2024. While we strongly believe in the immense potential of gen AI, our research suggests that scaled adoption will still take time, and the effect on this year’s growth will not move the needle.

Implications for service providers

Despite the ongoing challenges, the industry has not slumped. The revenue declines, seen in only one to two quarters, compare favorably with the several-quarter declines of previous downturns (Exhibit 3). We predict a pivotal return to growth soon.

[Exhibit 3]

However, given that this downturn stems from “classical economics” rather than a single triggering event, we anticipate the industry’s recovery will be more gradual than the sharp rebounds after the last two recessions. We expect a diverse mix of industry verticals and geographies to undergo varied recovery cycles, making it an interesting and complex scenario.

The key for IT service providers lies in identifying early-recovery pockets and attacking them rather than playing defense, while potentially limiting investments in late-recovery areas to account management.

For more insights, supporting data, and trends on IT services growth, please read the report, Forces & Foresight Q1 2024.

Contact Prashant Shukla to learn more at [email protected].

Decoding Quantum Computing: Uncovering its Potential Impact and Opportunities, Part I | Blog

With their exceptional computing prowess, quantum computers have the potential to revolutionize various sectors by expediting complex problem-solving. In this first blog of our two-part series, we delve into quantum computer types, opportunities for businesses and IT service providers, and their impact on modern cryptographic algorithms. Get in touch to discuss further.

What is quantum computing?

Quantum computing is an innovative approach that leverages the principles of quantum mechanics to solve extremely complex problems much faster than classical computers. Unlike classical computers using bits, quantum computers employ qubits, such as photons or atoms, for information encoding. Quantum computing progressed from 2-qubit systems in the 1980s to tens of qubits in the 2000s, and by the late 2010s and early 2020s, significant milestones were achieved. Google’s “quantum supremacy” in 2019 with a 53-qubit processor and IBM’s 433-qubit chip, IBM Osprey, set records. In 2023, Atom Computing unveiled an 1180-qubit quantum computer.

Quantum bits exhibit numerous types of quantum phenomena. Let’s explore the following:

  • Superposition – Quantum bits, or qubits, can represent both 0 and 1 simultaneously, allowing quantum computers to process information much faster than classical computers
  • Entanglement – Qubits become perfectly correlated, even when separated by large distances. This means that measuring one qubit instantly determines the state of its entangled partner.

These principles allow quantum computers to perform calculations based on the probability of a qubit’s state before measurement, revolutionizing computing capabilities.
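Superposition and entanglement can be sketched with a four-amplitude state vector in plain Python: a Hadamard gate puts the first qubit into an equal superposition, and a CNOT gate entangles the pair into a Bell state. This is an illustrative numerical simulation, not how physical quantum hardware operates.

```python
import math

# State vector of two qubits: amplitudes for |00>, |01>, |10>, |11>.
state = [1.0 + 0j, 0j, 0j, 0j]  # start in |00>

def hadamard_on_first(s):
    """Apply H to the first qubit: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """Flip the second qubit when the first is |1>: swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot(hadamard_on_first(state))
probs = [abs(a) ** 2 for a in bell]
# Only |00> and |11> have probability ~0.5 each: measuring one qubit
# as 0 (or 1) guarantees the other measures the same -- entanglement.
print(probs)
```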

Despite substantial investments, current systems face scalability and stability issues. Error correction and fault tolerance also remain complex, with each additional qubit increasing the probability of errors and higher sensitivity to environmental noise. These issues highlight the ongoing hurdles in quantum computing’s path to widespread commercialization.

Quantum computer types

Quantum computers have different architectures, determined by the nature of qubits and quantum mechanics phenomena used to manipulate them. The research and innovation put into these architectures deliver solutions to problems that previously could not be solved due to classical computers’ limited computing capabilities.

Below are some of the most typical architectures that enterprises should be familiar with:

  • Superconducting: Usually made from superconducting materials, these computers use loops and circuits to produce and alter the qubits. They are the most sophisticated and popular quantum computers and can accurately model and simulate the behavior of molecules, materials, chemical reactions, etc. This feature finds practical utility in fields like drug discovery, materials science, and chemistry, where understanding the quantum behavior of complex systems is essential. They also excel in solving optimization problems, such as route optimization, scheduling, and resource allocation, which have applications in logistics, supply chain management, and financial portfolio optimization
  • Trapped ion: These quantum computers use ions trapped and manipulated in an electromagnetic field as qubits. Their long coherence times make them viable for applications requiring high stability and control levels
  • Neutral atom: Similar to trapped ion quantum computers, these use neutral atoms suspended in an ultra-high vacuum by arrays of tightly focused laser beams. They also offer long coherence times and high-fidelity operations, making them suitable for implementing complex quantum algorithms such as simulations and solving optimization problems in logistics and cryptography
  • Quantum dots: These use semiconductor quantum dots or tiny semiconductor nanocrystals as qubits that can confine electrons or other charge carriers, usually manipulated by electrical, magnetic, or microwave pulses. They have the potential for robust scalability and are typically implemented in quantum communication networks and quantum sensing, among other use cases
  • Photonic/optical: Photonic quantum computers leverage photons (or packets of light) to carry and process quantum information. They can play a significant role in quantum communication protocols, enabling secure transmission of information through quantum key distribution (QKD) and quantum teleportation. This ensures the confidentiality and integrity of data, which is essential for various sectors such as finance, defense, and telecommunications

Implications for enterprises and IT service providers  

Quantum computing presents numerous opportunities for enterprises across various industries to revolutionize their operations, drive efficiency, and unlock new possibilities for growth and innovation.

As the field matures, business leaders must prepare to embrace quantum computing in the following five ways:

  1. Educate stakeholders: Enterprise leaders must educate themselves, their teams, and stakeholders about quantum computing, its potential applications, and its implications for their industry. They can organize workshops, training sessions, and seminars to increase awareness and understanding of quantum computing concepts and opportunities
  2. Identify potential use cases: Leaders must understand their respective fields’ most significant challenges and opportunities and actively search for quantum computing use cases. This can be achieved by either having an in-house team of quantum computing experts or collaborating with academia, research institutions, regulatory bodies, and other industry players to stay abreast of the latest quantum computing technology advances
  3. Build a quantum-ready workforce: After identifying relevant quantum computing use cases, leaders must build a dedicated team with expertise in quantum physics, algorithms, hardware, software, and other related fields that can work together to research, design, and implement quantum solutions tailored to their needs. This will enable the enterprise to filter out the hype and focus on areas with real business implications
  4. Invest in research and development: By allocating resources to R&D initiatives focused on quantum computing, enterprises can explore potential use cases, develop proof-of-concept projects, and experiment with quantum algorithms and applications relevant to their industry
  5. Understand technology needs: Enterprises should determine the frequency of their quantum computing usage to help decide whether to purchase/own a quantum computer or utilize cloud-based quantum services provided by computing companies. It is crucial for enterprises to carefully evaluate and choose quantum-computing partners based on their unique requirements

Service providers can play a crucial role in educating enterprises about the potential applications of quantum computing in their specific industry sectors and help them navigate the challenges and benefits associated with its adoption. Enterprises should understand they don’t necessarily need to own or build a quantum computer. Instead, they should embrace quantum computing as a service that provides multiple benefits, such as scalability, elasticity, reduced costs, and increased accessibility.

Furthermore, it’s crucial to communicate to enterprises that quantum computers will not require continuous availability, as they will coexist alongside classical computers. Providers can collaborate with enterprises on R&D initiatives and develop custom algorithms and applications tailored to their business needs. Additionally, providers have an essential role in helping enterprises navigate quantum computing security concerns.

Quantum computing’s impact on modern cryptographic algorithms

Cryptography has served as the cornerstone of secure communication in the digital age, addressing multiple information security aspects, such as confidentiality, data integrity, authentication, and non-repudiation in the presence of third parties or adversaries. Some of the foundational elements of cryptography are:

 

| Algorithm | Description | Use cases | Examples |
| --- | --- | --- | --- |
| Hash function/algorithm | Transforms input data into fixed-size strings called hash values | Password hashing, digital signatures, hash-based message authentication codes (HMACs), and data integrity verification | SHA2, SHA3, Blake2 |
| Symmetric algorithms | Use one key for both encryption and decryption | Data encryption, SSL/TLS, MACs, and VPNs | AES, RC6, Blowfish, Twofish |
| Asymmetric algorithms | Use a pair of keys: a public key for encryption and a private key for decryption | HTTPS, digital signatures, email encryption, blockchain, public key infrastructure (PKI) | RSA, DSA, ECDSA, EdDSA, DHKE, ECDH, ElGamal |
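The hash-function rows of this table can be demonstrated directly with Python's standard library, which ships SHA-2, SHA-3, BLAKE2, and HMAC. The message and key below are, of course, arbitrary placeholders.

```python
import hashlib
import hmac

message = b"quantum-safe migration plan"

# Hash function: fixed-size digest regardless of input length.
digest = hashlib.sha256(message).hexdigest()
print(len(digest))  # 64 hex characters = 256 bits

# BLAKE2 supports a configurable digest size.
short = hashlib.blake2b(message, digest_size=16).hexdigest()

# HMAC: hash-based message authentication code with a shared key.
key = b"shared-secret-key"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison to avoid
# leaking information through timing side channels.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(ok)  # True
```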

Many cryptographic algorithms that enterprises rely on today, such as RSA and ECC, are based on mathematical problems that are computationally difficult for classical computers to solve efficiently.

However, the advent of quantum computing threatens the security of these algorithms. Shor’s algorithm efficiently solves integer prime factorization and discrete logarithm problems, breaking the security of RSA and other asymmetric encryption schemes. Additionally, Grover’s algorithm threatens symmetric cryptographic algorithms and hash functions by offering a quadratic speedup in searching through unsorted databases.
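Shor's quantum speedup lies entirely in finding the period r of a^x mod N; everything after that is classical arithmetic. The toy sketch below brute-forces the period classically, which is feasible only for tiny numbers, to show how the factors fall out, and notes Grover's rule-of-thumb halving of symmetric key strength.

```python
from math import gcd

def shor_classical_core(N: int, a: int) -> tuple:
    """Brute-force the period r of a^x mod N, then derive factors of N
    the way Shor's algorithm does after its quantum period-finding step."""
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2:
        raise ValueError("odd period; pick another base a")
    half = pow(a, r // 2, N)  # a^(r/2) mod N
    return gcd(half - 1, N), gcd(half + 1, N)

# Factor 15 with base a=7: period r=4, yielding the factors 3 and 5.
print(shor_classical_core(15, 7))  # (3, 5)

# Grover's algorithm roughly halves the effective strength of symmetric
# keys, which is why larger keys -- not new algorithms -- are the remedy.
aes128_effective_bits = 128 // 2
print(aes128_effective_bits)  # 64
```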

 

| Cryptographic algorithm | Type | Purpose | Impact from large-scale quantum computer |
| --- | --- | --- | --- |
| AES | Symmetric key | Encryption | Larger key sizes needed |
| SHA-2, SHA-3 | – | Hash functions | Larger output needed |
| RSA | Public key | Signatures, key establishment | No longer secure |
| ECDSA, ECDH (Elliptic curve cryptography) | Public key | Signatures, key establishment | No longer secure |
| DSA (Finite field cryptography) | Public key | Signatures, key establishment | No longer secure |

Source: Report on Post-Quantum Cryptography (nist.gov)

Quantum computing – a mixed blessing?

Given their immense computational powers, quantum computers have the potential to revolutionize various fields by solving specific problems much faster than classical computers. Rapid technological advancements in the field make it critical for enterprises to understand the technology, determine potential use cases, and prepare for it.

However, the need for robust quantum-safe or post-quantum cryptographic solutions becomes increasingly evident as quantum computing advances. Read our next blog in this series to learn how to navigate quantum computing security concerns.

To discuss further, please contact Prabhjyot Kaur, Kumar Avijit, and Suseel Menon.

Don’t miss the Global Services Lessons Learned in 2023 and Top Trends to Know for 2024 webinar to learn the successes, challenges, and trends that defined the services industry in 2023 and the opportunities for business leaders in 2024.

Generative AI and Cloud Integration Keep Mainframes Alive in the BFSI Industry | Blog

Though a recent Everest Group survey revealed the pressing need for mainframe modernization, the technology is far from dead in the banking, financial services, and insurance (BFSI) industry. The rise of generative AI (gen AI) will encourage more BFSI firms to adopt a comprehensive technical architecture, integrating cloud and mainframe technology at its core. Read on for insights from the survey or get in touch. 

Are BFSI firms really ditching mainframes? The BFSI industry is indeed grappling with the prospect of abandoning this approach. According to an Everest Group survey, about half of the respondent firms have shifted their peripheral tasks away from mainframes.  

Concerns about mainframe system scalability loom large for more than 50% of sizable BFSI firms, while about 60% of smaller firms struggle primarily with finding talent skilled in older programming languages such as COBOL.  

Operational complexity with mainframe systems is also reported as a challenge by over 90% of BFSI respondents, underscoring the pressing need for mainframe modernization. Evolving priorities such as building data-driven workflows, digitalization, and enhancing customer experiences further fuel this urgency. 

Cost efficiency and talent unavailability are the main drivers for mainframe modernization, closely followed by the imperative for innovation. North American firms prioritize core banking and CRM workloads over modernization, while European players emphasize digital channels and payment infrastructure. 

Despite these challenges, mainframes are expected to remain integral to BFSI operations. A significant majority, about 60% of the respondent firms, have not yet started modernizing their core systems. In the coming years, non-core applications will continue to have a higher migration rate than core applications.  

However, industry research underscores that BFSI enterprises are optimizing and enhancing their mainframe ecosystems, presenting a promising opportunity for service providers to assist. Let’s explore this further.  

Capturing cloud value through a hybrid infrastructure

With mainframes here to stay in the BFSI industry, enterprises can gain a competitive advantage by investing in the private cloud to capture the underserved and large demand for hybrid IT. Hybrid cloud is a constant across all our BFSI industry enterprise conversations.  

Of the respondent BFSI firms, 35% utilize private cloud for their modernization initiatives — mainly in a hybrid cloud environment — while 65% rely on a multi-cloud strategy.  

IBM’s focus on transforming the mainframe’s interface to other environments validates this trend. The launch of z15 and z16 is the company’s answer to the age of cloud computing. It is an evolution to meet the needs of hybrid cloud deployments, leveraging investment in data, generative Artificial Intelligence (gen AI), and applications, adding features and functionality to complement this strategy. IBM is focusing its messaging on rightsizing over downsizing. The strategy to provide more flexibility, predictability, and cost-effectiveness is evident in the company’s push for tailored fit pricing.   

The survey reveals many firms believe the disconnection between mainframe environments and new cloud-native systems and applications is a big challenge. Further investments in technologies like application programming interfaces (APIs), in collaboration with technology and service providers, will help bridge this gap in the coming years. 
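
To make the idea concrete, here is a minimal, purely illustrative Python sketch of such an API layer. Everything in it is hypothetical — the function names, the fixed-width record layout, and the values are our own invention, standing in for a real mainframe connector — but it shows the translation pattern: a legacy transaction response is reshaped into JSON that cloud-native applications can consume.

```python
import json

def cics_get_balance(account_id: str) -> str:
    """Stand-in for a legacy mainframe transaction call.
    A real deployment would invoke CICS/IMS through a connector; here we
    return a fixed-width record, a format mainframe programs commonly emit."""
    return f"{account_id:<10}{123456:>12}"  # account padded to 10 chars, balance in cents

def get_balance_api(account_id: str) -> str:
    """Hypothetical API facade: parses the fixed-width record and exposes
    the result as JSON for cloud-native consumers."""
    record = cics_get_balance(account_id)
    account = record[:10].strip()
    balance_cents = int(record[10:22])
    return json.dumps({"account": account, "balance": balance_cents / 100})
```

In practice the facade would sit behind an API gateway, but the core value is the same: cloud-native systems integrate with a stable JSON contract instead of mainframe-specific formats.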

Will banking’s AI revolution enable cloud-based modernization? 

We expect a symbiotic relationship between gen AI and IT modernization, each complementing the other’s growth. Cloud computing is the foundational block providing the right computational power to run AI applications, while AI’s enhanced speed and efficiency will support cloud migrations.  

BFSI firms are channeling investments into gen AI, crafting use cases to support their modernization initiatives and business operations. The survey found that 40% of the BFSI firms have proof of concepts or use cases for gen AI to support mainframe modernization.  

Firms have moved beyond experimenting with gen AI. Goldman Sachs and Deutsche Bank have started using gen AI to generate and refactor code in their modernization initiatives, closely watching the impact. They are building and rolling out use cases to improve operational efficiency. We believe the banks poised for future success are identifying use cases that solve specific business problems aligned with their organization’s strategy. This can enable them to measure results easily and encourage leadership buy-in.  

With mainframe modernization services growing at a steady 4-5%, the ability to adapt and innovate using newer technologies such as gen AI will drive more BFSI firms to adopt a robust, holistic technical architecture with both cloud and mainframes at its core. 

However, the question remains: will gen AI bring exponential change in the next three years? There is one certainty: the need for a strong IBM, service provider, and hyperscaler value proposition will continue to grow for BFSI clients. 

To discuss mainframe modernization further, please contact [email protected] and [email protected]. Read more about the future of the enterprise mainframe, or watch our on-demand webinar on the future of generative AI implementation at the enterprise level.

Gig Worker Benefits Collective: Transforming Benefits Delivery in the Gig Economy | Blog

The rising trend of gig work, especially among Gen Z and Millennials who favor nontraditional employment, is garnering attention. However, the lack of workplace benefits for many gig workers is a significant concern. A viable proposal for this urgent need is a Gig Worker Benefits Collective. Read on to learn more.

Reach out to discuss this topic further.

With more than 70 million individuals in the US engaged part or full-time in the gig economy, the notable lack of workplace benefits and life insurance coverage for these flex workers has sparked widespread debate.

About half of Gen Z and Millennials, who together constitute over 50% of the workforce, gravitate toward nontraditional employment. Consequently, employers are reconsidering benefits for gig workers while navigating various legal and tax complexities.

Employees access benefits in the following ways:

  • Employer-provided benefits – Employers typically offer benefits, using them as an incentive to attract top talent
  • Purchasing individual policies – This alternative provides individuals flexibility and control over coverage
  • Government assistance – This serves as many workers’ safety net. However, the ongoing US debate over classifying workers as “employees” or “independent contractors” complicates matters for gig workers

While this issue appears daunting, finding an answer is possible. Let’s explore this further.

Proposing a solution: The Gig Worker Benefits Collective

One proposal is to create a Gig Worker Benefits Collective for these short-term workers across various employers. This centralized hub would bring together multiple employers or gig platforms under a comprehensive benefits umbrella.

Here are some benefits a Gig Worker Benefits Collective could offer:

  • Centralized administration: A central administration entity or pooled plan provider would manage benefit plans, including health insurance, retirement savings, and other relevant benefits. Technology and digital platforms would streamline administration and communication
  • Flexible benefit options: Through the collective, gig workers could choose from a wide range of benefits based on their needs. Options may include health insurance, retirement plans, disability coverage, and more. Furthermore, this would give gig workers the flexibility to customize benefit packages
  • Cost sharing: Participating employers would contribute to the collective pool to cover administrative expenses, potentially reducing costs for individual employers. The collective could leverage economies of scale to negotiate better rates with insurance providers and service vendors
  • Portability and continuity: This solution would ensure gig workers could continue their benefits even when transitioning between different gigs or employers within the collective. Having seamless benefit continuity would enhance worker satisfaction and retention

The Gig Worker Benefits Collective represents a professional, tech-enabled solution designed to address gig workers’ unique needs, potentially changing the landscape of life insurance and group benefits for the gig economy.

To discuss gig worker benefits further, please contact [email protected] and [email protected].

Watch the webinar, Locations and Workforce Strategy 2024: Insights, Trends, and Key Priorities, for insights into strategic workforce decision-making for 2024.

2024 is an Inflection Point For IT Services | Blog

We are now in a digital world run on operational platforms. In the platform world, looking for stability in processes no longer works, because platforms collapse business processes to align with improving the user experience and objectives and key results (OKRs), thus creating new value. Consequently, IT is at an inflection point: attention is shifting away from building out infrastructure and toward achieving demonstrable business value.

Read more in my blog on Forbes

Navigating Cloud Portability and Exit Strategies in Banking and Financial Services | Blog

Cloud service providers are vital partners in helping Banking and Financial Services (BFS) institutions build robust systems for cloud migration and exit strategies to maneuver complex regulatory and operational environments. These approaches promise to ignite technological innovation. Discover actions the world’s leading banks are taking and their importance in today’s financial landscape in this blog. Contact us to discuss further.

In the BFS industry, cloud has become synonymous with innovation and agility. Yet, as the space matures in its digital transformation journey, a crucial pivot is taking place. The focus is not solely on cloud migration but on nimbly and cautiously maneuvering within it. This shift brings to the forefront the importance of cloud portability and exit strategies – concepts rapidly gaining traction as BFS enterprises seek to future-proof technology investments. Let’s explore this further.

The strategic imperative of cloud portability

Cloud portability has risen to a strategic imperative within the BFS space. It encapsulates the capability to seamlessly transition applications and workloads between cloud environments, ensuring operational resilience and uninterrupted compliance. This degree of agility is fundamental in mitigating risks associated with vendor lock-in. Additionally, it enables BFS enterprises to adapt rapidly to evolving regulatory requirements and market conditions.

Major banks have spearheaded the charge towards cloud portability by embracing technologies that allow flexibility. Adopting containerization technologies and microservices architecture, notably through Kubernetes, is a case in point. These technologies provide a layer of abstraction, decoupling applications from the underlying cloud infrastructure, which empowers banks to maneuver digital assets across platforms without the burdens of significant downtime or exorbitant costs.

For instance, major financial institutions such as Bank of America and JPMorgan Chase have been at the forefront of embracing cloud-native technologies. Bank of America has utilized Kubernetes to enhance its application deployment processes, enabling faster innovation and improved customer service. Similarly, JPMorgan Chase has invested in containerization to streamline its IT infrastructure, demonstrating the significant efficiency and flexibility benefits these technologies offer to the BFS industry.

Navigating exit strategies in a regulated landscape

While less discussed, cloud exit strategies are vital to a comprehensive cloud governance framework. In an industry where strategic pivots or regulatory mandates can necessitate a change in cloud service providers, BFS enterprises must have clear, actionable plans for such eventualities. Crafting a cloud exit strategy involves thoroughly understanding service agreements and ensuring the transition can be executed with minimal disruption to operations and compliance protocols.

Goldman Sachs’ adoption of a multi-cloud strategy exemplifies a preemptive approach to exit planning. By distributing workloads across AWS, Azure, and Google Cloud, it is poised to maintain continuity of service and positioned to negotiate the transition of services, should strategic or regulatory circumstances change.

The formulation of cloud exit strategies is intricately linked to the BFS industry’s regulatory environment. Institutions must have actionable plans to transition away from cloud providers as strategic, regulatory, or operational landscapes evolve.

Over the last decade, regulations such as the General Data Protection Regulation (GDPR) in Europe and the Federal Financial Institutions Examination Council (FFIEC) guidelines in the United States have necessitated that banks maintain strict data governance and security protocols during such transitions.

These regulatory frameworks compel banks to plan their cloud engagements meticulously. For instance, compliance with GDPR requires that any BFS institution operating in or serving customers in the European Union (EU) must ensure its cloud exit strategy does not compromise data protection standards, even during service provider transitions.

For BFS enterprises, investing in cloud portability and a strategic exit plan is a direct response to the industry’s complex risk profile. These strategies protect them against the uncertainties of the cloud market and the evolving regulatory landscape. The goal is to safeguard investments and ensure that cloud engagements remain agile, compliant, and aligned with the overarching business objectives.

How can service providers become strategic partners in this roadmap?

Cloud service providers are pivotal in facilitating the BFS sector’s cloud transitions, having evolved from mere hosts of workloads to strategic partners. Mid-market providers illustrate this evolution by aiding BFS institutions in cloud migration and strategically planning portability and exit. These service providers ensure that cloud architectures are crafted to be vendor-agnostic and that exit strategies are incorporated into the engagement from the outset, aligning with the BFS industry’s stringent standards.

Elements of a comprehensive cloud strategy

The evolving cloud landscape necessitates a proactive and all-encompassing approach to strategy development. A holistic cloud strategy should incorporate the following:

  • Prioritizing open standards and application programming interfaces (APIs) to facilitate easy transition between cloud environments
  • Evaluating technology stacks in detail to uncover and mitigate potential lock-in risks
  • Negotiating transparent and favorable contractual terms with cloud providers that account for the potential need to exit
  • Developing robust business continuity plans that include cloud service transitions
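
As an illustrative sketch of the first principle, the following Python example shows what coding against a provider-neutral interface, rather than a vendor SDK, looks like. All class and function names here are hypothetical, and the in-memory backend stands in for a real provider-specific one; the point is that swapping cloud providers means writing a new backend, not rewriting business logic.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral storage contract. Application code depends only on
    this interface, which keeps an exit path open if the provider changes."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Demo/test backend. An S3- or Azure Blob-backed class would implement
    the same two methods, leaving the rest of the codebase untouched."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_statement(store: ObjectStore, customer_id: str, pdf: bytes) -> str:
    """Business logic written against the interface, not a vendor SDK."""
    key = f"statements/{customer_id}.pdf"
    store.put(key, pdf)
    return key
```

The same decoupling principle underlies containerization: the application is insulated from the infrastructure beneath it, so the infrastructure can change without a rewrite.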

The road ahead

As the BFS sector looks to the future, the trajectory of cloud computing strategies points toward greater flexibility, regulatory compliance, and strategic agility. The increasing importance of cloud portability and exit strategies is set to catalyze a new wave of technological innovation and strategic foresight. The pioneering steps some of the world’s leading banks are already taking demonstrate this evolution.

Large banks have been front-runners in leveraging cloud technology to enhance their financial services. Collaboration with hyperscalers, such as AWS, Azure, and Google Cloud, is part of the broader strategy to adopt a cloud-first approach by distributing workloads across different cloud providers.

Goldman Sachs has been using AWS’s capabilities to innovate in financial data management, leveraging cloud technology for scalability and efficiency and ensuring its architecture supports portability and compliance. This move indicates a broader trend among BFS institutions to harness the power of cloud computing while emphasizing the importance of cloud portability and the ability to adapt and exit in line with strategic and regulatory needs.

Moving forward, collaboration between BFS institutions and cloud service providers is expected to deepen, focusing on creating more robust frameworks for cloud portability and exit strategies. This partnership will be crucial in navigating the modern financial world’s regulatory complexities and operational demands, setting new standards for innovation, security, and customer-centric services in the banking sector.

BFS enterprises that diligently incorporate cloud portability and strategic exit planning into their operational frameworks are setting themselves up for enduring success. They will safeguard current investments and position themselves to leverage future technological advances and adapt to an ever-evolving regulatory landscape. We foresee that these proactive enterprises and service providers will spearhead the next wave of innovation and resilience in the BFS sector’s cloud journey.

To explore how to achieve cloud-first transformation in tandem with safeguarding the existing technology estate, contact Ayan Pandey, [email protected], and Pranati Dave, [email protected].

Don’t miss the webinar, Global Services Lessons Learned in 2023 and Top Trends to Know for 2024, to learn about the successes, challenges, and transformative trends that defined the global services industry in 2023 and discuss the opportunities that lie ahead for business leaders in 2024.

Five Tactics for Technology Service Providers to Guard Profit Margins Against Generative AI Impact | Blog

Technology service providers are committing to gen AI-related benefits to clients without a clear path to realization, which will negatively impact their deal margins. Uncover five strategies providers can use to optimize the productivity benefits of gen AI without denting profit margins in this blog.

The notion that generative AI (gen AI) can negatively impact technology service providers’ profit margins is counterintuitive. Service providers have assumed leveraging gen AI would improve margins by amplifying productivity and efficiency. Unfortunately, this has not been the case.

Most technology service providers commit to delivering gen AI-driven productivity gains to clients. However, they will struggle to achieve these benefits in the short term due to the highly contextual and probabilistic nature of gen AI tools and their niche impact on specific tasks. This makes the gains hard to guarantee in solutioning. 

Consequently, service providers must adopt other methods to realize these productivity benefits. This often involves a combination of people and intellectual property (IP) assets that cannot be charged to clients, hurting near to mid-term profitability.

Let’s look at the following five options that CEOs and CFOs of technology service providers have to address this issue:

  1. Monetize generative AI assets: Service providers have struggled to monetize their IP and assets because they are mostly used as service enablers and differentiators. Clients do not perceive additional value and are not incentivized to pay for these assets. However, gen AI assets could change this trend. The investment in gen AI assets can vary significantly, from weeks to months of effort. Monetizing these assets should go beyond implementation and support fees and focus on IP value. If service providers can demonstrate genuine value, clients will be willing to pay the cost through better service pricing, an innovation fund, or direct monetization
  2. Educate sales teams and clients about the realistic impact of generative AI: Sales teams often fear their competitors are moving faster than them. In this rush, they commit gen AI benefits to clients without involving the practice, solution, or delivery teams. Leaders should coach the sales team and engage with strategic clients to have realistic gen AI discussions. Providers should create gen AI literacy services that peel away the hype of eye-catching Big Tech productivity announcements about gen AI offerings. Educational initiatives can help moderate client expectations and limit margin erosion
  3. Transform deal-related enabling functions: Generative AI has significant potential for summarizing and querying knowledge repositories. Service providers already use it to enhance the sales function through requests for proposal (RFP) bots, translating documents, and understanding service level agreement (SLA) requirements and the nuances of client needs. Accelerating such initiatives across sales, pre-sales, delivery management, deal finance, and business reviews can lower deal-related costs and alleviate margin pressure from productivity commitments
  4. Identify the right clients: Service providers need to identify the right set of clients for committing and delivering gen AI productivity benefits. This crucial aspect of the puzzle is frequently overlooked. The selection process should be driven by data, the client’s AI maturity, and the nature of the relationship and services used. Choosing the right clients can increase providers’ margins in their gen AI service delivery adoption journey. These clients will be more supportive of letting specific units experiment with gen AI productivity benefits, provide realistic feedback, and set internal expectations
  5. Transform effort and pricing models: Service providers are struggling to transform their effort and pricing models in the wake of committed gen AI benefits. Providers continue to rely on traditional approaches mainly because they are uncertain about linking gen AI committed client benefits to effort and pricing to manage deal margins. However, this problem must be solved to prevent it from significantly denting profitability. Apparent solutions, such as delinking price from effort, value-based pricing, and outcome-based commercials, have not worked. Since clients are increasingly open to collaborating with providers on this journey, providers should reignite these conversations, educate procurement and vendor management offices, and refresh their solutioning playbooks

While account and delivery leaders are held financially accountable for managing margins, linking this specifically to gen AI-related commitments can backfire. Instead, the focus should be shifted upstream during deal bids, solutioning, and client engagement.

To share your experience and discuss the impact of generative AI on profit margins for technology service providers, contact [email protected] and [email protected].

Watch this webinar, The Generative AI Odyssey: A Year in Review and What’s Ahead in 2024, to hear our analysts discuss the hype vs. reality of generative AI, production-level use cases, and the future of this transformative technology.

Are Expectations For Gen AI Lower Than Planned? | Blog

I am coming to the conclusion that the hoped-for gen AI spending wave, for both tech and tech services, could take longer to develop than both investors and companies are planning for. In a recent blog, I talked about the realization that most (perhaps 90% or more) of the Proof of Concept (POC) initiatives launched during 2023 will not make it to production in 2024. Yet tech firms are plowing hundreds of millions of dollars into building AI tools and the capabilities for running them. What has changed? And where are there still opportunities for this important technology?

Read more in my blog on Forbes

Unpacking The Potential of a Hybrid Copilot Strategy: A Roadmap for Success | Blog

“We are the Copilot company; we believe in a future where there will be a Copilot for everyone and everything you do.” – Satya Nadella, CEO, Microsoft

A hybrid Copilot strategy delivers the benefits of purchasing ready-made Copilots and developing custom solutions. Discover a six-step roadmap for devising a successful hybrid Copilot strategy and compare buying versus building in part two of our blog series.

Read our previous blog to explore the emerging Copilot trend, M365 Copilot opportunities for service providers and enterprises, and why a hybrid strategy represents the future for Copilot. Reach out to us to discuss more in depth.

Hybrid Copilot is an innovative approach that seamlessly blends out-of-the-box offerings procured directly from vendors (Buy Copilots) and customized solutions built by the user with native tools or no-code/low-code platforms such as Microsoft Copilot Studio, Azure OpenAI Service, Google Vertex AI, OpenAI GPT Builder, AWS PartyRock, and others (Build Copilots). This synergy empowers enterprises to overcome the limitations of purchasing or developing alone, unlocking unprecedented potential within their environments.

While the traction for out-of-the-box Copilots like M365 Copilot is palpable, enterprises are increasingly recognizing the value of building Copilots. A shift is underway with more customers developing Copilots rather than defaulting to M365 Copilot.

The exhibit below summarizes the rising momentum for Build Copilots among enterprises in various sectors, signaling a growing demand for custom solutions tailored to unique requirements.

[Exhibit 1]

In two months, more than 10,000 organizations have used Copilot Studio to either tailor Copilot for Microsoft 365 or create their own custom Copilots, Microsoft announced in its second quarter earnings call.

Contrasting Buy versus Build Copilot

The exhibit below captures the differences between the two approaches:


In navigating the Copilot landscape, enterprises find themselves at a critical juncture where the choice between buying and building carries profound implications. Let’s look at the pros of each:

  • Buy Copilot: Offers the allure of rapid deployment, lower upfront investment, and access to a vast array of pre-packaged functionalities. Additionally, it requires minimal IT support, streamlining implementation processes, and reducing overhead costs
  • Build Copilot: Provides unparalleled customization capabilities. By developing Copilots internally, enterprises gain complete control over the development roadmap, enabling tailored solutions fine-tuned to address unique business requirements. Furthermore, building provides integration flexibility, allowing seamless alignment with existing business applications and workflows

Economies of scale also come into play. While the upfront investment might be higher, the total cost of ownership (TCO) decreases as the adoption of custom solutions scales. Conversely, pursuing a Buy Copilot strategy may lead to constant TCO increases due to ongoing licensing fees and dependencies on external vendors.
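
A back-of-the-envelope model illustrates this crossover. The figures below are entirely hypothetical, chosen only to show the dynamic, not to represent actual Copilot pricing or any vendor's rates:

```python
def buy_tco(users: int, years: int, license_per_user_per_year: float = 360.0) -> float:
    """Buy: licensing cost scales linearly with seats and never stops accruing."""
    return users * years * license_per_user_per_year

def build_tco(users: int, years: int, upfront: float = 250_000.0,
              run_per_user_per_year: float = 60.0) -> float:
    """Build: a large one-time investment, then a much lower marginal cost per seat."""
    return upfront + users * years * run_per_user_per_year

# With these hypothetical numbers, buying is cheaper for a 100-user pilot,
# but building wins once adoption scales to 2,000 users over the same 3 years.
pilot_buy, pilot_build = buy_tco(100, 3), build_tco(100, 3)
scaled_buy, scaled_build = buy_tco(2000, 3), build_tco(2000, 3)
```

The crossover point depends entirely on the assumed rates, which is precisely why enterprises should run this calculation with their own numbers before committing to either path.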

Roadmap for a hybrid Copilot strategy

In charting the course forward, enterprises must strike a delicate balance between buying and building Copilots.

[Exhibit 2]

The following six-step roadmap outlines a path to craft a successful Copilot strategy:

  1. Assess enterprise needs: Begin by comprehensively evaluating enterprise requirements, including workforce dynamics, operational workflows, and strategic objectives
  2. Identify use cases: Determine specific use cases and essential functions to maximize productivity and efficiency in the enterprise environment
  3. Evaluate Copilot categories: Weigh factors such as the number of use cases needed, customization and integration requirements, and budget constraints to determine which option is better
  4. Design and implement the hybrid strategy: Formulate a hybrid Copilot strategy that blends the strengths of buying and building by providing employees with either Buy or Build Copilots as required
  5. Monitor and optimize: Continuously monitor Copilot performance, gather user feedback, and optimize the strategy iteratively to drive ongoing improvements and maximize value
  6. Establish change management: Provide employees with training and adoption workshops to ease them into working with Copilots, helping boost productivity

The roadmap for a hybrid Copilot strategy empowers enterprises to leverage the best of both worlds, harnessing the rapid deployment and diverse functionalities of Buy Copilots while capitalizing on the customization and integration flexibility Build Copilots offer.

By strategically aligning with organizational needs and continuously optimizing, enterprises can confidently navigate the Copilot landscape, driving innovation, efficiency, and success.

Everest Group will continue to follow developments in this space. To discuss, contact [email protected] and [email protected], or share your views at [email protected].

Watch the webinar, Global Services Lessons Learned in 2023 and Top Trends to Know for 2024, to learn the latest on delivery locations, sourcing strategies, deal trends, talent strategy, and cost optimization strategy.
