Category: Cloud Infrastructure

You Need to Rethink Your Mainframe Strategy in Today’s Digital World | Blog

The demise of the mainframe has been predicted every year over the past decade. With digital and cloud transformation becoming the enterprise norm, the death-knell has been getting louder. But, for multiple reasons, mainframes aren’t going anywhere anytime soon.

For example, they are designed for efficiency and allow enterprises to run complex computations in a compact infrastructure with high utilization levels. They receive regular updates that can be applied without any business disruption, making them easily expandable and upgradable. The latest mainframes work well with mobile applications, which are becoming the norm across industries. And the fact that mainframes host some of enterprises’ most critical production data has created somewhat of a lock-in situation.

Despite mainframes’ staying power, a variety of factors – including 1) difficulty integrating mainframe-housed data with data across the rest of the enterprise; 2) the shrinking number of IT professionals who understand mainframes’ architectural complexities; and 3) mainframes’ lack of agility – can prevent enterprises from excelling in today’s digital environment.

Levers Enterprises Can Pull to Maximize Their Mainframe Value

With these issues in mind, some enterprises think they should eliminate mainframes completely from their technology environment. But that’s not the best route to take in the short to medium term. Rather, by embracing the best of mainframes and digital technologies, they can reduce operational costs and capital investment and gain business flexibility and agility without losing continuity or the mainframes’ high efficiency levels.

Our Recommendations on How You Can Achieve Maximum Value from Your Mainframes

Mainframe upgrades – The latest mainframe releases mimic the benefits offered by the cloud. If you haven’t upgraded to the newest release, you should consider doing so now.

Phased retiring of applications – For applications that can work as effectively on the cloud as on mainframes, you should develop new ones on the cloud and slowly phase out the old ones from your mainframe. This approach will avoid business disruptions and help you quickly build new services while still being able to access real-time mainframe data.

Mainframe-as-a-Service (MaaS) – If you’re looking to go asset light, you can adopt MaaS, wherein your existing mainframe assets are transferred to a services provider. In these arrangements, you’ll be charged on an actual consumption basis, provided you meet a minimum volume commitment. You’ll gain the most value from MaaS when you use it in conjunction with phased retiring of applications, because it will allow you to gain the benefits of a consumption model while preparing your cloud environment in parallel.

Automated migration to modern tech stacks – Multiple tools and services are available to migrate a legacy stack (such as COBOL-based) to a newer stack (such as Java-based or .NET-based) in an automated fashion. Given the variety of mainframe languages, databases, and infrastructure technologies going into the migration, you should always adopt a custom migration approach.

Wrapper approaches – In the short-term, instead of migrating away from your mainframe, you can augment it with agile data services that enable data interoperability with the rest of your infrastructure. You can also run emulators on the cloud and host legacy application code with minimum changes.

Mainframes are far from dead and will continue to form the backbone of many large enterprises in the near future. However, to excel in today’s digital world, you need to reconsider your mainframe strategy to get the best of all emerging digital technologies. Of course, there is no one-size-fits-all solution. So, you’ll need to take a customized approach, combining the various transformation levers that are most applicable to your enterprise’s unique situation.

How do you think mainframes will fare in the digital world? Please share your thoughts with me at [email protected].

Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud | Blog

Our research suggests that more than 90 percent of enterprises around the world have adopted cloud services in some shape or form. Additionally, 46 percent of them run a multi-cloud environment and 59 percent have adopted more advanced concepts like containers and microservices in their production set-up.  As they go deeper into the cloud ecosystem to realize even more value, they need to be careful of two seemingly similar but vastly different concepts: cloud native and native cloud.

What are Cloud Native and Native Cloud?

Cloud native used to refer to more container-centric workloads. However, at Everest Group, we define cloud native as building blocks of digital workloads that are scalable, flexible, resilient, and responsive to business demands for agility. These workloads are developer centric and operationally better than their “non-cloud” brethren.

Earlier, native cloud meant any workloads using cloud services. Now – just like in the mobile world where apps are “native Android or iOS,” meaning specifically built for these operating systems – native cloud refers to leveraging the capabilities of a specific cloud vendor to build a workload that is not available “like-to-like” on other platforms. These are innovative, disruptive offerings such as cloud-based relational database services, serverless instances, developer platforms, AI capabilities, workload monitoring, and cost management. They are not portable across other cloud platforms without a huge amount of rework.

With these evolutions, we recommend that enterprises…

Embrace Cloud Native

Cloud native workloads provide the fundamental flexibility and business agility enterprises are looking for. They thrive on cloud platforms’ core capabilities without getting tied to them. So, if need be, the workloads can easily be ported to other cloud platforms without any meaningful rework or drop in performance. Cloud native workloads also allow enterprises to build hybrid cloud environments.

And Be Cautious of Native Cloud

Most cloud vendors have built meaningfully advanced capabilities into their platforms. Because these capabilities are largely native to their own cloud stack, they are difficult to port across to other environments without considerable investment. And if – more likely, when – the cloud vendor makes changes to its functional, technical, or commercial model, the enterprise finds it tough to move away from the platform…the workloads essentially become prisoners of that platform.

At the same time, native cloud capabilities are fundamentally disruptive and very useful for enterprise workloads. However, to adopt such advanced features in the right manner and still be able to build a multi-cloud strategy, enterprises need the necessary architectural, technical, deployment, and support capabilities. For example, in a serverless application, the architect can put business logic in a container and the event trigger in serverless code. With that approach, when porting to another platform, the container can be directly ported and only the event trigger needs to change.
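To make that concrete, here is a minimal, hypothetical Python sketch of the separation described above, assuming an AWS Lambda-style handler and a BUSINESS_LOGIC_URL environment variable we invented to point at the containerized business logic; only this thin trigger is cloud-specific.

```python
# handler.py – the cloud-specific event trigger (e.g., an AWS Lambda handler).
# It only adapts the provider's event envelope and forwards the payload to the
# containerized business logic, which stays provider-neutral.
import json
import os
import urllib.request

# Hypothetical endpoint of the business-logic container (e.g., a service running
# on ECS/Fargate, Cloud Run, or any Kubernetes cluster).
BUSINESS_LOGIC_URL = os.environ.get(
    "BUSINESS_LOGIC_URL", "http://orders-service.internal/process"
)


def lambda_handler(event, context):
    # Provider-specific part: unwrap the vendor's event format.
    payload = json.dumps(event.get("detail", event)).encode("utf-8")

    # Provider-neutral part: a plain HTTP call into the portable container.
    request = urllib.request.Request(
        BUSINESS_LOGIC_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return {"statusCode": response.status, "body": response.read().decode("utf-8")}
```

Porting to another platform would then mean rewriting only this adapter (for example, as an Azure Function or Google Cloud Function entry point), while the container and its HTTP contract remain unchanged.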

Overall, enterprise architects need to be cautious of how deep they are going into a cloud stack.

Going Forward

Given that feature and functionality parity across cloud platforms is becoming common, cloud vendors will increasingly push enterprises to go deeper into their stacks. The on-premise cloud stack offered by all three large hyperscalers – Amazon Web Services, Azure, and Google Cloud Platform – is an extension of this strategy. Enterprise architects need to have a well-thought-out plan if they want to go deeper into a cloud stack. They must evaluate the interoperability of their workloads and run simulations at every critical juncture. Although it is unlikely that an enterprise would completely move off a particular cloud platform, enterprise architects should make sure they have the ability to compartmentalize workloads to work in unison and interoperate across a multi-cloud environment.

Please share your cloud native and native cloud experiences with me at [email protected].

Oracle Wins Over Microsoft and SAP in the Cloud ERP BigTech Battle

As part of our enterprise platform services research, we reached out to 15 global IT service providers and some of their key enterprise clients to understand their views on the leading cloud ERP vendors: Microsoft Dynamics 365, Oracle ERP Cloud, and SAP S/4 HANA.

We then analyzed their input against five important parameters.

Who’s the winner? Oracle ERP Cloud.

Snapshot of the cloud ERP vendor assessment

Here’s a drill-down on our analysis of the five parameters.

Technology sophistication/product excellence

Microsoft and SAP are still struggling to migrate all the on-premise functionalities to their cloud offerings. In fact, many of the enterprises we spoke with consider Dynamics 365 and S/4 HANA simplified versions of their on-premise offerings, but with some functionality gaps. On the other hand, Oracle has made significant headway in its migration and is stepping up to integrate emerging technology capabilities into its cloud offering. Microsoft and SAP also lack case study-based proof points that demonstrate the maturity of their solutions.

Ease of implementation and integration

Although implementation completion time is consistent among the three vendors’ cloud offerings, there are significant variations in their ease of integration. Because of its Fusion middleware, Oracle ERP Cloud is considerably easier to integrate with on-premise systems and other third-party applications than the others. SAP ranks lowest on this parameter, mainly because of challenges associated with integrating other SAP cloud offerings, such as SuccessFactors, Ariba, Concur, and Hybris, with the core S/4 HANA and on-premise SAP products.

Commercial flexibility

Here, Microsoft fares better than both Oracle and SAP. It has a friendlier licensing model wherein it bundles its cloud ERP offering with CRM and other Microsoft products. In comparison, SAP’s limited features and functionalities make mid-sized enterprises its largest buyer group. And Oracle’s hosting environment isn’t particularly flexible; it is pushing to keep the NetSuite and Oracle ERP Cloud workloads in-house on the Oracle platform.

Talent availability

Because of Oracle’s and SAP’s strong presence in the on-premise ERP market, there’s an abundance of talent with the knowledge to be upskilled to implement, integrate, and manage their cloud-based offerings. In fact, supply is larger than demand. But Microsoft is struggling here, with a ~20 percent demand-supply gap for trained developers and integration consultants.

Overall customer experience

Over the past few years, Oracle has been able to improve its end-user experience with software updates. Microsoft is trying to create a better customer experience with its integrated enterprise offering. Dynamics 365 engagements are no longer just standalone ERP or CRM engagements; instead, oriented around a transformational impact message, they also encompass Office 365, Azure cloud services, and the Power platform. SAP is creating a better customer experience by collaborating effectively with its clients on implementation and maintenance issues. But it still delivers an inconsistent user experience between its on-premise and cloud versions. While all three vendors have made strides in delivering a better customer experience, Oracle rose to the top on this parameter.

Our analysis shows that Oracle ERP Cloud is the clear, present winner in the war among the top three vendors. Although Microsoft and SAP are catching up with Dynamics 365 and S/4 HANA, and are doing great in specific niches, it will take some time before they evolve their offerings and establish some credible proof points across different industries.

Watch this space for additional blogs on the kind of challenges enterprises are facing with cloud ERP adoption, and what they should do to tackle them.

What has been your experience with cloud ERP? Please write to us at [email protected] and [email protected].

Aware Automation: An Enabler of Business-Centric Infrastructure | Blog

In today’s digital world, enterprise success is all about speed, agility, and flexibility in order to adapt to market and competitor dynamics. It is no surprise that 62 percent of enterprises view IT services agility and flexibility as a primary focus of their IT services strategy, with cost reduction seen as a derivative.

The digital businesses of today require a business-centric IT infrastructure that is agile, flexible, scalable and cost-effective. For a long time, IT infrastructure has taken up an inordinate amount of time and the lion’s share of precious resources (particularly financial). However, with new cloud delivery models gaining prominence and advancements in the underlying technology, business leaders now view IT infrastructure as an enabler of digital transformation — or at the very least, want to ensure that their IT infrastructure evolves to such a state.

Read the blog on IPSoft

 

The Top Three Cloud Vendors Are Fighting An Ugly War, And It’s Only Going To Get Uglier | Blog

With the massive size of the public cloud market, it’s reasonable to assume that there’s plenty of pie for each of the top three vendors – Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure (Azure) – to get their fill.

But the truth is that they’re all battling to capture even larger slices. While this type of war has happened in other technology segments, this one is unique because the market is growing at 30-40 percent year-over-year.

Here are a few examples of the current ugly wars these vendors are waging against each other.

AWS is luring away Azure customers. Channel checks suggest that AWS is incentivizing clients to move their Windows workloads to Linux. The next step is to move their SQL Server workloads to other databases (e.g., PostgreSQL). Of course, it won’t stop there; there will be an entire migration strategy in place. And there have even been a few instances in which AWS has funded clients’ early PoCs for this migration along with the implementation partner.

Azure is pushing for AWS migration. It isn’t uncommon for many mid-sized implementation partners to make their client pitch solely on the fact that they can migrate AWS virtual instances to Azure and achieve 20-30 percent, or more, cost savings. It also isn’t uncommon for Microsoft to bundle many of its offerings, e.g., Office 365, to create an attractive commercial bundle for its broader cloud portfolio against AWS, which lacks an enterprise applications play.

GCP is pushing Kubernetes cloud and Anthos. GCP’s key argument against AWS and Azure is that they are both “legacy clouds.” The entire Kubernetes cloud platform story is becoming very interesting and relevant for clients. Moreover, for newer workloads such as AI, machine learning, and containers, GCP is pushing hard to take the lead.

Each of these vendors will continue to find new avenues to create trouble for each other. Given that Azure and GCP are starting from a low base, AWS has more to lose.

So, how will the cloud war play out? Three things will happen going forward.

Stack lock-in

The vendors have realized that clients can relatively easily move their IaaS, and even PaaS, offerings to another cloud. Therefore, they’ll push to make their clients adopt native platform offerings that cannot be easily ported to different clouds (e.g., serverless). While some of the workloads will be interoperable across other clouds, parts will run only on one cloud vendor’s stack.

Preferred partnership for workloads

While the vendors will acknowledge that implementation partners will always have cloud alliances, they’ll push to have preferred partner status for specific workloads such as database lift and shift, IoT, and AI. For this, most cloud vendors will partner with strategy consulting firms and implementation partners to shape enterprises’ board room agenda.

Migration kits

In 2018, Google acquired cloud migration specialist Velostrata. This year, both AWS and Azure launched migration kits targeting each other’s clients. This battle will soon become even fiercer, and will encompass not only lift and shift VM migration, but also workloads such as database instances, DevOps pipelines, application run time, and even applications.

With the cloud giants at war, enterprises need to be cautious of where to place their bets. They need to realize that working with cloud vendors will become increasingly complex, because it’s not only about the offerings portfolio but also the engagement model.

Here are three things enterprises should focus on:

  • Ensure interoperability and migration: Enterprises need to make the cloud vendors demonstrate evidence of easy workload interoperability with and migration to other cloud platforms. They should also evaluate the target cloud vendor’s own native migration toolkits and services, regardless of what the selected implementation partner may use.
  • Stress test the TCO model: Enterprises need to understand the total cost of ownership (TCO) of the new services offered by the cloud vendors. Most of our clients think the cloud vendors’ “new offerings” are expensive. They believe there’s a lack of translation between the offerings and the TCO model. Enterprises should also stress test the presented cost savings use cases, and ask for strong references.
  • Get the right implementation partner: For simpler engagements, cloud vendors increasingly prefer smaller implementation partners because they are more agile. Though the vendors claim their pricing model doesn’t change for different implementation partners, enterprises need to ensure they are getting the best commercial construct from both external parties. For complex transformations, enterprises must do their own evaluation, rather than rely on cloud vendor-attached partners. Doing so will become increasingly important given that most implementation partners work across all the cloud vendors.

The cloud wars have just begun, and will become uglier going forward. The cloud vendors’ deep pockets, technological capabilities, myriad offerings, and sway over the market are making their rivalries different than anything your business has experienced in the past. This time, you need to be better prepared.

What do you think about the cloud wars? Please write to me at [email protected].

You are on AWS, Azure, or Google’s Cloud. But are you Transforming on the Cloud? | Blog

There is no questioning the ubiquity of cloud delivery models, independent of whether they’re private, public, or hybrid. It has become a crucial technology delivery model across enterprises, and you would be hard pressed to find an enterprise that has not adopted at least some sort of cloud service.

However, adopting the cloud and leveraging it to transform the business are very different. In the Cloud 1.0 and Cloud 2.0 waves, most enterprises started their adoption journey through workload lift and shifts. They reduced their Capex and Opex spend by 30-40 percent over the years. Enamored with these savings and believing their job was done, many stopped there. True, the complexity of the lifted-and-shifted workloads increased as enterprises moved from Cloud 1.0 to Cloud 2.0, e.g., from web portals to collaboration platforms to even ERP systems. But it was still lift and shift, with minor refactoring.

This fact demonstrates that most enterprises are, unfortunately, treating the cloud as just another hosting model, rather than a transformative platform.

Yet, a few forward-thinking enterprises are now challenging this status quo in the Cloud 3.0 wave. They plan to leverage the cloud as a transformative model where native services can be built in not only to modernize the existing technology landscape but also to enable cloud-based analytics, IoT-centric solutions, advanced architectures, and very heavy workloads. The main difference with these workloads is that they won’t just “reside” on the cloud; they will use the fundamental capabilities of the cloud model for perpetual transformation.

So, what does your enterprise need to do to follow their lead?

Of course, you need to start by building the business case for transformation. Once that is done, and you’ve taken care of the change management aspects, here are the three key technology-centric steps you need to follow:

Redo workloads on the cloud

Many monolithic applications, like data warehouses and sales applications, have already been ported to a cloud model. You need to break down the ones you use based on their importance and the extent of technical debt that determines the transformation needed. Many components may be taken out of the existing cloud and ported in-house or to other cloud platforms based on the value they can deliver and their architectural complexity. Some components can leverage cloud-based functionalities (e.g., for data analytics) and drive further customer value. You need to think about extending the functionality of these existing workloads to leverage newer cloud platform features such as IoT-based data gathering and advanced authentication.

Revisit new builds on the cloud

Our research suggests that only 27 percent of today’s enterprises are meaningfully building and deploying cloud-native workloads. This includes workloads with self-scaling, tuning, replication, backup, high availability, and cloud-based API integration. You must proactively assess whether your enterprise needs cloud-native architectures to build out newer solutions. Of course, cloud native does not mean every module should leverage the cloud platform. But a healthy portion of the workload should have some elements of cloud adoption.

Relook development and IT operations on the cloud

Many enterprises overlook this part, as they believe the cloud’s inherent efficiency is enough to transform their operating model. Unfortunately, it does not work that way. For cloud-hosted or cloud-based development, you need to relook at your enterprise’s code pipelines, integrations, security, and various other aspects around IT operations. The best practices of the on-premise era continue to be relevant, albeit in a different model (such as tweaks to the established ITSM model). Your developers need to get comfortable with leveraging abstract APIs, rather than worrying about what is under the hood.
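As one illustration of what leveraging abstract APIs can mean in practice, here is a minimal, hypothetical Python sketch (the interface, class, and function names are ours, not from any specific framework): application code depends on a neutral ObjectStore abstraction, and only a single adapter class knows which cloud service sits underneath.

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Abstract API the application codes against; the backing cloud is a detail."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Local implementation, handy for tests or an on-premise stub."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class S3Store(ObjectStore):
    """Cloud-backed implementation; only this class knows about AWS."""

    def __init__(self, bucket: str) -> None:
        import boto3  # requires the AWS SDK for Python to be installed

        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
    # Application logic never references a specific cloud provider.
    store.put(f"invoices/{invoice_id}.pdf", pdf)
```

The same pattern applies to queues, secrets, or identity: swapping the adapter (or adding one for another cloud) changes the operational footprint without touching the code pipelines built on top.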

The Cloud 3.0 wave needs to leverage the cloud as a transformation platform instead of just another hosting model. Many enterprises limit their cloud journey to migration and transition. This needs to change going forward. Enterprises will also have to decide whether they will ever be able to build so many native services in their private cloud. The answer is probably not. Therefore, the strategic decision of leveraging hybrid models will become even more important. The service partners will also need to enhance their offerings beyond migration, transformation during migration, and management. They need to drive continuous evolution of workloads once ported or built on the cloud.

Remember, the cloud itself is not magic. What makes it magical is the additional transformation you can derive beyond the cloud platform’s core capabilities.

What has been your experience in adopting cloud services? Please write to me at [email protected].

The Amazon Web Services Juggernaut: Observations from the AWS Summit India 2019 | Blog

Amazon Web Services’ (AWS) Summit in Mumbai last week made it clear that its trifecta juggernaut in customer centricity, long-term thinking, and innovation is giving other public cloud vendors a run for their money.

Here are our key takeaways for AWS clients, partners, and the ecosystem.

Solid growth momentum

Sustaining a growth rate in the mid-teens is a herculean task for most multi-billion-dollar businesses. But AWS has an annual run rate of US$31 billion and clocked in a 41 percent Y/Y growth rate, underpinned by millions of monthly active customers and tens of thousands of AWS Partner Network (APN) partners around the globe.

Deep focus on the ecosystem

Much of this momentum is due to AWS’ heavy focus on developing a global footprint of partners to help enterprises migrate and transform their workloads. Taking a cautious and guided approach to partner segmentation, it not only broke out its Consulting and Technology partners, but also segmented its Consulting Partners into five principal categories: Global SIs and Influencers, National SIs, Born-in-the-Cloud, Distributors, and Hosters. This is helping AWS establish specific innovation and support agendas for its partners to grow.

AWS growth momentum – underpinned by expansive global partner network

This partner ecosystem focus is increasingly enabling enterprises to achieve real business value through the cloud, including top-line/bottom-line growth, additional RoI, lower cost of operations, and higher application developer productivity. And AWS’ dedicated focus on articulating business benefits such as operational agility, operational resilience, and talent productivity, along with the underlying tenets of the cloud economy, has helped it onboard more enterprises.

Cloud convenience will need an accelerated Outposts push

Enterprises are looking for cloud convenience, which often manifests in location-agnostic (on-premise or on cloud) access to AWS cloud services. To bring native AWS services, infrastructure, and operating models to virtually any datacenter, co-location space, or on-premises facility, the company launched AWS Outposts at its 2018 re:Invent conference. Outposts is expected to go live by H2 2019 for Indian customers. Despite this, AWS is trailing on this front, playing catch-up to Microsoft Azure, which launched Azure Stack almost a year ago (and previewed a version in 2015). At the same time, AWS will have to educate its enterprise clients and ease their apprehensions about vendor lock-in challenges while leveraging integrated hardware and software packages.

Helping clients avoid consumption fatigue

Shifting the focus toward AWS’ innovation agenda, the public cloud vendor launched over 1,800 services and features in 2018. As enterprises grapple with the rising number of tools and technologies at their disposal – which can lead to consumption fatigue – this can manifest in different ways:

  • Large enterprises will often depend on system integrators to help them unlock value out of latest technologies – AWS’ success in furthering the partner ecosystem will be crucial here
  • For SMBs, AWS will have to build up its touchpoints with the segment – something that Microsoft and Google already enjoy because of their respective enterprise productivity suites.

What’s next on AWS’ innovation front

There seemed to be a lack of development on the quantum or high-performance computing front. Conversations with clients suggested that they are struggling to figure out the right use cases depending on whether they need more compute and/or data – something AWS can help educate them on.

Gazing into the enterprise cloud future

We do not believe enterprises will move their entire estates to the public cloud. Indeed, as they transition to the cloud, we expect the future to be decidedly hybrid, i.e., a mix of on-premise and public, as this approach will allow every organization to choose where each application should reside based on its unique needs.

To deliver on this hybrid need, product vendors are inking partnerships with virtualization software companies. And the services and product line-ups are piquing enterprises’ curiosity. To help stake its claim in this hybrid space, AWS Outposts does have a VMware Cloud option, which is AWS’ hardware with the same configurations but using VMware’s Software Defined Data Center (SDDC) stack running on EC2 bare-metal. But it will need to educate the marketplace to accelerate adoption.

The bottom line is that although AWS is facing some challenges on the competitor front – with Azure and a reinvigorated Google Cloud under Thomas Kurian – it is well positioned on account of a solid growth platform and ecosystem leverage, which it demonstrated at the 2019 India Summit.

Busting Four Edge Computing Myths | Blog

Interest in edge computing – which moves data storage, computing, and networking closer to the point of data generation/consumption – has grown significantly over the past several years (as highlighted in the Google Trends search interest chart below). This is because of its ability to reduce latency, lower the cost of data transmission, enhance data security, and reduce pressure on bandwidth.

Google Trends: search interest in edge computing over time

 

But, as discussions around edge computing have increased, so have misconceptions around the potential applications and benefits of this computing architecture. Here are a few myths that we’ve encountered during discussions with enterprises.

Myth 1: Edge computing is just an idea on the drawing board

Although some believe that edge computing is still in the experimental stages with no practical applications, many supply-side players have already made significant investments in bringing new solutions and offerings to the market. For example, Vapor IO is building a network of decentralized data centers to power edge computing use cases. Saguna is building capabilities in multi-access edge computing. Swim.ai allows developers to create streaming applications in real time to process data from connected devices locally. Leading cloud computing players, including Amazon, Google, and Microsoft, all offer their own edge computing platforms. Dropbox formed its edge network to give its customers faster access to their files. And Facebook, Netflix, and Twitter use edge computing for content delivery.

With all these examples, it’s clear that edge computing has advanced well beyond the drawing board.

Myth 2: Edge computing supports only IoT use cases

Processing data on a connected device, such as a surveillance camera, to enable real-time decision making is one of the most common use cases of edge computing. This Internet of Things (IoT) context is what brought edge computing to the center stage, and understandably so. Indeed, our report on IoT Platforms highlights how edge analytics capabilities serve as a key differentiator for leading IoT platform vendors.

However, as detailed in our recently published Edge Computing white paper, the value and role of edge computing extends far beyond IoT.


For example, in online streaming, it makes HD content delivery and live streaming latency-free. Its real-time data transfer ability counters what’s often called “virtual reality sickness” in online AR/VR-based gaming. And its use of local infrastructure can help organizations optimize their websites. For example, faster payments processing will directly increase an e-commerce company’s revenue.

Myth 3: Real-time decision making is the only driver for edge computing

There’s no question that one of edge computing’s key value propositions is its ability to enable real-time decisions. But there are many more use cases in which it adds value beyond reduced latency.

For example, its ability to enhance data security helps manufacturing firms protect sensitive and sometimes highly confidential information. Video surveillance, where cameras constantly capture images for analysis, can generate hundreds of petabytes of data every day. Edge computing eases bandwidth pressure and significantly reduces costs. And when connected devices operate in environments with intermittent to no connectivity, it can process data locally.

Myth 4: Edge spells doom for cloud computing

Much of the talk around edge computing suggests that the current cloud computing architecture is not suited to power new-age use cases and technologies. This has led to attention-grabbing headlines about edge spelling the doom of cloud computing, with developers moving all their applications to the edge. However, edge and cloud computing share a symbiotic relationship. Edge is best suited to run workloads that are less data intensive and require real-time analysis. These include streaming analytics, running the inference phase for machine learning (ML) algorithms, etc. Cloud, on the other hand, powers edge computing by running data-intensive workloads such as training the ML algorithms, maintaining databases related to end-user accounts, etc. For example, in the case of autonomous cars, edge enables real-time decision making related to obstacle recognition while cloud stores long-term data to train the car software to learn to identify and classify obstacles. Clearly, edge and cloud computing cannot be viewed in isolation from each other.
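A simplified, hypothetical Python sketch of that division of labor (the names and the stubbed model are ours, purely for illustration): the device runs only a lightweight inference loop in real time, while compact samples are queued for the cloud, where training and long-term storage would happen.

```python
import random
import time
from collections import deque

# Lightweight "model" previously trained in the cloud and pushed to the device;
# at the edge we only run the inference phase.
MODEL = {"threshold": 0.8}

# Buffer of interesting samples shipped back to the cloud in batches, where the
# data-intensive work (retraining, long-term storage) takes place.
upload_queue: deque = deque(maxlen=1000)


def read_sensor() -> float:
    # Stand-in for a real camera or sensor reading on the device.
    return random.random()


def infer_locally(reading: float) -> bool:
    # Real-time decision made at the edge, with no round trip to the cloud.
    return reading > MODEL["threshold"]


def sync_with_cloud() -> None:
    # Periodic, bandwidth-friendly sync: push queued samples, pull a newer model.
    batch = [upload_queue.popleft() for _ in range(len(upload_queue))]
    print(f"uploading {len(batch)} samples for retraining (stubbed)")
    # MODEL.update(fetch_latest_model())  # would be returned by cloud-side training


if __name__ == "__main__":
    for tick in range(100):
        value = read_sensor()
        if infer_locally(value):
            upload_queue.append({"t": tick, "value": value})  # keep only what matters
        if tick % 25 == 0:
            sync_with_cloud()
        time.sleep(0.01)
```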

To learn more about edge computing and to discover our decision-making framework for adopting edge computing, please read our Edge Computing white paper.

Journey Migrating to Hybrid Cloud has Three Issues Crucial to Success | Blog

When companies undertake digital transformation, it’s crucial that they keep executive and organizational support throughout the multi-year journey. An effective strategy for getting and sustaining that support is to focus on the “moments that matter” to the executives and/or users. Those are the moments (or events, decisions, actions) that comprise the most important issues to decide and evolve on the journey – things that the company must get right.

Leaders must not only communicate effectively about those moments but also deal with the related challenges along the way. Otherwise, progress on the digital transformation journey will slow or the journey will be derailed and likely will fail. To avoid either of these outcomes, let’s consider three moments that matter in a common digital initiative – migrating to a hybrid cloud environment.

Read more in my blog on Forbes

Enterprises Must Bake “Contextualization” into Their IT Security Strategies | Sherpas in Blue Shirts

Given the rapid uptake of digital technologies, proliferation in digital touchpoints, and consumerization of IT, traditional enterprise security strategies have become obsolete. And challenges such as security technology proliferation, limited user/customer awareness, and lack of skills/talent are making the enterprise security journey increasingly complex.

Against that backdrop, the key thrust of our just released IT Security Services – Market Trends and Services PEAK Matrix™ Assessment 2019 is that the conventional, cookie cutter best practices prescribed by service providers no longer cut it. Indeed, we subtitled this new assessment “Enterprise Security Journeys and Snowflakes – Both Unique and Like No Other!” because the complexities of today’s technological and business landscape are forcing enterprises to use a much more guided and contextualized approach toward securing their IT estates.

What does this mean? To achieve success, enterprise IT security strategies must focus on three discrete, yet intertwined, levers.

Enterprise-specific Business Dynamics

In order to prioritize their investments in next-generation IT security, every enterprise needs to understand which assets it considers its crown jewels, how the business – and its security investments – will scale, and how to best mitigate risk within budgetary constraints. For example, a traditional BFS enterprise has far different endpoint security needs than does a digital-born bank.

Enterprises must also determine how delivery of superior customer and user experiences and exceptional security can co-exist. For example, a BFS enterprise’s introduction of an innovative new payments service backed by multi-factor authentication must operate without degrading the customer experience with delays.

Vertical Considerations

Enterprises need to take an industry-specific, value chain-led view of IT security that ensures optimal budget control without compromising the overall security posture.

For example, BFS firms must invest in security measures that protect their transaction processing and control/compliance capabilities. And building security controls for user access management, introducing behavioral biometrics into an integrated authentication process, and developing identity controls for anti-money laundering compliance are essential safeguards for sustainable competitive advantage.

Regional Considerations

Stringent regulatory environments (such as GDPR for customer data protection in Europe, PCI DSS for payments in the U.S., HL7 for international standards for transfer of clinical and administrative data between applications) and geography-specific nuances require a circumstantial approach to IT security. This means that geography-specific compliance around data protection, protectionist measures undertaken by the government, enterprises’ digital demand characteristics, and enterprises’ priorities in specific regions need to be taken into account. And global organizations must adhere to a well-defined strategic roadmap to address multiple variants of IT security standards across the globe.

For service providers, this essentially implies delivery of localized services in their focus geographies.

Taking a Phased Approach

While bolting on IT security capabilities may lead to unnecessary – and valueless – sprawl, enterprises can avoid this challenge by investing in their IT security strategies in a phased manner, as outlined in the figure below.

Figure: A phased approach to IT security investments

To learn more about IT security contextualization, please see our latest report, which delves deeply into the important whys and hows of contextualizing IT security and also provides assessments and detailed profiles of the 21 IT service providers featured in Everest Group’s IT Security Services PEAK Matrix™.

Feel free to reach out to us to explore this further. We will be happy to hear your story, questions, concerns, and successes!
