Category: Cloud Infrastructure

Enterprises Should Jump – Carefully – on the Cloud Native Bandwagon | Sherpas in Blue Shirts

With enterprise cloud becoming mainstream, the business case and drivers for adoption have also evolved. The initial phase of adoption focused on operational cost reduction and simplicity – what we call the “Cloud for Efficiency” paradigm. We have now entered Wave 2 of enterprise cloud adoption, where the cloud’s potential to play a critical role in influencing and driving business outcomes is being realized. We call this the “Cloud for Digital” paradigm. Indeed, cloud is now truly the bedrock for digital businesses, as we wrote about earlier.


This is good and powerful news for enterprises. However, to successfully leverage cloud as a business value enabler, the services stack needs to be designed to take advantage of all the benefits “native” to the cloud model – scalability, agility, resilience, and extensibility.

Cloud Native – What Does it Mean Anyway?

Cloud native is not just the selective use of cloud infrastructure and platform-based models to reduce costs. Neither is it just about building and deploying applications at pace. And it is definitely not just about adopting new-age themes such as PaaS, microservices, or serverless. Cloud native includes all of these, and more.

We see cloud native as a philosophy to establish a tightly integrated, scalable, agile, and resilient IT services stack that can:

  • Enable rapid build, iteration, and delivery of, or access to, service features/functionalities based on business dynamics
  • Autonomously and seamlessly adapt to any or all changes in business operation volumes
  • Offer a superior and consistent service experience, irrespective of the point, mode, or scale of services consumption.

Achieving a true cloud native design requires the underlying philosophy to be embedded within the design of both the application and infrastructure stacks. This is key for business value creation, as lack of autonomy and agility within either layer hinders the necessary straight-through processing across the integrated stack.

In this regard, there are salient features that define an ideal cloud native IT stack:

Cloud native applications – key tenets

  • Extensible architecture: Applications should be designed for minimal complexity when adding or modifying features, whether through build or API connections. While microservices inherently enable this, not all monolithic applications need to be ruled out as components of a cloud native environment
  • Operational awareness and resilience: The application should be designed to track its own health and operational performance, rather than shifting the entire onus onto infrastructure teams. Fail-safe measures should be built into applications to maximize service continuity
  • Declarative by design: Applications should be built to trust the resilience of underlying communications and operations, based on declarative programming. This can help simplify applications by leveraging functionalities across different contexts and driving interoperability among applications.
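The “operational awareness” tenet above can be made concrete with a minimal sketch: a service that reports its own health and basic operational metrics over an HTTP endpoint. The endpoint name and metric fields here are illustrative assumptions, not a prescribed standard:

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START_TIME = time.time()
REQUEST_COUNT = 0

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        REQUEST_COUNT += 1
        if self.path == "/healthz":
            # The application reports its own health and basic operational
            # metrics, rather than leaving monitoring entirely to the
            # infrastructure team.
            body = json.dumps({
                "status": "ok",
                "uptime_seconds": round(time.time() - START_TIME, 1),
                "requests_served": REQUEST_COUNT,
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

def serve(port=8080):
    """Start the service in a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An orchestrator (or a human operator) can then poll `/healthz` to decide whether the instance should keep receiving traffic, which is the self-reporting posture the tenet describes.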

Cloud native infrastructure – key tenets

  • Services abstraction: Infrastructure services should be delivered via a unified platform that seamlessly pools discrete cloud resources and makes them available through APIs (enabling the same programs to be used in different contexts, and applications to easily consume infrastructure services)
  • Infrastructure as software: IT infrastructure resources should be built, provisioned/deprovisioned, managed, and pooled/scaled based on individual application requirements. This should be completely executed using software with minimal/no human intervention
  • Embedded security as code: Security for infrastructure should be codified to enable autonomous enforcement of policies across individual deploy and run scenarios. Policy changes should be tracked and managed based on version control principles as leveraged in “Infrastructure as Code” designs.
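The “embedded security as code” tenet can be sketched as codified policy checks that run automatically against every deployment specification, with the policy definitions themselves living under version control. The policy names and spec fields below are hypothetical illustrations, not a real policy engine’s schema:

```python
# Hypothetical codified security policies: each is a named predicate
# over a deployment specification (a plain dict in this sketch).
POLICIES = [
    ("encryption_at_rest", lambda spec: spec.get("encrypted", False)),
    ("no_public_ingress",
     lambda spec: "0.0.0.0/0" not in spec.get("ingress_cidrs", [])),
    ("tagged_owner", lambda spec: bool(spec.get("tags", {}).get("owner"))),
]

def evaluate(spec):
    """Return the names of policies the given deployment spec violates."""
    return [name for name, check in POLICIES if not check(spec)]
```

A deployment pipeline would call `evaluate()` on every proposed change and block any spec that returns a non-empty violation list – autonomous enforcement, with policy changes tracked like any other code change.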

Exponential Value Comes with Increased Complexity

While cloud native has, understandably, garnered significant enterprise interest, the transition to a cloud native model is far from simple. It requires designing and managing complex architectures, and making meaningful upfront investments in people, processes, and technologies/service delivery themes.

Everest Group’s SMART enterprise framework encapsulates the comprehensive and complex set of requirements to enable a cloud native environment in its true sense.


Adopting Cloud Native? Think before You Leap

Cloud native environments are inherently complex to design and take time to scale. Consequently, the concept is not (currently) meant for all organizations, functions, or applications. Enterprises need to carefully gauge their readiness through a thorough examination of multiple organizational and technical considerations.


Our latest report titled Cloud Enablement Services – Market Trends and Services PEAK Matrix™ Assessment 2019: An Enterprise Primer for Adopting (or Intelligently Ignoring!) Cloud Native delves further into the cloud native concept. The report also provides the assessment and detailed profiles of the 24 IT service providers featured on Everest Group’s Cloud Enablement Services PEAK Matrix™.

Feel free to reach out to us to explore the cloud native concept further. We will be happy to hear your story, questions, concerns, and successes!

SAP Accelerates Experience Pivot with an $8 billion Bet on Qualtrics | Sherpas in Blue Shirts

Just days before 16-year-old Qualtrics was due to launch its IPO, SAP announced its acquisition of the customer experience management company in an attempt to bolster its CRM portfolio. Qualtrics – one of the most anticipated tech IPOs of the year, oversubscribed 13 times on investor demand – adds to SAP’s arsenal of cloud-based software vendor acquisitions.

Delving into SAP’s Strategic Intent

The acquisition positions SAP atop the experience economy through the combined leverage of “X-data” (experience data) and “O-data” (operational data). Moreover, it enables SAP to cash in on a largely untapped area that brings together customer, employee, product, and brand feedback to deliver a holistic and seamless customer experience.

SAP had multiple reasons to acquire Qualtrics:

  • First, it combines Qualtrics’ experience data collection system with SAP’s expertise in slicing and dicing operational data
  • Second, it sits conveniently within SAP’s overarching strategy to push C/4 HANA, its cloud-based sales and marketing suite.

SAP’s acquisition history makes it clear that it seeks transformative growth by bolting on capabilities from the companies it acquires. It has earned a fine reputation for onboarding acquired companies and realizing gains from mutual synergies. Its unrelenting focus on product portfolio/roadmap alignment, cultural integration, and go-to-market with acquired companies has been commendable.

Here is a look at its past cloud-based software company acquisitions:


SAP has taken on debt to finance the Qualtrics acquisition, making it imperative to show business gains from the move. With Qualtrics on board, it seems SAP’s ambitious cloud growth target (€8.2-8.7 billion by 2020) will receive a shot in the arm. However, the acquisition is expected to close by H1 2019, implying that investors will have to wait to see returns. Moreover, SAP’s stock price has dropped by 10.6 percent over the past 12 months, versus a 3.4 percent rise in the S&P 500 Index. While SAP has seen revenue growth, its bottom-line results have been disappointing, with a contraction in operating margins (cloud revenues have grown but tend to have a lower margin profile early on). This is likely to be further exacerbated given the enterprise multiple for this deal.


Fighting the Age-old Enterprise Challenge

Having said that, SAP is well positioned to win the war against the age-old enterprise conundrum of integrating back-, middle-, and front-office operations and recognizing the operational linkages between the functions. Qualtrics’ experience management platform – known for its predictive modeling capabilities, real-time insights, and decentralized decision-making – will certainly augment SAP’s value proposition and messaging for its C/4 HANA sales and marketing cloud. In fact, the mutual synergies between the two companies might put SAP on an equal footing with Salesforce in the CRM space.

While it may seem that SAP has arrived a bit early to the party, given that customer experience management is still a niche area, the market’s expected growth rate and SAP’s timely acquisition decision may allow it to leap-frog IBM and CA Technologies (now acquired by Broadcom), the current leaders in the space. Indeed, over the last couple of years, Qualtrics has pivoted beyond surveys and other basic customer sentiment analysis methods to create a SaaS suite capable of:

  • Analyzing experience data to derive insights about employees, business partners, and end-customers
  • Democratizing and unifying analytics across the back-, middle-, and front-office operations
  • Delivering more proactive and predictive insights to alleviate experience inadequacy.

Cognitive Meets Customer Experience Management – The Road Ahead

SAP’s Intelligent Enterprise strategic tenet, enabled by its intelligent cloud suite (S/4 HANA, Fiori), digital platform (SAP HANA, SAP Data Hub, SAP Cloud Platform), and intelligent systems (SAP Leonardo, SAP Analytics Cloud), has allowed customers to embed cutting-edge technologies – conversational AI, an ML foundation, and a cloud platform for blockchain. SAP is already working to combine machine learning and natural language query (NLQ) technology to augment human intelligence, with a vision of driving business agility. Embedding the experience management suite within this next-generation Intelligent Enterprise strategy will play a key role in achieving its ambitious growth targets by 2020.

Please share your thoughts on this acquisition with us at: [email protected] and [email protected].

Acquisition Of Red Hat Repositions IBM For Digital IT Modernization | Sherpas in Blue Shirts

IBM’s $34 billion cash acquisition of Red Hat announced early this week has far-reaching implications for the IT services world. IT is modernizing, moving from a legacy world with data centers, proprietary operating systems and proprietary technologies to a digital environment with cloud, open-source software, a high degree of automation, DevOps and integration among these components. IBM’s legacy assets and capabilities are formidable, but the firm was not well positioned for IT modernization and struggled with digital operating models. The Red Hat acquisition is significant as it repositions IBM as a vital, must-have partner for enterprise customers in IT modernization and evolving digital operating models. This is a very intriguing acquisition for IBM. Let’s look at the implications for IBM and enterprise customers.

Read more in my blog on Forbes

Digital Transformation Reveals Limitations Of Software Packages And SaaS | Sherpas in Blue Shirts

Most large enterprises have been on a journey for the past 30 years in which a higher and higher proportion of the core systems driving the enterprise was software packages or software as a service. Traditional wisdom for companies was “don’t build – buy.” Then, as companies undertook digital transformation journeys, the prevailing belief was that the best way to do digital transformation was to get there as fast as possible by buying (not building) many components, using third-party software and SaaS products. Now, two disruptive forces are starting to shift the build vs. buy balance in the IT world.

Read more in my blog on Forbes

Learn more about our digital transformation analyses

Broadcom, CA Technologies, and the Infrastructure Stack Collapse | Sherpas in Blue Shirts

In news that has caused a huge stir in the technology world, Broadcom, the semiconductor supplier, reached a definitive agreement to acquire CA Technologies, a leading infrastructure management company, for a whopping US$18.9 billion.

Unpacking the Strategic Intent behind the Deal

Many view the deal through a dubious, even critical, lens, pointing to Broadcom’s loss of strategic focus as it broadens its capabilities beyond the semiconductor space. While the apparent lack of business synergies is understandable given how distinct the two companies are, the deal is not surprising when you examine the fragmented nature of the infrastructure software market.

Coping with bewildering choices in IT infrastructure management has been an impediment for most enterprises, leaving IT personnel grappling with a myriad of software and tools. That said, the converged stack approach is increasingly seen as a way to simplify and democratize infrastructure management. In time, the acquisition of an infrastructure software company may prove to be Broadcom’s crown jewel.


Why CA?

Broadcom has long embraced inorganic growth. While its past acquisitions have centered around expanding its portfolio in the semiconductor business, CA will likely give it considerable headway in becoming a leading infrastructure technology company.

Broadcom’s revenue has been bolstered by its strategy of buying smaller businesses, and incorporating their best performing business units into the company. With this acquisition – expected to close by Q4 2018 – Broadcom is looking at ~25 percent business revenue from enterprise software solutions.

Broadcom will also gain access to CA’s 1,500+ existing patents on various topics including service authentication, root cause analysis, anomaly detection, IoT, cloud computing, and intelligent human-computer interfaces, as well as 950 pending patents.


When you examine Broadcom’s business mix shift, you see an acquisition-driven approach aligned to its Wired Infrastructure and Wireless Communication business segments. These are the segments where CA brings in more downstream muscle to create an end-to-end offering for the infrastructure stack.


Thus, Broadcom’s apparent strategic tenet to establish a “mission critical technology business” seems to be satisfied.

However, not everyone is convinced. The market was caught off guard, and is worried that this might be a reaction to Broadcom’s failed bid for Qualcomm earlier this year. Its stock has fallen by 15 percent since June 11, and the street is betting that it will plummet by another 12 percent by the middle of August 2018.


It’s Not Just about Broadcom, Is It?

With software as the strategic cornerstone, CA Technologies has scaled its offerings in systems management, anti-virus, security, identity management, application performance monitoring, and DevOps automation. With enterprises shifting gears in their cloud adoption journey, revenue from CA Technologies’ leading business segment – Mainframe Solutions – has been declining for the last couple of years. But this decrease has been offset by rising revenues from its Enterprise Solutions. Moreover, before the acquisition announcement, CA Technologies had been trying to shift its model from perpetual licenses to SaaS and cloud models. As Broadcom moves ahead with onboarding CA Technologies’ offerings, it will gain access to downstream revenue opportunities, as it will be able to provide customers a broader solutions portfolio.

The Way Forward

The size and opaque intent of this deal have evoked myriad market reactions. With Broadcom taking an assertive stance to expand into the fragmented infrastructure software market, increase its total addressable market, and capitalize on a recurring revenue stream, we wouldn’t be surprised to see it forging partnerships to propel the software solutions business it acquired from CA. Additionally, this deal will probably not face the same regulatory hurdles that ended up derailing Broadcom’s US$117 billion takeover bid for Qualcomm.

As Broadcom broadens its portfolio beyond its core semiconductor business, it is laying down a marker and taking meaningful steps to build an enterprise infrastructure technology business. This aligns well with the collapsing enterprise infrastructure stack. But the question is – will CA’s largely legacy-based dominance be enough to drive this pivot in the digital transformation era?

While uncertainty about business synergies looms over this proposed acquisition, it will be interesting to monitor how Broadcom nurtures and aligns CA’s enterprise software business in its broader go-to-market strategy.

How Cloud Impacts APIs and Microservices | Sherpas in Blue Shirts

Companies considering moving workloads to cloud environments five years ago questioned whether the economics of cloud were compelling enough. The bigger question at that time was whether the economics would force a tsunami of migration from legacy environments to the cloud world. Would it set up a huge industry, much like Y2K, of moving workloads from one environment to another very quickly? Or would it evolve more like the client-server movement, which played out over 5 to 10 years? It’s important to understand the cloud migration strategies playing out today.

We now know the cloud migration did not happen like Y2K. Enterprises considered the risk and investment to move workloads as too great, given the cost-savings returns. Of course, there are always laggards or companies that choose not to adopt new technology, but enterprises now broadly accept both public and private cloud.

The strategy most companies adopt is to put new functionality into cloud environments, often public cloud. They do this by purchasing SaaS applications rather than traditional software, and by doing their new development in a Platform-as-a-Service (PaaS) cloud environment. These choices make sense. They then build API or microservices layers that connect the legacy applications to the cloud applications.
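That connecting layer is often little more than a translation facade. The sketch below shows how such a layer might map a fixed-width legacy record into the JSON-friendly shape a cloud application expects; the record layout and field names are assumptions for illustration, not a real schema:

```python
# A minimal sketch of an API/microservices layer translating between a
# legacy record format and a cloud-friendly representation.
# Assumed layout (hypothetical): customer id (6 chars), name (20 chars),
# balance in cents (10 chars, zero-padded).

def legacy_to_api(record: str) -> dict:
    """Convert one fixed-width legacy record into an API-friendly dict."""
    return {
        "customer_id": record[0:6].strip(),
        "name": record[6:26].strip(),
        "balance": int(record[26:36]) / 100.0,  # cents -> currency units
    }
```

For example, a record for customer 000042 with a balance of 12,345 cents would come back as `{"customer_id": "000042", "name": "John Smith", "balance": 123.45}`. In practice this function would sit behind an HTTP endpoint, letting cloud applications consume legacy data without knowing the legacy format.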

 

PaaS, be Warned: APIs are Here and Containers Are Coming | Sherpas in Blue Shirts

A few months ago, Workday, the enterprise HCM software company, entered the Platform-as-a-Service (PaaS) world by launching (or opening, as it said) its own platform offering. This revives the debate over whether using PaaS to develop applications is the right enterprise strategy.

Many app developers turning from PaaS to APIs

While there are multiple arguments in favor of PaaS, an increasing number of application developers believe that APIs may be a better and quicker way to develop applications. Pro-API points include:

  • PaaS is difficult and requires commitment, whereas any API can be consumed by developers with simple documentation from the provider
  • Developers can realistically master only a couple of PaaS platforms, which limits their ability to create exciting applications
  • PaaS involves significant developer training, unlike APIs
  • PaaS creates vendor platform lock-in, whereas APIs are fungible and can be replaced when needed

Containers moving from PaaS enablers to an alternative approach

In addition, the rise of containers and orchestration platforms, such as Kubernetes, is bringing more sleepless nights to the Platform-as-a-Service brigade. Most developers believe containers’ role in standardizing the operating environment casts a strong shadow on the traditional role of PaaS.

While containers were earlier touted as PaaS enablers, they will increasingly be used as an alternative approach to application development. The freedom they provide to developers is immense and valuable. Although PaaS may offer more environment control to enterprise technology shops, it needs to evolve rapidly to become a true development platform that lets developers focus on application development. And while PaaS promised elasticity, automated provisioning, security, and infrastructure monitoring, it still requires significant work on the developer’s end. This work frustrates developers, and is a likely cause for the rise of still-nascent, but much-discussed, serverless architectures. It is telling that most leading PaaS platforms, such as Microsoft Azure, Cloud Foundry, and OpenShift, are introducing Kubernetes support.

As containers get deployed for production at scale, they are moving out of the PaaS layer and directly providing infrastructure control to developers. This is helping developers consume automated operations at scale – a promise that PaaS couldn’t fulfill due to its higher abstraction. Kubernetes and other orchestration platforms can organize these containers to deliver portable, consistent, and standardized infrastructure components.
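The desired-state model behind such orchestration platforms can be illustrated with a toy reconciliation loop: the operator declares what should be running, and the platform computes the actions needed to get from the actual state to the desired one. The structures below are a deliberately simplified sketch, not the Kubernetes API:

```python
# Toy sketch of declarative reconciliation: desired and actual state are
# maps of app name -> replica count; the loop emits corrective actions.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual replica counts and emit (action, app, n)."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append(("start", app, want - have))   # scale up
        elif have > want:
            actions.append(("stop", app, have - want))    # scale down
    for app, have in actual.items():
        if app not in desired:
            actions.append(("stop", app, have))           # not wanted at all
    return actions
```

Running `reconcile({"web": 3}, {"web": 1, "batch": 2})` yields `[("start", "web", 2), ("stop", "batch", 2)]` – the operator never specifies *how* to converge, only the end state, which is the core of the declarative design philosophy discussed above.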

All is not lost for PaaS

However, given strong enterprise adoption, all is not lost for PaaS. Enterprises will take significant time to test containers as an alternative to a PaaS environment. Moreover, given that no major PaaS or IaaS vendor other than Google has deep roots in container technology, large cloud providers such as AWS and Azure have an inherent interest in building alternatives to containers. No wonder most of them are now pushing their serverless offerings as an alternative architectural choice.

Which of these architectural preferences will eventually become standard, if any, is difficult to call today. Yet, while it’s a certainty that infrastructure operations will change completely in the next five years, most enterprise shops aren’t investing meaningfully in the new tools and skills required to make this shift. Thus, the forward-looking enterprises that grasp this tectonic shift will trample their competition. No ifs, ands, or buts about it.

What has been your experience with containers, APIs, microservices, serverless, and Platforms-as-a-Service? Do you think you need all of them, or do you have preferences? Do share with me at [email protected].

SMBs Turning To Finance and Accounting Outsourcing Because Of The Cloud | Sherpas in Blue Shirts

An interesting phenomenon is happening because of digital transformation. As enterprises collapse their technology and functional stacks through digital, it disrupts their talent model and leads to a new organizational risk. At the same time, it’s starting to drive a new market for service providers. Let’s take a closer look at this developing trend where it’s currently most evident – in the Finance and Accounting (F&A) processes in small and mid-sized enterprises.

In collapsing the technology stack, companies move from running licensed financial management software on their own servers to cloud-based SaaS systems (such as Intacct or NetSuite) that provide both the software and a more flexible set of reporting functions. The standardization and functionality benefits are great.

Read More Here

Service Integration and Management in the Digital Era | Sherpas in Blue Shirts

As enterprises increasingly realize that their ability to compete hinges on their digital strategy, they’re engaging with a wide, ever-growing range of niche small- to mid-sized digital technology providers. In some cases, we’ve seen organizations’ portfolios include more than 50 providers servicing a mix of traditional and next generation IT services.

The high complexity of such a massive number of providers is driving a surge in the need for Service Integration and Management (SIAM) specialists to help ensure seamless service and contract management and integration through a single body that interfaces with the multiple stakeholders including business and IT. While digital programs are being led by enterprise business units, the IT unit is focusing on rationalization of the legacy landscape and providing support for digital transformation projects.

In the golden days of outsourcing, when things were much simpler and outsourcing-related benefits were limited to cost, enterprises clearly preferred to retain the SIAM function entirely in-house. Enterprise IT teams collaborated with suppliers and “leased” resources in a T&M fashion, while completely owning the operational and strategic aspects of the services.

More recently, some organizations have employed hybrid SIAM, wherein enterprises willingly relinquish the design, operations, and contractual aspects of the service to a third-party with proven SIAM expertise, while retaining the more strategic aspects such as portfolio strategy, business relationship management, and procurement.

But in the digital era, hybrid SIAM is starting to take a different shape and flavor.

In a traditional IT delivery model, enterprise IT is the interface between the provider and the business. But we’re now seeing enterprise business units become increasingly involved in end-to-end digital transformation engagements, and interacting and collaborating directly with providers.

Following is an illustration of two different hybrid SIAM models, outlining key functions that are outsourced or retained:


 

So, what will outsourcing the SIAM function cost you?

It depends on multiple factors. The first is team size, which must be appropriately matched to input volumes. Next are scope and responsibilities – for example, does the engagement include cross-functional activities?

Of course, the location from which the SIAM program is delivered – i.e., onshore, nearshore, or offshore – also impacts the cost. While offshoring will provide the lowest price, the complexity of new-age digital engagements requires a SIAM practice located closer to the business.

Has your company outsourced SIAM, or is it considering doing so? Are there any best practices or pitfalls that you would like to share? I encourage you to do so by contacting me directly at: [email protected].


A Remedy for Frustrations in the Legacy IT Infrastructure Contracting Model | Sherpas in Blue Shirts

A significant driver motivating companies to migrate workloads out of their legacy environment into the cloud is the increasing frustration of operating under onerous, complicated services contracts. Of course, these workloads migrate to the cloud and a software-defined environment primarily for greater efficiency and agility. But many workloads are too expensive and risky to migrate and thus are better suited for maintaining in a legacy environment. So, I’m calling for a better, more rational legacy infrastructure contracting vehicle. Here’s what it would look like and how companies would benefit.

What’s wrong with the typical contract?

Large, cumbersome master services agreements (MSAs), organized by functional areas or towers, govern the legacy IT outsourcing market. No matter the function outsourced, these legacy contracts share several characteristics that make them overly complex and make administering them incredibly complicated and frustrating.
