Category: Cloud Infrastructure

Acquisition Of Red Hat Repositions IBM For Digital IT Modernization | Sherpas in Blue Shirts

IBM’s $34 billion cash acquisition of Red Hat, announced earlier this week, has far-reaching implications for the IT services world. IT is modernizing, moving from a legacy world of data centers, proprietary operating systems and proprietary technologies to a digital environment with cloud, open-source software, a high degree of automation, DevOps and integration among these components. IBM’s legacy assets and capabilities are formidable, but the firm was not well positioned for IT modernization and struggled with digital operating models. The Red Hat acquisition is significant because it repositions IBM as a vital, must-have partner for enterprise customers in IT modernization and evolving digital operating models. Let’s look at the implications of this intriguing acquisition for IBM and enterprise customers.

Read more in my blog on Forbes

Digital Transformation Reveals Limitations Of Software Packages And SaaS | Sherpas in Blue Shirts

Most large enterprises have been on a journey for the past 30 years in which an ever-higher proportion of the core systems driving the enterprise consisted of software packages or software as a service. The traditional wisdom was “don’t build – buy.” Then, as companies undertook digital transformation journeys, the prevailing belief was that the best way to get there as fast as possible was to buy (not build) many components, using third-party software and SaaS products. Now, two disruptive forces are starting to shift the build-versus-buy balance in the IT world.

Read more in my blog on Forbes

Learn more about our digital transformation analyses

Broadcom, CA Technologies, and the Infrastructure Stack Collapse | Sherpas in Blue Shirts

In news that has caused a huge stir in the technology world, Broadcom, the semiconductor supplier, reached a definitive agreement to acquire CA Technologies, a leading infrastructure management company, for a whopping US$18.9 billion.

Unpacking the Strategic Intent behind the Deal

Many view the deal through a dubious, even critical, lens, arguing that Broadcom has lost strategic focus by broadening its capabilities beyond the semiconductor space. While the concern about limited business synergies may seem valid given how different the two companies are, the deal is not surprising when you examine the fragmented nature of the infrastructure software market.

Coping with bewildering choices in IT infrastructure management has been an impediment for most enterprises, leaving IT personnel grappling with a myriad of software and tools. Against that backdrop, a converged stack approach could greatly simplify – even democratize – infrastructure management. If that bet pays off, the acquisition of an infrastructure software company may prove to be Broadcom’s crown jewel.

[Image: Enterprise stack]

Why CA?

Broadcom has long embraced inorganic growth. While its past acquisitions have centered around expanding its portfolio in the semiconductor business, CA will likely give it considerable headway in becoming a leading infrastructure technology company.

Broadcom’s revenue has been bolstered by its strategy of buying smaller businesses and incorporating their best-performing business units into the company. With this acquisition – expected to close by Q4 2018 – Broadcom expects roughly 25 percent of its revenue to come from enterprise software solutions.

Broadcom will also gain access to CA’s 1,500+ existing patents on various topics including service authentication, root cause analysis, anomaly detection, IoT, cloud computing, and intelligent human-computer interfaces, as well as 950 pending patents.

[Image: History]

When you examine Broadcom’s business mix shift, you see an acquisition-driven approach aligned to its Wired Infrastructure and Wireless Communication business segments. These are the segments where CA brings in more downstream muscle to create an end-to-end offering for the infrastructure stack.

[Image: Revenue history]

Thus, Broadcom’s apparent strategic tenet to establish a “mission critical technology business” seems to be satisfied.

However, not everyone is convinced. The market was caught off guard, and is worried that this might be a reaction to Broadcom’s failed bid for Qualcomm earlier this year. Its stock has fallen by 15 percent since June 11, and the street is betting that it will plummet by another 12 percent by the middle of August 2018.

[Image: History graph]

It’s Not Just about Broadcom, Is It?

With software as its strategic cornerstone, CA Technologies has scaled its offerings in systems management, anti-virus, security, identity management, application performance monitoring, and DevOps automation. With enterprises shifting gears in their cloud adoption journeys, revenue from CA Technologies’ largest business segment – Mainframe Solutions – has been declining for the last couple of years. But this decrease has been offset by rising revenues from its Enterprise Solutions segment. Moreover, before the acquisition announcement, CA Technologies had been trying to shift its model from perpetual licenses to SaaS and cloud models. As Broadcom onboards CA Technologies’ offerings, it will gain access to downstream revenue opportunities because it will be able to offer customers a broader solutions portfolio.

The Way Forward

The size and opaque intent of this deal have evoked myriad market reactions. With Broadcom taking an assertive stance to expand into the fragmented infrastructure software market, increase its total addressable market, and capitalize on a recurring revenue stream, we wouldn’t be surprised to see it forging partnerships to propel the software solutions business it acquired from CA. Additionally, this deal will probably not face the same regulatory hurdles that ended up derailing Broadcom’s US$117 billion takeover bid for Qualcomm.

As Broadcom broadens its portfolio beyond its core semiconductor business, it is laying down a marker and taking meaningful steps to build an enterprise infrastructure technology business. This aligns well with the collapsing enterprise infrastructure stack. But the question is: will CA’s largely legacy-based dominance be enough to propel this pivot in the digital transformation era?

While uncertainty about business synergies looms over this proposed acquisition, it will be interesting to monitor how Broadcom nurtures and aligns CA’s enterprise software business in its broader go-to-market strategy.

How Cloud Impacts APIs and Microservices | Sherpas in Blue Shirts

Companies considering moving workloads to cloud environments five years ago questioned whether the economics of cloud were compelling enough. The bigger question at that time was whether the economics would force a tsunami of migration from legacy environments to the cloud world. Would it set up a huge industry, much like Y2K, of moving workloads from one environment to another very quickly? Or would it evolve more like the client-server movement that happened over 5 to 10 years? It’s important to understand the cloud migration strategy that is occurring today.

We now know the cloud migration did not happen like Y2K. Enterprises judged the risk and investment of moving workloads too great relative to the cost-savings returns. Of course, there are always laggards, or companies that choose not to adopt new technology, but enterprises now broadly accept both public and private cloud.

The strategy most companies adopt is to put new functionality into cloud environments, often public cloud. They do this by purchasing SaaS applications rather than traditional software, and by doing their new development in a Platform-as-a-Service (PaaS) cloud environment. These choices make sense. Companies then build API or microservices layers that connect the legacy applications to the cloud applications.
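A minimal sketch of what such a connecting layer does at its core: translate between the data shapes of the two worlds. All field names and record formats below are invented for illustration, not taken from any specific system.

```python
# A thin translation layer between a hypothetical legacy system and a
# hypothetical cloud SaaS API. Keeping the mapping in one place decouples
# the two sides, so either can change without breaking the other.

def legacy_to_cloud(record: dict) -> dict:
    """Map a flat, mainframe-style legacy record (invented field names)
    to the JSON payload an invented cloud SaaS endpoint expects."""
    return {
        "customerId": record["CUST_ID"],
        "name": f'{record["FIRST_NM"]} {record["LAST_NM"]}'.strip(),
        "status": "active" if record.get("ACTV_FLG") == "Y" else "inactive",
    }

legacy_record = {"CUST_ID": "C-1001", "FIRST_NM": "Ada",
                 "LAST_NM": "Lovelace", "ACTV_FLG": "Y"}
print(legacy_to_cloud(legacy_record))
# {'customerId': 'C-1001', 'name': 'Ada Lovelace', 'status': 'active'}
```

In practice this mapping would sit behind an API gateway or a microservice endpoint, but the essential job – isolating legacy formats from cloud formats – is the same.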

 

PaaS, be Warned: APIs are Here and Containers Are Coming | Sherpas in Blue Shirts

A few months ago, Workday, the enterprise HCM software company, entered into the Platform-as-a-Service (PaaS) world by launching (or opening, as it said) its own platform offering. This brings back the debate of whether using PaaS to develop applications is the right way to go as an enterprise strategy.

Many app developers turning from PaaS to APIs

While there are multiple arguments in favor of PaaS, an increasing number of application developers believe that APIs may be a better and quicker way to develop applications. Pro-API points include:

  • PaaS is difficult and requires commitment, whereas any API can be consumed by developers with simple documentation from the provider
  • Developers can realistically master only a couple of PaaS platforms, which limits their ability to create exciting applications
  • PaaS involves significant developer training, unlike APIs
  • PaaS creates vendor platform lock-in, whereas APIs are fungible and can be replaced when needed
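The “simple documentation” point above can be illustrated with a minimal sketch: consuming a typical REST API often needs little more than a documented base URL and an auth header, here using only the Python standard library. The endpoint and token are invented for illustration.

```python
import urllib.request

BASE_URL = "https://api.example.com/v1"  # hypothetical provider endpoint

def build_request(resource: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request from nothing more than the
    provider's documented base URL and bearer-token auth scheme."""
    return urllib.request.Request(
        f"{BASE_URL}/{resource}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )

req = build_request("invoices", "demo-token")
print(req.full_url)                      # https://api.example.com/v1/invoices
print(req.get_header("Authorization"))   # Bearer demo-token
```

Contrast this with a PaaS platform, where the developer must first learn the platform’s build packs, deployment model, and service bindings before writing a line of business logic – which is the fungibility argument in a nutshell.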

Containers moving from PaaS enablers to an alternative approach

In addition, the rise of containers and orchestration platforms such as Kubernetes is bringing more sleepless nights to the Platform-as-a-Service brigade. Most developers believe that containers’ role in standardizing the operating environment casts a strong shadow over the traditional role of PaaS.

While containers were earlier touted as PaaS enablers, they will increasingly be used as an alternative approach to application development. The freedom they give developers is immense and valuable. Although PaaS may offer more environment control to enterprise technology shops, it needs to evolve rapidly to become a true development platform that lets developers focus on application development. And while PaaS promised elasticity, automated provisioning, security, and infrastructure monitoring, it still requires significant work on the developer’s end. This work frustrates developers, and is a possible cause of the rise of the still nascent, but increasingly discussed, serverless architecture. It is telling that most leading PaaS providers, such as Microsoft Azure, Cloud Foundry, and OpenShift, are introducing Kubernetes support.

As containers get deployed for production at scale, they are moving out of the PaaS layer and directly providing infrastructure control to the developers. This is helping developers to consume automated operations at scale, a promise that PaaS couldn’t fulfill due to higher abstraction. Kubernetes and other orchestration platforms can organize these containers to deliver portable, consistent, and standardized infrastructure components.
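As a sketch of the “portable, consistent, and standardized” promise: a Kubernetes Deployment manifest declares the desired state, and the orchestrator keeps that many identical container replicas running regardless of the underlying hosts. The application name and image reference below are hypothetical.

```yaml
# Minimal Kubernetes Deployment (apps/v1): declare three replicas of a
# container image; the orchestrator handles placement, restarts, and
# rollout, giving developers direct, standardized infrastructure control.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

The same manifest runs unchanged on any conformant Kubernetes cluster, which is precisely the portability that undercuts the PaaS value proposition.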

All is not lost for PaaS

However, given strong enterprise adoption, all is not lost for PaaS. Enterprises will take significant time to test containers as an alternative to a PaaS environment. Moreover, given that no major PaaS or IaaS vendor other than Google owns container technology, large cloud providers such as AWS and Azure have an inherent interest in building an alternative to containers. No wonder most of them are now pushing their serverless offerings in the market as an alternative architectural choice.
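To make the serverless alternative concrete, here is a minimal sketch of a function-as-a-service handler in the AWS Lambda style (the `(event, context)` signature is Lambda’s documented Python convention; the event shape itself is invented). The provider runs the function on demand, so there is no container or platform for the developer to manage.

```python
import json

def handler(event, context):
    """A hypothetical serverless function: the cloud provider invokes it
    per request and supplies event/context; the developer ships only this."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for illustration; in production the provider calls it.
print(handler({"name": "cloud"}, None))
```

The trade-off mirrors the PaaS debate: serverless removes even more operational work, at the price of an even tighter coupling to one provider’s execution model.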

Which of these architectural preferences will eventually become the standard, if any does, is hard to call today. Yet, while it’s a certainty that infrastructure operations will change completely in the next five years, most enterprise shops aren’t investing meaningfully in the new tools and skills required to make this shift. Thus, the forward-looking enterprises that grasp this tectonic shift will trample their competition. No ifs, ands, or buts about it.

What has been your experience with containers, APIs, microservices, serverless, and Platforms-as-a-Service? Do you think you need all of them, or do you have preferences? Do share with me at [email protected].

SMBs Turning To Finance and Accounting Outsourcing Because Of The Cloud | Sherpas in Blue Shirts

An interesting phenomenon is happening because of digital transformation. As enterprises collapse their technology and functional stacks through digital, it disrupts their talent model and leads to a new organizational risk. At the same time, it’s starting to drive a new market for service providers. Let’s take a closer look at this developing trend where it’s currently most evident – in the Finance and Accounting (F&A) processes in small and mid-sized enterprises.

In collapsing the technology stack, companies move from running licensed financial management software on their own servers to cloud-based SaaS systems (such as Intacct or NetSuite) that provide the software plus a more flexible set of reporting functions. The standardization and functionality benefits are great.

Read More Here

Service Integration and Management in the Digital Era | Sherpas in Blue Shirts

As enterprises increasingly realize that their ability to compete hinges on their digital strategy, they’re engaging with a wide, ever-growing range of niche small- to mid-sized digital technology providers. In some cases, we’ve seen organizations’ portfolios include more than 50 providers servicing a mix of traditional and next generation IT services.

The high complexity of managing such a massive number of providers is driving a surge in the need for Service Integration and Management (SIAM) specialists, who help ensure seamless service and contract management and integration through a single body that interfaces with multiple stakeholders across business and IT. While digital programs are being led by enterprise business units, the IT unit is focusing on rationalizing the legacy landscape and providing support for digital transformation projects.

In the golden days of outsourcing, when things were much simpler and outsourcing-related benefits were limited to cost, enterprises clearly preferred to retain the SIAM function entirely in-house. Enterprise IT teams collaborated with suppliers and “leased” resources on a T&M basis, while completely owning the operational and strategic aspects of the services.

More recently, some organizations have employed hybrid SIAM, wherein enterprises willingly relinquish the design, operations, and contractual aspects of the service to a third-party with proven SIAM expertise, while retaining the more strategic aspects such as portfolio strategy, business relationship management, and procurement.

But in the digital era, hybrid SIAM is starting to take a different shape and flavor.

In a traditional IT delivery model, enterprise IT is the interface between the provider and the business. But we’re now seeing enterprise business units become increasingly involved in end-to-end digital transformation engagements, and interacting and collaborating directly with providers.

Following is an illustration of two different hybrid SIAM models, outlining key functions that are outsourced or retained:

[Illustration: Two hybrid SIAM models – outsourced vs. retained functions]

 

So, what will outsourcing the SIAM function cost you?

That depends on multiple factors. The first is team size, which must be matched appropriately to input volumes. Next are scope and responsibilities: for example, does the engagement include cross-functional activities?

Of course, the location from which the SIAM program is delivered – i.e., onshore, nearshore, or offshore – also impacts the cost. While offshoring provides the lowest price, the complexity of new-age digital engagements requires a SIAM practice located closer to the business.
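The three cost drivers named above – team size, scope, and delivery location – can be sketched as a toy model. Every rate and multiplier below is hypothetical, purely for illustration of how the factors interact; none are benchmark figures.

```python
# Toy SIAM cost model. All numbers are invented: the base rate, the
# location discounts, and the cross-functional scope premium are
# placeholders, not benchmarks.

LOCATION_FACTOR = {"onshore": 1.0, "nearshore": 0.7, "offshore": 0.45}

def annual_siam_cost(team_size: int, base_rate_per_fte: float,
                     location: str, cross_functional: bool = False) -> float:
    """Team size x fully loaded rate, scaled by delivery location,
    with a premium when the scope includes cross-functional activities."""
    cost = team_size * base_rate_per_fte * LOCATION_FACTOR[location]
    if cross_functional:          # broader scope carries a premium
        cost *= 1.2
    return round(cost, 2)

print(annual_siam_cost(8, 120_000, "offshore"))        # 432000.0
print(annual_siam_cost(8, 120_000, "onshore", True))   # 1152000.0
```

Even this crude model shows the tension in the paragraph above: offshore delivery less than halves the bill, but the engagements that most need SIAM are the ones pulling delivery back onshore.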

Has your company outsourced SIAM, or is it considering doing so? Are there any best practices or pitfalls that you would like to share? I encourage you to do so by contacting me directly at: [email protected].

 

 

Remedy for frustrations in legacy IT infrastructure contracting model | Sherpas in Blue Shirts

A significant driver motivating companies to migrate workloads out of their legacy environment into the cloud is the increasing frustration of operating under onerous, complicated services contracts. Of course, these workloads migrate to the cloud and a software-defined environment primarily for greater efficiency and agility. But many workloads are too expensive and risky to migrate and thus are better suited for maintaining in a legacy environment. So, I’m calling for a better, more rational legacy infrastructure contracting vehicle. Here’s what it would look like and how companies would benefit.

What’s wrong with the typical contract?

Large, cumbersome master services agreements (MSAs) organized around functional areas, or towers, govern the legacy IT outsourcing market. No matter the function outsourced, these legacy contracts share several characteristics that make them overly complex and make administering them incredibly complicated and frustrating.

How DevOps changes the delivery of IT functions | Sherpas in Blue Shirts

Labor arbitrage and shared services companies have had a perfect marriage over the last 20 years. Then along came the Digital Revolution with new business models and a new construct for services. One component of the digital model construct is DevOps. It makes a significant impact on business services, but it’s important to understand how it changes the picture for labor arbitrage and shared services.

Shared service companies are structured on a functional basis. One way to think about them is as a stack of functional expertise. In the case of IT, the stack includes functions such as infrastructure, security, application development and maintenance, and compliance. There is a multi-layer stack hierarchy, with each functional layer having shared service champions responsible for delivering that function cost-effectively at a high level of quality. Labor arbitrage fits perfectly into this equation: each functional layer uses people, and the work can often be done more cost-effectively offshore than onshore.

Read more at my CIO blog

The “War” in Ransom“war”e – Service Providers will Feel the Pain of Clients’ Tougher Security Policies | Sherpas in Blue Shirts

In the immediate aftermath of last week’s WannaCry ransomware attacks around the world, many organizations will consider how quickly and effectively they can update older Microsoft operating systems and apply the necessary patches. The longer-term effects, however, will be more far-reaching as governments and other organizations review their security policies to protect their systems against future attacks. This spells tougher requirements on IT services as well as on service providers’ connections to client systems.

Tougher government policies on suppliers

The WannaCry attack in the UK crippled the National Health Service (NHS), putting people’s lives at risk. It is going to cost billions to put right, not only in upgrading systems but also in rescheduling operations and treating people whose condition will have worsened during the delay caused by the attack. The UK government must act, and be seen to act, to better protect vital services in the future. It is likely to unveil stringent new cyber security policies.

While this spells new business opportunities for IT service providers to enhance the public sector’s cyber security, other service providers will feel the pain of even more long-winded procedures to connect to clients’ VPNs when working on systems integration or business process services. Many already have to apply to clients’ IT departments on a daily basis for permission to connect to VPNs. More stringent requirements are likely to come into force.

Microsoft must face the music

Let us not forget that it was a Microsoft Windows vulnerability that enabled this attack. Microsoft must face pressure to support its older operating systems for longer. Legacy systems often work only with older operating systems, so a Windows upgrade can be very costly. As a cash-strapped organization, the NHS prioritises patient care over keeping up with Microsoft’s timetable for Windows upgrades and for discontinuing support for older operating systems. This is something the UK government must address; it has enough buying power to demand action from Microsoft.

Upgrade pressure on government agencies

Government bodies such as the NHS will be put under renewed pressure to upgrade their systems and keep them up to date. These organizations will no doubt demand extra cash to deal with the situation. Spending on cyber security is set to increase, whether agencies find new money or redirect funds from other activities. This ransomware attack will therefore boost the IT market for endpoint security, if not the wider security sector.

Pressure on users

Users, too, will feel the pain of ransom“war”e. Tougher usage policies are likely to be enshrined in IT department guidelines. Users are likely to experience reduced flexibility as more organizations adopt desktop lockdowns, with workspaces becoming more centrally controlled and monitored to reduce risk.

With the number and variety of attacks increasing, all aspects of IT security will be tightened. Even the most laggardly organizations will look to build better security controls across their broad IT services or risk losing business, revenue, reputation and, in some cases, the wellbeing of their customers.
