Saying No to Bundling IT Services
Saying no to bundling: Despite a meaningful push from service providers, enterprises remain reluctant to leverage the bundled services model
Despite Hadoop’s and OpenStack’s adoption, our recent discussions with enterprises and technology providers revealed two prominent trends:
Big Data will need more than Hadoop: Along with NoSQL technologies, Hadoop has taken the Big Data bull by the horns. Signs of a healthy ecosystem are apparent: leading vendors such as MapR are reporting 100 percent bookings growth, and Cloudera and Hortonworks each expect to roughly double in size. However, the large vendors that really drive the enterprise market and mindset and sell multiple BI products – such as IBM, Microsoft, and Teradata – acknowledge that Hadoop’s quantifiable impact is as yet limited. Hadoop adoption continues on a project basis, rather than as a commitment to improved business analytics. Broader enterprise-class adoption remains muted, despite meaningful investments and technology vendors’ focus.
OpenStack is difficult, and enterprises still don’t get it: OpenStack’s vision of making every datacenter a cloud is facing hurdles. Most enterprises find it hard to develop an OpenStack-based cloud themselves. While this helps cloud providers pitch their OpenStack offerings, adoption is far from enterprise class. The OpenStack Foundation’s survey indicates that only approximately 15 percent of organizations utilizing OpenStack come from outside the typical ICT industry or academia. Moreover, even cloud service providers, unless truly dedicated to the OpenStack cause, are reluctant to invest in it meaningfully. Although most have an OpenStack offering or plan to launch one, their willingness to push it to clients is subdued.
It’s easy to blame these challenges on open source and contributors’ lack of a coherent strategy or vision. However, that oversimplifies the problem. Both Hadoop and OpenStack suffer from a shortage of needed skills and from limited applicability. For example, some enterprises and vendors believe Hadoop needs to become more “consumerized” to enable people with limited knowledge of coding, querying, or data manipulation to work with it. Its current esoteric nature is driving these users away, defeating the fundamental promise of new-age technologies: making consumption easier. Despite Hortonworks’ noble (and questioned) attempt to create an “OpenStack-type” alliance in the Open Data Platform, things have not moved smoothly. Apache Spark promises to improve Hadoop’s consumerization with fast processing and simple programming, but only time will tell.
OpenStack continues to struggle with a “too tough to deploy” perception within enterprises. Beyond this, there are commercial reasons for the challenges OpenStack is witnessing. Though there are OpenStack-only cloud providers (e.g., Blue Box and Mirantis), most other cloud service providers we have spoken with are only half-heartedly willing to develop and sell OpenStack-based cloud services. Providers with offerings across technologies (such as BMC, CloudStack, OpenStack, and VMware) believe they would have to create sales incentives, and possibly hire different engineering talent, to build cloud services for OpenStack. Many believe this is not worth the risk, since they can acquire an “OpenStack-only” cloud provider if real demand arises (as I write, news has arrived that IBM is acquiring Blue Box and Cisco is acquiring Piston Cloud).
The success of both Hadoop and OpenStack will depend on simplifying development, implementation, and usage. Hadoop’s challenges lie both in the way enterprises adopt it and in the technology itself. Most enterprises target a complex problem by default, without realizing how long it takes to get data clearances from the business. This hurts the business’ perception of the value Hadoop can bring. Hadoop’s success will depend not on point solutions developed to store and crunch data, but on the entire value chain of data creation and consumption. The whole process needs to be simplified for more enterprises to adopt it. Hadoop and its key vendors need to move beyond their Web 2.0 obsession to focus on other enterprises. With the increasing focus on real-time technologies, Hadoop should get a further leg up. However, it needs to integrate more with existing enterprise investments, rather than becoming a silo. While still in its infancy, the concept of an “Enterprise Data Hub” – in which the entire value chain of Big Data-related technologies integrates to deliver the needed service – is something to note.
As for OpenStack, enterprises do not like that adopting it in their internal clouds currently requires too much external support. If the drop in investments is any indication, this will not take OpenStack very far. Cloud providers want enterprises to consume OpenStack-based cloud services. However, enterprises want to truly understand a technology before making a long-term commitment to it, and they are wary of anything that requires significant reskilling or has the potential to become a bottleneck in their standardization initiatives. OpenStack must address these challenges. Though most enterprise technologies are tough to consume, the market is definitely moving toward easier deployments and upgrades. Therefore, to make OpenStack a genuinely enterprise-grade offering, its deployment, professional support, knowledge management, and requisite skills must be simplified.
What do you think about Hadoop and OpenStack? Feel free to reach out to me on [email protected].
Photo credit: Flickr
Too much. That’s an accurate assessment of IT environments in most, if not all, enterprises. They have more data center space than they need and more servers than they can use at any point in time. They have more software operating systems, middleware, and enterprise licenses than necessary. They also have more of the wrong resources and never enough of the right resources in application development and maintenance. The as-a-service movement seeks to address this, but the journey to get there isn’t as simple as it appears.
So how much overcapacity is present in enterprises? At every level there appears to be 25-50 percent overcapacity in IT. With IT spend typically running 1-7 percent of revenues, that layer-by-layer overcapacity works out to roughly 40 percent overcapacity in IT overall.
As we at Everest Group look at applying as-a-service principles into IT environments, we see an opportunity to remove 40 percent of the IT cost by eliminating the wastage in service capacity. But the journey to achieve this as-a-service cost benefit is neither quick nor easy.
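To make the arithmetic behind that 40 percent figure concrete, here is a back-of-envelope sketch. The revenue and percentage figures are illustrative assumptions chosen from the ranges cited above, not Everest Group data:

```python
# Back-of-envelope sizing of the as-a-service savings opportunity.
# All figures are illustrative assumptions, not Everest Group data.

revenue = 1_000_000_000   # hypothetical enterprise revenue: $1B
it_share = 0.04           # IT spend at 4% of revenue (cited range: 1-7%)
overcapacity = 0.40       # ~40% of IT capacity is wasted (cited range: 25-50%)

it_spend = revenue * it_share                # $40M annual IT budget
addressable_waste = it_spend * overcapacity  # ~$16M recoverable as-a-service

print(f"IT spend: ${it_spend:,.0f}")
print(f"Potential as-a-service savings: ${addressable_waste:,.0f}")
```

Even at these modest assumptions, the recoverable waste is a material line item, which is why the multi-year journey described below can still be worth the effort.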
Renegotiating enterprise licenses takes time and often requires waiting until they expire. Reconceptualizing the infrastructure and application support is also complicated and requires a resolute effort and substantial patience.
It can take a year to three years to complete the journey. But the benefits are very substantial, starting with a 40 percent cost reduction in IT — a heady prize for the journey. In a future blog I’ll discuss other benefits.
Right now, Remote Infrastructure Management (RIM) service providers are enjoying explosive growth as they take share from asset-heavy players. The labor arbitrage market is disintermediating, or at least successfully attacking, the traditional asset-heavy infrastructure space. But every boom carries the seeds of its own undoing.
It reminds me of the story of Joseph in the Biblical book of Genesis. Egypt’s Pharaoh had a dream about seven fat cows and seven scrawny cows coming out of the Nile River and the scrawny cows ate the fat cows. He sent for Joseph to interpret the dreams, and Joseph revealed that Egypt would have seven years of great abundance followed by seven years of famine.
Everest Group anticipates that, with HCL, TCS, and Wipro having achieved a breakthrough level of credibility in large transactions, these three and other providers behind them are poised for a number of strong years of growth, as their cost to operate is lower than that of the traditional players. We forecast three years of plenty in which they will drive explosive growth.
But behind the RIM contracts is coming a world in which the integrated automated cloud platforms such as IBM’s SoftLayer and AWS will move into the enterprise and start taking share from the RIM players.
Unlike Egypt where they experienced seven years of plenty and then seven years of famine, we forecast three years of plenty for RIM players followed by increasingly lean years.
RIM players will need to adapt to integrated automated platforms such as SoftLayer and AWS as those platforms move into the enterprise, just as the RIM model is currently disintermediating the asset-heavy players.
Photo credit: George Thomas
I recently had the privilege to sit through a two-day session with IBM’s senior executive team in services. I’m someone who tries not to drink the Kool-Aid. Even so, I came away truly impressed by the work that IBM has done to position itself to be relevant and a major player in the future of IT infrastructure.
I’ve written frequently in this blog about the impending crisis that all asset-heavy players face as first RIM and then cloud attack their revenue base. This unrelenting onslaught is already moving share from the incumbents such as IBM to challengers such as HCL and TCS and will only be exacerbated as cloud takes stronger hold.
I came away from the two days with IBM realizing Big Blue profoundly understands this phenomenon and has positioned offerings that allow it to leapfrog the RIM model and play a decisive, significant role in cloud.
Taken as a whole, IBM’s public cloud, software, and private cloud and automation strategies give it the capability to move clients smoothly into the future, ensuring it captures the run-off from its traditional business while expanding market share. This is a truly formidable set of capabilities that, if executed well, will make IBM a major, if not dominant, player in the future of IT infrastructure.
Everything will come down to execution, and history has seldom been kind to incumbents in the face of major technology and business model disruption. But based on the two days I spent with IBM, I believe that IBM has more than a fighting chance to successfully make this transition.
Photo credit: Irish Typepad
As part of our efforts to profile the rapidly evolving service delivery automation (SDA) landscape, I am speaking with leaders of many of the technology players helping stimulate innovation in this space. This post, the second in a series on SDA technologies, is based on observations and learnings from a recent briefing with Hans Christian (Chris) Boos, CEO of Arago.
The company was founded in 1995 but its intelligent automation software for enterprise IT, in its current form, became generally available only 2-3 years ago. Arago has since experienced rapid growth, more than trebling revenue since 2011.
Arago’s flagship product is AutoPilot. It uses an inference engine with what essentially sounds like a neural network to speed up processing, although Boos did not use that term during the briefing. Instead, he described human-brain-like activity that learns and applies its learning (“knowledge items”) to new or changing environments, inferring how to process requirements automatically. The machine becomes more useful as it gains knowledge, but it also has to manage that knowledge, for example dealing with rules that contradict each other; it does this mathematically, using analytics. According to Boos, the approach can be applied to areas as varied as database management, incident management, and more architectural processes and business logic.
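Boos did not describe AutoPilot’s internals, but the knowledge-item idea can be caricatured in a few lines of code: a pool of small, independently applicable actions, each declaring the conditions under which it applies, with an engine repeatedly selecting whichever item matches the current state of a ticket. Everything below (names, structure, the example incident) is hypothetical; Arago’s actual engine is far more sophisticated, with learning, analytics, and contradiction management:

```python
# Toy sketch of knowledge-item-driven ticket automation. All names and
# structures here are hypothetical illustrations, not Arago's design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KnowledgeItem:
    name: str
    applies: Callable[[dict], bool]   # does this item match the ticket state?
    action: Callable[[dict], None]    # mutate the state toward resolution

def run_autopilot(state: dict, items: list, max_steps: int = 10) -> list:
    """Repeatedly apply the first matching knowledge item until none match."""
    trace = []
    for _ in range(max_steps):
        matched = next((ki for ki in items if ki.applies(state)), None)
        if matched is None:
            break
        matched.action(state)
        trace.append(matched.name)
    return trace

# Two toy knowledge items for a middleware incident.
items = [
    KnowledgeItem(
        "restart-queue-manager",
        applies=lambda s: s["symptom"] == "queue_stalled" and not s.get("restarted"),
        action=lambda s: s.update(restarted=True, symptom="queue_ok"),
    ),
    KnowledgeItem(
        "close-ticket",
        applies=lambda s: s["symptom"] == "queue_ok" and s["status"] == "open",
        action=lambda s: s.update(status="closed"),
    ),
]

ticket = {"symptom": "queue_stalled", "status": "open"}
print(run_autopilot(ticket, items))  # prints ['restart-queue-manager', 'close-ticket']
print(ticket["status"])              # prints closed
```

The interesting property, and presumably the hard part at scale, is that each knowledge item is independent: adding, removing, or learning items changes behavior without rewriting an end-to-end script, which is exactly where contradiction management becomes necessary.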
Arago figures show that AutoPilot processed nearly 2 million tickets (as produced by infrastructure management tools such as BMC) for clients in 2013. Circa 87% of these were fully automated. Processes automated at the middleware layer, AutoPilot’s sweet spot, had the highest level of automation at 98%.
Clients are typically large organizations or IT service providers. These include two major global IT service providers.
The software is available as a service as well as on-premise, but interestingly, the majority of clients want it on-premise.
Two licensing models are available from Arago:
AutoPilot comes with built connectivity to infrastructure management tools such as BMC and IBM Tivoli and with APIs for integration with other packages.
Arago’s proposition comes with an estimated cost saving of between 30% and 50%.
If AutoPilot can successfully tap its acquired knowledge to handle non-standard environments or changing conditions, it could minimize the need for predefined scripts and automate parts of IT that are otherwise challenging to automate. I believe it could complement the highly scripted automation tools used in other parts of IT infrastructure. In large, highly heterogeneous IT environments, the benefits could accumulate quickly.
This is advanced technology, and it could also increase complexity, potentially generating tickets of its own, at least initially while the knowledge base is being developed.
In terms of target market, Arago is selling to a converted crowd: IT service providers and the IT departments of large organizations that have already automated parts of their infrastructure. Its challenge is scale; the company is not big enough for the demand it is seeing. Arago is enhancing its partner network and expanding geographically. It currently operates out of Germany, where all 92 of its staff are based. It is looking to open an office in the United States soon but has no physical presence in other countries such as India.
Other measures include creating a community where clients can share automations/knowledge items for free or buy or sell them.
These plans will start to pay off, but for now I believe demand is likely to remain choked by lack of scale, particularly in initial consultancy and client training services.
AutoPilot is still a relatively new product and I expect some functionality enhancements to be on the cards. More work on the UI is already underway.
Growth opportunities include selling to smaller companies. Arago has released a community edition that can give smaller organizations a fully functioning AutoPilot that is only limited in the size of the IT that it can automate. This is a clever bit of marketing that prepares the ground for attracting large companies of the future.
Arago’s core technology is application agnostic. The company chose to apply it to IT first but the core product can also learn to handle business logic, potentially leaving Arago with opportunities to expand into business process automation in the future.
Software-defined infrastructure: a revolution for the infrastructure landscape
Remote infrastructure management (RIM) services were the disrupter for asset-heavy infrastructure services over the past several years and, in all likelihood, will continue to be for the next few years. However, as we look down the road it appears that RIM will hit the speed bump of automation and cloud, which will impact RIM in much the same way that RIM currently disrupts the asset-heavy infrastructure market. At Everest we believe that about 50 percent of all current RIM workloads are viable and cheaper in the cloud and likely will migrate to the cloud over the next three years. So what’s the prognosis for the RIM model?
As shown in the charts below, RIM grew at an average of 27 percent per year while the asset-heavy space lost share at 1.6 percent a year.
But the cost of operating in a pay-for-usage cloud world is about 50 percent lower than in the take-or-pay world of the existing data center. Although cloud is currently a small part of the global services marketplace, cloud providers are operating far more profitably and are thus disrupting the RIM providers. Automation and cloud are the areas of action and investment for growth. We need look no further than the recent acquisitions by IBM and Dell to understand how this market is evolving.
IBM’s Strategic Moves. IBM’s latest acquisitions include Cloudant, for delivering an on-demand NoSQL database-as-a-service (DBaaS), and SoftLayer Technologies, a global cloud infrastructure provider. Big Blue plans to spend more than $1 billion over the next two years to bolster SoftLayer’s platform. IBM has also scooped up several other strategic cloud companies over the past couple of years, including UrbanCode, for software delivery automation; Green Hat, for software quality testing in cloud environments; and BigFix, for management and automation of security and compliance software updates.
Dell’s Strategic Moves. Acquisitions adding to Dell’s capabilities include Enstratius, enabling consolidated management across multiple cloud platforms; Credant Software, providing data protection; Gale Technologies, enabling infrastructure automation; Quest Software, for value-added software solutions and virtualization; and Boomi, for SaaS integration.
To date the power of the cloud disruption has not yet felled the infrastructure services space. But over the next three years, we expect 30-50 percent of the infrastructure services work to migrate to a cloud model. What impact will it have on the RIM market?
First of all, RIM’s impact on the existing IT infrastructure market is not finished. We think RIM service providers have at least three more years of significant share gain shifting from IT asset-heavy infrastructure to a RIM model. After that? Not so much. And towards the third year, we expect to see cloud disintermediate the RIM market.
It will be interesting to see whether RIM providers can make the accommodations for the new cloud world. Yes, there will be a role for RIM in cloud, but we believe it will be smaller than in the current IT infrastructure space. And managing the automated cloud world will require fewer people, which means lower revenue for RIM vendors.
For vendors and service providers, the non-cloud IT infrastructure space is becoming a very bad place to be.
Why Service Integration and Management (SIAM) is taking hold in infrastructure outsourcing