One of the better indicators that corporate IT groups are starting to get serious about cloud is their growing interest in solutions that help them aggregate and manage multiple cloud services. Some call these solutions cloud services brokerage and management, and others term them cloud orchestration. While the market hasn’t yet converged on a common set of capabilities or definition, the broad category typically includes the following:
- Service catalogs – “App Store”-like models that provide users access to internal IaaS and PaaS services and, in some cases, third-party SaaS apps and infrastructure services as well
- Service provisioning – capabilities that support end-user requests, provisioning, and deployment of cloud services
- Service integration – data integration services across multiple cloud services, including “cloud-to-cloud” and “cloud-to-ground” models
- Chargeback and billing – consumption-based metering and billing of cloud services to internal users, including private services and aggregation of public cloud services spend
- Service management – monitoring and management across multiple cloud services, including performance, capacity planning, workload management, and identity management
- Sourcing – contracting and sourcing of cloud services across multiple platforms and providers
These solutions are being offered by a wide variety of players, including not only traditional enterprise systems management vendors – which in some cases are just repackaging SOA offerings – but also global systems integrators (SIs) and focused startups.
What’s important about this phenomenon?
First, corporate IT’s interest in these capabilities is, in a way, an implicit acknowledgement that:
- Cloud services will be adopted in scale across enterprises
- Multiple large scale services will need to be orchestrated and managed
- Orchestrating these services will be hard and will require external, third-party solutions
This is a far different conversation than the one corporate IT was having a year ago, which primarily centered on which pilot or proof of concept to launch.
Second, interest in cloud orchestration is being “pulled” by corporate IT, rather than “pushed” by the business. A premise we recently heard is that business’ role in driving adoption of cloud is no different than it was in the packaged software era. Packaged software required servers, storage, and networks, all of which required IT management and support. This provided IT with long-term job security and the opportunity to “empire build.” As a result, corporate IT aggressively supported packaged software rollouts and implementations.
The difference in the cloud era is corporate IT’s attitude. To date, it has largely perceived the cloud as a threat. But now IT is discovering it can potentially regain a measure of relevance and control by adopting a service provider mindset and combining service catalogs and chargeback models with private and public cloud services.
Is corporate IT finally finding a path to building its empire in the cloud? Are you or your IT group considering, or embarking upon, a cloud orchestration initiative? What thoughts and experiences do you have to share with your peers?
The main indicator of productivity in application development (AD) – periodic reduction in the number of application defects per 1,000 lines of code written in X man-hours – has stood the test of time at a modular level, as have the CMMI standard metrics against which any calculation of productivity can be benchmarked.
However, in the midst of mounting pressure to optimize discretionary IT spend, buyers have no option but to justify their spending via data that evaluates productivity at the enterprise level. CIOs must meet this requirement, while also facing the following two challenges:
- First, they must simultaneously deliver increasing value and achieve year-on-year reduction in costs
- Second, they have to contend with educated business users who may not have the patience to deal with a behemoth enterprise IT organization to get their functional needs fulfilled. Indeed, with the advent of market forces such as social media, mobility, analytics, and cloud (SMAC), business users today understand more about applications and technology than they ever did in the past
As a result, IT buyers must ignore all the ambient noise created by multiple metrics and focus only on the following two factors:
- Business functions: The functionality that the business users are demanding, complete with SLAs
- Application cost: The cost to acquire the applications that provide the required functions
Fortunately for them, data indicates that “business functions versus cost” can be useful in benchmarking the productivity of any application development project. As the following image illustrates, for AD projects of a specified complexity and type, the combination of the number of functions (F) in an application and the cost (C) to acquire that application can be plotted as an “indifference curve.” An indifference curve is a graph showing different bundles of two variables between which a consumer is indifferent; all points on the curve deliver the same level of utility (satisfaction) to the consumer.
Based on benchmarking data over many years, all points that form these indifference curves deliver an optimal level of productivity for the complexity specified.
This productivity benchmarking can be used to assess and optimize the productivity of AD projects. For instance, if a particular AD project (for example, application 3 in the above image) falls below its indifference curve, it is delivering sub-optimal productivity. Thus, by using this method of benchmarking, a buyer can immediately raise a flag with its service provider and push it to either:
- Reduce costs to achieve optimal productivity, or
- Provide more functionality to align the project with the productivity indifference curve
This benchmarking tool also delivers benefits to service providers. If their AD project in question is application 1, which falls above the indifference curve, they: 1) have consumer surplus to tap into (i.e., buyers could be willing to pay more); and 2) can use the data to advertise their higher productivity performance to generate more business and differentiate themselves from their competition.
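The assessment logic above can be sketched in a few lines of Python. Note that this is purely illustrative: the real indifference curves come from years of proprietary benchmarking data, whereas here we assume a simple, hypothetical constant functions-per-dollar ratio for each complexity class, and the sample project figures are invented.

```python
# Assumed benchmark: optimal number of business functions delivered per
# US$1M of application cost, keyed by project complexity.
# These values are hypothetical, for illustration only.
BENCHMARK_F_PER_MILLION = {"low": 120, "medium": 60, "high": 25}

def assess_project(functions: int, cost_millions: float, complexity: str) -> str:
    """Compare a project's delivered functions against its indifference curve."""
    expected = BENCHMARK_F_PER_MILLION[complexity] * cost_millions
    if functions < expected:
        return "below curve: sub-optimal productivity (cut cost or add functions)"
    if functions > expected:
        return "above curve: consumer surplus for the provider"
    return "on curve: optimal productivity"

# A hypothetical high-complexity project delivering 40 functions for US$2M
# falls below the (assumed) curve of 50 functions, so it gets flagged.
print(assess_project(40, 2.0, "high"))
```

A buyer would run this check per project; anything "below curve" triggers the renegotiation options listed above, while "above curve" is the provider's marketing story.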
We’d love to hear your thoughts on this type of productivity benchmarking. Have you employed it? What lessons learned can you share with your peers?
Buyers have typically approached, evaluated, and made third-party business process service delivery sourcing decisions at the operational level, separating out decisions on the underlying software applications and/or technology infrastructure. Increasingly, however, they are realizing the value of looking at IT and BPO in an integrated manner.
Enter Business-Process-as-a-Service (BPaaS), a model in which buyers receive standardized business processes on a pay-as-you-go basis by accessing a shared set of resources – people, applications, and infrastructure – from a single provider.
There are many potential upsides of BPaaS…but is it right for your organization? The answer lies in evaluating the model based on a holistic business case that looks at a range of factors including total cost of ownership (TCO), the nature of the process/functional area under consideration, business volume fluctuations, time-to-market, position on the technology curve, and internal culture and adaptability to change.
Let’s take a deeper look at the TCO factor, which must be analyzed in terms of both upfront and ongoing costs for all three layers of service delivery.
For our just released research report, Is BPaaS the Model for You?, we developed and used a holistic evaluation framework that compared TCO for BPaaS and the traditional IT+BPO model across three buyer sizes: small (~US$1 billion revenue/5,000 employees), medium (~US$5 billion revenue/20,000 employees), and large (~US$20 billion revenue/100,000 employees).
For small buyers, BPaaS brings big benefits. It is independent of deal duration, delivers 35-40 percent savings compared to traditional IT+BPO, enables leverage of the provider’s economies of scale, provides access to otherwise cost-prohibitive technology, and allows entry into BPO relationships that, as stand-alone deals, lack the necessary scale. Small buyers are also highly amenable to BPaaS’ process and technology standardization requirements.
For medium buyers, BPaaS is pretty impressive. It delivers 25-30 percent savings over the traditional IT+BPO model, which, while less than what small buyers reap, is still significant in driving a successful business case. Medium buyers’ increased scale allows them to capture some of the economies of scale benefits even in the traditional IT+BPO model, narrowing the differential between the two models.
For large buyers, BPaaS is not too shoddy. While it provides only ~10 percent cost savings compared to traditional IT+BPO, the absolute differential in cumulative TCO can still be substantial given the high base. And in certain buyer-specific situations, such as technology enhancements, exploration of new BPO/IT infrastructure relationships, and expiration of legacy technology licenses, BPaaS can be a good model for large buyers to evaluate. But their tendency to balk at following a tightly defined, standardized approach – unless significant configuration features offset a good portion of their customization needs – reduces BPaaS’ appeal.
As you can see, our evaluation framework shows an inverse relationship between buyer size and cost savings from BPaaS – i.e., the larger the buyer, the lower the percentage savings. In fact, as buyer size increases, the scale benefits of renting versus owning infrastructure and applications can dip into the negative column over ten years. Of course, the assumptions in our BPaaS to IT+BPO model analysis are ideal, and your organization’s individual reality may be quite different.
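The inverse size-versus-savings relationship can be made concrete with a quick sketch. The savings percentages are the midpoints of the ranges cited above; the traditional IT+BPO baseline TCO figures are hypothetical placeholders, not from the report.

```python
# Midpoints of the savings ranges cited above: 35-40%, 25-30%, ~10%.
SAVINGS_VS_IT_BPO = {"small": 0.375, "medium": 0.275, "large": 0.10}

def bpaas_tco(traditional_tco: float, buyer_size: str) -> float:
    """Estimate BPaaS cumulative TCO from a traditional IT+BPO baseline."""
    return traditional_tco * (1 - SAVINGS_VS_IT_BPO[buyer_size])

# Hypothetical multi-year IT+BPO baselines in US$ millions. The larger
# buyer's bigger base means even ~10% savings is a sizable absolute figure.
for size, baseline in {"small": 20, "medium": 80, "large": 300}.items():
    saved = baseline - bpaas_tco(baseline, size)
    print(f"{size}: saves ${saved:.1f}M of ${baseline}M "
          f"({SAVINGS_VS_IT_BPO[size]:.0%} of baseline)")
```

This mirrors the framework's headline finding: the percentage saved shrinks with buyer size, while the absolute differential can still grow with the base.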
To learn more about how to evaluate BPaaS’ applicability to your company, select a BPaaS provider and solution, and implement the selected solution, please read our report, Is BPaaS the Model for You?
Healthcare IT has long been accused of being 10 years behind other industries, with hospitals being another five years behind that. This cliché has been around for so long, it is almost an axiom. And while, as with most clichés, there is unfortunately more than a bit of truth to it, we firmly believe that with the growing push and pull demands of healthcare mobility, this cliché’s days are numbered.
Mobility in healthcare will place demands on IT that will both force upgrades and elevate IT’s business impact in ways never seen before – thus making upgrade funding possible. Just a few examples:
- mHealth applications that can enable direct entry, which boosts clinician productivity and revenue potential
- Gamification that provides uniquely engaging ways to distill care information and entice people into improved wellness
- Cost savings, and patient comfort benefits, from remote viewing and remote sensors
Clearly, mobility will strain existing capabilities – not only wireless infrastructure but also servers and, most substantially, data integration structures – which must provide timely and accurate information as individuals move within facilities and across the country. Thus, the push is on to update and upgrade IT.
But while there have always been “pushes” on IT (demand always exceeds budgets), what makes us optimistic that the upgrades will actually begin to catch up with other industries is the emerging “pull” from the business side of healthcare. A recent anecdote illustrates this potential for IT transformation well. A nursing home/assisted living operator approved a system-wide installation and upgrade across its facilities to support mobile data entry. Given the tight margins and the very thin IT budgets (typically less than one percent of revenue), this major initiative was demanded and funded by clinical operations, and cost justified by a one percent increase in therapist billable hours.
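The anecdote's cost justification can be sketched as a back-of-envelope calculation. The operator's actual figures are not disclosed, so every number below is an assumption; the sketch only shows why a one percent lift in billable hours can be material against a sub-one-percent IT budget.

```python
# Hypothetical back-of-envelope version of the anecdote's cost case.
# None of these figures come from the source; they illustrate the shape
# of the justification, not the real deal economics.
annual_revenue = 500_000_000   # assumed operator revenue, US$
therapy_share = 0.30           # assumed share of revenue from therapist billable hours
billable_lift = 0.01           # the one percent increase cited in the anecdote
it_budget = annual_revenue * 0.01  # "typically less than one percent of revenue"

extra_revenue = annual_revenue * therapy_share * billable_lift
print(f"Extra annual revenue from 1% more billable hours: ${extra_revenue:,.0f}")
print(f"Relative to the (assumed) annual IT budget: {extra_revenue / it_budget:.0%}")
```

Under these assumed numbers, the recurring revenue lift covers a large slice of the entire IT budget every year, which is exactly the kind of math that lets clinical operations, rather than IT, fund the upgrade.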
As this and the other examples above suggest, there are major direct business benefits from instituting robust and useful healthcare mobility.