Is Amazon’s HQ2 Strategy Viable? | Sherpas in Blue Shirts

Unique, bold, unprecedented…those are just some of the words that describe Amazon's announcement that it will establish a second headquarters in North America, with an end-state, over the course of 10-15 years, of 50,000 employees earning an average salary exceeding $100,000.

The intent is exciting, both for potential employees and the cities that Amazon identified as location contenders. But is Amazon’s strategy viable? We believe it’s realistic to the extent that Amazon can keep its mojo going over such a long period of time.

Four factors driving Amazon’s HQ2 location selection

Scalability of talent

For Amazon to be able to amass 50,000 employees in 15 years, it will need to add 3,000 to 5,000 employees per year. These employees will have to be a mix of recent college grads and mid-management-level workers. So Amazon will have to take into account how many local universities and businesses feed the selected location's talent pool, as well as the city's attractiveness for relocating workers.

Business mix

This is all about what Amazon intends HQ2's business concentration to be. For example, will it have a retail focus like its Seattle headquarters? Does it intend to grow its advertising business in its new location? Will its focus be on the government sector, shipping/transportation/logistics, or Latin American market growth? Whatever its intent, the new location will have to be a hotbed of activity.

Time zone

If Amazon wants its second headquarters to help lead its international growth or manage suppliers based in Europe or Asia, the time zone overlaps of a location in the eastern or central U.S. time zone will be a huge business advantage.

Physical proximity

This relates to how easy it is to get to other geographical markets of interest. For example, if Europe is a target, a site in the northeast or central U.S. makes sense. But if Mexico, Central America, and Latin America are on the radar screen, a major city in the southeastern U.S. would be a good option.

Of these four, the biggest issue and most important consideration is: can Amazon get the volume and quality of talent it needs to fulfill its HQ2 vision? As we said above, the company will need to add up to 5,000 employees each year to get to 50,000 strong in 15 years. That’s a massive hiring agenda. For comparison purposes, when we’ve worked with other leading brand name organizations that are scaling new IT shops, they’ve typically been able to hire 200-350 people per year.

Can Amazon do it, or might it ultimately revert to a hubbed model, first establishing HQ2, and several years later HQ2.5? Only time will tell.

The six cities on our short list

Although Amazon recently winnowed its options down to 20 cities, we think only six of those are truly viable. Here, in alphabetical order, is a quick look at our short list:

Atlanta has a diverse economy, with UPS- and Delta-driven strength in shipping and transportation, and Home Depot in retail. There's a lot of impressive university talent available in Georgia, as well as in the neighboring states of Alabama, Florida, and South Carolina.

Boston is a great education center, and is known for innovation at large corporations. It would be a wonderful place for Amazon to build on what it's been doing with Alexa there, and to continue its growth in the high-tech space. It's also an airline hub with comparatively short flights to Europe.

Chicago is strong in the consumer and retail industries, and that obviously overlaps nicely with Amazon. It also has good connectivity to Europe, Seattle, and, to a reasonable extent, Asia.

Dallas has a good history of relocating companies’ operations and a strong tech talent pool and ability to pull talent in from universities in the region. One unique aspect in favor of Dallas is Amazon’s AWS business. As that matures, we expect it will need to increase the sophistication of how it sells to enterprises. And because Dallas is considered the original home of outsourcing enterprise IT services in North America, Amazon could find a lot of sales, marketing, and other talent in the area that wouldn’t be as common in the other cities.

New York City is by far the largest labor market, and Amazon could attract a lot of talent. If the company is trying to really increase its advertising business, the Big Apple would be a good choice. And because of the city’s diversity, Amazon wouldn’t be too limited in any one direction.

Washington, D.C. has a fairly large labor pool with a lot of high tech, much of it government-oriented, which may be both a pro and a con, depending upon Amazon's mission. It's well connected internationally, and is a fairly interesting place to try to establish a large corporate presence.

To learn more about our take on the viability of Amazon's HQ2 strategy, please read our viewpoint, and listen to a discussion I recently had with Ryan Takeo, host of KING-TV's The Sound podcast, called "Talent, not incentives, most important in HQ2 search."

Amazon, Berkshire Hathaway & JPMorgan Chase Team Up to Tackle the Messy Business of Healthcare | Sherpas in Blue Shirts

On January 30, 2018, Amazon, Berkshire Hathaway, and JPMorgan Chase & Co. announced a partnership to address healthcare for their U.S. employees. The goal is simple – provide their employees and their families with simplified, high-quality, and transparent healthcare at a reasonable cost, through technology solutions. They intend to pursue this opportunity through an independent company that is free from profit-making constraints.

The rationale behind this move

While this might not be the big Amazon-disrupts-healthcare reveal the market had been hoping for, it is still a meaningful move. Employer-sponsored health insurance currently covers around 157 million people in the United States, and people are not satisfied with the present state of affairs:

  • Insurers and employers are shifting the burden of increasing healthcare costs to employees, who now face much higher deductibles and insurance contributions. Employers are also moving toward programs with narrower networks, and if employees choose to visit a doctor outside the network, they have to spend more out of their own pockets.


  • Health insurance premiums are growing faster than employee wages for both private and public workers. On average, premiums as a share of wages have increased by four percentage points in the last five years.

The new healthcare normal calls for a fresh approach

Amidst rising costs, evolving consumer preferences, changing operating models, and an uncertain regulatory environment, stakeholders in the healthcare ecosystem are trying to create innovative partnerships and business models. For example:

  • CVS is buying Aetna for US$69 billion, creating a mini healthcare ecosystem
  • Anthem broke up with Express Scripts, its long-term pharmacy benefit management (PBM) partner, and is building its own PBM capabilities with some help from CVS
  • Intermountain Healthcare is leading a collaboration with Ascension, SSM Health, and Trinity Health, in consultation with the U.S. Department of Veterans Affairs, to form a new, not-for-profit generic drug company. The goal is to make essential generic medications more available and more affordable, bringing competition to the market for generic drugs
  • At last count in 2017, there were 923 accountable care organizations (ACOs) covering approximately 32 million lives.

The Amazon-Berkshire Hathaway-JPMC trio could well lay down a marker on how employers shape and drive their own healthcare mandates. Consider the firms’ complementary skill sets:

  • Amazon has the deep technology expertise and experience-first approach crucial to addressing the needs of an evolving workforce and consumer base. From a data standpoint, AWS has already stated its interest in leveraging longitudinal health records for population health and analysis efforts. Amazon could also apply its logistics expertise to rethink warehousing and distribution, making drug delivery more cost efficient.
  • Berkshire Hathaway and JPMC can help improve the financial engineering that underpins the new endeavor, provide scale, and improve collective bargaining power.

How might the mega-alliance play out?

This alliance can potentially have a huge impact on all the healthcare stakeholders.


The road ahead

The mega-healthcare company will initially focus on the three firms' combined U.S. employee base of approximately 1 million employees – plus their families. If it's successful, it can take the model to other employer groups to help them address inefficiencies in their current healthcare setups.

However, it's critical to keep in mind that healthcare differs from other areas disrupted by tech. It is often messy, fragmented, and lacking in interoperable, standardized data. Strikingly similar initiatives have faced hurdles and shut down (remember Dossia?), and many initiatives to reimagine healthcare from the outside have failed to move the needle meaningfully.

Given the lack of clarity around specifics of this partnership, some amount of skepticism is warranted. But for now, everybody’s looking at what the future holds.

What is your take on this mega-alliance? We would love to hear from you at [email protected] and [email protected]

‘Unprecedented’ Hiring Could Make The Amazon HQ2 Shortlist Much Shorter | In the News

Amazon’s HQ2 bidding process has advanced to a shortlist of 20 locations.

That narrowing field has some questioning how feasible creating a 50,000-person office from scratch will be for smaller contenders not known for their tech base.

“When this first came out, our reaction was: ‘Holy cow. That’s big, and it’s quick. Can they do it?’” Everest Group Managing Partner Eric Simonson said.

Amazon's aggressive HQ2 building plans, coupled with its desire to hire 50,000 employees, would be difficult for any city to fulfill, Simonson said — but particularly smaller markets like Columbus, Ohio; Raleigh, North Carolina; Nashville, Tennessee; and Indianapolis. An Everest Group report on HQ2 bidders said Amazon is going to struggle to fill seats if it considers regions with populations of less than 4 million people.

Read more in Bisnow

Clues into Amazon’s HQ2: What Does the Vancouver Announcement Tell Us? | Sherpas in Blue Shirts

In early November, Amazon announced that it will expand its presence in Vancouver from 1,000 jobs to 2,000 jobs by 2020. Although this did not receive nearly the same attention as Amazon's request for proposals for the 50,000-employee location dubbed "HQ2," there are some valuable clues to glean (see our earlier detailed assessment on the viability of Amazon's HQ2 strategy and potential locations for our more complete analysis).

We read three important clues in this announcement.

  1. Vancouver is not a serious HQ2 candidate. Although Amazon is clearly comfortable enough with Vancouver to continue expanding there, the announcement signals that Vancouver is not a serious candidate for the second headquarters location. If Amazon felt otherwise, it would not have made the announcement and given up leverage in negotiating incentives for HQ2. There are multiple reasons why Vancouver may not be a strong candidate: the size or cost of its talent pool, too much similarity to Seattle, no time zone diversification, or the complexities of operating in Canada outweighing the benefits of mainly operating in the U.S.
  2. The targeted scalability of HQ2 is going to be REALLY HARD. Assuming that Vancouver and HQ2 will have roughly similar mixes of talent, we can see that Amazon is scaling at only 15% of the rate targeted for HQ2. After setting up in 2015 and reaching 1,000 employees in 2017, Amazon is planning to reach 2,000 employees by 2020. Let's assume that is 2,000 people over four years, for an annual rate of 500 net-new employees. HQ2 is targeting 50,000 employees over 15 years, which is more than 3,000 per year – over six times what is being achieved in Vancouver (the quick sketch after this list checks the arithmetic). This supports our earlier view that any city under 4 million in population is clearly not viable (Vancouver is under 2.5 million), and even the largest cities (which are 7-15 million) will struggle to consistently grow at the rate indicated by Amazon for HQ2.
  3. Hmmm…is Amazon truly serious about HQ2 as stated? For purposes of our earlier analysis, we assumed that Amazon truly intended to pursue its stated vision (up to 50,000 employees in 15 years with an average salary in excess of US$100,000, and HQ2 acting as an equal to Seattle). The announcement about Vancouver is interesting and revealing because it is inconsistent with Amazon seeking to aggregate its scale into large locations. A 2,000-employee location is certainly large, but it is much smaller than Amazon's current Seattle footprint or the planned HQ2.
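The arithmetic behind that second clue is easy to check. Here is a quick sketch in Python, using the figures from the two announcements (the four-year Vancouver ramp is our assumption):

```python
# Quick check of the Vancouver vs. HQ2 hiring-rate comparison.
# The four-year Vancouver ramp is an assumption; other figures are as announced.
vancouver_headcount = 2_000    # planned Vancouver headcount by 2020
vancouver_years = 4            # assumed ramp period
hq2_headcount = 50_000         # stated HQ2 end-state
hq2_years = 15                 # stated HQ2 horizon

vancouver_rate = vancouver_headcount / vancouver_years   # 500 per year
hq2_rate = hq2_headcount / hq2_years                     # ~3,333 per year

print(f"Vancouver: {vancouver_rate:,.0f} net-new employees per year")
print(f"HQ2 target: {hq2_rate:,.0f} net-new employees per year")
print(f"Vancouver is scaling at {vancouver_rate / hq2_rate:.0%} of the HQ2 pace")
```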

If centers at much smaller scale are valuable to Amazon, why even pursue the HQ2 strategy?

First, Amazon might realize that a single 50,000-employee location is likely too big, and may be contemplating whether "clusters" (cities within very short distances of each other) can produce benefits similar to a single location, which would span multiple buildings anyway. If Amazon believes this, it might be looking to select multiple cities within a cluster for the HQ2 strategy (think Philadelphia, Baltimore, and Washington, D.C.).

Second, Amazon may have intentionally set a very, very large 50,000-employee target to get maximum attention and creativity, but may be planning to structure the eventual single-location agreement to commit to only 5,000-10,000 employees. That is still very large, but something it has a much better chance of fulfilling – and then exceeding as it desires.

In summary, we believe these clues Echo many of our earlier perspectives and underscore that the eventual outcome may be quite different from what was stated – we remain Primed to hear what Amazon decides in 2018.

Live from Bangalore – the NASSCOM IMS Summit, September 22 | Gaining Altitude in the Cloud

Hello everybody! I'm back, reporting from day two of the NASSCOM IMS Summit in Bangalore. Today's conference was focused on discussing alternative models of cloud computing and what works best for whom.

First, Adam Selipsky, CMO, Amazon Web Services (AWS), told us his view of what's happening out there in the cloudosphere. An interesting factoid to chew on – as of today, AWS is adding as much data center capacity every day as the entire Amazon company had in its fifth year of operation, when it was a US$2.7 billion enterprise.

Even more compelling proof that the cloud revolution is really happening came from Selipsky's examples of the types of workloads AWS supports – SAP environments, entire e-commerce portals that are the revenue engines of companies, and disaster recovery infrastructure…all hosted on the cloud. Fairly mission-critical stuff, rather than "ohh, it's only email that's going to go on the cloud," you must admit.

Next up, Martin Bishop of Telstra spoke of the customer’s dilemma in choosing the right cloud model. This segued nicely into the panel discussion, “Trigger Points – Driving Traditional Data Center to the Private Cloud,” of which I was a part.

M.S. Rangaraj of Microland chaired the panel and set the context by talking about the key considerations of cloud implementation. According to Rangaraj, the key issues are orchestration and management, as the IT environment morphs into new levels of complexity with multiple providers delivering services across a multitude of devices.

I spoke of the business case for a hybrid cloud model. While private cloud is good, and current levels of public cloud pricing provide slightly better business value, a combination of the two enables clients to reduce the huge wastage of unused data center resources they now have to live with. Today, infrastructure is sized to peak capacity, which is utilized once in a blue moon. The dynamic hybrid model enables companies to downsize capacity to the average baseline. The associated savings in energy, personnel, and maintenance imply dramatic cost advantages over both pure public and pure private models.
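To make that business case concrete, here is a toy cost model in Python (all figures are hypothetical, chosen only to illustrate the peak-versus-baseline effect):

```python
# Illustrative (hypothetical) numbers: a private data center sized to peak
# load vs. a hybrid model sized to baseline with public-cloud burst.
baseline_servers = 100       # average steady-state demand
peak_servers = 300           # demand reached "once in a blue moon"
peak_hours_per_year = 200    # hours per year the burst capacity is needed

private_cost_per_server_year = 3_000   # assumed all-in cost (power, staff, amortized hardware)
public_cost_per_server_hour = 0.50     # assumed on-demand rate

# Pure private: pay for peak capacity all year round
pure_private = peak_servers * private_cost_per_server_year

# Hybrid: private sized to baseline, burst the difference to the public cloud
hybrid = (baseline_servers * private_cost_per_server_year
          + (peak_servers - baseline_servers) * peak_hours_per_year * public_cost_per_server_hour)

print(f"Pure private (peak-sized): ${pure_private:,.0f} per year")
print(f"Hybrid (baseline + burst): ${hybrid:,.0f} per year")
```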

Kothandaraman Karunagaran from CSC took up the thread and spoke of the role of service providers in this new paradigm. While outsourcing may not “die” as a result of the cloud movement, it’s jolly well going to be transformed. Service providers will need to spend far more time in managing, planning, and analyzing usage and consumption data, and less time on monitoring and maintenance. In other words, service providers’ roles will evolve from reactive to proactive management.

Some of my key takeaways from the conference include:

  • Everybody agrees that there is no silver-bullet model, meaning that there are no clear winners in a cloud environment, and the hybrid model will keep gaining traction as the world becomes increasingly, well, hybrid.
  • Until recently, we spoke of the need to simplify IT. Well, the only part of IT that's going to get simplified is the consumption bit. If you are a CIO reading this, we've got bad news for you. Management of IT is going to get more, not less, complicated. Multiple service providers, networks and devices, reduced cycle times, and self-provisioning mean that management just got a whole lot tougher.
  • Service providers need to rapidly engage with this new reality and figure out how their business models can adapt to it. The unit of value is no longer the FTE. It's what the FTE achieves for the client, or, even more complicated, what the consumer actually ends up using. We live in interesting times, and they will only become more interesting as time goes on.

That’s it from my end. I enjoyed the conference, look forward to more illuminating discussions next year, and, hopefully, to seeing you there!

If you weren’t able to attend this year’s conference – or even if you were – you can download all speaker presentations at: http://www.nasscom.in/nasscom/templates/flagshipEvents.aspx?id=61241


What If the Hackers Had Attacked Sony Through Microsoft Azure Instead of Amazon’s EC2? | Gaining Altitude in the Cloud

There is widespread speculation that the recent attack on Sony – in which credit card information was stolen – was accomplished using compute resources purchased from Amazon's EC2 cloud offering. This high-profile incident has attracted attention in the mainstream press and in the blogosphere, underscoring the interconnected and anonymous nature of cloud computing, as well as the need for vigilance and improved security. Interestingly, there has been little attention paid or blame allocated to Amazon's EC2 offering in the public discussion. Amazon, rightly or wrongly, has largely escaped unscathed, and the cloud infrastructure services sector – of which EC2 is the most visible champion – continues to enjoy increased adoption, favorable press, and commentary largely unaffected by this incident.

There are many good reasons why Amazon’s EC2 has not been vilified and cloud adoption continues at its frenetic pace. But what if the circumstances had been different? What if the credit card information had been stolen utilizing Microsoft’s Azure platform? Would the world have responded with the same collective yawn? Would there have been an attempt to hold Microsoft accountable for the nefarious use of its compute power? Would open source enthusiasts have suggested it to be another reason to move to open source from Microsoft products? To explore this, let’s first examine why it might have made a difference:

  • Microsoft plays a different role in championing the cloud than Amazon. Azure is Microsoft's vehicle for delivering the Windows operating system (OS) and bundled IP through the cloud. As such, it represents Windows, the dominant OS at this time.
  • As the dominant OS provider, Microsoft appears to be held to a different standard than most other providers; if there is a hole in Windows, we are all vulnerable (except, of course, Apple fanatics).
  • Microsoft acts as a lightning rod like no other, drawing negative attention from all quarters.
  • There seems to be a preference to excoriate past monopolists in favor of newer entrants that may yet gain similar market power, akin to market behavior that favored the Microsoft upstart over the established IBM in the 1980s.

So, what would have happened? Would the steady march to the cloud be delayed as we criticized Microsoft and questioned more deeply not only its culpability for how its service is utilized, but also the requirements for security in the cloud more broadly? Would regulators be initiating inquiries threatening further changes in compliance security laws, or attempting to add responsibility to providers of compute power? Or would there have been a similar yawn? It’s interesting to speculate… and as we do, what does this tell us about where we are headed and where we have been?

The Facts, not the Fluff and Puff: What Really Happened with Amazon’s Cloud Outage | Gaining Altitude in the Cloud

In the more than 1,000 articles written about Amazon's April 21 cloud outage, I found myriad "lessons," "reasons," and "perspectives," plus a "black eye," a "wake-up call," and a "dawn of cloud computing."

What I struggled to find was a simple, but factual, explanation of what actually happened at Amazon. Thankfully, Amazon did eventually post a lengthy summary of the outage, which is an amazing read for IT geeks – a group to which the author of this blog proudly belongs – but may induce temporary insanity in a more normal individual. So this blog is my attempt to describe the event in simple, non-technical terms. But if I slip once or twice into geek-speak, please show some mercy.

The Basics of Amazon’s Cloud Computing Solution

When you use Amazon to get computing resources, you’d usually start with a virtual server (aka EC2 instance). You’d also want to get some disk space (aka storage), which the company provides through Amazon Elastic Block Store (EBS). Although EBS gives you what looks like storage, in reality it’s a mirrored cluster of two nodes … oops, sorry … two different physical pieces of storage containing a copy of the same data. EBS won’t let you work using only one piece of storage (aka one node) because if it goes down you’d lose all your data. Thus in Amazon’s architecture, a lonely node without its second half gets depressed and dedicates all its effort to finding its mate. (Not unlike some humans, I might add.)

Amazon partitions its storage hardware into geographic Regions (think of them as large datacenters) and, within Regions, into Availability Zones (think of them as small portions of a datacenter, separated from each other). This is done to avoid losing the whole Region if one Availability Zone is in trouble. All this complexity is managed by a set of software services collectively called the "EBS control plane" (think of it as a Dispatcher).

There is also a network that connects all of this, and yes, you guessed right, it is also mirrored. In other words, there are two networks… primary and secondary.
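For readers who like to see the moving parts, here is a minimal sketch of that basic workflow using the boto3 Python SDK; the AMI ID, instance type, region, and volume size are placeholders, not recommendations:

```python
# Minimal sketch: launch an EC2 instance, then create and attach an EBS
# volume in the same Availability Zone. Identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Launch a virtual server (an EC2 instance)
instance = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical AMI
    InstanceType="m1.small",
    MinCount=1,
    MaxCount=1,
)["Instances"][0]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance["InstanceId"]])

# 2. Create disk space (an EBS volume) in the instance's Availability Zone.
#    Behind the scenes, EBS keeps two mirrored copies of the volume's data.
volume = ec2.create_volume(
    AvailabilityZone=instance["Placement"]["AvailabilityZone"],
    Size=100,   # GiB
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# 3. Attach the volume to the instance as a block device
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=instance["InstanceId"],
    Device="/dev/sdf",
)
```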

Now, the Play-by-Play

At 12:47 a.m. PDT on April 21, Amazon's team started a significant, but normally uneventful, upgrade to its primary network, and in doing so redirected all the traffic to another segment of the network. For some reason (not explained by Amazon), all user traffic was shifted to the secondary network. Now you may reply, "So what…that's what it's there for!" Well, not exactly. According to Amazon, the secondary network is a "lower capacity network, used as a back-up," and hence it was overloaded instantly. This is an appropriate moment for you to ask "Huh?" but I'll get back to this later.

From this point on, the EBS nodes (aka pieces of storage) lost connections to their mirrored counterparts and assumed their other halves were gone. Now remember that EBS will not let the nodes operate without their mates, so they started frantically looking for storage space to create a new copy, which Amazon eloquently dubbed the "re-mirroring storm." This is likely when reddit.com, the New York Times, and a bunch of others started noticing that something was wrong.
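If you want a feel for why this snowballed, here is a toy simulation in Python (the numbers and logic are purely illustrative, not Amazon's actual implementation): every stranded node asks for replacement space at once, the free pool runs dry, and the unlucky remainder keep retrying against the control plane.

```python
# Toy model of a "re-mirroring storm" (numbers and logic are illustrative).
free_slots = 800      # spare storage slots left in the Availability Zone
stranded = 5_000      # nodes whose mirrors became unreachable
requests_seen = 0     # cumulative load on the EBS control plane

for round_num in range(1, 4):
    requests_seen += stranded              # every stranded node asks again
    granted = min(stranded, free_slots)    # only as many as there is space for
    free_slots -= granted
    stranded -= granted
    print(f"round {round_num}: {granted} re-mirrored, {stranded} still searching")

print(f"control plane has fielded {requests_seen:,} requests and counting")
```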

By 5:30 a.m. PDT, the poor EBS nodes started realizing the harsh reality of life, i.e., that their other halves were nowhere to be found. In a desperate plea for help, they started sending "WTF?" requests (relax, in IT terminology WTF means "Where is The File?") to the Dispatcher (aka the EBS control plane). So far, the problems had only affected one Availability Zone (remember, a Zone is a partitioned piece of a large datacenter). But the overworked Dispatcher started losing its cool and was about to black out. Realizing the Dispatcher's meltdown would affect the whole Region, causing an outage with even greater reach, Amazon at 8:20 a.m. PDT made the tough, but highly needed, decision to disconnect the affected Availability Zone from the Dispatcher. That's when reddit.com and others in that Availability Zone lost their websites for the next three days.

Unfortunately, although Amazon sacrificed the crippled Availability Zone to allow other Zones to operate, customers in other Zones started having problems too. Amazon described them as “elevated error rates,” which brings us to our second “Huh?” moment.

By 12:04 p.m. PDT, Amazon got the situation under control and completely localized the problem to one Availability Zone. The team started working on recovery, which proved to be quite challenging in a real production environment. In essence, these IT heroes were changing tires on a moving vehicle, replacing bulbs in a live lamp, filling a cavity in a chewing mouth… and my one word for them here is "Respect!"

At this point, as the IT folks started painstakingly finding mates for lonely EBS nodes, two problems arose. First, even when a node was reunited with its second half (or a new one was provided), it would not start operating without sending a happy message to the Dispatcher. Unfortunately, the Dispatcher couldn't answer the message because it had been taken offline to avoid a Region-wide meltdown. Second, the team ran out of physical storage capacity for new "second halves." Our third "Huh?"

Finally, at 6:15 p.m. PDT on April 23, all but 2.2 percent of the most unfortunate nodes were up and running. The system was fully operational on April 24 at 3:00 p.m. PDT, but 0.07 percent of all nodes were lost irreparably. Interestingly, this loss led to the loss of 0.4 percent of all database instances in the Zone. Why the difference? Because when a relational database resides on more than one node, loss of one of them may lead to loss of data integrity for the whole instance. Think of it this way: if you lose every second page in a book, you can’t read the entire thing, right?
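The gap between those two percentages is just probability at work. Here is a quick check in Python (the six-node stripe width is our assumption, chosen to make the arithmetic concrete; Amazon didn't disclose the actual figure):

```python
# Why losing 0.07% of nodes can destroy ~0.4% of database instances:
# a database striped across several nodes is lost if ANY of them is lost.
p_node_lost = 0.0007    # 0.07 percent of nodes irreparably lost
stripe_width = 6        # assumed nodes per database instance (illustrative)

p_db_lost = 1 - (1 - p_node_lost) ** stripe_width
print(f"P(database instance loses at least one node) = {p_db_lost:.2%}")  # ~0.42%
```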

Now let’s take a quick look at our three “Huh?s”:

  1. Relying on SLAs alone (which in Amazon's case promise 99.95 percent uptime) can be a risky strategy for critical applications. As with any technological component of a global services solution, you must understand the supplier's policies.
  2. Cloud platforms are complex, and outages will happen. One thing we can all be certain of in IT: if it cannot happen, it will happen anyway.
  3. When you give away your architectural control, you, well, lose your architectural control. Unfortunately, Amazon did not have the needed storage capacity at the time it was required. But similar to my first point above, all technological components of a global services solution have upsides and downsides, and all buyer organizations must make their own determinations on what they can live with, and what they can’t.

An interesting, final point: after being down for almost three days, reddit.com stated, “We will continue to use Amazon’s other services as we have been. They have some work to do on the EBS product, and they are aware of that and working on it.” Now, folks, what do you think?

Where Are Enterprises in the Public Cloud? | Gaining Altitude in the Cloud

Amazon Web Services (AWS) recently announced several additional services, including dedicated instances of Elastic Compute Cloud (EC2) in three flavors: on-demand, one-year reserved, and three-year reserved. This should come as no surprise to those who have been following Amazon, as the company has been continually launching services such as CloudWatch, Virtual Private Cloud (VPC), and AWS Premium Support in an attempt to position itself as an enterprise cloud provider.
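As a side note on those pricing flavors, the on-demand versus reserved decision comes down almost entirely to expected utilization. A rough sketch (all prices are hypothetical, not AWS's published rates):

```python
# Break-even between on-demand and one-year reserved instances.
# All prices are hypothetical; check AWS's published rates for real numbers.
ON_DEMAND_PER_HOUR = 0.10
RESERVED_UPFRONT = 350.00    # one-time fee for a one-year reservation
RESERVED_PER_HOUR = 0.04
HOURS_PER_YEAR = 8_760

def yearly_costs(utilization):
    """Return (on-demand cost, reserved cost) for a utilization of 0.0-1.0."""
    hours = HOURS_PER_YEAR * utilization
    return ON_DEMAND_PER_HOUR * hours, RESERVED_UPFRONT + RESERVED_PER_HOUR * hours

for u in (0.25, 0.50, 0.75, 1.00):
    on_demand, reserved = yearly_costs(u)
    winner = "reserved" if reserved < on_demand else "on demand"
    print(f"{u:.0%} utilization: ${on_demand:,.0f} on demand vs ${reserved:,.0f} reserved -> {winner}")
```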

But will these latest offerings capture the attention of the enterprise? To date, much of the workload transitioned to the public cloud has been project-based (e.g., test and development) or focused on peak-demand computing. Is there a magic bullet that will motivate enterprises to move their production environments to the public cloud?

In comparison with “traditional” outsourcing, public cloud offerings – whether from Amazon or any other provider – present a variety of real or perceived hurdles that must be overcome before we see enterprises adopt them for production-focused work:

Security: the ability to ensure, to the client’s satisfaction, data protection, data transfer security, and access control in a multi-tenant environment. While the cloud offers many advantages, and offerings continue to evolve to create a more secure computing environment, the perception that multi-tenancy equates to lack of security remains.

Performance and Availability: typical performance SLAs for the computing environment and all related memory and storage in traditional outsourcing relationships are 99.5-99.9 percent availability, and high-availability environments require 99.99 percent or higher. These availability ratings are measured monthly, with contractually agreed-upon rebates or discounts kicking in if the availability SLA isn't met. While some public cloud providers will meet the lower end of these SLAs, some use 12 months of previous service as the measurement timeline, while others define an SLA event as any outage in excess of 30 minutes, and still others use different measurements. This disparity leads to confusion and discomfort among most enterprises, and the perception that the cloud is not as robust as outsourcing services.
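To see why the measurement window matters as much as the number of nines, here's a quick sketch of how much downtime each SLA level actually permits:

```python
# Downtime allowed by common availability SLAs. A 12-month measurement
# window lets one long outage be averaged away; a monthly window does not.
MINUTES_PER_MONTH = 30 * 24 * 60    # ~43,200
MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600

for sla in (0.995, 0.999, 0.9995, 0.9999):
    monthly_allowance = MINUTES_PER_MONTH * (1 - sla)
    yearly_allowance = MINUTES_PER_YEAR * (1 - sla)
    print(f"{sla:.2%} uptime: {monthly_allowance:6.1f} min/month "
          f"or {yearly_allowance:7.1f} min/year")
```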

Compliance and Certifications: in industries that utilize highly personal and sensitive end-user customer information – such as social security numbers, bank account details, or credit card information – or those that require compliance in areas including HIPAA or FISMA, providers' certifications are vital. As most public cloud providers have only basic certification and compliance ratings, enterprises must tread very carefully and be extremely selective.

Support: a cloud model with little or no support only goes so far. Enterprises must be able to get assistance when they need it. Some public cloud providers – such as Amazon and Terremark – do offer 24x7 support for an additional fee, but others still need to figure support into their offering equations.

Addressing and overcoming these hurdles will encourage enterprises to review their workloads and evaluate what makes sense to move to the cloud, and what should remain in private (or even legacy) environments.

However, enterprises' workloads are also price sensitive, and we believe that, at least today, the public cloud is not an economical alternative for many production environments. Thus, enterprise movement to the cloud could evolve in one of several ways. Will it follow a hybrid model, in which the bulk of the production environment is placed in a private cloud and peak demand bursts to the public cloud? Or will increased competition, improved asset utilization, and workload management continue to drive down pricing, as has happened with Amazon in each of the past two years? If so, will enterprises bypass the hybrid path and move straight to the public cloud as the economics prove attractive?

The ability to meet client demands, comfort with the cloud, and the economics all play a role in how and when enterprises migrate to the cloud. The market is again at an inflection point, and it promises to be an exciting time.
