
Economic Forecast Calls for More Clouds | Gaining Altitude in the Cloud

Have you ever stopped to think why cloud computing is at the center of any IT-related discussion? In our conversations with clients, from the boardroom to the line manager, cloud is sure to enter into the discussion. Today, many of those conversations are around understanding, and to a lesser degree, implementation. But once the discussion crosses the threshold of understanding, the topic immediately goes to, “How can I get into the cloud?”

Everest Group recently held a webinar on the economics of cloud computing. It had two objectives: 1) Help clarify just how disruptive, in a good way, cloud computing is and can be; and 2) Demonstrate the economic benefits that exist in the cloud economy, and show that organizations are already striving for this competitive advantage today.

The Hole in the Water That You Throw Money Into

One of the key economic drivers that hampers today’s data center environment is the relatively low utilization rate across its resources. Think about it like this: You’ve probably heard the old adage that owning a boat is like having a hole in the water that you throw money into. That is because the majority of boats are seldom used. (Trust me, I know; I used to own one.) The per-use cost of a $25,000 (and quickly depreciating) boat that you actually use three or four times a year is quite high, and the reality is you could have rented a boat for a fraction of the cost. The same thing is happening in your data center. If your utilization is 20 percent, or even 30 percent, you have essentially wasted 70-80 percent of your spend. That is an expensive data center.
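The boat math above can be sketched in a few lines. The dollar figures here are hypothetical, purely to illustrate how low utilization inflates the effective cost of owned capacity versus renting on demand:

```python
# Illustrative sketch (hypothetical numbers): effective cost per utilized
# server-hour when capacity sits mostly idle, vs. a pay-per-use rental rate.
def effective_hourly_cost(monthly_cost, utilization):
    """Cost per hour of *useful* work for an owned, underutilized server."""
    hours_per_month = 730  # roughly 24 * 365 / 12
    return monthly_cost / (hours_per_month * utilization)

owned = effective_hourly_cost(monthly_cost=500, utilization=0.20)
rented = 1.00  # assumed on-demand rate per hour, billed only when used

print(f"Owned server, 20% utilized: ${owned:.2f} per useful hour")
print(f"Rented on demand:           ${rented:.2f} per useful hour")
```

At 20 percent utilization, the owned server costs more than three times the assumed rental rate per hour of actual work done.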

[Figure: Workload utilizations]

Cloud computing is like that little boat rental shop tucked away in a nice cove on your favorite lake. What if you could get rid of excess capacity, better manage resource peaks and valleys, and rent public capacity when you need it, and not pay for it when you don’t?

[Figure: What if we leverage public cloud flexibility?]

The Economics

As you can see in the graphic below, the economics related to cloud are dramatic, and the key lies in leveraging the public cloud to pay only for what you use, eliminating the issue of excess capacity.

[Figure: Public cloud options unlock extraordinary enterprise economics]

There are a variety of examples in which this is done today, with the above economics reaped. For instance, Ticketmaster leverages the public cloud for large events, loading an environment to the cloud that is sized specifically for each event. A given event may last only several hours or days, and once it is complete, Ticketmaster takes down the environment and loads the data into its dedicated systems.

There are also enterprises and suppliers working to enable peak bursting more seamlessly. For example, eBay recently showed how it is working with Rackspace and Microsoft Azure to enable hybrid cloud bursting, allowing it to reduce its steady-state environment (think hole in the water) from 1,900 to 800 servers and saving it $1.1 million per month.

[Figure: eBay hybrid economics example]
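The reported eBay figures can be sanity-checked with simple arithmetic. The per-server cost below is implied by the numbers cited above, not a figure eBay has published:

```python
# Back-of-the-envelope sketch of the reported eBay hybrid-bursting math.
steady_before = 1900        # dedicated servers before hybrid bursting
steady_after = 800          # dedicated servers after; peaks go to public cloud
monthly_savings = 1_100_000  # USD per month, as reported

servers_retired = steady_before - steady_after
implied_cost_per_server = monthly_savings / servers_retired  # fully loaded

print(f"Servers retired: {servers_retired:,}")
print(f"Implied fully loaded cost per server: ${implied_cost_per_server:,.0f}/month")
```

That works out to 1,100 retired servers at an implied fully loaded cost of about $1,000 per server per month, which is a plausible all-in figure for an enterprise data center.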

The Steps to Getting Started

Dedicate yourself to getting rid of your boat (or should I say boat anchor?). Begin a portfolio assessment. Understand what you have, and what is driving utilization. Consolidate applications, offload non-critical usage to the valleys, and look for ways to leverage the public/private cloud. When I unloaded my boat, I freed up capital for the more important things in life, without sacrificing my enjoyment. Doing so in your data center will allow you to take on strategic initiatives that will make you even more competitive.

Einstein’s Definition of Insanity and Its Applicability to the Healthcare Solutions Industry | Sherpas in Blue Shirts

Starting with the premise that all providers and all clients in the healthcare industry enter into large, multi-year arrangements with honesty, fairness, and a desire for a successful engagement…why do only a small handful come in on time, on budget, and with measurable outcomes? I think Albert Einstein’s famous statement, “The definition of insanity is doing the same thing over and over and expecting different results,” is, unfortunately, applicable here.

In financial and clinical implementations, ITO/BPO engagements, ADM solutions, clinical transformations, and clinical trial deployments over the past several decades, the providers and clients have been doing essentially the same thing, again and again, and in most cases the engagements have ranged from lackluster to volatile. Yet with all the increasing governmental requirements, major advances in personal and mobile technologies, and demands from patients, customers, physicians and clinicians – and let’s not forget escalating competition – we need to stop the insanity to ensure the problems of the past aren’t continually replicated in the next generation of healthcare solutions.

For healthcare industry improvement or transformation initiatives to succeed, we need methodologies. We need qualified staff to help guide and manage the projects over a multi-year period. And we need to deal with unplanned turnover, inflexible contracts that don’t allow for any change in the client’s strategic direction, ever-shrinking budgets, and the client’s desire to finally have measurable outcomes. So, assuming both sides want to deal in an environment of honesty and candor, and both parties have strong resources, why do we continue to fail? One single word – misalignment.

Misalignment occurs between clients and providers due to differences in objectives, priorities, and performance. For example, if the provider thinks the major objectives are cost and capital expense avoidance, while the client thinks improved service quality, skills and innovation, and time to delivery are the critical success factors, we have misalignment. And the result is relationship tension that can ultimately derail the engagement. Yes, in a perfect world each party knows the other’s objectives. But we don’t live in a perfect world, and so this doesn’t happen often. Combine that issue with the changing landscape the client deals with over a multi-year engagement – e.g., acquisitions, new product offerings, strategic changes in direction – and the relationship can explode quickly. And even if the two parties agree on the business objectives, that does not mean they assign them the same priority. For example, I know of one instance in which the client wanted significant focus on innovation and meeting strategic objectives, while the supplier felt it was doing its job by using low-rate resources. Without understanding these types of gaps, how can we fix the misalignment? We can’t!

We’ve heard similar discussions before, and each fix had a methodology or clever name. But few were implemented, and even fewer were successful. The result is continuation of Einstein’s definition of insanity. As we deal with next generation solutions for the healthcare industry, let’s get sane and change the results by viewing large, complex or strategic engagements from a holistic point of view.

The Facts, not the Fluff and Puff: What Really Happened with Amazon’s Cloud Outage | Gaining Altitude in the Cloud

In the more than 1,000 articles written about Amazon’s April 21 cloud outage, I found myriad “lessons,” “reasons,” “perspectives,” and a “black eye,” a “wake up call” and a “dawn of cloud computing.”

What I struggled to find was a simple, but factual, explanation of what actually happened at Amazon. Thankfully, Amazon did eventually post a lengthy summary of the outage, which is an amazing read for IT geeks, to which the author of this blog proudly belongs, but may induce temporary insanity in a more normal individual. So this blog is my attempt to describe the event in simple, non-technical terms. But if I slip once or twice into geek-speak, please show some mercy.

The Basics of Amazon’s Cloud Computing Solution

When you use Amazon to get computing resources, you’d usually start with a virtual server (aka EC2 instance). You’d also want to get some disk space (aka storage), which the company provides through Amazon Elastic Block Store (EBS). Although EBS gives you what looks like storage, in reality it’s a mirrored cluster of two nodes … oops, sorry … two different physical pieces of storage containing a copy of the same data. EBS won’t let you work using only one piece of storage (aka one node) because if it goes down you’d lose all your data. Thus in Amazon’s architecture, a lonely node without its second half gets depressed and dedicates all its effort to finding its mate. (Not unlike some humans, I might add.)

Amazon partitions its storage hardware into geographic Regions (think of them as large datacenters) and, within Regions, into Availability Zones (think of them as small portions of a datacenter, separated from each other). This is done to avoid losing the whole Region if one Availability Zone is in trouble. All this complexity is managed by a set of software services collectively called the “EBS control plane” (think of it as a Dispatcher).

There is also a network that connects all of this, and yes, you guessed right, it is also mirrored. In other words, there are two networks… primary and secondary.

Now, the Play-by-Play

At 12:47 a.m. PDT on April 21, Amazon’s team started a significant, but largely non-eventful, upgrade to its primary network, and in doing so redirected all the traffic to another segment of the network. For some reason (not explained by Amazon), all user traffic was shifted to the secondary network. Now you may reply, “So what…that’s what it’s there for!” Well, not exactly. According to Amazon, the secondary network is a “lower capacity network, used as a back-up,” and hence it was instantly overloaded. This is an appropriate moment for you to ask “Huh?” but I’ll get back to this later.

From this point on, the EBS nodes (aka pieces of storage) lost connections to their mirrored counterparts and assumed their other halves were gone. Now remember that EBS will not let the nodes operate without their mates, so they started frantically looking for storage space to create a new copy, which Amazon eloquently dubbed the “re-mirroring storm.” This is likely when reddit.com, the New York Times, and a bunch of others started noticing that something was wrong.
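As a rough illustration (a toy model, not Amazon’s actual implementation), here is why a partition that orphans many nodes at once becomes a storm: every orphaned node competes for the same limited spare capacity, and the losers keep retrying, adding yet more load:

```python
# Toy model of a "re-mirroring storm": nodes that lose sight of their
# mirror all race for the same finite pool of spare storage.
def remirror_storm(active_pairs, spare_capacity, partitioned_fraction):
    """Return (nodes that found a new mirror, nodes stuck retrying)."""
    orphaned = int(active_pairs * partitioned_fraction)  # lost their mate
    remirrored = min(orphaned, spare_capacity)           # first come, first served
    stuck = orphaned - remirrored                        # keep retrying -> extra load
    return remirrored, stuck

remirrored, stuck = remirror_storm(active_pairs=10_000,
                                   spare_capacity=500,
                                   partitioned_fraction=0.8)
print(f"{remirrored} nodes re-mirrored, {stuck} stuck searching")
```

With hypothetical numbers like these, a few hundred spares are exhausted instantly while thousands of nodes keep hammering the control plane, which is roughly the dynamic Amazon described.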

By 5:30 a.m. PDT, the poor EBS nodes started realizing the harsh reality of life, i.e., that their other halves were nowhere to be found. In a desperate plea for help, they started sending “WTF?” requests (relax, in IT terminology WTF means “Where is The File?”) to the Dispatcher (aka the EBS control plane). So far, the problems had only affected one Availability Zone (remember, a Zone is a partitioned piece of a large datacenter). But the overworked Dispatcher started losing its cool and was about to black out. Realizing the Dispatcher’s meltdown would affect the whole Region, causing an outage with even greater reach, Amazon at 8:20 a.m. PDT made the tough, but highly needed, decision to disconnect the affected Availability Zone from the Dispatcher. That’s when reddit.com and others in that Availability Zone lost their websites for the next three days.

Unfortunately, although Amazon sacrificed the crippled Availability Zone to allow other Zones to operate, customers in other Zones started having problems too. Amazon described them as “elevated error rates,” which brings us to our second “Huh?” moment.

By 12:04 p.m. PDT, Amazon got the situation under control and completely localized the problem to one Availability Zone. The team started working on recovery, which proved to be quite challenging in a live production environment. In essence, these IT heroes were changing tires on a moving vehicle, replacing bulbs in a live lamp, filling a cavity in a chewing mouth… and my one word for them here is “Respect!”

At this point, as the IT folks started painstakingly finding mates for lonely EBS nodes, two problems arose. First, even when a node was reunited with its second half (or a new one was provided), it would not start operating without sending a happy message to the Dispatcher. Unfortunately, the Dispatcher couldn’t answer the message because it had been taken offline to avoid a Region-wide meltdown. Second, the team ran out of physical storage capacity for new “second halves.”  Our third “Huh?”

Finally, at 6:15 p.m. PDT on April 23, all but 2.2 percent of the most unfortunate nodes were up and running. The system was fully operational on April 24 at 3:00 p.m. PDT, but 0.07 percent of all nodes were lost irreparably. Interestingly, this loss led to the loss of 0.4 percent of all database instances in the Zone. Why the difference? Because when a relational database resides on more than one node, loss of one of them may lead to loss of data integrity for the whole instance. Think of it this way: if you lose every second page in a book, you can’t read the entire thing, right?
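The gap between the two percentages is consistent with simple probability: if a database instance is lost whenever any one of its nodes is lost, then P(instance lost) = 1 - (1 - p)^k for an instance spread across k nodes. Solving for k from the figures above is my own back-of-the-envelope estimate, not Amazon-published data:

```python
import math

# If losing any one of an instance's k nodes loses the whole instance,
# then p_instance = 1 - (1 - p_node)^k. Solve for the implied k.
p_node = 0.0007      # 0.07% of nodes lost
p_instance = 0.004   # 0.4% of database instances lost

k = math.log(1 - p_instance) / math.log(1 - p_node)
print(f"Implied nodes per database instance: {k:.1f}")
```

The implied figure is roughly five to six nodes per instance, which is why a tiny node-loss rate amplifies into a noticeably larger instance-loss rate.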

Now let’s take a quick look at our three “Huh?s”:

  1. Relying on SLAs alone (which in Amazon’s case is 99.95 percent uptime) can be a risky strategy for critical applications. But as with any technological component of a global services solution, you must understand the supplier’s policies.
  2. Cloud platforms are complex, and outages will happen. One thing we can all be certain of in IT: if it cannot happen, it will happen anyway.
  3. When you give away your architectural control, you, well, lose your architectural control. Unfortunately, Amazon did not have the needed storage capacity at the time it was required. But similar to my first point above, all technological components of a global services solution have upsides and downsides, and all buyer organizations must make their own determinations on what they can live with, and what they can’t.
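On the first point, it is worth working out what a 99.95 percent uptime SLA actually budgets for, set against the approximate length of this outage:

```python
# Simple arithmetic: annual downtime allowance under a 99.95% uptime SLA,
# versus the ~3-day outage described above.
hours_per_year = 365 * 24
sla_uptime = 0.9995

allowed_downtime_hours = hours_per_year * (1 - sla_uptime)
outage_hours = 3 * 24  # reddit.com and others were down roughly three days

print(f"Annual downtime allowed by the SLA: {allowed_downtime_hours:.1f} hours")
print(f"This outage alone:                  {outage_hours} hours")
```

A 99.95 percent SLA allows under five hours of downtime per year; a three-day outage blows through more than fifteen years of that allowance, which is exactly why SLAs alone are not a risk strategy for critical applications.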

An interesting, final point: after being down for almost three days, reddit.com stated, “We will continue to use Amazon’s other services as we have been. They have some work to do on the EBS product, and they are aware of that and working on it.” Now, folks, what do you think?

What Will Complete Integration of Electronic Medical Records Require from Service Providers? | Sherpas in Blue Shirts

Many of us remember funny stories about unknowing users trying to actually speak to ATMs in the early days (after all, the ATMs verbally asked the users for their PIN numbers). And most of us wonder – or should – why many of today’s drive-up ATMs have Braille lettering on them! But the fact is, today we can all very clearly explain what they are, and despite irritation at having to pay service charges, ATMs around the world provide us with cash resources and access to our financial data, and an increasing number even enable us to pay household bills or purchase train or movie tickets… the complete package.

Unfortunately, the same can’t be said for Electronic Medical Records (EMR). Yes, current EMR implementations give hospitals storage and access for admissions and for creating patients’ medical histories. But a complete, on-demand picture of an individual’s health records? Not yet. Capabilities within a single Integrated Delivery Network (IDN), much less on a global basis? Not yet. Unfortunately, the complicated nature of healthcare organizations – both providers and payers – and their focus on decisions by diverse committees creates a political environment in which it is a struggle to agree on the definitions and rules necessary for complete integration of all information, for all appropriate users (i.e., physicians, clinicians, emergency room workers, pharmacists, and patients), wherever they are, on whatever device they choose to use.

Many healthcare organizations have completed the first and most costly step, which is implementing the software able to support basic data entry and tracking to perform clinical workflow, and financial, billing, and decision-making functions. But to achieve complete integration – what constitutes the next generation of services – EMR solutions must include:

  • Optimization of the implemented product
  • Efficient data warehousing
  • Application of business intelligence tools to research operational efficiencies, improve quality and safety, and develop new techniques and protocols
  • Integration of medical device data directly into the EMR
  • Interoperability that allows for accessing data wherever it exists, and creating on demand views (e.g., EHR, PHR, P4P, EPM)
  • Compliance for regulatory and safety standards
  • Ongoing support and maintenance for clinical and financial applications

So what must companies aspiring to be high-value integrators do to become next generation EMR service providers? Understand the client’s needs. Provide services that can leverage leading-edge solution sets from internal and external sources. Shape a solution that leverages horizontal services and Centers of Excellence (CoEs), and creatively team with leading-edge organizations that provide domain-specific products as part of the overall solution set. Doing so will allow world-class healthcare organizations to depend upon world-class Tier One integrators to supply all their technology needs, and it is what a world-class services provider must do to compete effectively now and into the next decade.

The Smart Metering Wave and Its Impact on Utilities’ Meter-to-Cash Process | Sherpas in Blue Shirts

Meter-to-Cash (M2C) is a significant process for utility companies as it not only represents their revenue cycle but also touches the end customer directly. Essentially M2C is the utility industry’s version of the generic Order-to-Cash (O2C) process. These are times of change for the utility industry due to a variety of reasons, the advent of disruptive technologies such as smart metering being one of them.

While the future of smart metering is still being debated, many facts suggest that it’s no longer an “if” but a “when.” For example, the United States in November 2009 directed US$3.4 billion in federal economic stimulus funding to smart grid development. The European Union in September 2009 enacted a “Third Energy Package,” which aims to see every European electricity meter smart by 2022. A recent study by ABI Research projects the global deployment of smart meters to grow at a CAGR of nearly 25 percent from 2009-2014. A combination of factors including regulatory push, intense competition (particularly in deregulated markets), and the business benefits that smart meters offer to utilities’ operations – in terms of tightened revenue cycles and increased customer satisfaction – are driving the adoption of smart meters.
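For a sense of scale, a 25 percent CAGR over the 2009-2014 window cited above compounds to roughly a threefold increase in annual deployments:

```python
# Compound annual growth: deployments in year n = base * (1 + CAGR)^n.
cagr = 0.25
years = 5  # 2009 -> 2014

growth_multiple = (1 + cagr) ** years
print(f"Deployment growth multiple over {years} years: {growth_multiple:.2f}x")
```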

However, deploying a smart metering infrastructure is no small task for a utility, as it brings many fundamental changes to M2C operations. Managing the cutover from traditional to smart meters, dealing with new network technologies, the diminished role of field services, and the upgrades required to meter data management systems (MDMS) are just some of the key challenges that utilities undergoing smart metering implementations need to overcome. But the even bigger challenge arrives after implementation: how does a utility manage the massive explosion of meter data in the “smart” world? As opposed to meter reads every month or two, we are now talking about reads once every hour from thousands of smart meters flowing back to the utility’s MDMS. Even more importantly, how should a utility leverage that data for meaningful business intelligence purposes?
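The scale of that data explosion is easy to sketch; the fleet size below is hypothetical:

```python
# Hourly smart reads vs. one traditional read per month, for an
# illustrative fleet of 100,000 meters.
meters = 100_000
reads_per_month_traditional = 1
reads_per_month_smart = 24 * 30  # hourly reads over a ~30-day month

traditional = meters * reads_per_month_traditional
smart = meters * reads_per_month_smart

print(f"Traditional reads/month: {traditional:,}")
print(f"Smart reads/month:       {smart:,}")
print(f"Increase factor:         {smart // traditional}x")
```

Going from one read per month to one per hour is a 720-fold increase in raw readings per meter, before any event or diagnostic data the meters also emit.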

Many utilities have found the answer in outsourcing some of their M2C functions to external service providers that have jumped on the smart metering wave with services ranging from pre-implementation advisory to post-implementation services such as smart analytics offerings. For example, Capgemini’s smart energy services offering focuses on the requirements of utilities undergoing smart metering implementation. It recently launched a new smart metering management platform – labeled the Smart Energy Services Platform – for utilities to support all the end-to-end business processes necessary for the deployment and ongoing operation of a smart meter estate.

If you’re an M2C BPO provider that hasn’t yet considered including smart metering services in your portfolio, are you still debating the future of smart metering?

Learn more about M2C BPO at Everest Group’s May 10 webinar.

Will the Sun Come out Tomorrow? | Gaining Altitude in the Cloud

Cloud computing promises increased flexibility, faster time to market, and drastic reduction of costs by better utilizing assets and improving operational efficiency. The cloud further promises to create an environment that is fully redundant, readily available, and very secure. Who isn’t talking about and wanting the promises of the cloud?

Today, however, Amazon’s cloud suffered significant degradation in its Virginia data center following an almost flawless year-plus record. Yes, the rain started pouring out of Amazon’s cloud at about 1:40 a.m. PT, when it began experiencing latency and elevated error rates in its east coast U.S. region.

The first status message about the problem stated:

1:41 AM PT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.

Seven hours later, as Amazon continued to feverishly work on correcting the problem, its update said:

8:54 AM PDT We’d like to provide additional color on what we’re working on right now (please note that we always know more and understand issues better after we fully recover and dive deep into the post mortem). A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances. We are working as quickly as possible to add capacity to that one Availability Zone to speed up the re-mirroring, and working to restore the control plane issue. We’re starting to see progress on these efforts, but are not there yet. We will continue to provide updates when we have them.

No! Say it’s not so! A cloud outage? The reality is that cloud computing remains the greatest disruptive force we’ve seen in the business world since the proliferation of the Internet. What cloud computing will do to legacy environments is similar to what GPS systems did to mapmakers. And when is the last time you picked up a map?

In the future, businesses won’t even consider hosting their own IT environments. It will be an automatic decision to go to the cloud.

So why is Amazon’s outage news?

Only because it affected the 800-pound gorilla. Amazon currently has about 50 percent of the cloud market, and its competitors can only dream of this market share. When fellow cloud provider Coghead failed in 2009, did anyone know?  We certainly didn’t.  But when Amazon hiccups, everybody knows it.

Yes, the outage did affect a number of businesses. But businesses experience outages, disruptions, and degradation of service every day, regardless of whether the IT environment is legacy or next generation, outsourced or insourced. In response, these businesses scramble, putting panicked recovery plans in place and having their IT folks work around the clock to get things fixed…but rarely do these service blips make the news. So with the spotlight squarely shining on it because of its position in the marketplace, Amazon is scrambling, panicking, and working to get the problem fixed. And it will. Probably long before its clients would or could in their own environments.

Yes, it rained today, but really, it was just a little sprinkle. We believe the future for the cloud is so bright, we all need to be wearing shades.

“The Gambler” and Developing Win-Win Contractual Relationships in the Healthcare Industry | Sherpas in Blue Shirts

As a country music fan, several lines in Kenny Rogers’ hit song “The Gambler” tend to make me think about outsourcing relationships, especially in the healthcare industry, as that’s where I’ve spent the bulk of my career.

“If you’re gonna play the game, boy, ya gotta learn to play it right”

In the 1995 to 2004 timeframe there was a proliferation of outsourcing among healthcare provider and health plan companies. The outsourcing advisory community began to cater to large and complex Integrated Delivery Networks (IDNs) and Academic Medical Centers to ensure they received the same type of world-class outsourcing services that Fortune-rated companies in other industries had already been receiving. As a result, a large number of ITO, APO, and BPO contracts were inked, and the healthcare provider and health plan organizations came to depend on third-party service provision to more efficiently manage their middle-office business services and IT needs.

Unfortunately, the healthcare firms got caught in the same conundrum as do all organizations that enter into long-term service delivery contracts. Once SLAs and joint governance models are agreed upon, service providers have little incentive to do anything but satisfy the contractual commitment in the most cost-effective manner. They also rarely get any clear understanding of what more the client may want. On the flip side, as technologies, markets, competitive drivers and growth objectives dynamically evolve over time, service recipients require, and expect, additional value from their providers to meet their continually changing business needs. But they rarely articulate what more they want from their service providers.

This disconnect ultimately leads to a mutual loss, as the original metrics of the contract are quickly outdated, value cannot be measured or realized, the incentives for both parties are misaligned, and a tension-riddled relationship develops.

“You have to know when to walk away and know when to run” (or maybe not)

A case in point: In 2005, a major healthcare provider contracted with a global provider of healthcare technology infrastructure and application sourcing services to support critical Electronic Medical Record, Operating Room, Scheduling and Billing services, all of which are essential for providing patient care and revenue functions. SLAs and a governance structure were negotiated, resulting in a complex, 10-year relationship with delivery defined in the traditional structure. However, as the contract didn’t allow for dynamic and ever-changing needs dictated by the marketplace, enhanced technologies and changes in regulatory compliance requirements, the business value proposition was lost and the relationship was ultimately dissolved. This could have been avoided simply by creating a flexible contracting mechanism that the service provider and service recipient could continually update to meet necessary changes. Yet, when a relationship gets to this point, many buyers believe they must rebid the contract, change providers or bring the services back in-house.

“You got to know when to hold ’em”

But there is another solution that can result in a win-win situation for both parties. We think of it as service effectiveness, which is a big step up the value chain from traditional service efficiency models. Rather than focusing on things such as unit prices, process output, service levels and delivery risk, service effectiveness addresses those things that are the real priorities for service recipients – business value, process impact and receiving what they truly require, not just what is specified in the contract.

To come to mutual understanding on what service effectiveness means in a given outsourcing engagement – and change the way third-party services are perceived and accepted in the buyer organization – all delivery and recipient stakeholders should provide assessable input on three different dimensions: 

  • Objectives, which include cost savings, improved service quality, focus on core/strategic issues, currency of technology, capital expenditure avoidance, expertise/skills/innovation, and time to delivery.
  • Priorities, which include building trust and confidence, service quality, ease of communication, focus on business objectives, end user satisfaction, win-win collaboration orientation, and strategic involvement.
  • Performance, which includes end user satisfaction, service quality, price competitiveness, relationship effectiveness, and relationship value.
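One lightweight way to make that stakeholder input assessable is to score each dimension on both sides and flag the gaps; the scores, scale, and threshold below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical sketch: provider and recipient each rate the three
# dimensions on a 1-10 scale; large gaps flag misalignment to address.
dimensions = ["objectives", "priorities", "performance"]

provider = {"objectives": 8, "priorities": 6, "performance": 7}
recipient = {"objectives": 5, "priorities": 8, "performance": 4}

GAP_THRESHOLD = 3  # assumed cutoff for "needs attention"

for d in dimensions:
    gap = abs(provider[d] - recipient[d])
    flag = " <-- misaligned" if gap >= GAP_THRESHOLD else ""
    print(f"{d:12s} provider={provider[d]} recipient={recipient[d]} gap={gap}{flag}")
```

The point of the exercise is not the scores themselves but surfacing the gaps early, so both parties can discuss them before they harden into relationship tension.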

By coming to an agreement on what service effectiveness means to both the provider and the recipient, there is an immediate return on investment to the bottom line, created by keeping current models and relationships in place. This, in turn, avoids organizational upheaval and transition costs, and creates a mutually beneficial business arrangement via an efficient set of services that provides flexibility, measurable value, and terms that ensure success.

How Will the IT/BPO Industry Leaderboard Change? | Sherpas in Blue Shirts

This past weekend, many people were glued to their televisions watching the 2011 Masters Golf Tournament at Augusta National. As the days rolled by, the leaderboard changed in some surprising ways – the young Rory McIlroy slid a long way from Number 1 on Day 1; Tiger Woods finally showed his old spark and stayed steadily within the top 5 throughout the tournament; and Charl Schwartzel jumped into the front-runner spot to take the Green Jacket.

While we now know the Masters winner, there is significant speculation about changes in the IT services leaderboard, both today and going forward. The market is rife with questions on where Wipro and Cognizant will end up this season. The discussion of C-level changes at Infosys led a leading Indian newspaper to speculate on the issues it may be facing, with TCS speeding ahead and Cognizant, seemingly on steroids, catching up quickly. The next day, analysts said TCS would continue to outpace the other TWITCH majors as the quarterly results season starts.

We will know the answers to these questions in the next few weeks, after all companies report their numbers. But the more important long-term question is, what else will change in that leaderboard? Will we see more M&As, new entrants, or exits? And fundamentally, what will the future structure of the IT services industry be, and who will the winners be?

In a recent meeting, a CEO of an IT services company made an interesting point about there being steps at the US$500 million, $1 billion, $5 billion, and $10 billion marks, and that it is progressively challenging to get to the next level. It was clear he was thinking that some, including those in the $2+ billion scale, will struggle to reach the next level, and some will stabilize in their current or adjacent level.

The TWITCH discussion is interesting, but then there are the mid-tier IT players. We are just past the first quarter of 2011, and already three (iGate, Patni, and Headstrong) no longer exist, at least not in their original form. From all we hear or understand, several more may go before the end of 2011.

Then there are continuous speculations about pure play BPO players being shopped about. The rumor that Cognizant will take out Genpact has been around for ages. EXL is up for some action, and the market is abuzz with other speculations. As one of my colleagues recently blogged – will the Indian pure play BPO companies survive in the same shape and form past 2011 or 2012?

Net, net, here is the big picture. Some large Tier 1 players are struggling, mid-sized IT is not necessarily the best place to be, and pure play BPO companies are a vanishing tribe.

All this raises more questions: What is the future structure of the global services industry? Will Accenture, IBM, Dell, the Japanese majors, TCS and probably a few others become the super majors by 2015 or 2020, and will the rest need to find their own places under the sun? What other categories and groups of service providers will exist, and what will their characteristics be, for example, regional specialists, vertical specialists, etc.?

Irrespective of how the industry evolves, consolidation will continue, and the M&A juggernaut will roll. This business generates cash, and doesn’t require a lot to sustain it…so companies will invest in buying capabilities, assets, businesses, and people in attempts to win top spots on the leaderboard.

We certainly are headed for some interesting months ahead. Is anyone betting on who the winners will be at the end of 2011?
