Category: Cloud Infrastructure

Sustainable Business Needs Sustainable Technology: Can Cloud Modernization Help? | Blog

Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power-efficient, optimizing power consumption remains a key priority for enterprises looking to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:

  • Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive backup to counter failure events, essentially multiplying electricity consumption.
  • Build: Consider that for each large ERP production system, there are typically 6 to 10 non-production systems across development, testing, and staging. Developers, QA, security, and pre-production teams end up building their own environments. Yet whenever a system is built, the entire infrastructure has to be configured even though the team needs only 10-20% of it. Thus, most of the electricity consumption ends up powering capacity that isn’t needed at all.
  • Run: Operations teams have to make do with what the upstream teams have given them. They can’t take down systems to save power on their own, as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to make systems “over-available” even when the systems aren’t being used. The run teams didn’t – and still don’t – have real-time insight into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power.

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, although digital services from large data center and cloud vendors grew sixfold between 2010 and 2018, their energy consumption grew by only about 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.

Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud’s runtime access to underlying infrastructure, which can directly address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, because cloud instances can be provisioned quickly, architectures built on them are fault tolerant by design and don’t rely on excessive backup systems: a workload can be allowed to go down, with its backup coming online immediately rather than running continuously. The development, test, and operations teams can provision infrastructure as and when needed, and shut it down when their work is completed.
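
A minimal sketch of that “shut it down when the work is completed” practice, assuming AWS and the boto3 library. The environment tag and its values are hypothetical illustrations; a real scheme would follow your own tagging standards and change-control process.

```python
# Stop running non-production instances to cut idle power draw.
# Assumes AWS credentials are configured; tag names are hypothetical.
import boto3

ec2 = boto3.client("ec2")

def stop_nonproduction_instances():
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test", "staging"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        # Stopped instances draw no compute power and can be restarted
        # in minutes when the team needs them again.
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```

Scheduled nightly (for example, via cron or an EventBridge rule), a script like this turns always-on capacity from the default into an opt-in.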

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads treating failure as an essential element to be handled through orchestration of runtime cloud resources, instead of relying on traditional failover methods that promote overconsumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model, in which the application code and infrastructure lie idle and come online only when needed. A while back I wrote a blog about how AI can be used to compose an application at runtime instead of keeping it always available.
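
To make the event-driven model concrete, here is a minimal function-as-a-service sketch using AWS Lambda’s handler convention. The event fields are hypothetical; the point is that no server runs, and no power is drawn for this code, between invocations.

```python
import json

def lambda_handler(event, context):
    # Invoked only when an event arrives (an API call, a queue message,
    # a file upload); the code and its infrastructure lie idle otherwise.
    order = json.loads(event["body"])          # hypothetical payload shape
    result = {"order_id": order["order_id"], "status": "processed"}
    return {"statusCode": 200, "body": json.dumps(result)}
```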

Server virtualization played an important role in reducing power consumption. Now, by using containers, which are significantly more efficient than virtual machines, enterprises can reduce their power consumption and carbon emissions further. Though cloud sprawl is stretching operations teams, newer automated monitoring tools are becoming effective at providing a real-time view of the technology landscape, which helps them optimize asset uptime. Teams can also build infrastructure code within development to make an application aware of when it can release IT assets and kill zombie instances, enabling the operations team to focus on automating and optimizing instead of managing systems that are always on.
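
As an illustrative sketch of that zombie-hunting idea, the snippet below flags AWS instances whose average CPU has stayed low for two weeks, using CloudWatch metrics via boto3. The 2% threshold and 14-day window are assumptions for illustration, not recommendations.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def looks_like_zombie(instance_id, threshold_pct=2.0, days=14):
    """Return True if average daily CPU stayed under the threshold."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(days=days),
        EndTime=datetime.utcnow(),
        Period=86400,               # one data point per day
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    return bool(datapoints) and all(
        dp["Average"] < threshold_pct for dp in datapoints
    )
```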

Moreover, because the entire cloud migration process is being optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the model described above. However, enterprises have large legacy technology landscapes that need to move to an on-demand, cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on moving workloads from on-premises to the cloud; it doesn’t usually consider architectural changes that can reduce power consumption, even on a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit and toward building a sustainable technology landscape. They should leverage cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. Ultimately, enterprises can only build a sustainable business through sustainable technology once they’ve adopted cloud modernization as a potent force for reducing power consumption and carbon emissions.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, such as greater reliance on renewable energy and reduced travel. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emissions, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].

Demystifying Industry Cloud | Blog

Microsoft recently rolled out its first industry cloud, Microsoft Cloud for Healthcare, combining capabilities across Dynamics 365, Microsoft 365, Power Platform, and Azure to help improve care management and health data insights. Not so long ago, SAP’s CEO, Christian Klein, counted the company’s newly launched industry cloud among its growth drivers, and rightly so. Since its launch, SAP’s industry cloud has seen a long line of suitors rallying to partner and build different vertical cloud offerings on top of it. In June 2020, there was Deloitte, then Accenture, and, most recently, Wipro.

Having analyzed this specific market trend throughout 2020, we at Everest Group have realized that, with all kinds of cloud providers jumping onto the industry cloud bandwagon, confusion abounds about what an industry cloud truly is.

So, what’s the buzz about? What is an industry cloud?

Simply put, an industry-specific/vertical cloud is a cloud solution that has been optimized and customized to fit the typical nuances and specific needs of a particular industry vertical’s customers. It is designed to tackle industry-specific constraints such as data protection, retention regulations, and operations with mission-critical applications.

We believe that a true industry cloud solution is characterized by four different layers: the infrastructure and platform layers, followed by an application layer, which is further supplemented by customization and differentiation layers.

The infrastructure layer, dominated by industry-agnostic IaaS players such as AWS, provides the hardware, network, scalability, and compute resources. The platform layer, such as Azure’s PaaS offering, is built over this infrastructure layer and becomes the development environment for building applications.

The application layer comprises a horizontal cloud application such as Salesforce CRM and, in several cases, hosts development platforms, such as the Salesforce App Cloud, that become the marketplace for building additional functionalities.

The differentiation layer adds vertical nuances, such as built-in industry regulatory compliance, to a horizontal application. It is here that we see the industry cloud taking shape, but the offerings are still standard and not customized to the needs of specific enterprises.

The customization layer brings in service providers or technology vendors with their vertical expertise and decades of experience in working with enterprises. They partner with providers of differentiated cloud offerings, and build tools and accelerators to further customize them to suit enterprise needs, adding capabilities such as AI and security, personalized dashboards for analytics, and integration services to build a truly industry-specific/vertical cloud offering.

We have illustrated this architecture in the exhibit below, taking the example of Veeva Systems, a popular provider of industry cloud solutions for life sciences. Veeva started as a CRM application built on the Salesforce platform, designed specifically for the pharmaceutical and biotechnology industries. Salesforce provided the infrastructure, platform, and application layers, while Veeva added a differentiation layer with a data model, application logic, and user interface tailored to support the pharmaceutical and biotechnology industries. It leveraged the standard Salesforce reporting, configuration, and maintenance capabilities.

Exhibit: Understanding the industry cloud architecture through Veeva industry cloud layers

Over time, Veeva has cultivated an ecosystem of partnerships, including service providers such as Accenture (Veeva CRM partner) and Cognizant (Veeva Vault partner). These partners leverage the Veeva development platform to build additional applications customized to enterprise needs, thereby adding the final customization layer to Veeva’s solutions suite.

Industry cloud is gaining significant traction among industries

Industries such as healthcare and banking – which require rapid and auditable deployment of new features or functionalities to comply with regulatory changes – are adopting industry cloud quickly. Healthcare continues to lead the charge from a vertical standpoint, but many industries are experiencing an uptick in adoption, including manufacturing, financial services, and retail.

The key reasons for this growth are:

  • Lowered barriers to cloud adoption, along with a ready-to-use environment with tools and services tailored to a specific vertical’s operational requirements
  • Accelerated innovation, lower costs, and reduced risks
  • Efficient handling of data sources and workflows, and compliance with the industry’s unique standards
  • Support for industry-standard APIs, helping companies connect more easily and securely, accelerating the digital transformation (DX) economy
  • Access to industry insights or benchmarks through data aggregation from multiple clients within the same industry

What can organizations expect in the near future?

With the one-cloud-fits-all approach reaching maturity, the next decade will be marked by the depth of vertical expertise and customization capabilities that can complement existing applications and address customer pain points. Different kinds of vendors are developing industry cloud solutions, ranging from hyperscalers such as Azure and GCP to vertical-specific players such as Veeva. We will cover the industry cloud market in further detail in parts 2 and 3 of this blog series, in which we will answer the following questions:

  • What are the different kinds of industry cloud solution providers and their go-to-market strategies?
  • What should enterprises do, and how should they source industry cloud solutions that best suit their needs?

The battle for industry cloud is only going to get fiercer in the near future. Please follow this space for more blogs on the emerging war and warriors in the industry cloud market. Meanwhile, please feel free to reach out to [email protected] or [email protected] to share any questions and your experiences.

 

Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight? | Blog

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently do. I’ve covered the MAGs’ (as they’re collectively known) ugly fight in other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from the public cloud, which opened enterprise leaders’ eyes to the importance of fit-for-purpose cloud in addition to generic cloud. So, before they move more complex workloads to the cloud, they want to be sure they get their architectural choices and cloud partners absolutely right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it’s hard to argue that there’s public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strong-arm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – that is, on the condition that the enterprise goes all-in and builds its workload architecture on their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn’t much different from any other enterprise technology war in which the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we’re seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn’t the end of it. Given enterprises’ increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move to the public cloud space was obvious for IBM and Oracle, as they’re already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they’re here now. They’re focused on “industrial grade” workloads that have strategic value for enterprises and on making open source core to their offerings to promote multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca-Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction reaches a fever pitch with serverless, event-driven applications, and functions-as-a-service, it becomes easier to meet the lofty ideal of Service-Oriented Architecture – a fully abstracted underlying infrastructure – which is also what a true multi-cloud environment embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced by API calls that applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.
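
A minimal sketch of the abstraction argument, in Python: hide the vendor behind a narrow interface so a workload can change platforms by swapping one adapter rather than rewriting application code. The interface and bucket handling here are hypothetical simplifications; boto3 and google-cloud-storage are assumed to be installed.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """A deliberately narrow, vendor-neutral storage interface."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

# Application code depends only on ObjectStore; switching clouds means
# constructing a different adapter, not rewriting the workload.
def archive_report(store: ObjectStore, report: bytes) -> None:
    store.put("reports/latest.bin", report)
```

This is also exactly why vendors invest in high-value native services that sit above such interfaces: the thinner the abstraction, the lower the switching cost.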

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with the MAGs and AIO? Please share with me at [email protected].

Reflections on Cloudera Now and the Battle for Data Platform Supremacy | Blog

The enterprise data market is going through a pretty significant category revision, with native technology vendors – like Cloudera, Databricks, and Snowflake – evolving, cloud hyperscalers increasingly driving enterprises’ digital transformation mandates, and incumbent vendors trying to remain relevant (e.g., the 2019 HPE-MapR deal). This revision has led to leadership changes, acquisitions, and interesting ecosystem partnerships. Is data warehousing the new enterprise data cloud category that will eventually be part of the cloud-first narrative?

Last month I attended Cloudera Now, Cloudera’s client and analyst event. Read on for my key takeaways from the event and let me know what you think.

  • Diversity and data literacy come to the forefront: Props to Cloudera for addressing key issues up front. In the first session, CEO Rob Bearden and activist and historian Dr. Mary Frances Berry had an honest dialogue about diversity and inclusion in tech. More often than not, tech vendors pay lip service to these issues of the zeitgeist, so it was a refreshing change to see the event kicking off with this important conversation. During the analyst breakout, Rob also took questions on data literacy and how crucial it is going to be as Cloudera aims to become more meaningful to enterprise business users against the backdrop of data democratization.
  • Cloudera seems to be turning around, slowly: After a tumultuous period following its merger with Hortonworks in early 2019, Cloudera has new, yet familiar, leaders in place, with Rob Bearden (previously CEO of Hortonworks) taking over the CEO reins in January 2020. The company reported its FYQ2 2021 results a few weeks before the event: revenue increased 9 percent over the previous quarter, subscription revenue was up 17 percent, and Annualized Recurring Revenue (ARR) grew 12 percent year-over-year. ARR will be key for Cloudera to demonstrate stickiness and client retention. While its losses narrowed in FYQ2 2021, it has more ground to cover on profitability.
  • Streaming and ML will be key bets: As the core data warehousing platform market faces more competition, it is important for Cloudera to de-risk its portfolio by expanding revenue from emerging high-growth spend areas. It was good to see streaming and Machine Learning (ML) products growing faster than the company overall. In early October, it also announced its acquisition of Eventador, a provider of cloud-native services for enterprise-grade stream processing, to further augment and accelerate its own streaming platform, DataFlow. The aim is to bring all of this together through Shared Data Experience (SDX), Cloudera’s integrated offering for security and governance.
  • We are all living in the hyperscaler economy: Not surprisingly, there was a fair share of discussion around the increasing role of the cloud hyperscalers in the data ecosystem. The hyperscalers’ appetite is voracious; while the likes of Cloudera will partner with these cloud vendors, competition will increase, especially on industry-specific use cases. Will one of the hyperscalers acquire a data warehousing vendor? One can only speculate.
  • Industry-specificity will drive the next wave of the platform growth story: I’ve been saying this for a while – clients don’t buy tools, they buy solutions. Industry context is becoming increasingly important, especially in more regulated and complex industries. For example, after its recent Vlocity acquisition, Salesforce announced Salesforce Industries to expand its industry product portfolio, providing purpose-built apps with industry-specific data models and pre-built business processes. Similarly, Google Cloud has ramped up its industry solutions team by hiring a slew of senior leaders from SAP and the broader industry. For data vendors, focusing on high-impact, industry-led use cases – on their own and with partners – will be key to unlocking value for clients and driving differentiation. Cloudera showcased some interesting use cases for healthcare and life sciences, financial services, and consumer goods. Building a long-term product roadmap here will be crucial.

By happenstance, the Cloudera event started the same day its primary competitor, cloud-based data warehousing vendor Snowflake, made its public market debut and more than doubled on day one, making it the largest software IPO ever. Make of that what you will, but to me it is another sign of the validation of the data and analytics ecosystem. Watch this space for more.

I’d enjoy hearing your thoughts on this space. Please email me at: [email protected].

Full disclosure: Cloudera sent a thoughtful package ahead of the event, which included a few fine specimens from the vineyards in La Rioja. I can confirm I wasn’t sampling them while writing this.

IBM Splits Into Two Companies | Blog

IBM announced this week that it is spinning off its legacy Managed Infrastructure business into a new public company, thus creating two independent companies. I highly endorse this move and, in fact, have advocated it for years. IBM is a big, successful, proud organization. But it has been apparent for years that it faced significant challenges in trying to manage two very different businesses with two very different operating models.

Read more in my blog on Forbes

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. The platform takes care of the application lifecycle as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying plumbing hidden from view. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – almost mirroring the good old What You See Is What You Get (WYSIWYG) editors of the HTML age.

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that drive greater stickiness to their cloud platforms. And as we all know, nothing drives stickiness better than an application built over a platform. This implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, necessary financial investments, and rapid changes in business demand, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of data for their clients, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness because low-code platforms are notorious for non-interoperability – it’s very difficult to migrate from one to another. But this isn’t just about stickiness. It’s one more initiative in the journey toward owning enterprises’ technology spend by building a cohesive suite of services. GCP, for example, has realized that its API-centric assets offer a goldmine for stitching applications together: Apigee helps expose APIs from different sources and platforms, and AppSheet, which GCP acquired last year, can use the data exposed by those APIs to build business applications. Microsoft, on the other hand, is focusing on its Power Platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there’s no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on cloud hyperscalers, and now – with a competing offering in the mix – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up with the mega cloud vendors’ increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys, and this additional complexity further exacerbates their problems. Many enterprises have their infrastructure teams lead the cloud journey, but these teams don’t necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling with the size of their cloud spend and how to make sense of it. Cloud vendors introducing low-code platform offerings into their cloud service mix is going to make that task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Cloud Native is Not Enough; Enterprises Need to Think SDN Native | Blog

Over the past few years, cloud-native applications have gained significant traction within organizations. These applications are built to work best in a cloud environment, using microservices architecture principles. Everest Group research suggests that 59 percent of enterprises have already adopted cloud-native concepts in their production set-ups. However, most enterprises continue to operate traditional networks that are slow and stressed due to data proliferation and an increase in cloud-based technologies. Like traditional data centers, these traditional networks limit the benefits of cloud-native applications.

SDN is the network corollary to cloud

The network corollary to a cloud architecture is a Software Defined Network (SDN). An SDN architecture decouples the network control plane from the forwarding plane to enable policy-based, centralized management of the network. In simpler terms, an SDN architecture enables an enterprise to abstract away from the physical hardware and control the entire network through software.
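
To make the control/forwarding split concrete, here is a minimal sketch using the open-source Ryu controller and OpenFlow 1.3 (an illustrative toolchain choice). The switch only forwards packets; the policy decision lives in this Python program and can be changed without touching the hardware.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FloodController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # A switch with no matching flow rule asks the controller what to do.
        msg = ev.msg
        datapath = msg.datapath              # the switch that sent the packet
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # The policy decision happens here, in software: flood the packet.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(
            datapath=datapath, buffer_id=msg.buffer_id,
            in_port=msg.match["in_port"], actions=actions, data=data)
        datapath.send_msg(out)
```

Run with `ryu-manager` against any OpenFlow 1.3 switch (a Mininet test topology works); replacing the flood policy with smarter path selection requires only a software change.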

Current SDN performance is sub-optimal

Most current SDN adoption is an afterthought, offering limited benefits, much like a lift-and-shift of applications to the cloud. Challenges with current SDN adoptions include:

  • Limited interoperability – Given their large legacy estates, enterprises find themselves running a hybrid legacy-SDN infrastructure, which is very difficult to manage.
  • Limited scalability – As the environment is not designed to be SDN native, applications end up placing a high volume of networking requests on the SDN controller, limiting data flow.
  • Latency issues – Separate control and data planes can introduce latency in the network, especially in very large networks. Organizations need to carry out significant testing activities before any SDN implementation.
  • Security issues – The ad hoc nature of current SDN adoption means that the entire network is more vulnerable to security breaches due to the creation of multiple network segments.

SDN native is not about applications but about the infrastructure

Unlike cloud native, which is more about how applications are architected, being SDN native is about architecting the enterprise network infrastructure to optimize the performance of modern applications running on it. While sporadic adoption of SDN might also deliver certain advantages, an SDN-native deployment requires organizations to overhaul their legacy infrastructure and adopt the SDN-native principles outlined below.

Principles of an SDN-native infrastructure

  • Ubiquity – An SDN-native infrastructure needs to ensure that there is a single network across the enterprise that can connect any resource anywhere. The network should be accessible from multiple edge locations supporting physical, cloud, and mobile resources.
  • Intelligence – An SDN-native infrastructure needs to leverage AI and ML technologies to act as a smart software overlay that can monitor the underlay networks and select the optimum path for each data packet.
  • Speed – To reduce time-to-market for new applications, an SDN-native infrastructure should have the capability to innovate and make new infrastructure capabilities instantly available. Benefits should be universally spread across regions, not limited to specific locations.
  • Multi-tenancy – An SDN-native infrastructure should not be limited by the underlay network providers. Applications should be able to run at the same performance levels regardless of any changes in the underlay network.

Recommendations on how you can become an SDN-native enterprise

Similar to the concept of cloud native, the benefits of an SDN-native infrastructure cannot be gained by simply porting existing software and trying to integrate it with the cloud. You need to build a network-native architecture with all the principles above ingrained in its DNA from the very beginning. However, most enterprises already carry the burden of legacy networks and cannot overhaul them in a day.

So, we recommend the following approach:

  • Start small but think big – Most enterprises start their network transformation journeys by adopting SD-WAN in pockets. This approach is fine to begin with, but to become SDN native, you need to plan ahead with the eventual aim of making everything software-defined in the minimum possible time.
  • Time your transformation right – Your network is a tricky IT infrastructure component to disturb when everything is working well. However, every three to four years, you need to refresh the network’s hardware components. You should plan to use this time to adopt as much SDN as possible while ensuring that you follow SDN-native principles.
  • Leverage next-gen technologies – To follow the principles of SDN-native infrastructure, you need to make use of multiple next-generation technologies. For example, edge computing is essential to ensure ubiquity, AI/ML for intelligence, NetOps tools for speed, and management platforms for multi-tenancy.
  • Focus on business outcomes – The eventual objective of an SDN-native infrastructure is better business outcomes; this objective should not get lost in the technology upgrades. The SDN-native infrastructure should become an enabler of cloud-native application implementation within your enterprise to drive business benefits.

What has your experience been with adoption of an SDN-native infrastructure? Please share your thoughts with me at [email protected].

For more on our thinking on SDN, see our report, Network Transformation and Managed Services PEAK Matrix™ Assessment 2020: Transform your Network or Lie on the Legacy Deathbed

How Companies Unleash Value Creation Through Collaborative Networks | Blog

Ecosystems have always existed in business, but the importance of collaboration across them is increasing dramatically today in the effort to create new value. However, the underlying software architecture that supports collaboration – traditional database technology – was built for a siloed, bounded approach to business, not a networked approach spanning multiple enterprises or even multiple industries.

Read more on my blog

Choosing Your Best-fit Cloud Services Delivery Location | Blog

While enterprises around the globe began their steady march toward cloud services well before the outbreak of COVID-19, the pandemic has fueled cloud adoption like never before. Following the outbreak, organizations quickly went digital to enable remote working, maintain data security, and ensure operational efficiency. Globally, first-quarter spend on cloud infrastructure services in 2020 increased 39% over the same period in 2019.

Given the new realities, as firms make long-term cloud investments, it is vital for them to understand the cloud landscape and how various regions and countries fare in comparison to each other as cloud destinations. In this blog, we evaluate and compare the capabilities of different geographies in delivering cloud services.

The Americas

North America is among the most mature geographies for cloud services delivery. The US and Canada offer excellent infrastructure, a mature cloud ecosystem, high innovation potential, a favorable business environment, and business-friendly rules and regulations. The US is the most mature location in North America, offering a large talent pool and high collaboration prospects due to the presence of multiple technology start-ups, global business services centers, and service providers. However, the cost of operations is significantly higher than in most other locations, primarily driven by high labor and real estate costs.

In contrast, most locations in Latin America (LATAM) have less mature cloud markets and ecosystems. While they provide proximity to key source markets in the US and considerable cost savings (60-80%) compared with established markets, they offer low innovation potential, a relatively small talent pool, few government policies to promote cloud computing, and limited breadth and depth of cloud delivery. Mexico is a standout location in LATAM, scoring better than others on parameters such as quality of cloud infrastructure, size of talent pool, and business environment.

Europe

Europe provides a good mix of established and emerging locations for cloud services. Countries in Western Europe have a fairly robust infrastructure to support cloud services, with high cybersecurity readiness, sizable talent pools, high complexity of services, and robust digital agendas and cloud policies. England and Germany are the most favorable locations in the region, driven by a comparatively large talent pool accompanied by high innovation potential, excellent cloud and general infrastructure, and high collaboration prospects due to numerous technology start-ups and enterprises. However, high cloud-adoption maturity has markedly driven up operating costs and intensified competition in these markets.

Countries in Central and Eastern Europe (CEE) offer moderate cost savings (in the 50-70% range) over leading source locations in Western Europe. While they offer a favorable cloud ecosystem, talent availability, greater proximity to key source markets, and lower competitive intensity, they score lower on innovation potential, complexity of services offered, and concentration of technology start-ups and players. The Czech Republic is a prominent location for cloud services in the CEE, while Poland and Romania are emerging destinations.

Asia Pacific (APAC)

Most locations in APAC have high to moderate maturity for cloud services delivery due to the size of the talent pool and significant cost savings (as high as 70-80%) over source markets such as the US. For example, India offers low operating costs, coupled with a large talent pool adept in cloud skills and a significant service provider and enterprise presence. However, it scores lower on aspects such as innovation potential, infrastructure, and quality of business environment. Singapore is an established location that offers well-developed infrastructure and high innovation potential but also involves steep operating costs (40-45% cost arbitrage with the US). The Philippines, a popular outsourcing destination, has lower cloud delivery maturity given its low innovation potential and talent availability for cloud services.

Middle East and Africa (MEA)

Israel is an emerging cloud location in the MEA that has achieved high cloud services maturity, but that benefit is accompanied by high operating costs and low cost-savings opportunity (about 10-15%). Other locations in the region have moderate to low opportunity due to small talent pools and lower maturity in terms of cloud services delivery.

Choosing your best-fit cloud services delivery location

Our analysis of locations globally reveals that, while different locations can cater to the increasing cloud demand, there is no single one-size-fits-all destination. Instead, the right choice depends on several considerations and priorities:

  • If operating cost is not a constraint and the key requirements are proximity to key source markets and a favorable ecosystem, the US, Canada, Germany, England, Singapore, and Israel are suitable locations, depending on the demand geography
  • If you are looking for moderate cost savings, proximity to source markets, and a favorable ecosystem, with the acceptable trade-off of operating in a relatively low-maturity market, countries such as Mexico, the Czech Republic, Hungary, Poland, Ireland, Romania, and Spain are attractive targets
  • However, if cost is driving your decision and proximity to demand geographies is not a priority, India, Malaysia, and China emerge as clear winners

The exhibit below helps clarify and streamline location-related decisions, placing an organization’s key considerations up front and identifying acceptable trade-offs to arrive at the best-fit locations shortlist.

Exhibit: Key considerations for choosing your cloud services delivery location


To learn more about the relative attractiveness of key global locations to support cloud skills, see our recently published Cloud Talent Handbook – Guide to Cloud Skills Across the Globe. The report assesses multiple locations against 15 parameters using our proprietary Enabler-Talent Pulse Framework to determine the attractiveness of locations for cloud delivery. If you have any questions or comments, please reach out to Hrishi Raj Agarwalla, Bhavyaa Kukreti, or Kunal Anand.
