Category: Cloud Infrastructure

Cloud Native is Not Enough; Enterprises Need to Think SDN Native | Blog

Over the past few years, cloud-native applications have gained significant traction within organizations. These applications are built to work best in a cloud environment using microservices architecture principles. Everest Group research suggests that 59 percent of enterprises have already adopted cloud-native concepts in their production setups. However, most enterprises continue to operate traditional networks that are slow and stressed by data proliferation and the growth of cloud-based technologies. Like legacy datacenters, these traditional networks limit the benefits of cloud-native applications.

SDN is the network corollary to cloud

The network corollary to a cloud architecture is a Software-Defined Network (SDN). An SDN architecture decouples the network control plane from the forwarding plane to enable policy-based, centralized management of the network. In simpler terms, an SDN architecture enables an enterprise to abstract away the physical hardware and control the entire network through software.
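
To make the split concrete, here is a minimal, purely illustrative Python sketch, not a real controller API such as OpenFlow or ONOS: a central controller object holds policy in software and pushes simple match/action rules down to forwarding-only switches.

```python
# Illustrative sketch of the SDN control/data-plane split (hypothetical API):
# policy lives in one central controller; switches only match and forward.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_dst: str   # destination prefix to match, e.g. "10.0.2."
    action: str      # "forward:<port>" or "drop"

@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def forward(self, dst_ip: str) -> str:
        # Data plane: apply the first matching rule; no local intelligence.
        for rule in self.rules:
            if dst_ip.startswith(rule.match_dst):
                return rule.action
        return "drop"  # default deny when no rule matches

class Controller:
    """Control plane: the one place where network policy is defined."""
    def __init__(self):
        self.switches = {}

    def register(self, switch: Switch):
        self.switches[switch.name] = switch

    def apply_policy(self, match_dst: str, action: str):
        # Centralized, policy-based management: one call updates every device.
        for sw in self.switches.values():
            sw.rules.append(FlowRule(match_dst, action))

controller = Controller()
edge = Switch("edge-1")
core = Switch("core-1")
controller.register(edge)
controller.register(core)

# Policy defined once in software, pushed network-wide.
controller.apply_policy("10.0.2.", "forward:2")

print(edge.forward("10.0.2.15"))    # matches the pushed policy
print(core.forward("192.168.1.9"))  # no rule for this prefix, so dropped
```

The point of the pattern is that the switches carry no policy of their own; changing network behavior means changing software in one place.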

Current SDN performance is sub-optimal

Most of the current SDN adoption is an afterthought, offering limited benefits similar to the lift-and-shift of applications to the cloud. Challenges with the current SDN adoptions include:

  • Limited interoperability – Given the high legacy presence, enterprises find themselves in a hybrid legacy SDN infrastructure, which is very difficult to manage.
  • Limited scalability – As the environment is not designed to be SDN native, applications end up placing a high volume of networking requests on the SDN controller, limiting data flow.
  • Latency issues – Separate control and data planes can introduce latency in the network, especially in very large networks. Organizations need to carry out significant testing activities before any SDN implementation.
  • Security issues – The ad hoc nature of current SDN adoption means that the entire network is more vulnerable to security breaches due to the creation of multiple network segments.

SDN native is not about applications but about the infrastructure

Unlike cloud native, which is more about how applications are architected, being SDN native is about architecting the enterprise network infrastructure to optimize the performance of modern applications running on it. While sporadic adoption of SDN might also deliver certain advantages, an SDN-native deployment requires organizations to overhaul their legacy infrastructure and adopt the SDN-native principles outlined below.

Principles of an SDN-native infrastructure

  • Ubiquity – An SDN-native infrastructure needs to ensure that there is a single network across the enterprise that can connect any resource anywhere. The network should be accessible from multiple edge locations supporting physical, cloud, and mobile resources.
  • Intelligence – An SDN-native infrastructure needs to leverage AI and ML technologies to act as a smart software overlay that can monitor the underlay networks and select the optimum path for each data packet.
  • Speed – To reduce time-to-market for new applications, an SDN-native infrastructure should be able to innovate and make new infrastructure capabilities instantly available. Benefits should be spread universally across regions, not limited to specific locations.
  • Multi-tenancy – An SDN-native infrastructure should not be limited by the underlay network providers. Applications should be able to run at the same performance levels regardless of any changes in the underlay network.
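
The intelligence principle above can be illustrated with a toy path-selection sketch. The underlay names, metrics, and weights below are invented for illustration; real SD-WAN overlays use far richer telemetry and per-application policies.

```python
# Toy sketch of an intelligent overlay: measure each underlay path and steer
# traffic over the one with the best current score. All numbers are made up.

paths = {
    "mpls":      {"latency_ms": 30, "loss_pct": 0.1},
    "broadband": {"latency_ms": 55, "loss_pct": 0.5},
    "lte":       {"latency_ms": 80, "loss_pct": 1.2},
}

def best_path(paths: dict) -> str:
    # Score each underlay path; lower is better. Loss is weighted heavily
    # because retransmits hurt applications more than raw latency does.
    def score(metrics):
        return metrics["latency_ms"] + 100 * metrics["loss_pct"]
    return min(paths, key=lambda name: score(paths[name]))

print(best_path(paths))  # -> mpls
```

In a real deployment this decision would be re-run continuously, so the overlay adapts as underlay conditions change.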

Recommendations on how you can become an SDN-native enterprise

Similar to the concept of cloud native, the benefits of an SDN-native infrastructure cannot be gained by porting existing software and bolting it onto the cloud. You need to build a network-native architecture with all of these principles ingrained in its DNA from the very beginning. However, most enterprises already carry the burden of legacy networks and cannot overhaul them in a day.

So, we recommend the following approach:

  • Start small but think big – Most enterprises start their network transformation journeys by adopting SD-WAN in pockets. This approach is fine to begin with, but to become SDN native, you need to plan ahead with the eventual aim of making everything software-defined in the minimum possible time.
  • Time your transformation right – Your network is a tricky IT infrastructure component to disturb when everything is working well. However, every three to four years, you need to refresh the network’s hardware components. You should plan to use this time to adopt as much SDN as possible while ensuring that you follow SDN-native principles.
  • Leverage next-gen technologies – To follow the principles of SDN-native infrastructure, you need to make use of multiple next-generation technologies. For example, edge computing is essential to ensure ubiquity, AI/ML for intelligence, NetOps tools for speed, and management platforms for multi-tenancy.
  • Focus on business outcomes – The eventual objective of an SDN-native infrastructure is better business outcomes; this objective should not get lost in the technology upgrades. The SDN-native infrastructure should become an enabler of cloud-native application implementation within your enterprise to drive business benefits.

What has your experience been with adoption of an SDN-native infrastructure? Please share your thoughts with me at [email protected].

For more on our thinking on SDN, see our report, Network Transformation and Managed Services PEAK Matrix™ Assessment 2020: Transform your Network or Lie on the Legacy Deathbed

How Companies Unleash Value Creation Through Collaborative Networks | Blog

Ecosystems have always existed in business, but the importance of collaboration across them is increasing dramatically today as companies strive to create new value. However, the underlying software architecture that supports collaboration – traditional database technology – was built for a siloed, boundaried approach to business, not a networked approach spanning multiple enterprises or even multiple industries.

Read more on my blog

Choosing Your Best-fit Cloud Services Delivery Location | Blog

While enterprises around the globe began their steady march toward cloud services well before the outbreak of COVID-19, the pandemic has fueled cloud adoption like never before. Following the outbreak, organizations quickly went digital to enable remote working, maintain data security, and ensure operational efficiencies. Globally, first-quarter spend on cloud infrastructure services in 2020 increased 39% over the same quarter in 2019.

Given the new realities, as firms make long-term cloud investments, it is vital for them to understand the cloud landscape and how various regions and countries fare in comparison to each other as cloud destinations. In this blog, we evaluate and compare the capabilities of different geographies in delivering cloud services.

The Americas

North America is among the most mature geographies for cloud services delivery. The US and Canada offer excellent infrastructure, a mature cloud ecosystem, high innovation potential, a favorable business environment, and business-friendly rules and regulations. The US is the most mature location in North America, offering a large talent pool and high collaboration prospects due to the presence of multiple technology start-ups, global business services centers, and service providers. However, the cost of operations is significantly high, primarily driven by high labor and real estate costs.

In contrast, most locations in Latin America (LATAM) have less mature cloud markets and ecosystems. While they provide proximity to key source markets in the US and considerable cost savings as compared with established markets (60-80%), they offer low innovation potential, a relatively small talent pool, few government policies to promote cloud computing, and limited breadth and depth of cloud delivery. Mexico is a standout location in LATAM, scoring better than others on parameters such as quality of cloud infrastructure, size of talent pool, and business environment.

Europe

Europe provides a good mix of established and emerging locations for cloud services. Countries in Western Europe have a fairly robust infrastructure to support cloud services, with high cybersecurity readiness, sizable talent pools, high complexity of services, and robust digital agendas and cloud policies. England and Germany are the most favorable locations in the region, driven by a comparatively large talent pool accompanied by high innovation potential, excellent cloud and general infrastructure, and high collaboration prospects due to numerous technology start-ups and enterprises. However, high cloud-adoption maturity has markedly driven up operating costs and intensified competition in these markets.

Countries in Central and Eastern Europe (CEE) offer moderate cost savings (in the 50-70% range) over leading source locations in Western Europe. While they offer a favorable cloud ecosystem, talent availability, greater proximity to key source markets, and lower competitive intensity, they score lower on innovation potential, complexity of services offered, and concentration of technology start-ups and players. The Czech Republic is a prominent location for cloud services in the CEE, while Poland and Romania are emerging destinations.

Asia Pacific (APAC)

Most locations in APAC have high to moderate maturity for cloud services delivery due to the size of the talent pool and significant cost savings (as high as 70-80%) over source markets such as the US. For example, India offers low operating costs, coupled with a large talent pool adept in cloud skills and a significant service provider and enterprise presence. However, it scores lower on aspects such as innovation potential, infrastructure, and quality of business environment. Singapore is an established location that offers well-developed infrastructure and high innovation potential but also involves steep operating costs (40-45% cost arbitrage with the US). The Philippines, a popular outsourcing destination, has lower cloud delivery maturity given its low innovation potential and talent availability for cloud services.

Middle East and Africa (MEA)

Israel is an emerging cloud location in the MEA that has achieved high cloud services maturity, but that benefit is accompanied by high operating costs and low cost-savings opportunity (about 10-15%). Other locations in the region have moderate to low opportunity due to small talent pools and lower maturity in terms of cloud services delivery.

Choosing your best-fit cloud services delivery location

Our analysis of locations globally reveals that, while different locations can cater to the increasing cloud demand, there is no single one-size-fits-all destination. Instead, the right choice depends on several considerations and priorities:

  • If operating cost is not a constraint and the key requirements are proximity to key source markets and a favorable ecosystem, the US, Canada, Germany, England, Singapore, and Israel are suitable locations, depending on the demand geography
  • If you are looking for moderate cost savings, proximity to source markets, and a favorable ecosystem, with the acceptable trade off of operations in a relatively low maturity market, countries such as Mexico, the Czech Republic, Hungary, Poland, Ireland, Romania, and Spain are attractive targets
  • However, if cost is driving your decision and proximity to demand geographies is not a priority, India, Malaysia, and China emerge as clear winners

The exhibit below helps clarify and streamline location-related decisions, placing an organization’s key considerations up front and identifying acceptable trade-offs to arrive at the best-fit locations shortlist.

Key considerations for choosing your cloud services delivery location

To learn more about the relative attractiveness of key global locations to support cloud skills, see our recently published Cloud Talent Handbook – Guide to Cloud Skills Across the Globe. The report assesses multiple locations against 15 parameters using our proprietary Enabler-Talent Pulse Framework to determine the attractiveness of locations for cloud delivery. If you have any questions or comments, please reach out to Hrishi Raj Agarwalla, Bhavyaa Kukreti, or Kunal Anand.

Politics and Technology in the Post-Pandemic World Will Drive Sovereign Cloud Adoption | Blog

Discussions around data and cloud sovereignty have been gaining momentum over the past decade for multiple reasons, including unsecured digital documents, varied data-related laws in different countries, scrutiny and liability issues, the dominance of US hyperscalers, and low visibility into and control of data. However, the biggest trigger for increased focus on sovereign cloud is the US CLOUD Act of 2018, which gives US law enforcement agencies the power to unilaterally demand access to data from companies regardless of the geography in which data is stored.

What is a sovereign cloud?

A sovereign cloud essentially aims to maintain the sovereignty of data in all possible ways for any entity (country, region, enterprise, government, etc.). Thus, it demands that all data should reside locally, the cloud should be managed and governed locally, all data processing – including API calls – should happen within the country/geography, the data should be accessible only to residents of the same country, and the data should not be accessible under foreign laws or from any outside geography.
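
As a rough illustration, the requirements above can be read as a checklist. The deployment fields below are hypothetical, not any provider's actual metadata, and "EU" is an assumed example jurisdiction.

```python
# Hedged sketch: the sovereignty requirements above expressed as a simple
# compliance check over a hypothetical deployment description.

REQUIRED_JURISDICTION = "EU"  # assumed example jurisdiction

def sovereignty_violations(deployment: dict) -> list:
    """Return the sovereign-cloud requirements a deployment breaks."""
    violations = []
    if deployment["data_region"] != REQUIRED_JURISDICTION:
        violations.append("data must reside locally")
    if deployment["operator_jurisdiction"] != REQUIRED_JURISDICTION:
        violations.append("cloud must be managed and governed locally")
    if any(r != REQUIRED_JURISDICTION for r in deployment["processing_regions"]):
        violations.append("all processing, including API calls, must stay in-region")
    if any(u["residency"] != REQUIRED_JURISDICTION for u in deployment["users"]):
        violations.append("data accessible only to local residents")
    if deployment["subject_to_foreign_law"]:
        violations.append("data must not be reachable under foreign laws")
    return violations

deployment = {
    "data_region": "EU",
    "operator_jurisdiction": "US",       # e.g. a US-headquartered hyperscaler
    "processing_regions": ["EU", "US"],  # an API call leaves the region
    "users": [{"residency": "EU"}],
    "subject_to_foreign_law": True,      # e.g. exposed to the US CLOUD Act
}
for v in sovereignty_violations(deployment):
    print("violation:", v)
```

Note that even with data stored in-region, this example still fails three of the five tests, which is exactly why sovereignty is a broader demand than data residency alone.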

Multiple sovereign cloud initiatives have been undertaken in the past

In 2012, France launched a national cloud called Andromede. It failed to gain traction and was split into two clouds – Cloudwatt and Numergy. After struggling, Numergy was acquired by a telecommunications company in 2019, and Cloudwatt was shut down in 2020. In another such initiative, Microsoft set up a dedicated German cloud in 2015 but decommissioned it in 2018 due to limited traction.

In 2019, in response to the US CLOUD Act, Germany, in partnership with France, launched Project GAIA-X, with the aim of creating a high-performance, competitive, secure, and trustworthy infrastructure for Europe. The two governments invited many companies, including Bosch, Deutsche Telekom, Festo, Orange Business Services, and SAP, to collaborate on the project. While the original intent was quite aspirational and intended to create a completely independent hyperscale cloud, everyone quickly realized that this goal was illusory, and the project evolved into building something that brings the best of all worlds with “European DNA” at its center.

A novel mix of politics and technology makes GAIA-X interesting

While previous sovereign cloud initiatives have not been very successful, project GAIA-X promises to be different. When the project was first launched in 2019, it was largely seen as a political gimmick aimed at driving nationalistic sentiments in Europe. Many enterprises believed that the project could be completely scrapped if the government changed. Most were unconvinced of its potential success given their previous experiences with national clouds. The hyperscalers AWS and Microsoft Azure dismissed GAIA-X’s potential to scale and become competitive, calling out that a national cloud was interesting only in theory.

However, the story that has unfolded and continues to evolve is completely different. While some of the intent may be political, various governments across Europe are strongly backing this initiative and coming together to create Europe’s own digital ecosystem. By June 2020, the GAIA-X Foundation had expanded to 22 organizations – including digital leaders, industrials, academia, and associations – and the project is being supported by more than 300 businesses.

On the technology front, GAIA-X is now much clearer than when it began. It released a detailed technical architecture in June 2020, and primarily focuses on building common standards that enable transparency and interoperability by aligning network and interconnection providers, Cloud Solution Providers (CSP), High Performance Computing (HPC), and sector-specific clouds and edge systems. It has built use cases across industries including finance, health, public sector, agriculture, and energy, and horizontal use cases spanning Industry 4.0, smart living, and mobility.

There are still a few issues – including an unclear roadmap, unrealistically aggressive timelines, a lack of detail on hybrid or multi-cloud architectures, lower performance and higher costs, and a gap in the requisite skills – that need to be sorted for GAIA-X to succeed.

Nationalism and the coronavirus will push sovereign clouds beyond theory

Growing nationalistic sentiment sweeping geographies such as the US, Europe, and India will add fuel to the sovereign cloud discussion. The Huawei ban in the US, the ban on Chinese apps in India, and efforts to reduce dependence on China across sectors are all indicators of this emerging sentiment. At the same time, the COVID-19 pandemic has forced enterprises to reevaluate their external exposure. The fact that cloud is the core pillar of enterprises’ data strategies falls right at the center of this entire discussion, and the idea of a national or sovereign cloud is destined to become more than mere theory.

Recommendations on how you can begin to incorporate elements of sovereign cloud in your cloud strategy

Enterprises around the globe need to pay immediate attention to this sovereign cloud trend. Here are our recommendations on how your organization can stay ahead of the curve.

  • Be aware of the GAIA-X trends in Europe: While the project is still in its initial stages, it is evolving very quickly and gaining significant traction. You should proactively track and understand GAIA-X adoption as it will be an excellent preview of what could happen in the rest of the world.
  • Factor the political angle into your future cloud strategy: Europe is an excellent example of the political influence in enterprise cloud decisions. You need to be aware of the political environment in your country and whether or not it could pose risks to your existing cloud strategy. The idea of a nationalistic or sovereign cloud will be driven by the government, which might pass laws enforcing some kind of implementation. You should be prepared for this potential eventuality.
  • Ensure technological flexibility in your cloud architecture: Cloud architecture flexibility will be key to integrating sovereign cloud with the rest of your environment. Hyperscalers like AWS and Azure are increasingly leveraging techniques such as increased licensing prices, full stack services, contractual terms, data transfer charges, and limited interoperability to lock in enterprises. You need to be wary of these techniques and adopt an agnostic, interoperable, multi-cloud strategy to ensure easy migration to sovereign cloud in the future.

How do you think sovereign cloud will evolve in Europe and other geographies? Please share your thoughts with me at [email protected].

COVID-19 Will Accelerate Private 5G Adoption | Blog

COVID-19 has disrupted the business landscape, forcing enterprises to find new work models to ensure business continuity. There is, however, a silver lining to the otherwise painful episode: support for accelerating 5G adoption.

Several factors are significantly driving the business case for adoption of a faster and more reliable network: the spike in work-at-home arrangements, increased demand for digital delivery of applications and content, and the realization that digital-ready enterprises are better prepared to navigate crises. These factors put 5G firmly at the forefront of future digital transformation.

However, in the medium term, recessionary pressures, constrained capital, and heavy debt may discourage telcos from widespread deployment of public 5G. As a result, telecom providers are increasingly exploring the prospect of private 5G for enterprises as a means of generating steady revenue. Telecom providers’ interest, coupled with enterprise enthusiasm – now truly appreciating the importance of digital readiness and advocating for Industry 4.0 adoption, of which 5G forms the foundation – make now the right time for private 5G adoption.

Effect of COVID-19

Consumer, or B2C, data consumption is likely to increase as social distancing continues, at least until the release of a viable vaccine. Moreover, as firms pivot to digital models and operate virtually, data consumption will continue to rise, establishing a connectivity-centric ecosystem while compounding the load on 4G systems.

To maintain service quality and ease network congestion, telecom providers have begun to invest in 5G networks. These networks can be classified into two broad buckets: public 5G and private 5G. Public 5G refers to the consumer cellular networks deployed across the world for telcos’ B2C customers. Private 5G is a restricted network, typically deployed on an enterprise’s own premises, that takes advantage of 5G’s low latency and high bandwidth to accelerate Industry 4.0 adoption.

Slow progress of public 5G deployments

Most telcos are highly leveraged, given the capital-intensive nature of the business. Some market leaders, such as AT&T, have also taken on additional debt in their pursuit to transform into media conglomerates. Telcos that follow that strategy will most likely prioritize optimization of existing consumer networks to cope with the load and deploy consumer grade 5G only in highly urban pockets where the return on their investment is sizeable.

To increase revenue, we expect a push from telecom operators for B2B private 5G networks.

Rise of private 5G

Private 5G is a means for an enterprise to modernize its internal broadband and wireless communications infrastructure with high-speed 5G cellular networks. A cellular network can provide better coverage within an enterprise’s work location, enable better bandwidth, and support low-latency requirements. Deploying 5G in controlled work environments like shop floors, power plants, and healthcare facilities can also negate some of the key technology issues of mmWave frequencies, such as susceptibility to noise and attenuation and difficulty penetrating dense objects. We expect telcos to aggressively push private 5G use cases as an integral part of Industry 4.0 transformations as they look to improve revenue share from their B2B businesses.

Private 5G adoption by industry

Given its uses and benefits, some industries are likely to leverage private 5G more quickly than others.

  • Healthcare: COVID-19 has highlighted the stark reality of insufficient healthcare personnel to care for patients. As the use of virtual health consultations rises, telemedicine and telehealth will soon become the norm in the industry. With its significantly higher bandwidth and lower latency, private 5G adoption will accelerate with the rest of the virtual healthcare model.
  • Manufacturing and energy & utilities: Manufacturing firms are facing ongoing pressure on demand and production due to lockdowns. With most factories requiring their workforces to work remotely, some of these firms have had to go months without production. To reduce the number of workers on-site, firms will explore automation and adopt technologies such as digital twins, robot assistance, and IoT. Private 5G will enable and accelerate the adoption of these technologies.
  • Media and entertainment: 5G will unlock the potential of immersive reality; for example, stadiums and theme parks are investing in the technology to improve user experience.
  • BFSI: With the movement toward digital and the proliferation of data, the banking industry will explore 5G network slicing, whereby firms can apply specific security policies to various network slices. Moreover, the combination of edge computing and 5G will enable faster and more secure processing of data. For insurance companies, 5G will play an important role in improving customer experience through telemetry.
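
As a simplified illustration of the slicing idea mentioned for BFSI, the policy model below is hypothetical (real slice management follows 3GPP specifications and operator tooling); it only shows the concept of attaching distinct security and latency policies to different slices of one network.

```python
# Illustrative sketch of per-slice policy: a bank isolates payment traffic
# from branch IoT on the same private 5G network, each slice with its own
# security policy. Slice names and policy fields are invented for this sketch.

SLICES = {
    "payments":   {"encryption": "ipsec", "isolation": "dedicated", "max_latency_ms": 10},
    "branch-iot": {"encryption": "tls",   "isolation": "shared",    "max_latency_ms": 100},
}

def admit(slice_name: str, latency_ms: float, encrypted_with: str) -> bool:
    """Admit a flow onto a slice only if it meets that slice's policy."""
    policy = SLICES[slice_name]
    return (encrypted_with == policy["encryption"]
            and latency_ms <= policy["max_latency_ms"])

print(admit("payments", 8, "ipsec"))  # meets the payments slice policy
print(admit("payments", 8, "tls"))    # wrong encryption for this slice
```

The value of slicing is exactly this separation: tightening the payments policy never touches the IoT slice, and vice versa.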

Challenges remain

Despite its promise for many industries, challenges remain for the adoption of private 5G.

  • Capital intensive: No matter the customer (B2B or B2C), 5G remains a capital-intensive transformation. However, the COVID-19 crisis will further strengthen the business case for Industry 4.0. The manufacturing and energy & utilities industries, which have often lagged in the adoption of emerging technologies, will be more inclined to spend on digital following the crisis. The partial shift of this investment to enterprise customers in private 5G will appeal to telcos, which are otherwise supporting high debt.
  • Geopolitics: Political uncertainties also loom large as countries re-examine China’s accountability on cyber security and data privacy. These concerns have been exacerbated by ongoing geopolitical tensions. 5G deployment may see slight delays as these issues cause supply chain disruptions and increase pressure on other global players.

How you can drive 5G adoption in your organization

You can follow a five-stage structured path to drive private 5G adoption in your organization.

  1. Business case: First, you must understand the benefits and challenges of private 5G. It is also imperative to understand the key alternatives to 5G, such as Zigbee, Wi-Fi 6, and Sigfox. You can work with technology partners to identify key use cases for your industry and select effective cases to pilot.
  2. Feasibility study: In this stage, you assess the technologies that could work with your existing landscape and outline the changes that would be necessary to adopt them. You can then run pilots on selected use cases and understand the key parameters that may need adjusting, such as security and scalability. Following the pilot, you must ascertain the business outcomes achievable and model potential ROI.
  3. Pre-implementation: Once you have a fair understanding of the necessary changes, you can begin changing and upgrading your existing landscape to be compatible. Because private 5G is a hardware-intensive and long-term investment, you should plan to spend considerable time in selecting the right vendors for implementation.
  4. Implementation: At this stage, the OEM and service partners begin the network transformation. Because 5G is a disruptive technology, you must give considerable attention to improving processes and change management to realize its full benefits.
  5. Post-implementation: Following initial adoption, you must continuously monitor the technology landscape and assess how you can adopt new use cases to maximize your ROI.

If you wish to learn more about the 5G landscape and private 5G in particular, please connect with us at [email protected], [email protected], and [email protected].

AWS Outpost, Azure Stack, Google Anthos, and IBM Satellite: The Race to Edge-to-Cloud Architecture Utopia | Blog

In earlier blog posts, I discussed chapter 1 and chapter 2 of the cloud wars. Now, let’s look at chapter 3 of this saga, in which cloud vendors want to be present everywhere from the edge to the cloud.

Since the days of mainframes and Java virtual machines, enterprise technology leaders have been yearning for one layer of architecture to build once, deploy anywhere, and bind everything. However, Everest Group research indicates that 85 percent of enterprises believe their technology environment has become more complex and challenging to maintain and modernize.

As the cloud ecosystem becomes increasingly central to enterprise technology, providers like Amazon Web Services, Google Cloud, IBM, and Microsoft Azure are building what they call “Edge-to-Cloud” architectural platforms. The aspiration of these providers’ offerings is to have a consistent architectural layer underpinning different workloads. And the aim is to run these workloads anywhere enterprises desire, without meaningful changes.

Will this approach satisfy enterprises’ needs? The hopes are definitely high as there are some key enablers that weren’t there in earlier attempts.

The architecture is sentient

This is a topic we discussed a few years back in an application services research report. Although evolutionary architecture has been around for some time, architectural practices continue to be rigid. However, the rapid shortening of application delivery cycles does not provide architects with the traditional luxury of months to arrive at the right architecture. Rather, as incremental delivery happens, they are building intelligent and flexible architectures that can sense changing business requirements and respond accordingly.

Open source is the default

As containers and Kubernetes orchestration become the default ways to modernize and build applications, the portability challenge is addressed, at least to some extent. Applications can be built once and ported anywhere, as long as the host supports a compatible OS kernel version.

Multi-cloud is important

We discussed this in earlier research. Regardless of what individual cloud providers believe or push their clients toward, enterprises require multi-cloud solutions for their critical workloads. This strategy requires them to build workloads that are portable across architectures.
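One common way to build that portability is to code workloads against a neutral interface and keep provider specifics behind it. The sketch below uses an in-memory stand-in rather than any real cloud SDK; the class and function names are invented for illustration.

```python
# Minimal sketch of the portability idea: the workload depends on a neutral
# interface, so the same code can target any cloud's object store. The
# in-memory backend is a stand-in, not a real provider SDK.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Neutral storage interface the workload depends on."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in backend; a real deployment would wrap S3, GCS, or Azure Blob
    # Storage behind this same interface.
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # The workload never names a provider, so switching clouds means swapping
    # one constructor, not rewriting business logic.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q1.pdf", b"quarterly report bytes")
print(store.get("reports/q1.pdf"))
```

The design choice here is the classic hexagonal-architecture trade-off: a thin abstraction costs some provider-specific features but keeps critical workloads portable across clouds.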

The workload is abstracted

The stack components for workloads are being decoupled. This decoupling is not only about containerizing the workload but also about building service components that can run on their own stacks. This makes it possible to change parts of a workload when needed and to deploy different components across distributed systems at scale.

With all this, the probability of achieving architectural portability may indeed be different this time. However, enterprise technology leaders need to be aware of and plan for several things:

  • Evaluate the need for one architecture: In the pursuit of operational consistency, organizations should not push their developers and enterprise architects toward a single common underlying architecture. Different workloads need different architectures. The focus should be on seamless orchestration of different architectural choices.
  • Focus on true architectural consistency versus “provider stack” consistency: This consistency issue has been the bane of enterprise technology. Workloads work well as long as the underlying platforms belong to one provider. That is the reason most of the large technology providers are building their own hybrid offerings for Edge-to-Cloud. Although many are truly built on open source technologies, experienced architects know very well that porting workloads across platforms always requires some changes.
  • Manage overwhelming choices: Enterprise architects are struggling with the number of choices they now have across different clouds, terminologies, designs, and infrastructures, which makes their job of building a unified architecture extremely difficult. They need automated solutions that can suggest architectural patterns and choices to drive consistency and reliability.

So, what should enterprise architects now do?

Enterprise architects have long been regarded as the guardians of systems, as they set the broad-based strategic underpinning for technology systems. Going forward, they will be bombarded with more choices and provider-influenced material as they select the underlying platforms on which to build workloads. All of the platforms will be built on a strong open source foundation and will claim interoperability and portability. Architects will need to be discerning enough to understand the nuances and the trade-offs they have to make. At the same time, they should remain open-minded, rather than suspicious, about technology choices. They must have transparent discussions with technology providers, evaluate the offerings against their business needs, and assess the drivers for a unified architecture.

What is your take on unified architecture from Edge-to-Cloud? Please share your thoughts with me at [email protected].

Capturing Business Advantage After The COVID-19 Crisis | Blog

The global economic crash caused by the COVID-19 pandemic is wreaking havoc on businesses. But the pandemic eventually will end, and there will be compelling opportunities at that time. As I explained in my prior blog, companies need to take steps now that enable them to accelerate through the pandemic curve so they can grab opportunities when the pandemic ends. In this blog, I’ll detail how to establish the necessary infrastructure that enables surviving a recession and thriving after the pandemic. This infrastructure is a top priority.

The pandemic is causing a pause in commercial activity for the next few months. But once the pause is over, the underlying fundamentals for moving to digital at scale are still positive. And it will happen quickly at that point – for companies that have the infrastructure for high velocity and productivity.

Read my blog on Forbes

Role Transition for Cloud Vendors in OTT Media Streaming | Blog

Over-the-top (OTT) streaming – or, simply, delivering media content directly over the internet – has redefined the media content consumption landscape. In 2019, the number of active global monthly OTT video subscribers surpassed 750 million, accounting for more than 30 percent of digital video viewers globally.

Cloud vendors have significantly contributed to this exponential growth by providing core cloud-native delivery infrastructure to OTT players at lower costs, making it much easier for them to reach global audiences and dynamically scale their workloads with just a few clicks. In fact, over the years, the role of cloud vendors has shifted from infrastructure providers to prime drivers of technology for the OTT industry – so much so that they now lead media technology innovation altogether.

Initially, cloud vendors’ core offerings comprised storage, processing, transmission, packaging, and transcoding, which enabled OTT players to gain scale, cost, and flexibility benefits. Now, the cloud has become the default infrastructure for OTT delivery. In fact, all of the flagship OTT players have migrated to cloud-based OTT workflows. For example, Netflix completed its migration to Amazon Web Services (AWS) in 2017, and Spotify completed its Google Cloud Platform adoption in 2018.

The major cloud vendors, such as Amazon, Google, and Microsoft, lead the global technology landscape, and they are leveraging their expertise in advanced technologies to offer not only their core functional offerings but also compelling value-added services over the cloud. These value-added services include:

  • Direct content ingestion
    Cloud providers like AWS and Azure offer direct content ingestion capabilities to OTT players, enabling them to either ingest content directly from a camera to a cloud-based management system or stream it live to various platforms. They also enable content creators to shoot videos through smartphones and send them via mobile networks to production and content management systems operating in the cloud, bypassing camcorders and live production trucks.
  • Native language translation
    Cloud providers such as Google offer application programming interfaces (APIs) with natural language processing (NLP) capabilities for native language translation, which allows audiovisual content localization and translation, making it convenient for OTT players to expand their reach globally.
  • AI-powered encoding
    Vendors like AWS and IBM have integrated AI with their cloud-based offerings, and cloud-based OTT workflows intensively leverage AI to provide a better viewing experience. AI helps to better monitor network traffic, improves compression techniques, and offers adaptive encoding techniques to stream HD videos over low bandwidth networks.
  • Video indexing
    Video indexing services, such as Azure’s video indexer, automatically extract advanced metadata from audio and video content, including spoken words, written text, faces, brands, and scenes. OTT players can leverage the extracted data to generate insights and increase the discoverability of their content, improve the user experience, and enhance monetization opportunities.
  • Advanced targeting
    Cloud providers like AWS and IBM leverage advanced analytics services such as device ID-based content tagging to provide recommendations for better viewer targeting, which enables advertisers to reach out to specific, targeted, and identified audiences. OTT players can utilize these recommendations for better content monetization.
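
To make the native language translation workflow above concrete, here is a minimal sketch of subtitle localization. In production, the `translate` callable would wrap a cloud translation API such as Google Cloud Translation; here a stub dictionary stands in so the sketch runs without credentials. The sample subtitles and the stub translator are illustrative assumptions, not from the original post.

```python
# Sketch: localizing SRT subtitles with a pluggable translate() callable.
# A cloud translation API would back translate() in practice; the stub
# dictionary below is a placeholder so the example is self-contained.

from typing import Callable

SAMPLE_SRT = """1
00:00:01,000 --> 00:00:03,000
Hello

2
00:00:04,000 --> 00:00:06,000
Goodbye"""

def localize_srt(srt: str, translate: Callable[[str], str]) -> str:
    """Translate only the text lines of an SRT file, leaving cue
    numbers, timestamps, and blank separator lines untouched."""
    out = []
    for line in srt.splitlines():
        stripped = line.strip()
        is_meta = stripped.isdigit() or "-->" in stripped or not stripped
        out.append(line if is_meta else translate(line))
    return "\n".join(out)

# Stub English-to-Spanish translator for demonstration only.
stub = {"Hello": "Hola", "Goodbye": "Adiós"}.get
print(localize_srt(SAMPLE_SRT, lambda s: stub(s, s)))
```

Keeping the translation step behind a callable makes it easy to swap one provider's API for another, which matters as OTT players expand into new regions.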

These additional services have become core differentiators for cloud vendors versus traditional Independent Software Vendors (ISVs) that offer media streaming solutions. They’re also enabling OTT players to create true differentiation in their offerings. Additionally, the cloud has become a great leveler for players entering the OTT market relatively late, as it provides them cutting-edge technology at the click of a button, saving them precious time in getting up and running.

It will be interesting to see how the market shapes up in the next 12-18 months, as more content and production houses set up their own OTT platforms, making the already intense battle for viewer acquisition and retention even fiercer.

For more industry-leading insights on the OTT industry, please reach out to Akshat Vaid and Shivank Narula.

Impact Of Coronavirus Threat To The IT Services Industry | Blog

Clearly, the coronavirus (COVID-19) already has an impact on the global economy and broader market. Companies are cancelling conferences and events. They are closing their campuses to outsiders. Travel is restricted. And in some instances, companies are imposing work-from-home policies. In the IT and BPO services industry, decision-making is stalled, and we already see clients cancelling planned contracts.

How bad is the disruption to the services industry? It will have a negative impact on revenue growth in the next quarter (ending in June 2020) and potentially in several quarters to come. New projects will be postponed or cancelled. This is because, first, companies simply cannot buy complicated services without some form of travel. Second, any large initiative requires executive support and energy, and executives won’t have time to push contracts forward during the next few months.

Read more in my blog on Forbes

Visit our COVID-19 resource center to access all our COVID-19 related insights.

Let the Cloud Wars Begin: Notes from Oracle OpenWorld Europe 2020

Oracle held the European edition of its flagship event, OpenWorld, in London recently. Against the backdrop of cloud wars, leadership changes in the ecosystem (Mark Hurd’s untimely demise and the change of guard at SAP), and blazing growth by hyperscalers (the two boutique firms in Seattle), the market is keenly watching what Oracle has in store.

Here are my take-aways from the event.

1. Cloud FOMO: Oracle is investing heavily in its datacenter footprint and expects to have 36 regions by the end of the year, with a datacenter opening every 23 days. It claims it will have more regions than AWS by the end of 2020.

[Exhibit 1]

This is turning out to be a common trend among hyperscalers and cloud vendors, creating an asset bubble. Capital spending is at an all-time high, as the exhibit below shows. Will this create further price wars and overcapacity in the market? Only time will tell.

[Exhibit 2]

2. Doubling down on data: Oracle announced a slew of initiatives aimed at infusing data and, to a lesser extent, AI across its offering stack:

    • Expanded DataFox’s data pool across AI and managed data. Oracle acquired DataFox in 2018 because of its sizable data assets covering ~2.8 million public and private businesses to enable predictive decision making. Now, DataFox natively integrates across the Oracle SaaS stack, sourcing over a billion data points annually to improve the data quality of Eloqua and Sales Cloud as well as third-party applications.
    • Launched a new Oracle Cloud Data Science Platform to build and deploy AI and ML models.
    • Expanded its Autonomous Database offering to support the integration of algorithms within databases and added new ML capabilities, with support for Python and automated ML.

3. Ecosystem bets in a multi-cloud world are crucial: Oracle is now sharpening its focus on partnerships and the ecosystem to compete in the multi-cloud environment – this is on the back of its Azure and VMware partnerships. With Microsoft Azure, it announced a new interconnect facility based in Amsterdam. Because Amsterdam is a crucial European datacenter location and hub for Oracle, this facility will help companies in the region share cross-application data and move on-premise workloads to the cloud, according to Oracle.

4. Cloud interoperability – are we there yet?: With Google Anthos and Azure Arc, interoperability is back in the conversation. While the partnership with Azure did highlight some degree of interoperability progress, I didn’t see enough. This is likely a prickly concern for enterprises as cloud vendors start erecting their own walled fortresses, hindering true interoperability. We have opined on cloud interoperability before, and it’s going to be a key issue for the ecosystem to solve over the next 18-24 months, especially as the cloud-native conversations gather momentum.

5. The dawn of the new CEO mindset: One of the highlights of the event was a client showcase. The CEO of Italian coffee major illycaffè, Massimiliano Pogliani, spoke to Oracle CEO Safra Catz about a critical aspect of modern business – the changing role of the new CEO. He described it as being the activator of collective intelligence across the organization’s human capital. He also described his company’s mission around three themes: good (product obsession), goodness (sustainability), and beauty (the experience). We are seeing greater recognition by some forward-looking CEOs of their purpose and impact, including Novartis CEO Vas Narasimhan’s focus on the journey to unboss and Salesforce chief Marc Benioff’s call for a new type of capitalism.

The cloud landscape is becoming very interesting as all segments attack the opportunity: hyperscalers continue to invest in expanding their datacenter footprint; enterprise platform providers are focusing on verticalization (e.g., ServiceNow under Bill McDermott, Salesforce acquiring Vlocity); and system integrators are trying to keep up with the massive implementation opportunity while battling a talent shortage. We are going to see share shifts as these changes gather steam.

From an enterprise perspective, the cloud conversation is now veering toward journey-in-the-cloud versus journey-to-the-cloud, aka lift-and-shift. This shift is bringing total cost of ownership (TCO) back into the picture. We are in for interesting times ahead.
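
To make the TCO point concrete, here is a back-of-the-envelope sketch. All figures are hypothetical placeholders chosen only to illustrate the trade-off, not drawn from the post: lift-and-shift is cheap to execute but carries legacy run costs into the cloud, while re-architecting costs more upfront and less in steady state.

```python
# Illustrative TCO comparison: lift-and-shift vs. cloud-native re-architecture.
# Every number below is a hypothetical placeholder for illustration.

def three_year_tco(migration_cost: int, annual_run_cost: int, years: int = 3) -> int:
    """Simple TCO model: one-time migration cost plus recurring run costs."""
    return migration_cost + annual_run_cost * years

# Lift-and-shift: low migration effort, legacy-sized cloud bill.
lift_and_shift = three_year_tco(migration_cost=200_000, annual_run_cost=500_000)

# Re-architected: higher upfront investment, lower steady-state run cost.
re_architected = three_year_tco(migration_cost=600_000, annual_run_cost=300_000)

print(lift_and_shift)   # 1700000
print(re_architected)   # 1500000
```

Even this toy model shows why the journey-in-the-cloud conversation brings TCO back into the picture: the cheaper migration path is not necessarily the cheaper path over a multi-year horizon.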

What’s your take on today’s cloud wars? Please share your thoughts with me at [email protected].
