Category: Cloud & Infrastructure

GAIA-X Summit 2020: Key Takeaways and the Future of Data Sovereignty in Europe | Blog

GAIA-X Summit 2020: Key Takeaways and the Future of Data Sovereignty in Europe

In an earlier blog on sovereign cloud, we explained how GAIA-X – an ambitious project led by France and Germany, aimed at creating a high-performance and trustworthy data infrastructure for Europe – is set to play a pivotal role in the evolution of the European cloud market in the coming years. Since then, GAIA-X has experienced a significant increase in the number of member firms, cutting across geographies and industry verticals, in its endeavor to secure data sovereignty.

We participated in the GAIA-X summit, which was held virtually on November 18 and 19, 2020, and highlighted the increasing relevance of GAIA-X both within and outside Europe and for the broader Industry 4.0 agenda. Here are some of the notable insights from the summit.

GAIA-X has experienced a significant increase in member firms and countries – including non-European countries

GAIA-X was launched with 22 member firms (11 French, 11 German) and an initial focus on France and Germany, but, within one year, it has grown into a colossal multi-industry initiative with over 181 members from 18 countries.

Notably, on October 15, 2020, all 27 EU member states signed a joint declaration supporting the European Commission’s cloud and data strategy, which mentioned GAIA-X as a leading example of a public-private initiative for European-federated data infrastructure.

Economists believe GAIA-X will create long-term economic benefits for Europe

The GAIA-X project is expected to deliver long-term benefits for European countries; several economists, including Jacques Crémer, are betting on a better data sharing platform to reduce transaction costs. The key ways in which GAIA-X can help improve the European economy are:

  1. Better data sharing platforms will result in quality AI training data and, thus, a better AI-enabled infrastructure, propelling more automation and cost savings
  2. It will reduce codependence among subcontractors by curbing the use of proprietary data exchange methodologies


GAIA-X will help better implement European compliance laws

Compliance and data sovereignty are among the key goals of GAIA-X, and the project is expected to address several challenges related to European laws. Because it is designed around the core European values of openness, transparency, and collaboration, GAIA-X is expected to help better implement European laws, along with the draft European Data Governance Act, which will play a crucial role in defining data flows in Europe.

GAIA-X can play a key role in enabling infrastructure for Industry 4.0

Industries in Europe are in a transformative state, with increasing emphasis on data access, data security, and data sovereignty. The next step in IoT is AI-enabled IoT (AIoT), and industry experts are confident that GAIA-X will enable the data infrastructure needed for this transformation.

Several potential use cases across multiple data ecosystems – within and outside Europe

GAIA-X already has use cases across multiple industries and sectors, including finance, health, public sector, agriculture, energy, mobility, smart living, and Industry 4.0. Some of these use cases include collaborative condition monitoring, shared production, financial big data clusters, smart health connect, smart infrastructure management, digital parking management, and agricultural ecosystem, among many others.

Cloud adoption in Europe will increase through a collaborative data ecosystem

A key value proposition of GAIA-X is a collaborative data ecosystem that can help build a structure with innovation at its core. GAIA-X aims to increase cloud usage in the EU from 24% to 60% in the coming years by developing common standards with other cloud providers that will accelerate market uptake and simultaneously enhance data portability.

Strong financial support for GAIA-X to help the EU tackle data sovereignty and data privacy challenges

The German government is planning to make EUR200 million available for GAIA-X, while the EU is planning to invest EUR2 billion in the digital transformation space, some of which will find its way to the GAIA-X project.

With access to these funds, GAIA-X will emerge as a strong competitor to hyperscalers in Europe and will impact the overall cloud market. The GAIA-X roll-out will further help the EU address the core issues of data privacy and data sovereignty – which have long been a challenge.

Consolidating successes and looking ahead

We believe that GAIA-X has experienced significant growth in the last year, primarily for the following reasons:

  • The project has a strong legal framework with open source software promoting transparency
  • Companies across industries have committed to it
  • It is starting to build a strong customer base with use cases identified, again across industries

We expect GAIA-X to continue its momentum in 2021, driven by the European Data Governance Act and constant legal battles between US tech giants, such as Facebook, Google, and Apple, and the EU. The project also plans to launch six regional hubs by the end of 2020, while another six are in the pipeline for the first half of 2021. These regional hubs will act as entry points for anyone who wants to use GAIA-X and will further strengthen GAIA-X’s data sovereignty proposition, even as more firms and countries hop on the GAIA-X bandwagon.

If you’d like to know more about the GAIA-X project or have any questions or observations, please write to us at [email protected] or [email protected].

Sustainable Business Needs Sustainable Technology: Can Cloud Modernization Help? | Blog

Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.

Current practices are not aligned to sustainable technology

The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:

  • Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive back-up to counter failure events, essentially multiplying electricity consumption.
  • Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Developers, QA, security, and pre-production teams ended up building their own environments. Yet, whenever systems were built, the entire infrastructure had to be configured even though the team needed only 10-20% of it. Thus, most of the electricity consumption ended up powering capacity that wasn’t needed at all.
  • Run: Operations teams have to make do with what the upstream teams have given them. They can’t take down systems to save power on their own as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning that they were incentivized to make systems “over available” even when they weren’t being used. The run teams didn’t – and still don’t – have real-time insights into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power consumption.

The role of cloud modernization in building a sustainable technology ecosystem

In February 2020, an article published in the journal Science suggested that, despite digital services from large data center and cloud vendors growing sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.

Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.

Cloud 1.0 and 2.0: By now, many architects have used cloud’s runtime access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, as cloud instances can be provisioned quickly, they are – by design – fault tolerant, so they don’t rely on excessive back-up systems. They can be designed to go down, with a back-up that comes online immediately rather than running forever. The development, test, and operations teams can provision infrastructure as and when needed, and shut it down when their work is completed.
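The provision-and-shut-down pattern for non-production environments can be sketched in a few lines of scheduling logic. This is a hypothetical illustration – the environment names, tiers, and working hours are assumptions, and the output would feed whatever stop-instance API your provider offers:

```python
from datetime import time

# Assumed working hours for illustration only
WORK_START, WORK_END = time(8, 0), time(19, 0)

def environments_to_stop(envs, now):
    """Return names of environments that are safe to power off.

    envs: list of dicts like {"name": "erp-test", "tier": "non-prod", "running": True}
    now:  a datetime.time representing the current time of day
    """
    in_work_hours = WORK_START <= now <= WORK_END
    stop = []
    for env in envs:
        # Keep production up; power off running non-prod systems outside working hours
        if env["tier"] == "non-prod" and env["running"] and not in_work_hours:
            stop.append(env["name"])
    return stop
```

Run nightly from a scheduler, a routine like this turns the “shut it down when work is completed” idea into policy rather than relying on individual teams to remember.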

Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads treating failure as an essential element to be tackled through orchestration of runtime cloud resources, instead of relying on traditional failover methods that promote over-consumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model, in which the application code and infrastructure lie idle and come online only when needed. A while back I wrote a blog that talks about how AI can be used to compose an application at run time instead of it always being available.
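The event-driven model described above can be sketched as a small handler registry: code runs only for the lifetime of an event, rather than as an always-on service. The event shape and handler names here are illustrative assumptions, not any specific vendor’s function-as-a-service API:

```python
# Registry mapping event types to handlers; handlers stay idle until dispatched
HANDLERS = {}

def on_event(event_type):
    """Register a function to run only when a matching event fires."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on_event("order.created")
def reserve_inventory(payload):
    # Business logic comes online only for the duration of this call
    return {"order_id": payload["order_id"], "status": "reserved"}

def dispatch(event):
    """Wake the matching handler for an incoming event, then let it go idle."""
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return None  # nothing provisioned for this event type
    return handler(event["payload"])
```

In a managed functions service the platform plays the role of `dispatch`, so no compute is consumed between events.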

Server virtualization played an important role in reducing power consumption. However, now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions. Though cloud sprawl is stretching the operations teams, newer automated monitoring tools are becoming effective in providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances, which enables the operations team to focus on automating and optimizing, instead of managing systems that are always on.
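The “kill zombie instances” idea above can be sketched as a simple check over monitoring metrics: flag instances whose recent CPU utilization never rose above a floor, so an automation step can stop them. The thresholds and metric shape are illustrative assumptions:

```python
def find_zombies(metrics, cpu_threshold=5.0, min_samples=6):
    """Return ids of instances that look idle (candidate zombies).

    metrics: dict of instance_id -> list of recent CPU-% samples
    cpu_threshold: max CPU % below which an instance counts as idle
    min_samples: require enough history before judging an instance
    """
    zombies = []
    for instance_id, samples in metrics.items():
        # Only flag instances with a full window of consistently low CPU
        if len(samples) >= min_samples and max(samples) < cpu_threshold:
            zombies.append(instance_id)
    return sorted(zombies)
```

Wired into the real-time monitoring view the operations team already has, the output becomes a stop list, letting the team automate and optimize instead of babysitting always-on systems.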

Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on movement of workload from on-premises to the cloud. And it doesn’t usually consider architectural changes that can reduce power consumption, even if it’s a client-owned cloud platform.

When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. However, enterprises can only think of building a sustainable business through sustainable technology when they’ve adopted cloud modernization as a potent force to reduce power and carbon emission.

This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emission, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected].

Demystifying Industry Cloud | Blog

Microsoft recently rolled out its first industry cloud, Microsoft Cloud for Healthcare, combining capabilities across Dynamics 365, Microsoft 365, Power Platform, and Azure to help improve care management and health data insights. Not so long ago, SAP’s CEO, Christian Klein, counted the company’s newly launched industry cloud among its growth drivers, and rightly so. Since its launch, SAP’s industry cloud has seen a long line of suitors lining up to partner and build different vertical cloud offerings on top of it. In June 2020, there was Deloitte, then Accenture, and, most recently, Wipro.

Having analyzed this specific market trend throughout 2020, we at Everest Group have realized that, with all kinds of cloud providers jumping on to the industry cloud bandwagon, confusion abounds on what truly is an industry cloud.

So, what’s the buzz about? What is an industry cloud?

Simply put, an industry-specific/vertical cloud is a cloud solution that has been optimized and customized to fit the typical nuances and specific needs of a particular industry vertical’s customers. It is designed to tackle industry-specific constraints such as data protection, retention regulations, and operations with mission-critical applications.

We believe that a true industry cloud solution is characterized by five layers: the infrastructure and platform layers, followed by an application layer, which is further supplemented by differentiation and customization layers.

The infrastructure layer, dominated by industry-agnostic IaaS players such as AWS, provides the hardware, network, scalability, and compute resources. The platform layer, such as Azure’s PaaS offering, is built over this infrastructure layer and provides the environment for building, testing, and deploying applications.

The application layer comprises a horizontal cloud application such as Salesforce CRM and, in several cases, hosts development platforms, such as the Salesforce App Cloud, that become the marketplace for building additional functionalities.

The differentiation layer adds vertical nuances to a horizontal application such as built-in industry regulatory compliance. It is here that we see the industry cloud taking shape, but the offerings are still standard and not customized to the needs of specific enterprises.

The customization layer brings in service providers or technology vendors with their vertical expertise and decades of experience in working with enterprises. They partner with providers of differentiated cloud offerings, and build tools and accelerators to further customize them to suit enterprise needs, adding capabilities such as AI and security, personalized dashboards for analytics, and integration services to build a truly industry-specific/vertical cloud offering.

We have illustrated this architecture in the exhibit below, taking the example of Veeva Systems, a popular provider of industry cloud solutions for life sciences. Veeva started as a CRM application built on the Salesforce platform, designed specifically for the pharmaceutical and biotechnology industries. Salesforce provided the infrastructure, platform, and application layers, while Veeva added a differentiation layer with a data model, application logic, and user interface tailored to support the pharmaceutical and biotechnology industries. It leveraged the standard Salesforce reporting, configuration, and maintenance capabilities.

Exhibit: Understanding the industry cloud architecture through Veeva industry cloud


Over time, Veeva has cultivated an ecosystem of partnerships, including service providers such as Accenture (Veeva CRM partner) and Cognizant (Veeva Vault partner). These partners leverage the Veeva development platform to build additional applications customized to enterprise needs, thereby adding the final customization layer to Veeva’s solutions suite.

Industry cloud is gaining significant traction among industries

Industries such as healthcare and banking – which require rapid and auditable deployment of new features or functionalities to comply with regulatory changes – are rapidly adopting industry cloud. Healthcare continues to lead the charge from a vertical standpoint, but many industries are experiencing an uptick in adoption, including manufacturing, financial services, and retail.

The key reasons for this growth are:

  • Lowered barriers to cloud adoption, along with a ready-to-use environment with tools and services tailored to a specific vertical’s operational requirements
  • Accelerated innovation, lower costs, and reduced risks
  • Efficient handling of data sources and workflows, and compliance with the industry’s unique standards
  • Support for industry-standard APIs, helping companies connect more easily and securely, accelerating the digital transformation (DX) economy
  • Access to industry insights or benchmarks through data aggregation from multiple clients within the same industry

What can organizations expect in the near future?

With the one-cloud-fits-all approach reaching maturity, the next decade will be marked by the depth of vertical expertise and customization capabilities that can complement existing applications and address customer pain points. Different kinds of vendors are developing industry cloud solutions, ranging from hyperscalers such as Azure and GCP to vertical-specific players such as Veeva. We will cover the industry cloud market in further detail in parts 2 and 3 of this blog series, in which we will answer the following questions:

  • What are the different kinds of industry cloud solution providers and their go-to-market strategies?
  • What should enterprises do, and how should they source industry cloud solutions that best suit their needs?

The battle for industry cloud is only going to get fiercer in the near future. Please follow this space for more blogs on the emerging war and warriors in the industry cloud market. Meanwhile, please feel free to reach out to [email protected] or [email protected] to share any questions and your experiences.


Cloud Wars Chapter 5: Alibaba, IBM, and Oracle Versus Amazon, Google, and Microsoft. Is There Even a Fight? | Blog

Few companies in the history of the technology industry have aspired to dominate the way public cloud vendors Microsoft Azure, Amazon Web Services, and Google Cloud Platform currently do. I’ve covered the MAGs’ (as they’re collectively known) ugly fight across other blogs on industry cloud, low-code, and market share.

However, enterprises and partners increasingly appear to be demanding more options. That’s not because these three cloud vendors have done something fundamentally wrong or their offerings haven’t kept pace. Rather, it’s because enterprises are becoming more aware of cloud services and their potential impact on their businesses, and because Alibaba, IBM, and Oracle have introduced meaningful offerings that can’t be ignored any longer.

What’s changed?

Our research shows that enterprises have moved only about 20 percent of their workloads to the cloud. They started with simple workloads like web portals, collaboration suites, and virtual machines. After this first phase of their journey to the cloud, they realized that they needed to do a significant amount of preparation to be successful. Many enterprises – some believe more than 90 percent – have repatriated at least one workload from public cloud, which opened enterprise leaders’ eyes to the importance of fit-for-purpose in addition to a generic cloud. So, before they move more complex workloads to the cloud, they want to be sure they get their architectural choices and cloud partner absolutely right.

Is the market experiencing public cloud fatigue?

When AWS is clocking over US$42 billion in revenue and growing at about 30 percent, Google Cloud has about US$15 billion in revenue and is growing at over 40 percent, and Azure is growing at over 45 percent, it’s hard to argue that there’s public cloud fatigue. However, some enterprises and service partners believe these vendors are engaging in strong-arm tactics to own the keys to the enterprise technology kingdom. In the race to migrate enterprise workloads to their cloud platforms, these vendors are willing to proactively invest millions of dollars – on the condition that the enterprise goes all-in and builds architecture for workloads on their specific cloud platform. Notably, while all these vendors extol multi-cloud strategies in public, their actual commitment is questionable. At the same time, this isn’t much different than any other enterprise technology war where the vendor wants to own the entire pie.

Enter Alibaba, IBM, and Oracle (AIO)

In an earlier blog, I explained that we’re seeing a battle between public cloud providers and software vendors such as Salesforce and SAP. However, this isn’t the end of it. Given enterprises’ increasing appetite for cloud adoption, Alibaba, IBM, and Oracle have meaningfully upped the ante on their offerings. The move to the public cloud space was obvious for IBM and Oracle, as they’re already deeply entrenched in the enterprise technology landscape. While they probably took a lot more time than they should have in building meaningful cloud stories, they’re here now. They’re focused on “industrial grade” workloads that have strategic value for enterprises and on building open source as core to their offering to propagate multi-cloud interoperability. IBM has signed multiple cloud engagements with companies including Schlumberger, Coca Cola European Partners, and Broadridge. Similarly, Oracle has signed with Nissan and Zoom. And Oracle, much like Microsoft, has the added advantage of having offerings in the business applications market. Alibaba, despite its strong focus on China and Southeast Asia, is increasingly perceived as one of the most technologically advanced cloud platforms.

What will happen now, and what should enterprises do?

As enterprises move deeper into their cloud journeys, they must carefully vet and bet on cloud vendors. As infrastructure abstraction rises to a fever pitch with serverless, event-driven applications, and functions-as-a-service, it becomes easier to meet the lofty ideals of Service Oriented Architecture – a fully abstracted underlying infrastructure – which is what a true multi-cloud environment also embodies. The cloud vendors realize that, as they provide more abstracted infrastructure services, they risk being easily replaced by API calls that applications can make to other cloud platforms. Therefore, cloud vendors will continue to build high-value services that are difficult to switch away from, as I argued in a recent blog on multi-cloud interoperability.

It appears the MAGs are going to dominate this market for a fairly long time. But, given the rapid pace of technology disruption, nothing is certain. Moreover, having alternatives on the horizon will keep MAGs on their toes and make enterprise decisions with MAGs more balanced. Enterprises should keep investing in in-house business and technology architecture talent to ensure they can correctly architect what’s needed in the future and migrate workloads off a cloud platform when and if the time comes. Enterprises should also realize that preferring multi-cloud and actually building internal capabilities for multi-cloud are two very different things. In the long run, most enterprises will have one strategic cloud vendor and two to three others for purpose-built workloads. However, they shouldn’t be suspicious of the cloud vendors and shouldn’t resist leveraging the brilliant native services MAGs and AIO have built.

What has your experience been working with MAGs and AIO? Please share with me at [email protected].

Reflections on Cloudera Now and the Battle for Data Platform Supremacy | Blog

The enterprise data market is going through a pretty significant category revision, with native technology vendors – like Cloudera, Databricks, and Snowflake – evolving, cloud hyperscalers increasingly driving enterprises’ digital transformation mandates, and incumbent vendors trying to remain relevant (e.g., the 2019 HPE-MapR deal). This revision has led to leadership changes, acquisitions, and interesting ecosystem partnerships. Is data warehousing the new enterprise data cloud category that will eventually be a part of the cloud-first narrative?

Last month I attended Cloudera Now, Cloudera’s client and analyst event. Read on for my key takeaways from the event and let me know what you think.

  • Diversity and data literacy come to the forefront: Props to Cloudera for addressing key issues up front. In the first session, CEO Rob Bearden and activist and historian Dr. Mary Frances Berry had an honest dialogue about diversity and inclusion in tech. More often than not, tech vendors pay lip service to these issues of the zeitgeist, so it was a refreshing change to see the event kicking off with this important conversation. During the analyst breakout, Rob also took questions on data literacy and how crucial it is going to be as Cloudera aims to become more meaningful to enterprise business users against the backdrop of data democratization.
  • Cloudera seems to be turning around, slowly: After a tumultuous period following its merger with Hortonworks in early 2019, Cloudera has new, yet familiar, leaders in place, with Rob Bearden (previously CEO of Hortonworks) taking over the CEO reins in January 2020. The company reported its FYQ2 2021 results a few weeks before the event, and its revenue increased 9 percent over the previous quarter, its subscription revenue was up 17 percent, and its Annualized Recurring Revenue (ARR) grew 12 percent year-over-year. ARR is going to be really key for Cloudera to showcase stickiness and client retention. While its losses narrowed in FYQ2 2021, it has more ground to cover on profitability.
  • Streaming and ML will be key bets: As the core data warehousing platform market faces more competition, it is important for Cloudera to de-risk its portfolio by expanding revenue from emerging high-growth spend areas. It was good to see streaming and Machine Learning (ML) products growing faster than the company. In early October, it also announced its acquisition of Eventador, a provider of cloud-native services for enterprise-grade stream processing, to further augment and accelerate its own streaming platform, named DataFlow. The aim is to bring this all together through Shared Data Experience (SDX), Cloudera’s integrated offering for security and governance.
  • We are all living in the hyperscaler economy: Not surprisingly, there was a fair share of discussion around the increasing role of the cloud hyperscalers in the data ecosystem. The hyperscalers’ appetite is voracious; while the likes of Cloudera will partner with these cloud vendors, competition will increase, especially on industry-specific use cases. Will one of the hyperscalers acquire a data warehousing vendor? One can only speculate.
  • Industry-specificity will drive the next wave of the platform growth story: I’ve been saying this for a while – clients don’t buy tools, they buy solutions. Industry-context is becoming increasingly important, especially in more regulated and complex industries. For example, after its recent Vlocity acquisition, Salesforce announced Salesforce Industries to expand its industry product portfolio, providing purpose-built apps with industry-specific data models and pre-built business processes. Similarly, Google Cloud has ramped up its industry solutions team by hiring a slew of senior leaders from SAP and the industry. For the data vendors, focusing on high impact industry-led use cases – on their own and with partners – will be key to unlocking value for clients and driving differentiation. Cloudera showcased some interesting use cases for healthcare and life sciences, financial services, and consumer goods. Building a long-term product roadmap here will be crucial.

By happenstance, the Cloudera event started the same day its primary competitor, cloud-based data warehousing vendor Snowflake, made its public market debut and more than doubled on day one, making it the largest-ever software IPO. Make of that what you will, but to me it is another sign of the validation of the data and analytics ecosystem. Watch this space for more.

I’d enjoy hearing your thoughts on this space. Please email me at: [email protected].

Full disclosure: Cloudera sent a thoughtful package ahead of the event, which included a few fine specimens from the vineyards in La Rioja. I can confirm I wasn’t sampling them while writing this.

IBM Splits Into Two Companies | Blog

IBM announced this week that it is spinning off its legacy Managed Infrastructure business into a new public company, thus creating two independent companies. I highly endorse this move and, in fact, advocated it for years. IBM is a big, successful, proud organization. But it has been apparent for years that it faced significant challenges in trying to manage two very different businesses and operate within two very different operating models.

Read more in my blog on Forbes

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. The platform takes care of the application lifecycle as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying complexity hidden in the background. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – almost like mirroring the good old What You See Is What You Get (WYSIWYG) tools of the HTML age.

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need to have other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that will drive better stickiness to their cloud platform. And as we all know, nothing drives stickiness better than an application built over a platform. This understanding implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, necessary financial investments, and rapid changes in business demand, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of data for their clients, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness because low-code platforms are notorious for “non-interoperability” – it’s very difficult to migrate from one to another. But this isn’t just about stickiness. It’s one more initiative in the journey toward owning enterprises’ technology spend by building a cohesive suite of services. In GCP’s case, it’s realizing that its API-centric assets offer a goldmine for stitching together applications. For example, Apigee helps in exposing APIs from different sources and platforms. Then GCP’s AppSheet, which it acquired last year, can use the data exposed by the APIs to build business applications. Microsoft, on the other hand, is focusing on its Power Platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there’s no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on cloud hyperscalers, and now – with a competing offering in play – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up with the mega cloud vendors’ increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys, and this additional complexity further exacerbates their problems. Many enterprises make their infrastructure teams lead the cloud journey. But these teams don’t necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right, but they can also wreak havoc in an enterprise environment. Enterprises are already struggling to track and make sense of their cloud spend, and cloud vendors introducing low-code platform offerings into their cloud service mix will make that task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Cloud Native is Not Enough; Enterprises Need to Think SDN Native | Blog

Over the past few years, cloud-native applications have gained significant traction within organizations. These applications are built to work best in a cloud environment using microservices architecture principles. Everest Group research suggests that 59 percent of enterprises have already adopted cloud-native concepts in their production set-up. However, most enterprises continue to operate traditional networks that are slow and stressed by data proliferation and the increase in cloud-based technologies. Much like traditional datacenters, these legacy networks limit the benefits of cloud-native applications.

SDN is the network corollary to cloud

The network corollary to a cloud architecture is a Software-Defined Network (SDN). An SDN architecture decouples the network control plane from the forwarding plane to enable policy-based, centralized management of the network. In simpler terms, an SDN architecture enables an enterprise to abstract away from the physical hardware and control the entire network through software.
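To make the control/forwarding split concrete, here is a minimal conceptual sketch. The class names and the packet-in flow are illustrative assumptions for explanation only, not the API of any real SDN controller such as OpenDaylight or ONOS:

```python
class Controller:
    """Control plane: centralized, policy-based rule installation."""
    def __init__(self, policy):
        self.policy = policy  # hypothetical policy: destination -> output port

    def packet_in(self, switch, dst):
        # On a table miss, the switch asks the controller for a decision;
        # the controller programs the switch's data plane accordingly.
        switch.flow_table[dst] = self.policy.get(dst, "drop")


class Switch:
    """Data (forwarding) plane: forwards packets using installed flow rules."""
    def __init__(self, name, controller):
        self.name = name
        self.flow_table = {}  # destination -> output port
        self.controller = controller

    def forward(self, dst):
        if dst not in self.flow_table:
            # Table miss: defer to the centralized control plane
            self.controller.packet_in(self, dst)
        return self.flow_table[dst]


ctrl = Controller(policy={"10.0.0.2": "port1"})
edge = Switch("edge-1", ctrl)
print(edge.forward("10.0.0.2"))  # first packet triggers a packet-in, then forwards via port1
```

The point of the sketch is the separation of concerns: the switch holds no policy of its own, and changing network behavior means updating one controller rather than reconfiguring every device.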

Current SDN performance is sub-optimal

Most current SDN adoption is an afterthought, offering limited benefits, much like a lift-and-shift of applications to the cloud. Challenges with current SDN adoptions include:

  • Limited interoperability – Given the high legacy presence, enterprises find themselves running a hybrid legacy-and-SDN infrastructure, which is very difficult to manage.
  • Limited scalability – As the environment is not designed to be SDN native, applications end up placing a high volume of networking requests on the SDN controller, limiting data flow.
  • Latency issues – Separate control and data planes can introduce latency in the network, especially in very large networks. Organizations need to carry out significant testing activities before any SDN implementation.
  • Security issues – The ad hoc nature of current SDN adoption means that the entire network is more vulnerable to security breaches due to the creation of multiple network segments.

SDN native is not about applications but about the infrastructure

Unlike cloud native, which is more about how applications are architected, being SDN native is about architecting the enterprise network infrastructure to optimize the performance of modern applications running on it. While sporadic adoption of SDN might also deliver certain advantages, an SDN-native deployment requires organizations to overhaul their legacy infrastructure and adopt the SDN-native principles outlined below.

Principles of an SDN-native infrastructure

  • Ubiquity – An SDN-native infrastructure needs to ensure that there is a single network across the enterprise that can connect any resource anywhere. The network should be accessible from multiple edge locations supporting physical, cloud, and mobile resources.
  • Intelligence – An SDN-native infrastructure needs to leverage AI and ML technologies to act as a smart software overlay that can monitor the underlay networks and select the optimum path for each data packet.
  • Speed – To reduce time-to-market for new applications, an SDN-native infrastructure should have the capability to innovate and make new infrastructure capabilities instantly available. Benefits should be universally spread across regions, not limited to specific locations.
  • Multi-tenancy – An SDN-native infrastructure should not be limited by the underlay network providers. Applications should be able to run at the same performance levels regardless of any changes in the underlay network.

Recommendations on how you can become an SDN-native enterprise

Similar to the concept of cloud native, the benefits of an SDN-native infrastructure cannot be gained by simply porting existing network software and bolting it onto the cloud. You need to build a network-native architecture with these principles ingrained in its DNA from the very beginning. However, most enterprises already carry the burden of legacy networks and cannot overhaul them in a day.

So, we recommend the following approach:

  • Start small but think big – Most enterprises start their network transformation journeys by adopting SD-WAN in pockets. This approach is fine to begin with, but to become SDN native, you need to plan ahead with the eventual aim of making everything software-defined in the minimum possible time.
  • Time your transformation right – Your network is a tricky IT infrastructure component to disturb when everything is working well. However, every three to four years, you need to refresh the network’s hardware components. You should plan to use this time to adopt as much SDN as possible while ensuring that you follow SDN-native principles.
  • Leverage next-gen technologies – To follow the principles of SDN-native infrastructure, you need to make use of multiple next-generation technologies. For example, edge computing is essential to ensure ubiquity, AI/ML for intelligence, NetOps tools for speed, and management platforms for multi-tenancy.
  • Focus on business outcomes – The eventual objective of an SDN-native infrastructure is better business outcomes; this objective should not get lost in the technology upgrades. The SDN-native infrastructure should become an enabler of cloud-native application implementation within your enterprise to drive business benefits.

What has your experience been with adoption of an SDN-native infrastructure? Please share your thoughts with me at [email protected].

For more on our thinking on SDN, see our report, Network Transformation and Managed Services PEAK Matrix™ Assessment 2020: Transform your Network or Lie on the Legacy Deathbed

How Companies Unleash Value Creation Through Collaborative Networks | Blog

Ecosystems have always existed in business, but the importance of collaboration across ecosystems is increasing dramatically today in the effort to create new value. However, the underlying software architecture that supports collaboration – traditional database technology – was built for a siloed, boundaried approach to business, not a networked approach spanning multiple enterprises or even multiple industries.

Read more on my blog
