Category: Blog

Reflections on Cloudera Now and the Battle for Data Platform Supremacy | Blog

The enterprise data market is going through a significant category revision, with native technology vendors – like Cloudera, Databricks, and Snowflake – evolving, cloud hyperscalers increasingly driving enterprises’ digital transformation mandates, and incumbent vendors trying to remain relevant (e.g., the 2019 HPE-MapR deal). This revision has led to leadership changes, acquisitions, and interesting ecosystem partnerships. Is data warehousing the new enterprise data cloud category that will eventually become part of the cloud-first narrative?

Last month I attended Cloudera Now, Cloudera’s client and analyst event. Read on for my key takeaways from the event and let me know what you think.

  • Diversity and data literacy come to the forefront: Props to Cloudera for addressing key issues up front. In the first session, CEO Rob Bearden and activist and historian Dr. Mary Frances Berry had an honest dialogue about diversity and inclusion in tech. More often than not, tech vendors pay lip service to these issues of the zeitgeist, so it was a refreshing change to see the event kicking off with this important conversation. During the analyst breakout, Rob also took questions on data literacy and how crucial it is going to be as Cloudera aims to become more meaningful to enterprise business users against the backdrop of data democratization.
  • Cloudera seems to be turning around, slowly: After a tumultuous period following its merger with Hortonworks in early 2019, Cloudera has new, yet familiar, leaders in place, with Rob Bearden (previously CEO of Hortonworks) taking over the CEO reins in January 2020. The company reported its FYQ2 2021 results a few weeks before the event: revenue increased 9 percent over the previous quarter, subscription revenue was up 17 percent, and Annualized Recurring Revenue (ARR) grew 12 percent year-over-year. ARR will be key for Cloudera to showcase stickiness and client retention. While its losses narrowed in FYQ2 2021, it has more ground to cover on profitability.
  • Streaming and ML will be key bets: As the core data warehousing platform market faces more competition, it is important for Cloudera to de-risk its portfolio by expanding revenue from emerging high-growth spend areas. It was good to see streaming and Machine Learning (ML) products growing faster than the company overall. In early October, it also announced its acquisition of Eventador, a provider of cloud-native services for enterprise-grade stream processing, to further augment and accelerate its own streaming platform, DataFlow. The aim is to bring this all together through Shared Data Experience (SDX), Cloudera’s integrated offering for security and governance.
  • We are all living in the hyperscaler economy: Not surprisingly, there was a fair share of discussion around the increasing role of the cloud hyperscalers in the data ecosystem. The hyperscalers’ appetite is voracious; while the likes of Cloudera will partner with these cloud vendors, competition will increase, especially on industry-specific use cases. Will one of the hyperscalers acquire a data warehousing vendor? One can only speculate.
  • Industry-specificity will drive the next wave of the platform growth story: I’ve been saying this for a while – clients don’t buy tools, they buy solutions. Industry context is becoming increasingly important, especially in more regulated and complex industries. For example, after its recent Vlocity acquisition, Salesforce announced Salesforce Industries to expand its industry product portfolio, providing purpose-built apps with industry-specific data models and pre-built business processes. Similarly, Google Cloud has ramped up its industry solutions team by hiring a slew of senior leaders from SAP and the industry. For the data vendors, focusing on high-impact, industry-led use cases – on their own and with partners – will be key to unlocking value for clients and driving differentiation. Cloudera showcased some interesting use cases for healthcare and life sciences, financial services, and consumer goods. Building a long-term product roadmap here will be crucial.

By happenstance, the Cloudera event started the same day its primary competitor, cloud-based data warehousing vendor Snowflake, made its public market debut and more than doubled on day one, making it the largest-ever software IPO. Make of that what you will, but to me it is another sign of the validation of the data and analytics ecosystem. Watch this space for more.

I’d enjoy hearing your thoughts on this space. Please email me at: [email protected].

Full disclosure: Cloudera sent a thoughtful package ahead of the event, which included a few fine specimens from the vineyards in La Rioja. I can confirm I wasn’t sampling them while writing this.

IBM Splits Into Two Companies | Blog

IBM announced this week that it is spinning off its legacy Managed Infrastructure business into a new public company, thus creating two independent companies. I highly endorse this move and, in fact, advocated it for years. IBM is a big, successful, proud organization. But it has been apparent for years that it faced significant challenges in trying to manage two very different businesses and operate within two very different operating models.

Read more in my blog on Forbes

Health Insurance Open Enrollment Period (OEP) 2021: Key Changes, Challenges, and Opportunities | Blog

“It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.” – Charles Darwin

Charles Darwin’s famous words aptly describe what healthcare payers need to do in the times of COVID-19. The pandemic’s disruptive nature has forced the industry to accelerate adoption of many concepts – such as telehealth – that had earlier been considered at least half a decade away from becoming mainstream. The changes that the US Centers for Medicare and Medicaid Services (CMS) has proposed for the health insurance Open Enrollment Period (OEP) 2021 are a clear indicator of these transformative times. The objective of these changes – such as the expansion of telehealth coverage, more transparency through regulations such as the Interoperability and Patient Access rule, and changes in risk adjustment / star ratings calculations – is to accelerate progress toward CMS’s goals of universal coverage, transparency, member satisfaction, interoperability, and resilience.

As the OEP is a time when healthcare payers strategize about how to increase their enrollment numbers (in the short term) and achieve operational and business transformation (in the long term), it is imperative that payers not only understand the upcoming changes but embrace them through the right investments. OEP 2021 becomes effective November 2020, so healthcare payers are in the midst of the planning season.

In this blog, we take a look at the key changes CMS has proposed for OEP 2021 and analyze their impact on healthcare payers.

Exhibit 1: OEP 2021 proposed changes


The impact of CMS-proposed changes on healthcare payers

CMS’ recommended changes for OEP 2021 are likely to impact healthcare payers in multiple ways:

  • Shift in membership and profit pools: The change in healthcare payers’ membership bases due to factors such as rising unemployment (which has reduced the employer-sponsored plan base) and the enrollment of End-stage Renal Disease (ESRD) patients in Medicare Advantage (MA) plans is likely to increase healthcare payer costs.
  • Member transparency and control measures: OEP 2021 has a slew of changes aimed at ensuring transparency through data sharing with members/patients via APIs and third-party apps. These changes include mandating the use of Real Time Benefit Tools (RTBT) for Part D plans and rules requiring plans to disclose the measures used to evaluate network pharmacy performance. It is clear that the CMS wants health plans (particularly MA Part D in this case) to invest in technology, data sharing, and reporting to enable the next phase of member-centricity in healthcare.
  • Medical Loss Ratio (MLR) rebates support: The MLR / Administrative Loss Ratio (ALR) has always been a pain point for healthcare payers, as an unfavorable ratio implies refunds and complex readjustments. With the CMS offering payers some relief in terms of how they calculate MLR, payers are likely to invest in improving care delivery initiatives.
  • CMS reporting dilemmas: With the CMS pushing healthcare payers to share actual member/patient experience data for Star Ratings and Risk Adjustment score calculations, healthcare payers will need to invest more in member satisfaction.
  • Shifts in health plan benefit inclusions: Telehealth services are only one set of inclusions that payers need to think about incorporating in plan benefits. Many other areas merit attention, such as member support, personalized communications, reorganization of provider network, and plan tiering.
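The transparency bullet above centers on data sharing with members via APIs, which under the Interoperability and Patient Access rule is built on HL7 FHIR R4 (claims data, for example, is exposed as ExplanationOfBenefit resources). As a rough illustrative sketch only – the base URL and the sample bundle below are invented placeholders, not a real payer endpoint – a third-party app’s query for a member’s claims history might look like this:

```python
# Illustrative sketch of a third-party app consuming a payer's
# FHIR R4 Patient Access API. The base URL and sample bundle are
# hypothetical placeholders, not a real payer system.

BASE_URL = "https://fhir.example-payer.com/r4"  # hypothetical endpoint

def claims_query(member_id: str) -> str:
    """Build a FHIR search URL for a member's ExplanationOfBenefit records."""
    return f"{BASE_URL}/ExplanationOfBenefit?patient={member_id}&_sort=-created"

def summarize_bundle(bundle: dict) -> list[str]:
    """Pull one human-readable line out of each entry in a FHIR searchset."""
    return [
        f'{e["resource"]["created"]}: {e["resource"]["type"]["text"]}'
        for e in bundle.get("entry", [])
    ]

# Toy response shaped like a FHIR searchset bundle
sample = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [{"resource": {"resourceType": "ExplanationOfBenefit",
                            "created": "2020-09-14",
                            "type": {"text": "Professional claim"}}}],
}

print(claims_query("member-123"))
for line in summarize_bundle(sample):
    print(line)
```

The point for payers is less the query itself than the plumbing behind it: serving this kind of request reliably is what drives the technology, data-sharing, and reporting investments discussed above.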

How can payers navigate the changes and what are the likely sourcing implications?

While OEP 2021 is just another milestone for the CMS to drive healthcare efficiency, it is also notable that the changes are happening against the backdrop of the COVID-19 pandemic. The timing presents healthcare payers with both challenges and opportunities. In fact, industry experts believe that if ever there was a time for payers to change, it is now. This means that payers need to prepare strategies quickly to navigate the CMS-proposed changes, as well as changes arising from the COVID-19 disruption. These strategies will, in turn, lead to changes in payers’ sourcing practices, thereby creating opportunities for service providers.

Exhibit 2 lists the strategies that, we believe, healthcare payers will adopt in the coming months and the sourcing implications for each of them.

Exhibit 2: Payer mitigation efforts and sourcing implications


For the outsourcing and third-party vendor community, this is the right time to help mitigate the impact of OEP 2021 and the pandemic on healthcare payers. Service providers should align their offerings with payer needs.

If you’d like to know more about OEP 2021 and its wide-ranging impact, please read our recently published viewpoint Open Enrollment 2021 Primer: What to Expect and How to Navigate in the Wake of COVID-19. You can also reach out to me directly at [email protected] if you have any questions or observations.

Digital Reality Episode #13 | Productivity in IT Does Not Come Easy

In this podcast Cecilia Edwards, Jimit Arora and their special guest, Ashwin Venkatesan, take a look at how productivity levels have changed for both the better and the worse over the course of 2020. They discuss a repeatable process IT organizations can use to drive and maintain increased productivity.

Explore all episodes of the Digital Reality Podcast

Jimit Arora:

Welcome to the 13th episode of Digital Reality, Everest Group’s monthly podcast that moves beyond theory and beyond technology to discuss the realities of doing business in a digital-first world. I’m Jimit Arora …

Cecilia Edwards:

… and I’m Cecilia Edwards. Each month we bring you a discussion that digs into the details of what it means fundamentally to execute a digital transformation that creates real business results. Today, I’m pleased that we have another guest with us, Ashwin Venkatesan, who also goes by AV. AV leads our IT and digital transformation research program. And we’re really glad to have him join us today. Welcome, AV.

Ashwin Venkatesan:

Hey, Cecilia. Hey, Jimit. Thank you so much. Really excited to be on.

Cecilia Edwards:

So today we’re going to talk about productivity. This is an issue that continues to be top of mind for a number of IT leaders, given that we’re still in this kind of remote working timeframe. We’ve been tracking productivity through the pandemic and, not surprisingly, we’ve seen an evolution in terms of how this has panned out. In the early days, we did see a bit of productivity loss when folks were scrambling to get the right infrastructure to enable remote working. Bandwidth and infrastructure were clearly issues, but they were solved fairly quickly.

Once those issues were resolved, we began to see productivity increases. When we polled our user base, across the board we found that productivity went up on average by about 13%. The drivers of those increases were varied. Some of it was attributed to fewer distractions since people had nowhere to go. Some of it was due to less time commuting and some of it might’ve been due to people just working extra hours out of fear of job security.

So now the pendulum is beginning to swing back. We’re seeing a bit of a plateau and even some diminishing productivity, due potentially to fatigue that’s starting to set in, or some burnout from the intensity levels that people have been maintaining. So today, we want to start a discourse on how organizations are managing productivity – the productivity of their IT teams – and what techniques and strategies we’ve seen be successful in this next new normal. AV, I’d like to start with you. I know you’ve been examining this issue for a number of enterprises. So what’s your take on what’s happening with productivity right now?

Ashwin Venkatesan:

Thanks Cecilia, that was great context. And it seems productivity is the flavor of the month. Earlier this week, in fact, JP Morgan claimed that it has noticed a productivity decline amongst employees, potentially because of the work-from-home lifestyle itself. So yes, structurally dealing with the productivity issue has become very important, and a number of factors are contributing to this perfect storm for organizations, especially in an IT context. And at the heart of this is something both you and Jimit were speaking about earlier, which is everything is on the table.

So we are at a point where companies have significantly accelerated their digital agendas. But guess what? There’s an associated conundrum to this. On the one hand, IT budgets are limited, and on the other hand, stakeholder expectations are simply skyrocketing. So consequently, people are actually having to think of this as an and function: If we think of stakeholders for IT, be it end users or our customers, these stakeholders want both speed and cost effectiveness. They want experience and efficiency. So it’s almost a duality that needs to be balanced.

And this is where CIOs are struggling, because they’re not able to make the trade-offs. On one hand, they need to survive and show value to the business; on the other hand, the CIO also needs to run a more efficient IT shop. And this is exactly where the concept of productivity comes in. At a very basic level, productivity is at the heart of what every CIO wants, and it is all about doing more with less. We at Everest Group call this hyper-productivity, by which we mean getting an order-of-magnitude improvement in outcomes. And this is done by optimizing people, optimizing processes, and making changes to your technology stacks, among other things.

So that’s kind of setting the context around productivity. And, Jimit, at this point, I would like to bring you into the conversation. So you and I have partnered to help quite a few clients, especially over the last few weeks, on this productivity issue. So what do you see as some of the starting points for enterprises on this productivity journey?

Jimit Arora:

Sure, AV. I think the first step, thankfully, is a simple one. You want to improve productivity – start measuring it. I know it sounds very basic and very simple, but for so long most enterprises, most IT groups, have made such a big deal of this concept that they don’t really have simple measures that allow them to make progress on it. When you complicate the measurement too much, it becomes a problem. It becomes an impediment. So step one, you need to make sure you start measuring it. And to do that, you need to select metrics that are very simple, and they need to be holistic and capture multiple aspects of impact.

So let me share some examples of what we are seeing, what companies are looking to quantify as they’re looking for their productivity journeys to get ramped up. Most often, it’s starting with what’s the speed or velocity of the output. So, how quickly did I deploy that new code? What’s the quality of output? How many changes are we seeing? So that becomes a good measure of quality indirectly.

Change failure rate is another example of something organizations are measuring and tracking quite effectively. How do you think of business alignment? This one trips people up quite significantly, but we are seeing some very simple measures around NPS, for example, coming back in. So what’s the employee NPS on the business side? That becomes something that you want to measure. And as you’ll see, it’s not a velocity metric, which is how we tend to think of productivity, but it is a business impact metric. And then the fourth tends to be cost. And this is where most people have simply fixated.

And as you’ll see, as we’ve expanded this definition of productivity, you want to keep this holistic. So looking at speed, which is what most companies think of. But you’re bundling in quality, you’re bringing in business alignment and cost to operate. Once you start to combine some of these is where you start to see this concept of hyper-productivity kick in. So make them simple, make them complete, and obviously make sure that they are quantitative and can be measured at frequent intervals. And here’s a big secret and in some ways a challenge to conventional thinking that we found, as teams are striving toward greater performance, higher productivity: don’t benchmark it versus the market. Do not seek arbitrary, “Here’s what good looks like in my industry or here’s what Google is doing, therefore that should be my productivity threshold.”

We don’t think there’s a single right answer or a single right industry benchmark in terms of the number of story points an ideal team can deliver in two weeks. Context is what really matters in this productivity journey. Each team and each journey will be different. And I think that’s where most organizations truly trip up. Once you measure what your metric is, you just set the next milestone in a manner that shows progress for yourself, for your team, versus trying at this stage to benchmark it against the industry. Each sprint or each iteration, you’re trying to drive incremental improvements over your previous baseline. And that is the approach that delivers great results. It sounds super simple because in theory it is super simple. Measure it, measure it versus yourself, keep improving.
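The measurement loop Jimit describes – pick a few simple metrics spanning speed, quality, business alignment, and cost, then track each sprint against your own previous baseline rather than an industry benchmark – can be sketched in a few lines. This is purely an illustrative toy; the metric names and weights below are assumptions, not a prescribed Everest Group scoring model:

```python
# Illustrative sketch: track a small set of productivity metrics
# sprint over sprint, against the team's own previous baseline.
# Metric names and weights are hypothetical, not a standard.

METRICS = {
    # name: (higher_is_better, weight)
    "deploys_per_sprint":   (True,  0.3),   # speed / velocity
    "change_failure_rate":  (False, 0.3),   # quality
    "business_nps":         (True,  0.2),   # business alignment
    "cost_per_story_point": (False, 0.2),   # cost to operate
}

def productivity_delta(baseline: dict, current: dict) -> dict:
    """Percent improvement per metric vs. the team's own baseline."""
    deltas = {}
    for name, (higher_better, _w) in METRICS.items():
        change = (current[name] - baseline[name]) / baseline[name]
        # Flip the sign for metrics where lower is better
        deltas[name] = change if higher_better else -change
    return deltas

def composite_score(deltas: dict) -> float:
    """Single weighted number so progress is easy to communicate."""
    return sum(deltas[name] * w for name, (_hb, w) in METRICS.items())

baseline = {"deploys_per_sprint": 8, "change_failure_rate": 0.15,
            "business_nps": 20, "cost_per_story_point": 500}
sprint_2 = {"deploys_per_sprint": 10, "change_failure_rate": 0.12,
            "business_nps": 24, "cost_per_story_point": 480}

deltas = productivity_delta(baseline, sprint_2)
print(f"composite improvement vs. own baseline: {composite_score(deltas):+.1%}")
```

The denominator is always the team’s own previous sprint, never an external number – which is the whole point of the “measure it versus yourself” approach.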

Cecilia Edwards:

Yeah, super simple. That’s what everybody’s looking for. So I know that I’ve been surprised that people come in thinking that there’s some secret sauce to this. Productivity to them means, “Hey, let’s bring in the smartest developers, move to low code development, and deploy a variety of tools.” So, sure, these things might help, but they don’t necessarily create a material or sustainable change in the culture. And I think that when we’re talking about looking at that process that you just mentioned, Jimit, and not thinking about an industry benchmark, but having that context, that that’s really what this is about. You’ve got to make some cultural changes.

So the heart of it really is a simple yet repeatable process. And at the heart of this approach is your context, as you mentioned. To the extent that you’re not trying to meet these kinds of arbitrary industry thresholds and are instead designing the program around your environment, your ecosystem, and – the thing we often forget – your people as well, that always works. So the key is to not force some mandated set of answers, but to really create a sense of excitement by making the team feel valued and empowering them to think, debate, and solve those problems.

I think that we’ll be really surprised at what teams come up with. I mean, it could be everything from the need to take breaks, right? There are a lot of articles that I’ve been reading lately on the Pomodoro-style approach: work for 25 minutes, take a break. Whether or not the team does team norming, how do you recreate the water cooler in a remote environment? It’s interesting that sometimes the breaks are the very things that can actually increase the productivity. So give the team the flexibility to think holistically about what’s going to work in their context for them.

Ashwin Venkatesan:

You’re right, Cecilia. And just reflecting on what both you and Jimit have been saying, there is no single answer to this, there’s no silver bullet at the end of the day. There are so many variables here that we can play with to really drive progression along this entire concept of productivity. So this is going to be a multi-pronged approach.

We have obviously been researching numerous client environments over the past few years, and based on all of these experiences, we have identified a few levers that an organization can consider as it embarks on this productivity journey. We see six of these as key dimensions where changes can be facilitated and productivity improvements can be achieved.

So just going through them, and in no specific order: first is the organization structure itself. Breaking silos becomes very, very important. So many IT environments have actually grown into a space and a size where it becomes really hard to bring teams together. But the focus needs to be around how you make the organization structure end-to-end. And how you enable and enhance coordination amongst the business teams and the IT teams. It’s one of the most fundamental requirements, in our view, to help drive the productivity mandate.

Cecilia Edwards:

And, AV, I think that that’s one where we’re going to see a lot of challenges, right? Because that silo mentality is really strong. And it’s probably strongest not with the junior people, but the leaders who have their kind of empires that they’re building, their domain that they’re trying to protect. And so encouraging that collaborative versus siloed approach is really something that we’ve seen as a challenge, but it’s one of the most effective things that an organization can do.

Ashwin Venkatesan:

Absolutely, Cecilia. And you point out a very important aspect, that especially when it comes to organizational structures, it has to be a top-down mandate. You need someone to bring the organization together. It’s a very valid point.

The second, then, is talent and skills. And I don’t think there are any surprises over here, it is potentially one of the huge talking points when it comes to enabling a successful IT operating model at this point in time. So how you build your pyramid, whether you have the cross-functional skills, all of this becomes very, very important.


AV, on that talent and skills piece – and I know we’ve spoken about this, and I struggle with whether it’s an org structure issue or a talent and skills issue; I think it’s somewhere in between – what’s associated with that is also the culture. I believe it was Google that came up with the research showing it wasn’t the smartest developers who really enabled high productivity, but essentially elements of culture – the ability to work together as a team, the cohesion of that unit – that really determined how successful they were. It wasn’t about whether you had eight black-belt grade developers in your ecosystem. So, do you see culture as an important dimension?

Ashwin Venkatesan:

Absolutely. I think it’s a cliche – we use it all the time – but the fact is that culture eats strategy for breakfast. And it matters even more in today’s IT environment, with the proliferation of teams and the kinds of functions that are getting built in.

Cecilia Edwards:

I think that there’s a basketball analogy that works really well on this. If you think about the all-star games, they’re never really good. You take the absolute best talent from across the NBA, you throw them all together briefly on the court, and the games aren’t very good, right? The teams that win are those that have a good mix of people. They’re going to have the superstars and the not-so-superstars, but they’ve learned how to work together in really effective ways. Those games are a whole lot more interesting. They produce a whole lot higher scoring and everything. So I think that that’s a good analogy to what we’re talking about here. It can’t just be about having the best; it’s about how they work together as well.


Sure. LeBron needs a team around him. Absolutely, Cecilia.

Ashwin Venkatesan:

Yeah. But that’s a good point. And coming back to this. So absolutely, the organization’s structure, talent and skills, culture sits somewhere in between, but there are other important elements as well.

So the third one, and we simply can’t overlook this, this is about technology and platforms. It’s about making the right choice. And what we see in many cases is there’s almost a fixation to go for what is seen as the best-of-breed technology. We really want to invest in a particular tool because this seems to be the one that the market is telling us is the best. But it’s also about ensuring that that particular tool or solution is very much compatible in your environment and can speak to your existing investments as well. So as I think of technology and platforms, do ensure that the answer is not always about going for the best tool, but it’s about how it fits into the broader environment and the existing investments that you have made. It becomes really important.

The fourth one is the service delivery process. Again, it’s very important in the sense that we have had IT functioning around services for many, many years now. To use a cliched term, the focus has always been on keeping the lights green. But when you are talking about making your own environment productive, there needs to be a flip in the way you measure outcomes from a service delivery standpoint. And this is where we see that the design needs to be experience- and business-outcomes oriented, rather than being SLA first. It’s an important pivot. Again, things around culture and people and training all come into the picture over here. But this is another key element that, if you can get right, is going to help you drive significant productivity within your estates.

Location, the fifth one, and it’s an interesting angle. So the debate around onshore and offshore has gone on for quite some time, but it’s a lever that you can again use within this context.

And then finally the sixth one, which in our view is a little overlooked, is reusability. This is essentially tied to the knowledge management angle. How do you institutionalize your experiences and learnings so that you do not reinvent the wheel, so you save effort and consequently drive productivity? Typically, what we have seen is that a lot of innovation happens across different parts of the organization – but it happens in silos. So one part of the answer is obviously cultural and organizational. But a key requirement also becomes: how do you have a knowledge management platform that underpins all of this and helps people learn best practices, previously used tools and accelerators, and whatnot? So reusability – a potentially overlooked item, but an important element.

And the key point here is if you look at all six levers, you have a variety of choices and adjustments that you can make. And in this, your context is what’s going to dictate what’s best for you.

Let me offer a couple of examples where we have seen some of these success stories. Cisco, as an example, claimed some time back that it had reduced defects in its subscription billing platform by 40%. This was done by just launching three agile release streams and making them work together. So it was simply an organizational tweak that helped Cisco reduce defects by 40%, saving quite a bit of time, building productivity into the entire platform, and consequently helping impact revenues.

Ashwin Venkatesan:

Airbnb, very recently actually, claimed that it improved its product development by creating a single environment that brought together designers, engineers, and even researchers – everyone involved in the product development process – so that they could pool ideas and create synergies. So this one, again, is an org and a process design lever being pulled together, but again helping drive productivity.

Now, going back to Jimit’s points: these are good examples, but we’re not saying that what worked for Cisco or Airbnb is going to work for you. We can offer a set of standard guidelines, but we want you to remember that it is always going to be very contextual, and what worked for another organization may not work for you. But if you want to get started on this entire enterprise productivity journey, and especially if you want a copy of a framework to get started with, feel free to ping us. We’ll be very happy to send it across and maybe have a productive conversation.

Jimit Arora:

Perfect. Hey, AV, thanks for that. I think this was a very productive discussion, if I may. And I know we’ve played around with one graphic that I like fairly well, which is: think of this as a graphic equalizer, where each of these individual levers serves as a toggle switch that you can move up and down. Every switch individually will create impact, and once you reach the optimal configuration for your environment, that’s when you get beautiful music. I learned a fair bit, and there are obviously a lot of lessons for us as we think through how organizations need to create a culture of high performance, high productivity – hyper-productivity – and effectively achieve more with less. These lessons we call digital reality checkpoints, and there are a few that I wanted to sum up with.

First one: don’t let perfect be the enemy of good. Don’t try to create the perfect productivity plan. Take the first step, identify a set of simple metrics, measure them, get started. Second, recognize that each organization will have a variety of choices. You have multiple levers to continue optimizing to deliver greater impact and increase productivity, so it’s not a one-and-done. And each journey is different. For those of you who’ve been with us on this journey, you know that we consider digital transformation to be a journey. Of course it is, but it’s your journey. Understand your context, and try not to seek industry benchmarks as you’re starting. Eventually, yes, we’ll get to that. But don’t start by going after arbitrary numbers, which in some cases might create a false sense of progress – “Wow, I’m doing better than the industry, so I don’t need to improve” – or become such a big mountain to climb that it results in early failures. So it’s your journey; make sure that you progress along it in your context.

Cecilia Edwards:

So I’d like to add my thanks to AV for joining us today. And I’d like to thank you for listening to this episode of Digital Reality. Please check us out at or follow us on LinkedIn at Jimit Arora and Cecilia Edwards. If you’d like to share your company’s story or have a digital topic that you’d like us to explore, reach out to us at [email protected].

Explore all episodes of the Digital Reality Podcast

The Shift From ERP To NRP For Ecosystem Value Creation | Blog

We live in an increasingly distributed business world in which organizations rely on timing and components in different parts of the world and in different time zones. To deliver greater value to customers, organizations now must combine assets and capabilities across companies into a single collaborative platform where they can create new value. This makes organizations more dependent on networked ecosystems, but traditional ERP systems cause huge constraints in the way networks can operate.

Read more in my blog on Forbes

Cloud Wars, Chapter 4: Own the “Low-Code,” Own the Client | Blog

In my series of cloud war blog posts, I’ve covered why the war among the top three cloud vendors is so ugly, the fight for industry cloud, and edge-to-cloud. Chapter 4 is about low-code application platforms. And none of the top cloud vendors – Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP) – want to be left behind.

What is a low-code platform?

Simply put, a platform provides the runtime environment for applications. The platform takes care of the application lifecycle as well as other aspects such as security, monitoring, and reliability. A low-code platform, as the name suggests, makes all of this simple so that applications can be built rapidly. It generally relies on a drag-and-drop interface to build applications, with the underlying code hidden from the user. This setup makes it easy to automate workflows, create user-facing applications, and build the middle layer. Indeed, the key reason low-code platforms have become so popular is that they enable non-coders to build applications – almost like mirroring the good old What You See Is What You Get (WYSIWYG) tools of the HTML age.
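To make the abstraction concrete, here is a minimal sketch of the kind of plumbing a low-code platform hides behind its drag-and-drop canvas – an approval workflow a business user would assemble visually instead of coding. All names here are hypothetical and illustrative, not any vendor’s actual API.

```python
# Illustrative only: the logic a low-code platform generates from a
# visually configured form and routing rule. Names are hypothetical.

def validate(request):
    # Form validation the platform would generate from field definitions
    return bool(request.get("amount")) and request["amount"] > 0

def route(request):
    # A routing rule a business user would configure with drag and drop
    return "manager" if request["amount"] < 5000 else "finance"

def run_approval_workflow(request):
    if not validate(request):
        return {"status": "rejected", "reason": "invalid amount"}
    approver = route(request)
    # In a real platform, notification, audit logging, and state
    # persistence would also be handled automatically at this point.
    return {"status": "pending", "approver": approver}

print(run_approval_workflow({"amount": 12000}))
```

The point of low-code is that none of this is written by hand; the platform generates and runs it from the visual configuration.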

What makes cloud vendors so interested in this?

Cloud vendors realize their infrastructure-as-a-service has become so commoditized that clients can easily switch away, as I discussed in my blog, Multi-cloud Interoperability: Embrace Cloud Native, Beware of Native Cloud. Therefore, these vendors need to have other service offerings to make their propositions more compelling for solving business problems. It also means creating offerings that will drive better stickiness to their cloud platform. And as we all know, nothing drives stickiness better than an application built over a platform. This understanding implies that cloud vendors have to move from infrastructure offerings to platform services. However, building applications on a traditional platform is not an easy task. With talent in short supply, necessary financial investments, and rapid changes in business demand, enterprises struggle to build applications on time and within budget.

Enter low-code platforms. This is where cloud vendors, which already host a significant amount of data for their clients, become interested. A low-code platform that runs on their cloud not only enables clients to build applications more quickly, but also helps create stickiness, because low-code platforms are notorious for “non interoperability” – it’s very difficult to migrate from one to another. But this isn’t just about stickiness. It’s one more initiative in the journey toward owning enterprises’ technology spend by building a cohesive suite of services. In GCP’s case, it has realized that its API-centric assets offer a goldmine for stitching together applications. For example, Apigee helps in exposing APIs from different sources and platforms. Then GCP’s AppSheet, which it acquired last year, can use the data exposed by the APIs to build business applications. Microsoft, on the other hand, is focusing on its Power platform to build its low-code client base. Combine that with GitHub and GitLab, which have become the default code stores for modern programming, and there’s no end to the innovation that developers or business leaders can create. AWS is still playing catch-up, but its launch of Honeycode has business written all over it.

What should other vendors and enterprises do?

Much like the other cloud wars around grabbing workload migration, deploying and building applications on specific cloud platforms, and the fight for industry cloud, the low-code battle will intensify. Leading vendors such as Mendix and OutSystems will need to think creatively about their value propositions around partnerships with mega vendors, API management, automation, and integration. Most vendors support deployment on cloud hyperscalers, and now – with a competing offering in play – they need to tread carefully. Larger vendors like Pega (Infinity Platform), Salesforce (Lightning Platform), and ServiceNow (Now Platform) will need to support the development capabilities of different user personas, add more muscle to their application marketplaces, generate more citizen developer support, and create better integration. The start-up activity in low-code is also at a fever pitch, and it will be interesting to see how it shapes up given the mega cloud vendors’ increasing appetite in this area. We covered this in an earlier research initiative, Rapid Application Development Platform Trailblazers: Top 14 Start-ups in Low-code Platforms – Taking the Code Out of Coding.

Enterprises are still figuring out their cloud transformation journeys. This additional complexity further exacerbates their problems. Many enterprises make their infrastructure teams lead the cloud journey, but these teams don’t necessarily understand platform services – much less low-code platforms – very well. So, enterprises will need to upskill their architects, developers, DevOps, and SRE teams to ensure they understand the impact on their roles and responsibilities.

Moreover, as low-code platforms give more freedom to citizen developers within an enterprise, tools and shadow IT groups can proliferate very quickly. Therefore, enterprises will have to balance the freedom of quick development with defined processes and oversight. Businesses should be encouraged to build simpler applications, but complex business applications should be channeled to established build processes.

Low-code platforms can provide meaningful value if applied right. They can also wreak havoc in an enterprise environment. Enterprises are already struggling to track their cloud spend and make sense of it. Cloud vendors introducing low-code platform offerings into their cloud service mix is going to make that task even more difficult.

What has your experience been with low-code platforms? Please share with me at [email protected].

Digital Trust – the Key to Secure Customer Engagement and Stickiness | Blog

In an age of pervasive cyberthreats and attacks, enterprises increasingly realize that ensuring trust and privacy is vital in the customer journey. In fact, CXOs now view cyber risks as business risks that can prevent them from establishing strong customer relationships, and they are proactively trying to find ways to address privacy or security gaps in their customer engagements.

In this context, the goal of digital trust is to instill confidence among enterprise customers, business partners, and employees in an organization’s ability to maintain secure systems, infrastructure, and perimeters, as well as to provide a secure, reliable, and consistent experience. Today, digital trust underpins businesses’ success directly by creating confidence among customers and other stakeholders.

Users at the core of digital trust

Establishing digital trust goes beyond the creation of a secure application or enforcement of stringent regulations to avoid cyberattacks. It is about leveraging the right combination of tools and technologies to create a superior digital experience for users that not only protects their privacy but also exceeds their service expectations.

To create such an unparalleled and smooth user experience through their digital transformation initiatives, enterprises should embed digital trust seamlessly into their processes and systems. Organizations need to understand that they can achieve 360-degree trust only if they keep the user at the center of digital transformation initiatives and build enterprise security controls around user attributes such as device, data, applications, and environment.

To make digital trust a reality, enterprises should comply with privacy regulations to have the right data security controls across environments, employ usage-based security controls across the IT estate, provide secure access to user devices, understand user behavior through behavior and entity analytics, and monitor user activity to create secure access across applications, devices, and networks.

Building digital trust the right way

In a 2019 Everest Group survey of 200 CIOs, about 71% said they lacked centralized visibility across their IT estate, almost 42% said they were unable to measure and quantify end-user experience, and 53% said they were unable to leverage essential technologies to improve end-user experience. About 70% of enterprises still lacked a unified threat detection system to prevent, detect, and manage unknown threats. These figures point to the glaring gaps in enterprises’ IT security infrastructures and in their understanding of their users’ experiences.

The concept of digital trust ties together business objectives and business resilience goals and ensures that the right user with the right intent is granted the right set of access and permissions for the right purpose. To build digital trust among users, organizations need to consider specific action items for different cybersecurity segments to create 360-degree digital trust, as outlined in the exhibit below.

Digital Trust – the Key to Secure Customer Engagement and Stickiness
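The “right user, right intent, right access” idea above can be sketched as a simple, context-aware access decision that combines identity, device posture, entitlements, and a behavioral risk signal. This is a conceptual illustration with hypothetical attribute names, not any specific product’s policy engine.

```python
# Hypothetical sketch of a 360-degree trust check: access is granted only
# when identity, device, entitlement, and behavioral signals all align.

def grant_access(user, resource):
    checks = [
        user["authenticated"],             # right user (identity verified)
        user["device_compliant"],          # trusted, managed device
        resource in user["entitlements"],  # right set of permissions
        user["risk_score"] < 0.5,          # behavior analytics signal
    ]
    # Any single failing signal denies access, rather than relying on
    # one perimeter control to decide everything.
    return all(checks)

alice = {
    "authenticated": True,
    "device_compliant": True,
    "entitlements": {"payroll-app"},
    "risk_score": 0.2,
}
print(grant_access(alice, "payroll-app"))
```

The design point is that trust is evaluated holistically per request, not established once at the network edge.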

Instead of implementing discrete security controls across the organization, enterprises need to take a holistic, outcome-oriented approach to cybersecurity. When organizations approach cybersecurity with the objective of creating a seamless user experience, it facilitates a sense of mutual and complete trust.

Digital trust in the age of COVID-19

The COVID-19 pandemic has led to a massive shift from offline to online channels. Such digital business extensions have created unprecedented security concerns worldwide. Users are concerned about the security of their private data and how organizations handle it. To build trust, enterprises must focus on building an empathetic and secure organization. If they can get this right, they will be able to win customer loyalty and trust, thereby laying the foundation of a future-proof sustainable business. As the world fights the pandemic, digital trust could well be the glue that binds customers to them.

To learn more about the need to think of IT security as the key enabler of digital trust among users and customers, please see our latest report, Digital Trust – The Cornerstone of Creating a Resilient and Truth-based Digital Enterprise. You could also reach out to us directly at [email protected] or [email protected] to explore this concept further.

Cloud Native is Not Enough; Enterprises Need to Think SDN Native | Blog

Over the past few years, cloud-native applications have gained significant traction within organizations. These applications are built to work best in a cloud environment using microservices architecture principles. Everest Group research suggests that 59 percent of enterprises have already adopted cloud-native concepts in their production set-up. However, most enterprises continue to operate traditional networks that are slow and stressed by data proliferation and the increase in cloud-based technologies. Like traditional datacenters, these networks limit the benefits of cloud-native applications.

SDN is the network corollary to cloud

The network corollary to a cloud architecture is a Software Defined Network (SDN). An SDN architecture decouples the network control plane from the forwarding plane to enable policy-based, centralized management of the network. In simpler terms, an SDN architecture enables an enterprise to abstract away from the physical hardware and control the entire network through software.
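The control-plane/forwarding-plane split can be illustrated with a toy model: a centralized controller computes policy once and pushes forwarding rules to simple switches, which only match and forward. This is a conceptual sketch of the architecture, not the OpenFlow protocol or any vendor’s API.

```python
# Conceptual toy model of SDN: policy lives in one controller (control
# plane); switches (forwarding plane) only apply the rules pushed to them.

class Switch:
    """Forwarding plane: matches traffic against its installed flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination prefix -> output port

    def install_rule(self, destination, port):
        self.flow_table[destination] = port

    def forward(self, destination):
        # Unknown destinations are dropped in this simplified model
        return self.flow_table.get(destination, "drop")

class Controller:
    """Control plane: holds network-wide policy in one place."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, policy):
        # One centralized policy update reaches every device, replacing
        # box-by-box manual configuration.
        for switch in self.switches:
            for destination, port in policy.items():
                switch.install_rule(destination, port)

controller = Controller()
edge = Switch("edge-1")
controller.register(edge)
controller.push_policy({"10.0.0.0/24": "port-2"})
print(edge.forward("10.0.0.0/24"))
```

The abstraction is the point: the operator expresses intent once at the controller, and the software distributes it across the hardware.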

Current SDN performance is sub-optimal

Most of the current SDN adoption is an afterthought, offering limited benefits similar to the lift-and-shift of applications to the cloud. Challenges with the current SDN adoptions include:

  • Limited interoperability – Given their high legacy presence, enterprises find themselves managing a hybrid legacy-SDN infrastructure, which is very difficult.
  • Limited scalability – As the environment is not designed to be SDN native, applications end up placing a high volume of networking requests on the SDN controller, limiting data flow.
  • Latency issues – Separate control and data planes can introduce latency in the network, especially in very large networks. Organizations need to carry out significant testing activities before any SDN implementation.
  • Security issues – The ad hoc nature of current SDN adoption means that the entire network is more vulnerable to security breaches due to the creation of multiple network segments.

SDN native is not about applications but about the infrastructure

Unlike cloud native, which is more about how applications are architected, being SDN native is about architecting the enterprise network infrastructure to optimize the performance of modern applications running on it. While sporadic adoption of SDN might also deliver certain advantages, an SDN-native deployment requires organizations to overhaul their legacy infrastructure and adopt the SDN-native principles outlined below.

Principles of an SDN-native infrastructure

  • Ubiquity – An SDN-native infrastructure needs to ensure that there is a single network across the enterprise that can connect any resource anywhere. The network should be accessible from multiple edge locations supporting physical, cloud, and mobile resources.
  • Intelligence – An SDN-native infrastructure needs to leverage AI and ML technologies to act as a smart software overlay that can monitor the underlay networks and select the optimum path for each data packet.
  • Speed – To reduce time-to-market for new applications, an SDN-native infrastructure should have the capability to innovate and make new infrastructure capabilities instantly available. Benefits should be universally spread across regions, not limited to specific locations.
  • Multi-tenancy – An SDN-native infrastructure should not be limited by the underlay network providers. Applications should be able to run at the same performance levels regardless of any changes in the underlay network.

Recommendations on how you can become an SDN-native enterprise

Similar to the concept of cloud native, the benefits of an SDN-native infrastructure cannot be gained by porting existing network software and somehow trying to integrate it with the cloud. You need to build a network-native architecture with all of these principles ingrained in its DNA from the very beginning. However, most enterprises already carry the burden of legacy networks and cannot overhaul them in a day.

So, we recommend the following approach:

  • Start small but think big – Most enterprises start their network transformation journeys by adopting SD-WAN in pockets. This approach is fine to begin with, but to become SDN native, you need to plan ahead with the eventual aim of making everything software-defined in the minimum possible time.
  • Time your transformation right – Your network is a tricky IT infrastructure component to disturb when everything is working well. However, every three to four years, you need to refresh the network’s hardware components. You should plan to use this time to adopt as much SDN as possible while ensuring that you follow SDN-native principles.
  • Leverage next-gen technologies – To follow the principles of SDN-native infrastructure, you need to make use of multiple next-generation technologies. For example, edge computing is essential to ensure ubiquity, AI/ML for intelligence, NetOps tools for speed, and management platforms for multi-tenancy.
  • Focus on business outcomes – The eventual objective of an SDN-native infrastructure is better business outcomes; this objective should not get lost in the technology upgrades. The SDN-native infrastructure should become an enabler of cloud-native application implementation within your enterprise to drive business benefits.

What has your experience been with adoption of an SDN-native infrastructure? Please share your thoughts with me at [email protected].

For more on our thinking on SDN, see our report, Network Transformation and Managed Services PEAK Matrix™ Assessment 2020: Transform your Network or Lie on the Legacy Deathbed.

Is It Open Season for RPA Acquisitions? | Blog

Robotic Process Automation (RPA) is a key component of the automation ecosystem and has been a rapidly growing software product category, making it an interesting space for potential acquisitions for a while now. While acquisitions in the RPA market have been happening over the last several years, three major RPA acquisitions have taken place in quick succession over the past few months: Microsoft’s acquisition of Softomotive in May, IBM’s acquisition of WDG Automation in July, and Hyland’s acquisition of Another Monday in August.

These acquisitions highlight a broader trend in which smaller RPA vendors are being acquired by different categories of larger technology market players:

  • Big enterprise tech product vendors like Microsoft and SAP
  • Service providers such as IBM
  • Larger automation vendors like Appian, Blue Prism, and Hyland.

Recent RPA acquisitions timeline:


Why is this happening?

The RPA product market has grown rapidly over the past few years, rising to about US$ 1.2 billion in software license revenues in 2019. The market seems to be consolidating, with some of the larger players continuing to gain market share. As in any such maturing market, mergers and acquisitions are a natural outcome. However, we see multiple factors in the current environment leading to this frenetic uptick in RPA acquisitions:

Acquirers’ perspective – In addition to RPA being a fast-growing market, the new categories of acquirers – big tech product vendors, service providers, and larger automation vendors – see potential in merging RPA capabilities with their own core products to provide more unified automation solutions. These new entrants will be able to build pre-packaged solutions combining RPA with other existing capabilities at lower cost. COVID-19 has created an urgency for broader automation in enterprises, and the ability to offer packaged solutions that provide a quick ROI can be a game-changer in this scenario. Additionally, the pandemic’s adverse impact on RPA vendors’ revenues may have brought their valuations down to more realistic levels, making them more attractive to acquirers.

Sellers’ perspective – There is now a general realization in the market that RPA alone is not going to cut it. RPA is the connective tissue, but you still need the larger services, big tech/Systems-of-Record and/or intelligent automation ecosystem to complete the picture. RPA vendors that don’t have the ability to invest in building this ecosystem will be looking to be acquired by larger players that offer some of these complementary capabilities. In addition, investor money may no longer be flowing as freely in the current environment, meaning that some RPA vendors will be looking for an exit.

What can we expect going forward?

The RPA and broader intelligent automation space will continue to evolve quickly, accelerated by the predictable rise in demand for automation and the changes brought on by the new entrants in the space. We expect to see the following trends in the short term:

  • More acquisitions – With the ongoing market consolidation, we expect more acquisitions of smaller automation players – including RPA, Intelligent Document Processing (IDP), process orchestration, Intelligent Virtual Agents (IVA), and process mining players – by the above-mentioned bigger categories as they seek to build more complete transformational solutions.
  • Services imperative – Scaling up automation initiatives is an ongoing challenge for enterprises, with questions lingering around bot license utilization and the ability to fill an automation pipeline. Services that can help overcome these challenges will become more critical and possibly even differentiating in the RPA space, whether the product vendors themselves or their partners provide them.
  • Evolution of the competitive landscape – We expect the market landscape to undergo considerable transformation:
    • In the attended RPA space, while there will be co-opetition among RPA vendors and the bigger tech players, the balance may end up being slightly tilted in favor of the big tech players. Consider, for instance, the potential impact if Microsoft were to provide attended RPA capabilities embedded with its Office products suite. Pure-play RPA vendors, however, will continue to encourage citizen development, as this can unearth low-hanging fruit that can serve as an entry point into the wider enterprise organization.
    • In the unattended RPA space, pure-play RPA vendors will likely have an advantage as they do not compete directly with big tech players and so can invest in solutions across different systems of record. Pure-play RPA vendors might focus their efforts here and form an ecosystem to link in missing components of intelligent automation to provide integrated offerings.

There are several open questions on how some of these dynamics will play out over time. You can expect a battle for the soul (and control) of automation, with implications for all stakeholders in the automation ecosystem. Questions remain:

  • How will enterprises approach automation evolution, by building internal expertise or utilizing external services?
  • How will the different approaches automation vendors are currently following play out – system of record-led versus platform versus best of breed versus packaged solutions?
  • Where will the balance between citizen-led versus centralized automation lie?

Only time will tell how this all plays out.

But in the meantime, we’d love to hear your thoughts. Please share them with us at [email protected], [email protected], and [email protected].
