Author: Yugal Joshi

Bored of Directors: No Technologists on Board = Impending Doom | Sherpas in Blue Shirts

The US Congress’ recent grilling of Facebook CEO Mark Zuckerberg led to a flurry of articles on how the “oldies” asking questions had no idea how Facebook worked or what meaningful questions to ask. Most commentators noted that the congressmen had zero background in technology and were asking generic, feel-good questions that required neither incisive answers nor meaningful preparation.

Juxtapose this with any large enterprise in the world. Market interactions suggest that fewer than 5 percent have a technologist on their board of directors. Their upper-echelon seats are filled with management, finance, or, at best, operational executives. So how can these board members advise their companies on, or question them about, technology advancement? Can they conceive of or initiate discussions around the enterprise becoming a platform business? (Would they even understand what that means?) How can they critique or support such technology-heavy discussions?

The obvious answer is, they can’t.

Although board members aren’t required to actively build strategy for the company – that is left to the CEO and the team – they are certainly required to intervene when they see the company losing direction or possibly not doing enough. Because they have no clue about what is happening in the technology world in the digital age, they can’t ask questions about digital strategy. In turn, they can’t be fully effective in their roles. And that can spell doom for the company.

Who’s to Blame?

While some of the blame falls to the board members themselves, the technologists in the company – such as CIOs and CTOs – must share it for not being invited onto the board, or at least into regular boardroom discussions. They haven’t been able to succinctly explain digital disruption in business terms that get the board’s attention. Instead, they focus primarily on cost-centricity or on supporting the business in newer initiatives. And they dwell on minute details of technologies and vendor management, which don’t give board members the grounding they need (and which, honestly, they aren’t interested in).

What Should Technologists Do?

In order to provide boards with what they really need to know, technologists need to up their game and focus on the business impact of technologies, not just the business case.

First and foremost, they need to change their cost center mindset…something that has been preached, attempted, and failed in the past. However, in today’s environment, with digital technologies transforming, enhancing, and destroying businesses, IT has a real chance to become a force to be reckoned with. It needs to enhance its self-perception and treat itself as a business driver, not a support center. Though run-the-business activities may continue to take most of IT team members’ time, IT leaders must proactively suggest and address the change-and-transform activities.

Technologists will also be well served by investing time in learning storytelling. Board members don’t have the time, patience, or need to follow a long-winded argument. They are interested in the story behind the argument, and in how it helps the business. Technologists who learn to use stories will be much more adept at driving their point home. This will ensure that the board takes a fresh look at technologists’ role and, sooner rather than later, invites them to join the board.

What Should Enterprises Do?

A board of directors’ role continues to be steering the company in the right direction. However, the days of developing a long-term strategy and intervening by exception are truly over. In the digital age, enterprises need iterative, evolutionary strategies that are dynamic and flexible enough both to respond to changing market dynamics and to create new ones.

For this to happen, company management needs to move beyond getting members of the “old boys’ club” onto their board. It must challenge the culture of celebrating technology ignorance. And it must vigorously look for gaps in current members’ understanding of technology disruption, and assess whether they are capable of deliberating on that disruption and on how the company can harness it for competitive advantage.

Board members should be selected – indeed retained – only if they truly understand the business issues in today’s digital age. If they don’t, the enterprise they represent is doomed.

AI as-a-service: Big Tech Has Provided Platforms, But Where Will the Apps Come From? | Sherpas in Blue Shirts

Our digital services research suggests that 40 percent of enterprises have adopted AI in some shape or form. Of course, they’re relying on the foundational platforms from BigTech firms like Amazon, Google, Microsoft, and Tencent – and even from smaller tech start-ups – to drive meaningful business cases.

But while they can leverage Amazon SageMaker or Microsoft Bot Framework to do the heavy lifting, they still need a meaningful application that operates on the platform in order to solve their business problems.

Enterprise Challenges with AI

Granted, tech vendors like Oracle, Salesforce, and SAP have made initial progress in integrating AI into their application platforms. But their products are very broad and focus on the vendors’ own planned areas. And enterprises have multiple, complex requirements that fall outside the purview of these generic applications. Therefore, most enterprises must also build their own AI engines to get meaningful insights from these large-scale applications.

Essentially left on their own, enterprises have to build their own applications to address their needs. But Everest Group digital services research indicates that 60 percent of leading digital adopters struggle to find the right talent. And because they lack high-caliber AI talent, they can only scratch the surface of what is necessary to create truly valuable apps that deliver specific business outcomes.

Can Start-ups Help?

We believe this leaves the market wide open to an impending burst of start-ups that can build AI-led niche applications to solve industry-specific business problems. Areas like fraud detection in insurance, compliance management in financial services, and industry-oriented employee engagement and customer experience can significantly benefit from these types of applications. But the key to success here – for both enterprises and the start-ups themselves – will be a focus on building applications for specific business use cases, rather than broad-based platforms. Indeed, AI application-focused start-ups need to commoditize the platform and focus squarely on the application logic that leverages AI.

Enterprises will need to partner with or invest in these start-ups to incentivize suitable AI-led applications. Going forward, these enterprises should focus on procuring off-the-shelf applications that drive business outcomes rather than over-investing in AI platforms. Unlike today, when building on top of BigTech AI platforms requires massive bandwidth, these applications will be easy to configure, train, and consume.

The Role of System Integrators

Given that system integrators (SIs) have strong enterprise DNA and understand business processes, systems, and technologies very well, they can build these applications for enterprises leveraging a BigTech platform. Some of them have made early inroads in areas such as service desk, customer support, and IT operations. However, a massive opportunity remains in business applications and processes. SIs will need to develop point as well as platform-led AI applications that can be plug-and-play in an enterprise set-up. These applications must be pre-trained on industry-fed data for quick deployment and better time to value.

The Road Ahead

It is apparent that enterprises cannot leverage the power of AI on their own. They need to rely not only on large technology vendors, but on start-ups and their service partners as well. Though each enterprise must have a pool of valued AI resources, it should not go overboard in investing in them. As AI is not their core business, enterprises are better off leaving it to companies that are experts.

However, if the AI industry continues to generate next-generation, smarter platforms that do the heavy lifting for AI without creating meaningful applications, we will surely see one more AI winter on the near horizon.

AI Helping DevOps: Don’t Ask, Don’t Assume – KNOW What Users Really Want | Sherpas in Blue Shirts

With DevOps’ core goal of putting applications in users’ hands more quickly, it’s no surprise that many enterprises have started to release and deploy software up to five times a month, instead of on their earlier once-a-quarter schedule. Everest Group research suggests that over 25% of enterprises believe DevOps will become the de-facto application delivery model.

However, there continues to be a disconnect between what business users want and what they get. To be fair to developers and IT teams, this disconnect is due, in part, to end-users’ difficulty in articulating their needs and wants.

Enter AI Systems

AI Systems have strong potential to help product management teams cut through the noise and zero-in on the features their users truly find most valuable. Consider these areas of high impact:

  1. Helping developers at run time: Instead of developers having to slog through requirements, feature files, and feedback logs – and likely miss half the input – AI-led “code assistant” bots can help them, during the actual coding process, to ensure that the requested functionality is created
  2. Prioritizing feedback: Rather than wasting time on archaic face-to-face meetings to prioritize features requested in the dizzying amount of feedback received from users, enterprises should build an AI system to rank requests from high to low, and dynamically change the ranking as needed based on new incoming data (a minimal sketch follows this list)
  3. Stress testing feedback: After prioritization, AI systems should help enterprises segregate the features users really want from those they think they want. AI can do this by crunching the massive volume of feedback data through machine learning and finding recurring patterns that suggest consensus. The feedback data should also be fed back to business users to educate them on the market alignment of demanded and desired features
  4. Aligning development, QA, and production: Through its inherently neutral perspective, an AI system can smooth out the dissonance among the different teams by crunching all the data across the feedback systems to outline disconnects and create the alignment needed to satisfy end-user needs
  5. Predicting features: While this is still far-fetched, enterprises and technology vendors should work toward AI solutions that can predict features that will be requested in the next sprint based on historical data. In fact, AI systems should be able to analyze data across other enterprises as well to suggest relevant features to developers. The predictions could then be validated with real feedback from beta users, and the AI system further trained based on the validations
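As an illustration of item 2, here is a minimal sketch of how such a prioritization engine could start out – not a production design, just clustering free-text feedback and ranking the clusters by volume. The sample feedback records, the cluster count, and the use of scikit-learn are my own assumptions.

```python
# Assumption-laden sketch: cluster raw user feedback and rank the clusters by
# how many users are asking for roughly the same thing.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [                      # illustrative feedback records
    "export the report to PDF",
    "PDF export is missing",
    "dashboard loads slowly on mobile",
    "mobile dashboard is very slow",
    "add dark mode",
]

# Turn free text into TF-IDF vectors and group similar requests together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Rank clusters by vote count; re-running on new feedback keeps the ranking dynamic.
for cluster, votes in Counter(labels).most_common():
    example = next(f for f, l in zip(feedback, labels) if l == cluster)
    print(f"priority votes={votes}: e.g., '{example}'")
```

In practice the ranking signal would blend recency, user segment, and business value, but even this toy version replaces a prioritization meeting with something data-driven.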

There are multiple other areas in which AI can potentially assist in understanding what the users want. For example, as we discussed in earlier research, AI can help developers create secure, performance-tuned, and production-ready code without being bogged down by typical feedback on features from the field.

What about Budget?

The good news is that such an AI system will not burn a massive hole in enterprises’ budgets, and should not require the zillions of data points that most typical, complex AI systems do. I believe these systems can be based on simple log data, performance feedback cycles, feature file databases, requirements catalogues, and other already existing assets. If that’s the case, they have great potential to help enterprises develop software their end users really want.

Have you deployed AI in your Agile DevOps delivery cycle? I’d love to hear about it at [email protected].

Will AI Take the “H” Out of HR? Not if Done Well | Sherpas in Blue Shirts

Most people talk about how AI will transform both the transactional and strategic HR functions across recruiting, performance management, career guidance, and operations. Technology vendors such as Belong.co, Glider.ai, Hirevue, MontageTalent, and Talla are often cited as transforming the HR function across different facets.

So the burning question here is, will AI technologies eventually transform the HR function for good? Or will they dehumanize it? Let’s look at some fundamental issues.

HR Works within the Enterprise’s Constraints

A focus on creating individual-centric training, incentives, performance management, and career development plans is noble. However, HR may well not have the budget, and the organization’s processes may well not allow these in reality. Most organizations have a fixed number of roles (bands), and employees are fitted into them. And there is a fixed L&D budget, which is treated as a cost, preventing meaningful investment in programs for individual employees.

HR Hardly Understands Technology

Most of today’s enterprises are looking to hire “digital HR” specialists who understand the confluence of technologies and HR. Because very few exist, the businesses themselves need to teach and handhold non-digital HR people about the value of AI principles in their mundane tasks, such as CV/resume shortlisting, as well as in their creative work, such as performance management and employee engagement.

Senior Leadership’s Flawed Perception of HR

While every enterprise claims that its employees are its greatest asset, senior leaders don’t always perceive HR to be a strategic function. Many senior executives view HR as a department they need to deal with when team members are joining or leaving the organization, and regard everything in between as transactional. This perception does not allow meaningful investments in HR technologies, much less AI-based services. As AI systems are comparatively expensive, they require senior leadership’s full support for the business case and execution, and HR will likely not be on the radar screen.

HR’s Flawed Perception of Itself

Most HR departments consider themselves to be recruitment, training, and performance management engines. They fail to think strategically about their role as a crucial enabler of a digital business. Because most HR executives don’t perceive themselves to be C-level material, their view becomes self-fulfilling. Many HR executives also silently fear that their relevance in the organization will be eliminated if seemingly rote activities are automated by AI.

I believe that AI systems provide tremendous opportunities for HR transformation – if the HR function is willing to transform. It needs to make a strong business case for adopting AI based on hard facts, such as delays in employee hiring, the number of potential candidates missed due to timeline constraints, poor retention because of gaps in performance management, inferior employee engagement due to limited visibility into what employees really want, and compliance issues.

However, there is a tightrope to be walked here. As HR is fundamentally about humans, AI should be assisting the function, not driving it. A chatbot, which may become the face of HR operations, is just a chatbot. AI should be leveraged to automate rote transactional activities and mundane HR operations, and help enhance the HR organization.

Unfortunately, many enterprises myopically and dangerously believe that AI should lead HR. Because HR is not about AI, those that do are bound to dehumanize HR and drive their own demise.

HR’s broader organizational mandate will have to change for AI adoption to truly succeed without dehumanizing the function and its purposes. Doing so will not be easy. Various enterprises may take a shortcut, such as deploying chatbots for simple HR operations, to appease their desire for a transformational moniker. But in today’s digital age, these organizations will be short-lived. Enterprises that weave AI into their HR functions – akin to ambient technology – to fundamentally enhance employee experience, engagement, and creativity will succeed.

Dig-It-All Enterprise: Dressing up Legacy Technology for Digital Won’t Work Anymore | Sherpas in Blue Shirts

I have long been a proponent of valuing the legacy environment, and I am still a great believer in legacy technologies. But despite the huge investments enterprises have made in their legacy environments, even though they’re desperately trying to use bolt-ons and lift and shift to avoid going the last mile, and regardless of their belief that their core business functions shouldn’t be disrupted, time is running out for piecemeal digital transformation in which old systems are dressed up to support new initiatives. It simply won’t work anymore. Why?

Digital enterprises need different operating models

Enterprises are finally realizing that there’s dissonance between the execution rhythm of a digital business and its legacy technology. Although they can spend millions to make the legacy technology run the treadmill to keep up with digital transformation, the enabling processes and people skills will never catch up. For this, enterprises will have to invest in fundamentally different operating models – in the way technology is created and consumed, the way people are hired and reskilled, and the way organizational culture evolves toward speed and agility.

Legacy technology is breeding legacy people

Our research suggests that 80 percent of modernization initiatives are simply lift and shift to newer infrastructure. In those that impact applications, less than 30 percent of the code is upgraded. Therefore, most technology shops within enterprises take comfort in the fact that their business can never move out of specific legacy technologies. They believe the applications and processes are so intertwined and complex that the business will never have the courage, or the budget, to transform them. This makes them lethargic, resulting in a large mass of people with no incentive to innovate. Such blindly accepted rules need to be challenged. Enterprises need to set the example that everything is on the table and a candidate for transformation. The transformation may be phased, but it will be done for sure. This will keep people on their toes, and incentivize them to upskill themselves and drive better outcomes for the business.

Legacy technology is simply not up to the challenge

Enterprises are realizing that there is a limit to how much they can patch their technologies to beautify them for the digital world. Our research suggests that every one to two years enterprises realize their mistake, as the refurbished legacy technology becomes legacy again. They now believe they will either have to take the hard route of going the last mile of transformation, or retire their legacy technology and start from a blank slate. This is a difficult conundrum, as 60 percent of enterprises lack a strong digital vision and, therefore, are confused about the future of their legacy technology.

Organizations that continue to believe they can put band-aids on their legacy technology and call it digital have lessons to learn from Digital Pinnacle Enterprises. Our research suggests that these businesses, which are deriving meaningful benefits from their digital initiatives, are 36 percent more mature in adopting digital technologies than their peers. These enterprises understand the limitations legacy technologies place on their business. Though they realize they cannot get rid of the legacy technology overnight, they also understand they have to move fast or get outdone in the market.

The courageous enterprises that understand that legacy technology is hard to change, is built on monolithic architectures, requires humongous investment to run, and doesn’t allow the business the flexibility to adapt to market demand – and that are willing to “Dig-It-All” for digital – will succeed in the long run.

What has your experience been with legacy technologies in digital transformation initiatives? It would be great to hear your views, whether good, bad, or ugly. Please do share with me at [email protected].

PaaS, be Warned: APIs are Here and Containers Are Coming | Sherpas in Blue Shirts

A few months ago, Workday, the enterprise HCM software company, entered the Platform-as-a-Service (PaaS) world by launching (or opening, as it said) its own platform offering. This revives the debate over whether using PaaS to develop applications is the right way to go as an enterprise strategy.

Many app developers turning from PaaS to APIs

While there are multiple arguments in favor of PaaS, an increasing number of application developers believe that APIs may be a better and quicker way to develop applications. Pro-API points include:

  • PaaS is difficult and requires commitment, whereas any API can be consumed by developers with simple documentation from the provider (see the sketch after this list)
  • Developers can realistically master only a couple of PaaS platforms, which limits their ability to create exciting applications
  • PaaS involves significant developer training, unlike APIs
  • PaaS creates vendor platform lock-in, whereas APIs are fungible and can be replaced when needed
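To make the first and last bullets concrete, here is a minimal sketch of what “just consuming an API” looks like for a developer. The endpoint, payload, and API key below are hypothetical placeholders, not a real provider.

```python
# Hedged sketch: calling a hypothetical hosted NLP API with nothing more than
# its documentation. Swapping providers mostly means changing the URL and the
# payload mapping, whereas a PaaS commitment touches the whole delivery chain.
import requests

API_URL = "https://api.example-nlp.com/v1/sentiment"   # hypothetical endpoint
API_KEY = "<YOUR_API_KEY>"                              # placeholder credential

def detect_sentiment(text: str) -> str:
    response = requests.post(
        API_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["label"]

print(detect_sentiment("The new release fixed our billing issue."))
```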

Containers moving from PaaS enablers to an alternative approach

In addition, the rise of containers and orchestration platforms, such as Kubernetes, is bringing more sleepless nights to the Platform-as-a-Service brigade. Most developers believe containers’ role in standardizing the operating environment casts a strong shadow on the traditional role of PaaS.

While containers were earlier touted as PaaS enablers, they will increasingly be used as an alternative approach to application development. The freedom they provide to developers is immense and valuable. Although PaaS may offer more environment control to enterprise technology shops, it needs to evolve rapidly to become a true development platform that lets developers focus on application development. And while PaaS promised elasticity, automated provisioning, security, and infrastructure monitoring, it still requires significant work from the developer’s end. This work frustrates developers, and is a possible cause for the rise of the still-nascent, but much-talked-about, serverless architecture. This is evident from the fact that most leading PaaS providers, such as Microsoft Azure, Cloud Foundry, and OpenShift, are introducing Kubernetes support.

As containers get deployed for production at scale, they are moving out of the PaaS layer and directly providing infrastructure control to developers. This is helping developers consume automated operations at scale, a promise that PaaS couldn’t fulfill because of its higher level of abstraction. Kubernetes and other orchestration platforms can organize these containers to deliver portable, consistent, and standardized infrastructure components.
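As a hedged illustration of that direct control, the sketch below uses the official Kubernetes Python client to create a small Deployment straight from developer code, with no PaaS abstraction in between. The image name, replica count, and namespace are placeholder assumptions.

```python
# Assumption-laden sketch: a developer driving container orchestration directly
# through the Kubernetes API instead of going through a PaaS layer.
from kubernetes import client, config

config.load_kube_config()          # uses the developer's local kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                             # placeholder
        selector=client.V1LabelSelector(match_labels={"app": "demo-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="demo-api",
                    image="registry.example.com/demo-api:1.0",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Create (or fail loudly on) the Deployment in the chosen namespace.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```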

All is not lost for PaaS

However, given strong enterprise adoption, all is not lost for PaaS. Enterprises will take significant time to test containers as an alternative to a PaaS environment. Moreover, given that no major PaaS or IaaS vendor other than Google owns container technology, there is an inherent interest among large cloud providers such as AWS and Azure in building an alternative to containers. No wonder most of them are now pushing their serverless offerings in the market as an alternative architectural choice.

Which of these architectural preferences will eventually become standard, if any, is a difficult pick as of today. Yet, while it’s a certainty that infrastructure operations will completely change in the next five years, most enterprise shops aren’t investing meaningfully in the new tools and skills required to make this shift. Thus, the forward-looking enterprises that recognize this tectonic shift will trample their competition. No ifs, ands, or buts about it.

What has been your experience with containers, APIs, microservices, serverless, and Platforms-as-a-Service? Do you think you need all of them, or do you have preferences? Do share with me at [email protected].

Digital Transformation: Is Design Thinking Failing us? No, We Are Failing It | Sherpas in Blue Shirts

In addition to Everest Group’s work with enterprises on design thinking, I have recently participated in more than a dozen design thinking-focused discussions and analyst events with digital service providers, including design companies, system integrators, and consultancies.

All the providers talk about the great work they have been doing with clients leveraging design thinking. But it is very clear that they are missing the larger context of design thinking. This, in turn, is impacting the value they can generate for their clients. And unless they embrace a different approach, they will not be able to help their clients become world-class digital adopters.

Three issues with the way design thinking principles are leveraged in client work

  1. Obsessed with persona: Most digital service providers focus on solving the problems of one specific persona in an enterprise – e.g., doctors, sales agents, pilots, or shop floor managers – and largely ignore the ecosystem around that persona. Realizing that the solution they designed for that persona creates complexities for others in the ecosystem, they then design solutions for each of those personas. This becomes a never-ending loop that not only frustrates the client but also fails to create the intended value. In the worst cases, the digital solution designed is impractical and cannot be deployed by the enterprise. This defeats the entire design thinking initiative, and wastes considerable investments of time and money.
  2. Over-focused on the “known”: Most design thinking workshops focus on users’ evident, current problems, but fail to address unarticulated needs. There are three reasons for this. First, the workshops typically run on a crunched timeline. Second, digital service providers believe it can be difficult to explain, and to get funding for, unarticulated needs. Third, the users themselves are more focused on their tangible challenges than on issues they cannot visualize. But this sole focus on the known limits the impact a truly successful design thinking initiative can create for an enterprise.
  3. Driven in closed rooms: Only 20% of design thinking workshops are carried out in users’ real working environments. Because the rest are conducted in closed conference rooms, user input is based on memory and perception, rather than real-time observation of day-to-day activities. Thus, the resulting solution cannot help but fall short of expectations and address only part of the problem when it is implemented in the real world.

Aspiring world-class digital enterprises must make design thinking the epicenter of their transformation initiatives. To gain all the benefits and value of design thinking, I strongly recommend enterprises:

  • Have a broad perspective of the problems they are trying to address, rather than obsessing on specific user requirements
  • Require their service providers to observe their users in their real working environment, and to map the other stakeholders with whom those users frequently engage
  • Tie the digital service providers’ financial incentives to the outcomes of their digital initiative

Have you run or attended a design thinking workshop? Experienced a highly successful, or miserably failed, design thinking initiative? Please share with me at [email protected].

Digital Transformation: the Future of SDLC is not DevOps, it is “no SDLC” | Sherpas in Blue Shirts

One of the keys to successful digital initiatives is quick release of better application functionalities. Enter DevOps, the philosophy of automating the entire infrastructure set-up, quality assurance, environment provisioning, and similar processes to quicken the pace of application delivery. Everest Group research suggests that 25 percent of enterprises believe that DevOps will become the leading model for their software development lifecycle (SDLC) in the next three to five years. While this number appears small, I am quite upbeat about it.

But, let’s step back for a moment. What if SDLC was no longer a lifecycle with different demarcated teams but instead a point in time that kept repeating itself? This “no SDLC” concept would make discrete processes within the traditional SDLC disappear or collapse (not compress) into one. Enterprise shops wouldn’t have to fret over these processes anymore.

I believe a no SDLC future will come faster than we expect. Here are four key indicators:

  1. Serverless: Amazon’s introduction of Lambda a few years ago, and now Microsoft’s introduction of Event Grid to enhance Azure Functions, have brought serverless into the spotlight. Everest Group research suggests that 10-15 percent of new application projects leverage serverless architecture to some extent. The philosophy behind serverless – getting the required infrastructure as and when needed rather than provisioning it up front – is very strong. So, if we don’t need a long-drawn infrastructure set-up, why would we need DevOps? Developers can focus on coding rather than managing the infrastructure their applications will run on (a minimal handler sketch follows this list). True, this is a strong theoretical argument that needs to be tested. However, the value it brings to enterprises cannot be ignored.
  2. Artificial Intelligence: Why can’t applications create themselves and drive everything on their own? Most software projects begin by understanding business requirements. What if AI systems could understand past user behavior and automatically predict the requirements, making the traditional phase redundant? These AI systems could also manage the underlying infrastructure in an event-driven manner, instead of architecting the applications for specific infrastructure.
  3. Automation: Automation across the SDLC is becoming pervasive, moving into previously unexplored areas. Enterprises are experimenting with automation to convert business requirements into technical specifications, Agile teams are automating user stories, and IT operations teams are automating the provisioning, configuration, and management of infrastructure. Adding cloud, the old hat, into the mix creates a fundamentally different SDLC. With application infrastructure (middleware), infrastructure management, and updates moving to the cloud, an application team’s need for downstream activities is fundamentally reduced.
  4. Lowcode: Companies such as Appian, K2, and Outsystems have made a strong case for low-code development. This has less to do with making SDLC a point-in-time activity, but it is still very relevant. If developers (or business users with less coding experience) can leverage these platforms and rapidly deliver applications, the functionalities will be created and deployed at run time. The key reason we have downstream management is that once features are built, they are protected and managed by the IT operations team. Low-code allows developers to deliver functionalities rapidly, even possibly on-the-fly, and reduces the need for large-scale management.
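To ground item 1, here is a minimal sketch of an AWS Lambda-style handler. There is no server to provision, patch, or capacity-plan; the platform invokes the function on demand when an event arrives. The event shape and the order-total logic are illustrative assumptions, not a reference design.

```python
# Hedged sketch of a serverless function: the handler(event, context) signature
# is the Lambda convention; the payload below is a made-up example.
import json

def handler(event, context):
    # 'event' carries the incoming request; 'context' carries runtime metadata.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```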

All the above will eventually tie to the nirvana of the as-a-service economy. It’s the software development equivalent of buying a car and maintaining it for years, versus getting a new, worry-free car every day (sounds like Uber, right?).

The current no SDLC wave is about compressing the processes to blur the lines between the different phases. Once the above future is achieved, no SDLC will expand to make the processes unnecessary, rather than shifting responsibilities outside the enterprise. Organizations that really want to succeed in the digital world must strive towards this goal and commit to a no SDLC world.

Explainable AI? Why Should Humans Decide Robots are Right or Wrong? | Sherpas in Blue Shirts

I have been a vocal proponent of making artificial intelligence (AI) systems more white box – able to “explain” why and how they came to a particular decision – than they are today. Therefore, I am heartened to see increasing debate around this. For example, Erik Brynjolfsson and Andrew McAfee, the well-known authors of the book “The Second Machine Age,” have increasingly spoken about this.

However, sometimes I debate with myself on various aspects of explainable AI.

What are we explaining? If we have a white box AI system, a human can understand “why” the system made a certain decision. But who decides whether or not the decision was correct? For example, in the movie “I, Robot,” the lead character (played by actor Will Smith) thought the humanoid robot should have saved a child instead of him. Was the robot wrong? The robot certainly did not think so. If humans make the “right or wrong” decision for robots, doesn’t that defeat the self-learning purpose of AI?

Why are we explaining? Humans seek explanation when they lack understanding of something or confidence in the capabilities of other humans. Similarly, we seek explanation from artificial intelligence because, at some level, we aren’t sure about the capabilities of these systems. (Of course, there’s also humans’ control freak characteristic.) Why? Because these are “systems” and systems have problems. But humans also have “problems,” and what each individual person considers “right” is defined by their own value system, surroundings, and situations. What’s right for one person may be wrong for another. Should this contextual “rightness or wrongness” also apply to AI systems?

Who are we explaining to? Should an AI system’s outcome be analyzed by humans or by another AI system? Why do we believe we have what it takes to assess the outcome, and who gave us the right to do so? Could we become more trusting, and accept that one AI system can assess another? Perhaps, but this defeats the very debate on explainable AI.

Who do we hold responsible for the decisions made by Artificial Intelligence?

The mother lode of complexity here is responsibility. Because of humans’ belief that AI systems won’t be “responsible” in their decisions – based on individuals’ biased definitions of what is right or wrong – regulations are being developed to hold humans responsible for the decisions of AI systems. Of course, these humans will want an explanation from the AI system.

Regardless of which side of the argument you are on, or even if you pivot daily, there’s one thing I believe we can agree on…if we end up making better machines than better humans through artificial intelligence, we have defeated ourselves.

Voice-enabled Enterprise Applications: NOT a Good Idea | Sherpas in Blue Shirts

With the increasing proliferation of voice-enabled personal digital assistants (e.g., Cortana, Google Assistant, Samsung’s Bixby, and Siri), enterprise application vendors are considering jumping into the fray. In fact, some vendors have gone so far as to believe that, in the near future, 90 percent of interactions with their enterprise applications will be through voice or digital assistants.

But would these vendors actually be solving a real business problem with these capabilities, or perhaps instead getting intoxicated by drinking their own Kool-aid?

True, there are many potent arguments for voice-enabled interaction, including eliminating the need to train users on how to operate a given application, and the promise of higher productivity as the volume of data a person needs to physically enter into a system is greatly reduced.

But consider the realities of the user experience. Picture this: you’re sitting in your office writing a report for a customer when, all of a sudden, you hear your colleague say to his laptop or smartphone, “OK SAP/Oracle, create an invoice.” How disruptive would this be to your work? Vendors could potentially tune their applications to a frequency that, when coupled with an additional device, keeps a user’s speech from being heard by others. But that sounds too cumbersome and meaningless.

So what’s the right enterprise application vision for vendors?

The vision vendors should drive toward is one of no interaction between users and the enterprise application. Given that people engage with enterprise applications because they “have” to, not because they “want” to, how about a future where user activities are tracked, and all the related processes execute on their own?

For example, what if an enterprise’s system could manage all aspects of its employees’ travel expenses, rather than each individual filing expenses, and an army of staff reconciling and paying them? Such a nirvana of automated business processes would have tremendous impact on business agility, cost savings, and the user experience.

Voice assistance could potentially be of value in the back-end of enterprises’ systems. Their support staff could get a tremendous boost if they could “speak-fix” a problem instead of debugging complex code every time something went wrong. System designers and builders might also derive some value from voice interactions.

Enterprise applications vendors need to carefully consider whether they are solving the right problem with voice enablement. Could their smart and expensive developers be deployed elsewhere to solve complex and pertinent business problems, rather than creating a potentially unnecessary user experience? I believe that, at the end of the day, voice enablement is likely not the right, broad-based overhaul for the user experience with enterprise applications. Vendors need to focus on what users need, rather than getting caught up in the fancy of using the latest shiny digital toys.
