Author: Yugal Joshi

AI Helping DevOps: Don’t Ask, Don’t Assume – KNOW What Users Really Want | Sherpas in Blue Shirts

With DevOps’ core goal of putting applications in users’ hands more quickly, it’s no surprise that many enterprises have started to release and deploy software up to five times a month, instead of their earlier once-a-quarter schedule. Everest Group research suggests that over 25% of enterprises believe DevOps will become the de facto application delivery model.

However, there continues to be a disconnect between what business users want and what they get. To be fair to developers and IT teams, this disconnect is due, in part, to end-users’ difficulty in articulating their needs and wants.

Enter AI Systems

AI Systems have strong potential to help product management teams cut through the noise and zero in on the features their users truly find most valuable. Consider these areas of high impact:

  1. Helping developers at run time: Instead of developers having to slog through requirements, feature files, and feedback logs – and likely miss half the input – AI-led “code assister” bots can help them, during the actual coding process, to ensure that the requested functionality is created
  2. Prioritizing feedback: Rather than wasting time on archaic face-to-face meetings to prioritize features requested in the dizzying amount of feedback received from users, enterprises should build an AI system to rank requests from high to low, and dynamically re-rank them as new data comes in
  3. Stress testing feedback: After prioritization, AI systems should help enterprises segregate the features users really want, versus those they think they want. AI can do this by crunching the massive volume of feedback data through machine learning and finding recurring patterns that suggest consensus (a minimal sketch of this clustering idea follows this list). The feedback data should also be fed back to business users to educate them on how well the features they demand align with the market
  4. Aligning development, QA, and production: Through its inherently neutral perspective, an AI system can smooth over the dissonance among the different teams by crunching all the data across the feedback systems to outline disconnects and create the alignment needed to satisfy end-user needs
  5. Predicting features: While this is still far-fetched, enterprises and technology vendors should work toward AI solutions that can predict features that will be requested in the next sprint based on historical data. In fact, AI systems should be able to analyze data across other enterprises as well to suggest relevant features to developers. The predictions could then be validated with real feedback from beta users, and the AI system further trained based on the validations
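
To make point 3 above concrete, here is a minimal sketch – an illustration, not a production design – of how recurring patterns might be surfaced from raw feedback by clustering. It assumes Python with scikit-learn installed; the feedback strings and the cluster count are made-up placeholders.

```python
# Illustrative sketch only: cluster raw user feedback to surface recurring
# patterns that suggest consensus (point 3 above). Feedback strings are made up.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "Export to PDF is broken on mobile",
    "Please add PDF export to the phone app",
    "Dashboard loads slowly every Monday morning",
    "Mobile export to PDF fails silently",
    "Dashboard performance is terrible at the start of the week",
]

# Represent each item as a TF-IDF vector so similar wording lands close together
vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)

# Group the items; in practice the cluster count would be tuned, not hard-coded
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Rank clusters by size: a bigger cluster = more users asking for the same thing
for cluster_id, count in Counter(kmeans.labels_.tolist()).most_common():
    print(f"Cluster {cluster_id}: {count} similar requests")
    for text, label in zip(feedback, kmeans.labels_):
        if label == cluster_id:
            print(f"  - {text}")
```

A real system would feed these ranked clusters back into the prioritization loop in point 2, re-running the analysis as new feedback arrives.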

There are multiple other areas in which AI can potentially assist in understanding what the users want. For example, as we discussed in earlier research, AI can help developers create secure, performance-tuned, and production-ready code without being bogged down by typical feedback on features from the field.

What about Budget?

The good news is that such an AI system will not burn a massive hole in enterprises’ budgets, and should not require the zillions of data points that most typical, complex AI systems do. I believe these systems can be based on simple log data, performance feedback cycles, feature file databases, requirements catalogs, and other already existing assets. If that’s the case, they have great potential to help enterprises develop software their end-users really want.

Have you deployed AI in your Agile DevOps delivery cycle? I’d love to hear about it at [email protected].

Will AI Take the “H” Out of HR? Not if Done Well | Sherpas in Blue Shirts

Most people talk about how AI will transform both the transactional and strategic HR functions across recruiting, performance management, career guidance, and operations. Technology vendors such as Belong.co, Glider.ai, Hirevue, MontageTalent, and Talla are often cited as transforming different facets of the HR function.

So the burning question here is, will AI technologies eventually transform the HR function for good? Or will they dehumanize it? Let’s look at some fundamental issues.

HR Works within the Enterprise’s Constraints

A focus on creating individual-centric training, incentives, performance management, and career development plans is noble. However, HR may well not have the budget, and the organization’s processes may well not allow these in reality. Most organizations have a fixed number of roles (bands) into which employees are fit. And there is a fixed L&D budget, treated as a cost, which prevents meaningful investment in programs for individual employees.

HR Hardly Understands Technology

Most of today’s enterprises are looking to hire “digital HR” specialists who understand the confluence of technologies and HR. Because very few exist, the businesses themselves need to teach and handhold non-digital HR people about the value of AI principles in their mundane tasks, such as CV/resume shortlisting, as well as in their creative work, such as performance management and employee engagement.

Senior Leadership’s Flawed Perception of HR

While every enterprise claims that its employees are its greatest asset, senior leaders don’t always perceive HR to be a strategic function. Many senior executives view HR as a department they need to deal with only when team members are joining or leaving the organization, and consider everything in between transactional. This perception does not allow meaningful investments in HR technologies, much less AI-based services. As AI systems are comparatively expensive, they require senior leadership’s full support for the business case and execution, and HR will likely not be on the radar screen.

HR’s Flawed Perception of Itself

Most HR departments consider themselves to be recruitment, training, and performance management engines. They fail to think strategically about their role as a crucial enabler of a digital business. Because most HR executives don’t perceive themselves to be C-level material, their view becomes self-fulfilling. Many HR executives also silently fear that their relevance in the organization will be eliminated if seemingly rote activities are automated by AI.

I believe that AI systems provide tremendous opportunities for HR transformation – if the HR function is willing to transform. It needs to make a strong business case for adopting AI based on hard facts, such as delays in employee hiring, the number of potential candidates missed due to timeline constraints, poor retention because of gaps in performance management, inferior employee engagement due to limited visibility into what employees really want, and compliance issues.

However, there is a tightrope to be walked here. As HR is fundamentally about humans, AI should be assisting the function, not driving it. A chatbot, which may become the face of HR operations, is just a chatbot. AI should be leveraged to automate rote transactional activities and mundane HR operations, and help enhance the HR organization.

Unfortunately, many enterprises myopically and dangerously believe that AI should lead HR. HR is not about AI; enterprises that take this path are bound to dehumanize HR and drive their own demise.

HR’s broader organizational mandate will have to change for AI adoption to truly succeed without dehumanizing the function and its purpose. Doing so will not be easy. Various enterprises may take a shortcut, such as deploying chatbots for simple HR operations, to appease their desire for a transformational moniker. But in today’s digital age, these organizations will be short-lived. Enterprises that weave AI into their HR functions – akin to ambient technology – to fundamentally enhance employee experience, engagement, and creativity will succeed.

Dig-It-All Enterprise: Dressing up Legacy Technology for Digital Won’t Work Anymore | Sherpas in Blue Shirts

I have long been a proponent of valuing the legacy environment, and I am still a great believer in legacy technologies. But despite the huge investments enterprises have made in their legacy environments, their desperate attempts to use bolt-ons and lift and shift to avoid going the last mile, and their belief that their core business functions shouldn’t be disrupted, time is running out for piecemeal digital transformation in which old systems are dressed up to support new initiatives. It simply won’t work anymore. Why?

Digital enterprises need different operating models

Enterprises are finally realizing that there’s dissonance between the execution rhythm of a digital business and its legacy technology. Although they can spend millions to make the legacy technology run the treadmill to keep up with digital transformation, the enabling processes and people skills will never catch up. Enterprises will instead have to invest in fundamentally different operating models: in the way technology is created and consumed, in the way people are hired and reskilled, and in the way organizational culture evolves toward speed and agility.

Legacy technology is breeding legacy people

Our research suggests that 80 percent of modernization initiatives are simply lift and shift to newer infrastructure. In those that do impact applications, less than 30 percent of the code is upgraded. Therefore, most technology shops within enterprises take comfort in the belief that their business can never move out of specific legacy technologies. They believe the applications and processes are so intertwined and complex that the business will never have the courage, or the budget, to transform them. This makes them lethargic, resulting in a large mass of people with no incentive to innovate. Such entrenched assumptions need to be challenged. Enterprises need to set the example that everything is on the table as a candidate for transformation. The transformation may be phased, but it will be done for sure. This will keep people on their toes, and incentivize them to upskill themselves and drive better outcomes for the business.

Legacy technology is simply not up to the challenge

Enterprises are realizing that there is a limit to how far they can patch their technologies to beautify them for the digital world. Our research suggests that every one to two years enterprises realize their mistake, as the refurbished legacy technology becomes legacy again. They now believe they will either have to take the hard route of going the last mile of transformation, or shut down their legacy technology and start from a blank slate. This is a difficult conundrum, as 60 percent of enterprises lack a strong digital vision and, therefore, are confused about the future of their legacy technology.

Organizations that continue to believe they can put band-aids on their legacy technology and call it digital have lessons to learn from Digital Pinnacle Enterprises. Our research suggests that these businesses, which are deriving meaningful benefits from their digital initiatives, are 36 percent more mature in adopting digital technologies than their peers. These enterprises understand the limitations legacy technologies put on their business. Though they realize they cannot get rid of the legacy technology overnight, they also understand they have to move fast or get outdone in the market.

The courageous enterprises – those that understand that legacy technology is hard to change, is built on monolithic architectures, requires humongous investment to run, and doesn’t give the business the flexibility to adapt to market demand, and that are willing to “Dig-It-All” for digital – will succeed in the long run.

What has your experience been with legacy technologies in digital transformation initiatives? It would be great to hear your views, whether good, bad, or ugly. Please do share with me at [email protected].

PaaS, be Warned: APIs are Here and Containers Are Coming | Sherpas in Blue Shirts

A few months ago, Workday, the enterprise HCM software company, entered the Platform-as-a-Service (PaaS) world by launching (or opening, as it said) its own platform offering. This revives the debate about whether using PaaS to develop applications is the right way to go as an enterprise strategy.

Many app developers turning from PaaS to APIs

While there are multiple arguments in favor of PaaS, an increasing number of application developers believe that APIs may be a better and quicker way to develop applications. Pro-API points include:

  • PaaS is difficult and requires commitment, whereas any API can be consumed by developers with simple documentation from the provider (see the sketch after this list)
  • Developers can realistically master only a couple of PaaS platforms, which limits their ability to create exciting applications
  • PaaS involves significant developer training, unlike APIs
  • PaaS creates vendor platform lock-in, whereas APIs are fungible and can be replaced when needed
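
To illustrate the first bullet, here is a minimal sketch of what “consuming an API with just the provider’s documentation” looks like in practice. It assumes Python with the requests library; the endpoint, fields, and key are hypothetical placeholders, not a real provider’s API.

```python
# Hypothetical example: creating an invoice through a third-party REST API.
# The URL, payload fields, and auth scheme below are made up for illustration.
import requests

response = requests.post(
    "https://api.example-payments.com/v1/invoices",  # hypothetical endpoint
    headers={"Authorization": "Bearer <API_KEY>"},   # auth per provider docs
    json={"customer_id": "cust_123", "amount_cents": 4999, "currency": "USD"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the created invoice record
```

Swapping providers means changing a URL and a payload shape – the fungibility the last bullet points to – whereas leaving a PaaS typically means re-platforming the application.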

Containers moving from PaaS enablers to an alternative approach

In addition, the rise of containers and orchestration platforms, such as Kubernetes, is bringing more sleepless nights to the Platform-as-a-Service brigade. Most developers believe containers’ role in standardizing the operating environment casts a long shadow over the traditional role of PaaS.

While containers were earlier touted as PaaS enablers, they will increasingly be used as an alternative approach to application development. The freedom they provide to developers is immense and valuable. Although PaaS may offer more environment control to enterprise technology shops, it needs to evolve rapidly to become a true development platform that allows developers to focus on application development. And while PaaS promised elasticity, automated provisioning, security, and infrastructure monitoring, it still requires significant work from the developer’s end. This work frustrates developers, and is a possible cause for the rise of the still nascent, but rapidly talked about, serverless architecture. The pressure is evident in the fact that most leading PaaS providers, such as Microsoft Azure, Cloud Foundry, and OpenShift, are introducing Kubernetes support.

As containers get deployed for production at scale, they are moving out of the PaaS layer and directly providing infrastructure control to developers. This helps developers consume automated operations at scale, a promise that PaaS couldn’t fulfill due to its higher level of abstraction. Kubernetes and other orchestration platforms can organize these containers to deliver portable, consistent, and standardized infrastructure components.
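
As a hedged illustration of that direct infrastructure control, the sketch below uses the official Kubernetes Python client to scale a deployment programmatically – the kind of automated operation developers can now drive themselves rather than through a PaaS layer. It assumes a reachable cluster, a local kubeconfig, and a deployment named “web” (a hypothetical name).

```python
# Illustrative sketch: driving container operations directly via the official
# Kubernetes Python client, without a PaaS abstraction in between.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config
apps = client.AppsV1Api()

# Scale a hypothetical deployment named "web" to 5 replicas
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Confirm the desired replica counts across the namespace
for deployment in apps.list_namespaced_deployment("default").items:
    print(deployment.metadata.name, deployment.spec.replicas)
```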

All is not lost for PaaS

However, given strong enterprise adoption, all is not lost for PaaS. Enterprises will take significant time to test containers as an alternative to a PaaS environment. Moreover, given that no major PaaS or IaaS vendor other than Google has deep roots in container technology, large cloud providers such as AWS and Azure have an inherent interest in building alternatives to containers. No wonder most of them are now pushing their serverless offerings in the market as an alternative architectural choice.

Which of these architectural preferences will eventually become the standard, if any, is a difficult pick as of today. Yet, while it’s a certainty that infrastructure operations will completely change in the next five years, most enterprise shops aren’t investing meaningfully in the new tools and skills required to make this shift. Thus, the forward-looking enterprises that grasp this tectonic shift will trample their competition. No ifs, ands, or buts about it.

What has been your experience with containers, APIs, microservices, serverless, and Platforms-as-a-Service? Do you think you need all of them, or do you have preferences? Do share with me at [email protected].

Digital Transformation: Is Design Thinking Failing us? No, We Are Failing It | Sherpas in Blue Shirts

In addition to Everest Group’s work with enterprises on design thinking, I have recently participated in more than a dozen design thinking-focused discussions and analyst events with digital service providers, including design companies, system integrators, and consultancies.

All the providers talk about the great work they have been doing with clients leveraging design thinking. But it is very clear that they are missing the larger context of design thinking. This, in turn, is impacting the value they can generate for their clients. And unless they embrace a different approach, they will not be able to help their clients become world-class digital adopters.

Three issues with the way design thinking principles are leveraged in client work

  1. Obsessed with persona: Most digital service providers focus on solving the problems of one specific persona in an enterprise – e.g., doctors, sales agents, pilots, or shop floor managers – and largely ignore the ecosystem around that persona. When they realize that the solution they designed for that persona creates complexities for others in the ecosystem, they design solutions for each of those personas in turn. This becomes a never-ending loop that not only frustrates the client but also fails to create the intended value. In the worst cases, the digital solution designed is impractical, and cannot be deployed by the enterprise. This defeats the entire design thinking initiative, and wastes considerable investments of time and money
  2. Over-focused on the “known”: Most design thinking workshops focus on users’ evident, current problems, but fail to address unarticulated needs. There are three reasons for this. First, the workshops typically run on a crunched timeline. Second, digital service providers believe it can be difficult to explain, and get funding for, unarticulated needs. Third, the users themselves are more focused on their tangible challenges than on issues they cannot visualize. But this sole focus on the known limits the impact a truly successful design thinking initiative can create for an enterprise
  3. Driven in closed rooms: Only 20% of design thinking workshops are carried out in users’ real working environments. As the rest are conducted in closed conference rooms, user input is based on memory and perception rather than real-time observation of day-to-day activities. Thus, when the resulting solution is implemented in the real world, it cannot help but fall short of expectations and address only part of the problem

Aspiring world-class digital enterprises must make design thinking the epicenter of their transformation initiatives. To gain all the benefits and value of design thinking, I strongly recommend enterprises:

  • Have a broad perspective of the problems they are trying to address, rather than obsessing on specific user requirements
  • Require their service providers to observe their users in their real working environment, and map the other stakeholders with whom those users frequently engage
  • Tie the digital service providers’ financial incentives to the outcomes of their digital initiative

Have you run or attended a design thinking workshop? Experienced a highly successful, or miserably failed, design thinking initiative? Please share with me at [email protected].

Digital Transformation: the Future of SDLC is not DevOps, it is “no SDLC” | Sherpas in Blue Shirts

One of the keys to successful digital initiatives is quick release of better application functionalities. Enter DevOps, the philosophy of automating the entire infrastructure set-up, quality assurance, environment provisioning, and similar processes to quicken the pace of application delivery. Everest Group research suggests that 25 percent of enterprises believe that DevOps will become the leading model for their software development lifecycle (SDLC) in the next three to five years. While this number appears small, I am quite upbeat seeing it.

But, let’s step back for a moment. What if the SDLC were no longer a lifecycle with different demarcated teams, but instead a point in time that kept repeating itself? This “no SDLC” concept would make discrete processes within the traditional SDLC disappear or collapse (not compress) into one. Enterprise shops wouldn’t have to fret over these processes anymore.

I believe a no SDLC future will come faster than we expect. Here are four key indicators:

  1. Serverless: Amazon’s introduction of Lambda a few years ago, and now Microsoft’s introduction of Event Grid to enhance Azure Functions, have brought serverless into the spotlight. Everest Group research suggests that 10-15 percent of new application projects leverage serverless architecture to a certain extent. The philosophy behind serverless – getting the required infrastructure as and when needed rather than provisioning it up front – is very strong. So, if we don’t need long-drawn infrastructure set-up, why would we need DevOps? Developers can focus on coding rather than managing the infrastructure their applications run on (see the sketch after this list). True, this is a strong theoretical argument that needs to be tested. However, the value it brings to enterprises cannot be ignored
  2. Artificial Intelligence: Why can’t applications create themselves and drive everything on their own? Most software projects begin by understanding business requirements. What if AI systems could understand past user behavior and automatically predict the requirements, making the traditional phase redundant? These AI systems could also manage the underlying infrastructure in an event-driven manner, instead of architecting the applications for specific infrastructure.
  3. Automation: Automation across the SDLC is becoming pervasive, moving into previously unexplored areas. Enterprises are experimenting with automation to convert business requirements to technical specifications, Agile teams are automating user stories, and IT operations teams are automating the provisioning, configuration, and management of infrastructure. Adding cloud, the old hat, into the mix creates a fundamentally different SDLC. With application infrastructure (middleware), infrastructure management, and updates moving to the cloud, an application team’s need for downstream activities is fundamentally reduced
  4. Low-code: Companies such as Appian, K2, and OutSystems have made a strong case for low-code development. This has less to do with making the SDLC a point-in-time activity, but it is still very relevant. If developers (or business users with less coding experience) can leverage these platforms and rapidly deliver applications, functionalities will be created and deployed at run time. The key reason we have downstream management is that once features are built, they are protected and managed by the IT operations team. Low-code allows developers to deliver functionalities rapidly, possibly even on-the-fly, and reduces the need for large-scale management
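
To ground the serverless point (item 1), here is a minimal sketch of an AWS Lambda handler in Python. The developer writes only this function; provisioning, scaling, and the runtime environment are the platform’s concern. The event fields are illustrative assumptions – real events depend on the configured trigger.

```python
# Minimal AWS Lambda handler: the entire "deployable unit" a developer writes.
# No server set-up, no capacity planning - the platform invokes this per event.
import json

def handler(event, context):
    # 'event' carries the trigger payload (e.g., an API Gateway request);
    # the "name" field here is an illustrative assumption
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```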

All the above will eventually tie to the nirvana of the as-a-service economy. It’s the software development equivalent of buying a car and maintaining it for years, versus getting a new, worry-free car every day (sounds like Uber, right?).

The current no SDLC wave is about compressing the processes to blur the lines between the different phases. Once the above future is achieved, no SDLC will expand to make the processes unnecessary, rather than shifting responsibilities outside the enterprise. Organizations that really want to succeed in the digital world must strive towards this goal and commit to a no SDLC world.

Explainable AI? Why Should Humans Decide Robots are Right or Wrong? | Sherpas in Blue Shirts

I have been a vocal proponent of making artificial intelligence (AI) systems more white box – able to “explain” why and how they came to a particular decision – than they are today. Therefore, I am heartened to see increasing debate around this. For example, Erik Brynjolfsson and Andrew McAfee, the well-known authors of the book “The Second Machine Age,” have increasingly spoken about this.

However, sometimes I debate with myself on various aspects of explainable AI.

What are we explaining? If we have a white box AI system, a human can understand “why” the system made a certain decision. But who decides whether or not the decision was correct? For example, in the movie “I, Robot,” the lead character (played by actor Will Smith) thought the humanoid robot should have saved a child instead of him. Was the robot wrong? The robot certainly did not think so. If humans make the “right or wrong” decision for robots, doesn’t that defeat the self-learning purpose of AI?

Why are we explaining? Humans seek explanation when they lack understanding of something or confidence in the capabilities of other humans. Similarly, we seek explanation from artificial intelligence because, at some level, we aren’t sure about the capabilities of these systems. (Of course, there’s also humans’ control freak characteristic.) Why? Because these are “systems” and systems have problems. But humans also have “problems,” and what each individual person considers “right” is defined by their own value system, surroundings, and situations. What’s right for one person may be wrong for another. Should this contextual “rightness or wrongness” also apply to AI systems?

Who are we explaining to? Should an AI system’s outcome be analyzed by humans or by another AI system? Why do we believe we have what it takes to assess the outcome, and who gave us the right to do so? Can we become more trusting, and believe that an AI system can assess another? Perhaps, but this defeats the very purpose of the debate on explainable AI.

Who do we hold responsible for the decisions made by Artificial Intelligence?

The mother lode of complexity here is responsibility. Due to humans’ belief that AI systems won’t be “responsible” in their decisions – based on individuals’ biased definitions of what is right or wrong – regulations are being developed to hold humans responsible for the decisions of AI systems. Of course, these humans will want an explanation from the AI system.

Regardless of which side of the argument you are on, or even if you pivot daily, there’s one thing I believe we can agree on…if we end up making better machines than better humans through artificial intelligence, we have defeated ourselves.

Voice-enabled Enterprise Applications: NOT a Good Idea | Sherpas in Blue Shirts

With the increasing proliferation of voice-enabled personal digital assistants (e.g., Cortana, Google Assistant, Samsung’s Bixby, and Siri), enterprise application vendors are considering jumping into the fray. In fact, some vendors have gone so far as to believe that, in the near future, 90 percent of interactions with their enterprise applications will be through voice or digital assistants.

But would these vendors actually be solving a real business problem with these capabilities, or perhaps instead getting intoxicated by drinking their own Kool-aid?

True, there are many potent arguments for voice-enabled interaction, including eliminating the need to train users on how to operate a given application, and the promise of higher productivity as the volume of data a person must physically enter into a system is greatly reduced.

But consider the realities of the user experience. Picture this: you’re sitting in your office writing a report for a customer, when all of a sudden you hear your colleague saying to his laptop or smartphone, “OK SAP/Oracle, create an invoice.” How disruptive would this be to your work? Vendors could potentially tune their applications to a frequency at which, when coupled with an additional device, a user’s speech couldn’t be heard by others. But that sounds too cumbersome to be meaningful.

So what’s the right enterprise application vision for vendors?

The vision vendors should drive toward is one of no interaction between users and the enterprise application. Given that people engage with enterprise applications because they “have” to, not because they “want” to, how about a future where user activities are tracked, and all the related processes execute on their own?

For example, what if an enterprise’s system could manage all aspects of its employees’ travel expenses, rather than each individual filing expenses, and an army of staff reconciling and paying them? Such a nirvana of automated business processes would have tremendous impact on business agility, cost savings, and the user experience.

Voice assistance could potentially be of value in the back-end of enterprises’ systems. Their support staff could get a tremendous boost if they could “speak-fix” a problem instead of debugging complex code every time something went wrong. System designers and builders might also derive some value from voice interactions.

Enterprise applications vendors need to carefully consider whether they are solving the right problem with voice enablement. Could their smart and expensive developers be deployed elsewhere to solve complex and pertinent business problems, rather than creating a potentially unnecessary user experience? I believe that, at the end of the day, voice enablement is likely not the right, broad-based overhaul for the user experience with enterprise applications. Vendors need to focus on what users need, rather than getting caught up in the fancy of using the latest shiny digital toys.

Software Eats World, AI Eats Software … Ethics Eats AI? | Sherpas in Blue Shirts

Marc Andreessen’s famous quote about software eating the world popped up often in the last couple of years. However, the fashionable and fickle technology industry is now relying on artificial intelligence to drive similar interest. Most people following AI would agree that there is a tremendous value society can derive from the technology. AI will impact most of our lives in more ways than we can think of today. In fact, it often is hard to argue against the value AI can potentially create for society. Indeed, with the increasing noise and real development around AI, there are murmurs that AI may replace software as the default engagement model.

Artificial intelligence may replace software

Think about it. When we use our phone or Amazon Alexa to do a voice search, we simply speak, hardly using the app or software in the traditional sense. A chatbot can become a single interface for multiple software programs that allow us to pay our electric, phone, and credit card bills.

Therefore, artificial intelligence replacing software as the next technology shift is quite possible. However, can we rely on AI? Or, more precisely, can we always rely upon it? A particularly concerning issue is that of bias. Indeed, there have been multiple debates around the bias an AI system can introduce.

But can AI be unbiased?

It’s true that humans have biases. As a result, we’ve established checks and balances, such as superiors and laws, to discover and mitigate them. But how would an AI system determine whether the answer it is providing is neutral and bereft of bias? It can’t. And because of their extreme complexity, it’s almost impossible to explain why and how an AI system arrived at a particular decision or conclusion. For example, a couple of years ago Google’s algorithms classified people of a certain demographic in a derogatory manner.

It is certainly possible that the people who design AI systems may introduce their own biases into them. Worse, AI systems may, over a period of time, start developing biases of their own. Even worse, they cannot be questioned or “retaught” the correct way to arrive at a conclusion.

AI and ethics

There have already been instances in which AI systems gave results for which they weren’t even designed. Now think about this in a business environment. For example, many enterprises will leverage an AI system to screen the resumes of potential candidates. How can the businesses be sure their system isn’t rejecting good candidates due to some machine bias?

A case of this type could be considered an acceptable, genuine mistake, and it could be argued that the system isn’t doing it deliberately. However, what happens if these mistakes eventually turn into unethical behavior? We can pardon mistakes, but we shouldn’t do the same with unethical decisions. Taking it one step further, given that these systems ideally learn on their own, will their unethical behavior multiply as time progresses?

How far-fetched is it that AI systems become so habitually unethical that users become frustrated? What are the chances that humanity stops developing AI systems further when it realizes that it’s not possible to create them without biases? While every technology brings a level of evil with the good, AI’s negative aspects could multiply very fast, and mostly without explanation. If these apprehensions scare developers away, society and business could lose AI’s tremendous potential for positive improvement. That would be even more unfortunate.

As the adoption of AI systems increases, we will likely witness more cases of wrong or unethical behavior. This will raise fundamental questions, and push regulators and developers to put boundaries around these systems. But therein lies a paradox: developing systems that learn on their own, while putting boundaries around that learning – quite a contradiction. However, we must overcome these challenges to exploit the true potential of AI.

What do you think?

AI: Democratization or Dumbing of Creativity? Pick your Cause | Sherpas in Blue Shirts

The earlier assumption that artificial intelligence (AI) would impact “routine” jobs first is not holding ground anymore. Indeed, we might be deluding ourselves by thinking that the time in which AI could be the most used interface to engage with technology systems is far in the future. But I’m getting ahead of myself. Let’s look at what’s happening today in the creative sector.

IBM Watson was used last year to create a 20th Century Fox movie trailer. Adobe Sensei is putting the digital experience in the hands of non-professionals. Google is leveraging AI with AutoDraw to “help everyone create anything visual, fast.” Think about anyone, any one of us, taking a picture and then telling photo editing software what to do. No need to work with complex brushes, paints, or an understanding of color patterns.

AI and creative talent

This is scary for creative people such as graphic designers, digital artists, and others who may consider artificial intelligence a job-killing replacement of their skills, despite pundits’ claims that it will “augment” human capabilities. Antagonists proclaim that AI systems will at best reduce the overhead creative people deal with. But removing overhead is just the first step; the next step is surely the creative work itself. Moreover, AI systems can ingest so much data, and will increasingly rely on unsupervised learning to correlate so many behavioral traits, that they can create compelling creative content in ways a human artist cannot possibly fathom. Thus, artists will begin leveraging AI systems to create exceptional, “unthinkable” user experiences that have no “baseline” of reference, but may soon be replaced by these very systems.

AI and businesses

Businesses have always struggled to hire highly skilled creative professionals, and have paid through the nose to secure and retain them. The prospect of leveraging artificial intelligence to take charge of creativity, driving messages and communication to end users rather than relying on creative experts, will make them extremely pleased. As the intent of most AI systems is to enable non-specialists to perform many tasks by “hiding what is under the hood,” businesses might not need as many specialist human creative skills.

However, despite this seeming upside of AI systems, their under-the-hood nature will create problems of accountability. The complexity of their deep learning and neural networks will become such that even the teams developing the systems won’t be able to explain specific decisions they make. Thus, if something goes wrong, where should the blame be placed? With the creation team, or with the AI system itself? The system won’t care – after all, it’s just technology – and the team members will argue that the system learned by itself, far beyond what was coded, and that they cannot be held accountable for its misdeeds. This is scary for businesses.

AI and the impact beyond business

Imagine the impact this will have on society. Although you can trace back how any other technological system arrives at an answer, AI systems that are now supposed to run not only social infrastructure but also much of our lives won’t be accountable to anyone! This is scary for everyone.

I don’t want to add to the alarmist scare going on in the industry, but I cannot be blind to successful use cases I witness daily. People will argue that AI has a long way to go before it becomes a credible alternative to human-based creativity. But, the reality is that the road may not be as long as is generally perceived.
