“Software is eating the world,” wrote Marc Andreessen, co-founder and general partner of venture capital firm Andreessen Horowitz, in an essay published in The Wall Street Journal in 2011. But today, it’s clear that services are eating software. The implications of this trend are significant for companies: the advantages are clear, but so are the challenges, and most companies today are not set up to deal with a services world. I believe they need a new set of management and operating models that gives them clarity on what they are doing with services and allows them to stay in control.
The increasing popularity and uptake of Artificial Intelligence (AI) is giving rise to concerns about its risks, explainability, and the fairness of the decisions it makes. One big area of concern is bias in the algorithms that AI uses for decision making. Another risk is the probabilistic approach to handling decisions and the potential for unpredictable outcomes based on AI self-learning. These concerns are justified, given the implicit ethical and business risks, for example, the impact on people’s lives and livelihoods, or bad business decisions based on AI recommendations founded on partial data.
The good news is that the software industry is starting to address these concerns. For example, last year, vendors including Google, IBM, and Microsoft announced tools (either released or in development) for detecting bias in AI, and more announcements have followed recently.
Last year IBM brought out:
- Adversarial Robustness 360 Toolbox (ART), a Python library available on GitHub, to make machine learning models more robust against adversarial threats such as inputs that are manipulated to produce desired outputs
- AI Fairness 360, an open-source toolkit with metrics that identify bias in datasets and machine learning models, and algorithms to mitigate them
Last month, IBM further augmented its offerings with the release of AI Explainability 360, an open source toolkit of algorithms to support the understanding and explainability of machine learning models. It is a companion to the other toolkits.
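The kind of check these toolkits automate can be illustrated with a toy example. The sketch below computes two widely used group-fairness metrics, statistical parity difference and disparate impact, over a small, entirely invented set of loan decisions; AI Fairness 360 implements these and many more metrics, so this is only a minimal illustration of the idea, not the toolkit's API.

```python
# Toy illustration of two bias metrics that toolkits such as
# AI Fairness 360 compute: statistical parity difference and
# disparate impact. The data below is entirely hypothetical.

# Each record: (belongs_to_privileged_group, received_favorable_outcome)
decisions = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def favorable_rate(records, privileged):
    """Share of a group's decisions that were favorable."""
    group = [fav for priv, fav in records if priv == privileged]
    return sum(group) / len(group)

p_priv = favorable_rate(decisions, privileged=True)     # 3/4 = 0.75
p_unpriv = favorable_rate(decisions, privileged=False)  # 1/4 = 0.25

# Statistical parity difference: 0 means parity; negative values
# mean the unprivileged group receives fewer favorable outcomes.
spd = p_unpriv - p_priv

# Disparate impact: the "80 percent rule" flags ratios below 0.8.
di = p_unpriv / p_priv

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact: {di:.2f}")
```

Even this crude check surfaces a skew a human reviewer might miss at scale, which is precisely the gap the commercial and open-source toolkits aim to close.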
Cognitive Scale recently unveiled the beta of Cortex Certifai, software that automatically detects and scores vulnerabilities in black-box AI models without having access to the internals of the model. Certifai is a Kubernetes application and runs as a native cloud service on Amazon, Azure, Google, and Red Hat clouds. Cognitive Scale also unveiled the AI Trust Index. Developed in collaboration with AI Global, it will provide composite risk scores for automated black-box decision-making models. This is an interesting development that could grow to become a badge of honour for AI software, and a differentiator for those with the most trusted rating.
The Reality of Bias
While these announcements and those made last year are good news, there are aspects of AI training that will be difficult to address because bias is all around us in real life. For example, public data would show AI that there are many more male CEOs and board members than female ones, possibly leading it to conclude that male candidates are more suitable than female ones for shortlisting for a non-executive director vacancy. Or public data could lead AI to increase mortgage or auto loan risk factors for individuals living in a particular zip code or postcode to unreasonably high levels.
It is the encoding and application of these kinds of biases automatically at scale that is worrying. Regulations in some countries address some of these issues, but many countries have none. Besides, the potential for new threats and risks is high.
There is still a lot more for us to understand when it comes to making AI fair and explainable. This is a complex and growing field. As demand for AI grows, we will see more demand for solutions to check AI as well.
Do you know anyone who hasn’t had a frustrating experience because the contact center rep they interacted with didn’t speak their native language? We didn’t think so.
The truth is that while enterprises have multiple business reasons for establishing their contact centers in offshore locations in Eastern Europe, Latin America, and Asia Pacific, the reps’ language and communication skills often have a negative impact on the overall customer and brand experience.
And although many companies have developed their own solutions to assess candidates’ language capabilities, they’re plagued with multiple challenges, including:
- Resource intensity: Developing language assessment solutions takes considerable time and resources. They need to be thoughtfully designed, particularly around the local nuances of the markets where they are being leveraged. This can escalate the development budget and timelines, and put an additional burden on L&D teams.
- Lack of standardization: Most language assessment tests are developed by in-house experts in a specific region. This approach can be detrimental to organizations with operations in multiple geographies, because it lacks consistency across regions, and can leave gaps in the evaluation criteria.
- Involvement of human judgment: Because humans are responsible for evaluating candidates, a lot of subjectivity comes into play. And human bias, whether intentional or not, can greatly reduce transparency in the candidate selection process.
- Maintenance issues: The real value of these solutions depends on their ability to test candidates for unprepared scenarios. But regularly updating the assessment materials to keep the content fresh and reflect changing requirements further strains internal resources.
Third-party vendors’ technology-based solutions can help
Commercial language and communication assessment solutions have been around for years. But innovative vendors – such as Pearson, an established player in this market, and Emmersion Learning, which incorporates the latest AI technology into its solution – are increasingly leveraging a combination of linguistic methodologies, technical automation, and advanced statistical processes to deliver a scalable assessment that can predict speaking, listening, and responding proficiency.
For instance, technology-driven solutions may test candidates’ “language chunking” ability, that is, their ability to group words into units of semantic meaning. The concept is similar to techniques commonly used for memorizing numbers: by linking digits to concepts, a person can retain long sequences of digits in working memory. Without that conceptual awareness, memorization is hard.
During an assessment, through automation and AI, the candidate may be asked to repeat sentences of increasing complexity. Success relies on the candidate’s ability to memorize complex sentences, which is only possible when they can chunk for meaning, and mastery of this exercise is a strong predictor of overall language proficiency.
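The chunking idea can be sketched in a few lines: the same digit string is far easier to hold in working memory as three meaningful chunks than as ten independent digits. The string and the chunk boundaries below are arbitrary examples chosen for illustration.

```python
# Toy illustration of "chunking": a 10-digit string becomes 3 items
# to remember once the digits are grouped into meaningful units.

digits = "1492177620"

def chunk(s, sizes):
    """Split s into consecutive chunks of the given sizes."""
    out, i = [], 0
    for size in sizes:
        out.append(s[i:i + size])
        i += size
    return out

# Unchunked: ten separate items to hold in working memory.
unchunked = list(digits)

# Chunked by meaning: 1492 (Columbus), 1776 (US independence), 20.
chunked = chunk(digits, [4, 4, 2])

print(len(unchunked), "items ->", len(chunked), "chunks:", chunked)
```

The same principle applies to sentences: a candidate who can group words into semantic units carries far fewer items in memory, which is why the repetition exercise discriminates so well.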
Organizations that embrace technology-based solutions for language assessments can anticipate multiple benefits: reduced costs, decreased hiring cycle times, improved quality of hires, better role placement, freed time to devote to value-add initiatives, and improved customer experience and satisfaction. Ultimately, it’s a triple win for the organization, its candidates, and its customers.
The RPA world just got a bit more exciting with the early release of Robin, a Domain Specific Language (DSL) designed for RPA and offered via an Open Source Software (OSS) route. The brainchild of Marios Stavropoulos, tech guru and founder and CEO of Softomotive, Robin is set to disrupt the highly platform-specific RPA market. Robin is not the first attempt to democratize RPA, so will it succeed where others haven’t?
The Robin advantage
RPA democratization isn’t a new concept. Other OSS frameworks, such as Selenium, have been used for RPA. But they weren’t designed for RPA and are best known for software testing automation. And there are other free options, such as WorkFusion’s RPA Express, and Automation Anywhere’s and UiPath’s free community licenses. These have certainly lowered the barrier to RPA adoption but come with limits, for example, on the number of bots or servers used.
When demonstrating his software environment, Stavropoulos explains that the principles he has applied to Robin are to make RPA agile, accessible, and free from vendor lock-in. This could be very powerful: an RPA DSL could offer users richer functionality, and not having to rip out and replace robots when switching to a different RPA software is tremendously appealing. And the availability of OSS RPA is likely to boost innovation, as it will make it a lot easier to develop lightweight programs that simply collect and process data, such as RPA acting as a central data broker for some functions.
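To make the idea of a vendor-neutral RPA DSL concrete, here is a purely hypothetical sketch: the script syntax below is NOT Robin's (which isn't shown in this post), and the verbs are invented. It only illustrates how compact an automation script in a dedicated DSL can be, and how thin the interpreter mapping DSL verbs to platform-specific handlers could remain.

```python
# Hypothetical sketch only: NOT Robin's actual syntax. Illustrates
# how a small RPA-style DSL might read, and how an interpreter can
# keep the script independent of any one vendor's runtime.

script = """
open_app notepad
type_text hello world
save_file C:/tmp/out.txt
close_app notepad
"""

def run(script, handlers):
    """Execute each DSL line by dispatching its verb to a handler."""
    log = []
    for line in script.strip().splitlines():
        verb, _, arg = line.partition(" ")
        log.append(handlers[verb](arg))
    return log

# Each vendor (or the community) supplies its own handler set; a real
# runtime would drive the UI here instead of returning strings.
handlers = {
    "open_app":  lambda a: f"opened {a}",
    "type_text": lambda a: f"typed {a!r}",
    "save_file": lambda a: f"saved {a}",
    "close_app": lambda a: f"closed {a}",
}

for entry in run(script, handlers):
    print(entry)
```

Because the script never names a vendor runtime, swapping RPA platforms means swapping the handler set, not rewriting the robots, which is exactly the lock-in escape the DSL approach promises.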
What’s in it for Softomotive?
There are four main reasons for an RPA vendor to invest in an open source offering.
First, Softomotive will become the keeper of the code. While it will not charge for the software (not even other RPA vendors that start to support it will pay), it will charge for Robin support and maintenance should customers want those services.
Second, many other OSS vendors grew on the back of this model and got acquired by bigger companies. For example, JasperSoft, the OSS reporting company, was acquired by Tibco for US$185 million in 2014, and Hitachi Data Systems acquired Pentaho for a rumored US$500 million in 2015. I’m not at all hinting here that Softomotive is looking to be acquired, but these are compelling numbers.
Third, if Robin is successfully adopted, the user community will contribute to the development of the environment and modules to a community library. There will also be community-led support and issue resolution, and so on.
Finally, Softomotive will still have its own products and will continue to generate revenue based on the solutions it wraps around Robin.
Robin success factors
Of course, while Robin is a great idea, Stavropoulos needs to ensure it is quickly and widely adopted. For it to become the de facto language of RPA, other RPA vendors must support it. And the only way to get them to support it is by forcing their hands with widespread adoption.
There are two ways Stavropoulos can make this happen: via free online delivery and through classroom-based training in key RPA developer hubs such as Bangalore. He is fortunate to have a large base of existing users in small- to medium-sized companies, whose developers are likely to try out Robin and give him a flying start.
Getting Robin onto a major OSS framework is also very important.
An RPA DSL on an OSS ticket is an exciting proposition that could significantly disrupt the market. But success depends on adoption and on Stavropoulos playing his cards right.
Enterprises are increasingly embracing DevOps to enhance their business performance by accelerating their software time-to-market. In principle, DevOps covers the entire spectrum of software development life cycle (SDLC) activities, from design through operation. But, in practice, only ~20 percent of enterprises are leveraging DevOps end-to-end, according to our recent research, DevOps Services PEAK Matrix™ Assessment and Market Trends 2019 – Siloed DevOps is No DevOps!
That means the remaining ~80 percent, which take a siloed approach to DevOps, are missing out on much of the value it can deliver.
Types of DevOps fragmentation
Instead of adopting DevOps in its intended end-to-end fashion, many enterprises in different verticals and at different stages of maturity are tailoring it to focus on siloed, distinct portions of the SDLC. The most common types of fragmentation are: 1) Apps DevOps, applying DevOps principles only across the application development cycle; 2) Test Ops, using DevOps principles in testing; and 3) Infra Ops, applying DevOps principles only to infrastructure.
Why fragmentation delivers minimal value
Pocketed adoption makes it tough to realize the full value that DevOps can deliver, and the primary reason is bottlenecks. First, workflow throughout the SDLC is impeded when DevOps principles and automation are applied to only certain phases of it. Second, the lack of end-to-end adoption makes it more difficult for enterprises to gain a full view of their applications portfolio, spot bottlenecks, incorporate stakeholder feedback in real time, and make the entire process more efficient.
Additionally, when DevOps is used in a siloed manner, it focuses primarily on increasing the technical efficiency of processes. This means that DevOps’ ability to support the enterprise’s broader business-oriented objectives is severely restricted.
Finally, fragmented DevOps adoption creates a disintegrated culture in which teams work independently of each other, in turn leading to further conflicts, dependencies, and stretched timelines. All this, of course, defeats DevOps’ main purpose.
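The bottleneck argument above can be made concrete with a toy throughput model: end-to-end delivery is capped by the slowest stage of the SDLC, so automating only one phase often changes nothing downstream. The stage names and rates below are invented for illustration.

```python
# Toy model of why siloed DevOps underdelivers: a pipeline's
# throughput is capped by its slowest stage. Rates are work items
# per day and are invented for illustration.

def pipeline_throughput(stage_rates):
    """End-to-end flow is limited by the slowest stage."""
    return min(stage_rates.values())

sdlc = {"design": 8, "develop": 5, "test": 3, "deploy": 4, "operate": 6}

before = pipeline_throughput(sdlc)

# Siloed "Apps DevOps": automate only development; test still gates flow.
sdlc_siloed = dict(sdlc, develop=10)
siloed = pipeline_throughput(sdlc_siloed)

# End-to-end DevOps: lift every stage together.
sdlc_e2e = {stage: rate * 2 for stage, rate in sdlc.items()}
e2e = pipeline_throughput(sdlc_e2e)

print(before, siloed, e2e)
```

In this sketch, doubling development capacity alone leaves delivery unchanged because testing still constrains the flow; only lifting every stage moves the end-to-end number, which is the core case for adopting DevOps across the whole SDLC.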
Moving to end-to-end DevOps adoption
To successfully adopt DevOps end-to-end, enterprises should place automation, culture, and infrastructure at the heart of their strategy.
- Automation: Automating various elements of the SDLC is extremely beneficial; doing so helps reduce implementation timelines and increase team productivity by standardizing processes and diminishing the scope of errors
- Culture: A collaborative culture is essential to a successful DevOps implementation as it involves the development, operations, and business teams working together in an iterative fashion to achieve cross-team and business-oriented KPIs
- Infrastructure: Increasing adoption of cloud-native technologies such as infrastructure-as-code, microservices, serverless, and containers helps maintain configuration consistency in deployment, eventually increases developer productivity, and saves on cloud computing costs.
DevOps has the ability to deliver significant value to enterprises. But implementing it in a siloed manner quickly dilutes much of that potential value. To realize all of DevOps’ benefits, enterprises should implement it end-to-end, invest in automation, in robust and modular infrastructure, and in tools and technologies that ensure agility, and develop a culture that improves cross-team collaboration.
Companies widely recognize the potential power of artificial intelligence (AI). They sense that we’re on the cusp of something that will change our lives and our businesses in a profound way. Yet many struggle with where to apply it. Executives can’t shake the feeling that they should have use cases for AI and be using it productively today, even while recognizing that AI is not yet mature and will be far more powerful tomorrow. If you’re looking for how and where your company should use AI, let me give you a perspective on a great application of AI today: your digital platforms.
In a previous blog post, we explored the evolution of enterprise IT infrastructures from a cost-center positioning to one that enables digital transformation through a concept known as aware automation — a combination of intelligent automation and cognitive/Artificial Intelligence (AI)-driven automation. In this post, we’ll explore some potential use cases and best practices for aware automation within the enterprise.
In today’s digital world, enterprise success is all about speed, agility, and flexibility in order to adapt to market and competitor dynamics. It is no surprise that 62% of enterprises view IT services agility and flexibility as a primary focus of their IT services strategy, with cost reduction seen as a derivative.
The digital businesses of today require a business-centric IT infrastructure that is agile, flexible, scalable, and cost-effective. For a long time, IT infrastructure has taken up an inordinate amount of time and the lion’s share of precious resources (particularly financial). However, with new cloud delivery models gaining prominence and advancements in the underlying technology, business leaders now view IT infrastructure as an enabler of digital transformation — or at the very least, want to ensure that their IT infrastructure evolves to such a state.
While today’s enterprises turn to automation for a multitude of competitive advantages, cost savings is at the top of their list. Through their marketing initiatives, often backed by client case studies and references, third-party service providers often boast automation-driven FTE reductions that save their clients millions of dollars.
Indeed, we’ve seen claims of savings to the tune of 30-70 percent FTE reductions. But our own data, culled from BPO deals on which we advise, show FTE reductions that are one-third to two-thirds lower.
Why is there such a significant gap? It’s because the service providers are calculating the reduction at the project level, instead of at the process level. While the numbers show well using a project level calculation, they’re very misleading, and often lead to disappointment.
Let’s take a quick look at an invoice processing example to see the glaring differences.
Consider an automation-driven invoice data extraction project in North America that results in a 60 percent FTE reduction. Yet, when you expand the calculation to include invoice coding and exception handling in all operating regions – i.e., the enterprise-wide end-to-end invoicing process – the number drops to 10 percent. A 60 percent FTE reduction is highly enticing, and technically it’s correct. But it doesn’t show you the whole picture.
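The gap can be made concrete with hypothetical headcounts chosen to match the example above: the same six eliminated FTEs look like a 60 percent reduction at the project level but only 10 percent at the process level.

```python
# Hypothetical numbers, chosen to match the 60%-vs-10% example:
# automation eliminates 6 FTEs from invoice data extraction in
# North America, out of a 60-FTE end-to-end invoicing process.

ftes_extraction_na = 10   # project scope: NA invoice data extraction only
ftes_whole_process = 60   # process scope: extraction + coding +
                          # exception handling, across all regions
ftes_eliminated = 6

project_level = ftes_eliminated / ftes_extraction_na   # 0.60
process_level = ftes_eliminated / ftes_whole_process   # 0.10

print(f"project-level reduction: {project_level:.0%}")
print(f"process-level reduction: {process_level:.0%}")
```

Both numbers describe the same automation initiative; only the denominator changes. That is why a business case built on project-level percentages so often disappoints when rolled up to the enterprise.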
In order to properly assess the value of automation and develop your business case, you need to look at the percentage savings for the entire process. This is the only way you’ll obtain objective, realizable benefits data.
How can you find the automation savings data you need?
Your first thought might be to try and collect it from similar enterprises that have already implemented automation. But the numbers won’t be particularly reliable, as most enterprises are in the early days of their automation journey.
The most practical and valuable approach is to look at the BPO deal-based data for the entire process to be automated. Doing so gives you a realistic view of the automation-driven FTE savings for a couple of reasons. First, the FTE base for automation benefit calculation in deals is clearly defined in the baselining/RFP phase as the total number of FTEs in the process. And second, the FTE benefit numbers within deals are slightly more aggressive than the current norm, but because providers are continually refining their capabilities, they are comfortable with contractually committing to the higher numbers.
And remember that your BPO and/or RPA implementation provider should present this data to you to set realistic expectations. If they don’t, you now have the ammunition you need to ask for it.
Automation has the potential to greatly reduce your expenses. But before you leap, you need to carefully evaluate how the savings are being calculated. Your satisfaction depends on it.
If you’d like detailed insights on the FTE reduction numbers across different BPO processes within live BPO deals, please connect with us at [email protected] or visit https://www.everestgrp.com/research/domain-expertise/benchmarking/.
Just five or so years ago, digital capabilities were a competitive differentiator for major service providers. Today, they’re a competitive must. As a result, global and offshore-heritage service providers alike are making significant investments in digital technologies such as blockchain, Artificial Intelligence (AI), Robotic Process Automation (RPA), and Internet of Things (IoT).
While the global players took the lead in building what is now a billion-dollar digital landscape, offshore-heritage service providers such as Infosys, TCS, and Wipro are quickly catching up. And their strategies to build and deliver greater value through digital-driven productivity and IP are clearly paying off. For example, our research found that their digital revenue jumped from 20 percent to 30 percent of their total revenue between Q1 2018 and Q1 2019.
Let’s look at how offshore-heritage service providers are upping their game with digital investments.
Internal digital-based capabilities
One of their strategies is to enhance the customer experience and improve efficiency through internal development of digital-based capabilities. For example:
- Infosys launched AssistEdge Discover to increase the rate of automation implementations at the enterprise level through process discovery
- TCS launched the connected intelligence data lake software on Amazon Web Services (AWS) to help clients build their own analytics services
- Wipro made its AI and machine learning (ML) solutions available on AWS to govern supply chain processes and enhance productivity and customer experience
- Tech Mahindra launched NetOps.ai, its network automation and managed services framework, to speed up 5G network adoption
- HCL launched iCE.X, an IoT device management platform, to bring intelligent IoT device management to telecom and media services, and increase IoT use case adoption.
Acquisitions for digital talent
As their initial reskill/upskill approach left them far behind global service providers’ inorganic approach, offshore-heritage service providers have taken the leap and started acquiring companies to obtain direct access to already-trained talent. For example:
- Genpact acquired riskCanvas to access its suite of anti-money laundering (AML) solutions
- Tech Mahindra acquired Dynacommerce, a computer software provider, to support its digital transformation strategy and enable a future-proof and future-ready digital experience for its customers
- HCL acquired Strong-Bridge Envision (SBE), a digital transformation consulting firm, to leverage its capabilities in digital strategy development, agile program management, business transformation, and organizational change management. With this acquisition, SBE will become part of HCL’s global digital and analytics business
- Tech Mahindra acquired K-Vision, a provider of mobile network solutions, for US$1.5 million to build and support the roll-out of a 4G and 5G telecom network in Japan. The acquisition will leverage K-Vision’s local presence and expertise to build Tech Mahindra’s network services business in the country.
Partnership with startups
In order to develop skills and knowledge about these next-generation digital technologies in the general workforce, offshore-heritage service providers are partnering with niche start-ups to improve their agility/flexibility, reduce costs, and access stronger and better insights. For example:
- Wipro partnered with RiskLens, a provider of cyber risk software and management solutions, to deliver quantitative cyber risk assessments to enterprise customers and government organizations
- Tech Mahindra partnered with Rakuten Mobile Network to open a next-generation (4G and 5G) software-defined network lab. The partnership will help both firms drive innovation and disruption in the 5G space
- TCS partnered with JDA software to build next-generation cognitive solutions to optimize supply chains for customers. The partnership will develop joint, interoperable technology solutions for supply chains of the future, and accelerate human-machine collaboration to solve complex business problems
- HCL partnered with Kneat.com, a software firm, to provide and implement next-generation digital solutions for facilities, equipment, and computer systems validation processes leveraging Kneat’s paperless software platform.
With digitalization on the rise across industries and product segments, and a bearish economic outlook in key markets such as the United States and Europe, offshore-heritage service providers will continue to invest heavily in next-generation technologies. This will help them emerge as strong partners for global organizations wading through economic pressures.
To learn more about offshore providers’ digital strategies, key market trends, global locations activity, and service provider activity in Q2 2019, please see our Market Vista™: Q2 2019 report.