“Software is eating the world,” wrote Marc Andreessen, co-founder and general partner of venture capital firm Andreessen Horowitz, in an essay published in The Wall Street Journal in 2011. But today, it’s clear that services are eating software. The implications of this trend are significant for companies: the advantages are real, but so are the challenges. Most companies today are not set up to operate in a services world. I believe they need a new set of management and operating models that give them clarity on what they are doing with services and allow them to stay in control.
Developed economies around the world (with the exception of Spain) are facing generationally low levels of unemployment. While that’s good news for workers, it’s a serious talent deficit for employers. How bad is it?
In 2018, the number of unemployed persons per job opening in the U.S. fell below the benchmark of one. In plain terms, that means there are more job openings than there are qualified people available to fill them.
And the Beveridge curve tells essentially the same story in Europe.
Against this backdrop, I was very interested in hearing what solutions and ideas were presented at NTT DATA’s “Captaining the Talent” Summit in Lisbon last month. (Full disclosure – NTT DATA arranged my travel for the event.) The event featured a range of speakers from all walks of life – from the former captain of New Zealand’s national rugby union team to British perfumer Jo Malone to TED Speaker and eminent neuroscientist Mariano Sigman.
The common thread among all the sessions was the question on every senior executive and leader’s mind: how to navigate the choppy waters in a talent-deficit world.
- Actively addressing FOMO is vital to talent acquisition and retention: Today’s millennial and younger employees have constant FOMO, or fear of missing out. They often question whether they’re working on the most exciting project, whether their firm is solving the toughest problems out there, and whether they themselves are doing the most they can. To acquire and retain these employees, you need to go beyond workplace “gimmicks” like massages, pet-friendly offices, flex hours, and daily ice cream. Your employees need to be invested in your company’s mission, and you need to make a genuine effort to make them feel empowered, not just rewarded.
- Bridging the physical and digital environments must be seamless: Most enterprises fixate on the consumer experience but often forget that any digital transformation has to be valuable to their employees as well. Employees face friction in executing daily work tasks as they grapple with legacy systems. As their expectations of work and the workplace change, enterprises need to embrace the consumerization of IT and help employees feel more productive and engaged through internal digitalization – from admin/expense/travel tools to more effective knowledge management, training, etc. At Everest Group, we firmly believe that true digital transformation has to move from Customer Experience to Stakeholder Experience, enabling significant improvement for four personas – customers, employees, partners, and society.
- Unlocking true motivation needs reimagined incentives: Traditional rewards and recognition mechanisms are limited in the impact they create, and a one-size-fits-all approach no longer cuts it. The intriguing science behind motivation and its dynamics suggests that you need to work more creatively to enable channels that allow your employees to tap into their search for meaning, e.g., making a difference in the world they live in. And you must embrace more intelligence and empathy, enabled by technology, to provide a more personalized experience for your employees, like offering them customized learning avenues and opportunities.
- Enabling true diversity goes beyond a checklist: More often than not, the diversity and inclusion conversation comes down to virtue signaling…a board seat here, a person of color there. But true diversity has to include diversity of thought as well. Successful organizations democratize idea incubation so that even new/young employees feel empowered to contribute and create impact through avenues such as hackathons and internal crowdsourcing initiatives.
What we face today is a talent deficit of a unique nature. In the mature markets, there isn’t enough talent to go around, while the future of work in a global technology environment brings a significant reskilling/upskilling challenge for traditional offshore/nearshore geographies. The common underpinning theme is the irrevocable shift in the profile of the people who work in this environment. Talent is changing across the life cycle – from sourcing to retention to relevance – requiring a rethink of traditional talent management practices. How we respond will make a lasting difference to the future of work.
In September, HCL Technologies announced its acquisition of the semiconductor engineering services firm Sankalp Semiconductor in an all-cash deal worth US$25 million, with Sankalp operating as a 100 percent subsidiary of HCL. While this is not a particularly large acquisition, it impacts a key market player, and it highlights a couple of key trends in the semiconductor engineering services market.
What the acquisition means for HCL
The acquisition impacts HCL in a few important ways:
Enhanced semiconductor engineering capabilities
The recent acquisition by HCL is a strategic move to cement its position in semiconductor chip engineering services by strengthening its existing digital design services and expanding into the analog and mixed-signal space.
Both HCL and Sankalp Semiconductor provide chip engineering services in the pre-silicon and post-silicon segments of the value chain (See Figure 1). But while HCL’s chip engineering expertise lies in digital design, Sankalp has strong capabilities in analog and mixed-signal circuit design as well.
And HCL will be gaining experience. Sankalp has more than 5,000 person-years of experience in semiconductor engineering services and covers the digital, analog, and mixed-signal domains through its 1,000+ engineers based in India and Canada. In analog and mixed-signal design alone, the company has more than 1,500 person-years of experience and has delivered more than 500 projects.
Though HCL is a major player in engineering services, its acquisition of Sankalp Semiconductor, which reported revenues of ~US$20 million in FY2019, will be a nice boost to its semiconductor engineering services top line.
Increased market access
Sankalp will strengthen HCL’s play in specific market segments including automotive, consumer, IoT, medical electronics, networking, and wireless.
How the acquisition reflects industry trends
HCL’s acquisition of Sankalp is the latest in a series of acquisitions that have taken place in the semiconductor engineering services industry over the past few years. As shown in the graphic below, in 2015, Aricent acquired the Bengaluru-based semiconductor services firm SmartPlay Technologies Pvt Ltd before itself being acquired by Altran in 2017, which was – in turn – acquired by Capgemini in 2019. Cyient Europe Ltd acquired the custom analog and mixed-signal circuit design company Ansem N.V., and L&TTS acquired Bengaluru-based Graphene Semiconductor.
All of these acquisitions reflect an important industry trend that has some specific consequences. There is an increasing focus on semiconductor engineering due to the rise of IoT and smart device applications, as well as a growing demand for greater computing power and device miniaturization.
This trend is driving several outcomes. First, it is forcing semiconductor companies to think about how to reduce time-to-market, as well as how to gain access to engineers with the right kinds of expertise. Many are turning to outsourcing to address these challenges. As a result, we expect outsourcing in this sector to grow at a rate of 10% over the next three years.
Second, it is forcing semiconductor engineering service providers to expand their portfolios to successfully address market needs. That challenge, coupled with the generally fragmented nature of the industry, is likely to result in ongoing merger and acquisition activity.
Ultimately, whether they choose to grow organically or inorganically, semiconductor engineering services firms will want to invest in their capabilities so they can grab a higher share of outsourcing from the ~US$470 billion semiconductor industry pie.
The increasing popularity and uptake of Artificial Intelligence (AI) is giving rise to concerns about its risks, explainability, and fairness in the decisions that it makes. One big area of concern is bias in the algorithms that are used in AI for decision making. Another risk is the probabilistic approach to handling decisions and the potential for unpredictable outcomes based on AI self-learning. These concerns are justified, given the implicit ethical and business risks, for example, impact on people’s lives and livelihood, or bad business decisions based on AI recommendations that were founded on partial data.
The good news is that the software industry is starting to address these concerns. For example, last year, vendors including Google, IBM, and Microsoft announced tools (either released or in development) for detecting bias in AI, and recently, there were more announcements.
Last year IBM brought out:
- Adversarial Robustness 360 Toolbox (ART), a Python library available on GitHub, to make machine learning models more robust against adversarial threats such as inputs that are manipulated to derive desired outputs
- AI Fairness 360, an open-source toolkit with metrics that identify bias in datasets and machine learning models, and algorithms to mitigate them
Last month, IBM further augmented its offerings with the release of AI Explainability 360, an open source toolkit of algorithms to support the understanding and explainability of machine learning models. It is a companion to the other toolkits.
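To make the kind of group-fairness metrics these toolkits compute more concrete, here is a minimal, self-contained sketch. This is plain Python, not the AI Fairness 360 API, and the loan-approval data is entirely hypothetical:

```python
# Illustrative sketch only -- these helpers mirror the *kind* of group-fairness
# metrics toolkits like AI Fairness 360 report, but this is not its API.

def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged)."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates; values far below 1.0 flag potential bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) / rate(priv)

# Hypothetical loan-approval outcomes (1 = approved) for two groups
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group B's approval rate is a third of group A's -- far below the
# commonly cited 0.8 ("four-fifths") threshold
print(disparate_impact(outcomes, groups, privileged="A"))
```

Real toolkits compute dozens of such metrics over full datasets and models, but the underlying idea is this simple comparison of outcome rates across groups.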
Cognitive Scale recently unveiled the beta of Cortex Certifai, software that automatically detects and scores vulnerabilities in black box AI models without having access to the internals of the model. Certifai is a Kubernetes application and runs as a native cloud service on Amazon, Azure, Google, and Red Hat clouds. Cognitive Scale also unveiled the AI Trust Index. Developed in collaboration with AI Global, it will provide composite risk scores for automated black-box decision making models. This is an interesting development that could grow to become a badge of honor for AI software, and a differentiator for those with the most trusted rating.
The Reality of Bias
While these announcements and those made last year are good news, there are aspects of AI training that will be difficult to address because bias is all around us in real life. For example, public data would show AI that there are many more male CEOs and board members than female ones, leading it to possibly conclude that male candidates are more suitable for shortlisting for a non-executive director vacancy than female candidates. Or public data could lead AI to increase mortgage or auto loan risk factors for individuals living in a particular zip code or postcode to unreasonably high levels.
It is the encoding and application of these kinds of biases automatically at scale that is worrying. Regulations in some countries address some of the issues, but not all countries do. Besides, the potential for new threats and risks is high.
There is still a lot more for us to understand when it comes to making AI fair and explainable. This is a complex and growing field. As demand for AI grows, we will see more demand for solutions to check AI as well.
India is clearly becoming the “it” destination for pharmaceutical companies’ shared services centers (SSC) – or Global In-House Center (GIC) – organizations. Why do we say this? Because global pharmas with headquarters in the U.S. and Europe employ more than 11,000 full-time equivalents (FTEs) in their India-based shared services centers to deliver not only table-stakes transactional finance and HR services but also highly complex processes across all stages of drug development, including drug R&D and clinical trials.
What’s India’s appeal? There are four factors.
Established/Mature Location for Global Pharma Services Delivery
India is a time-tested, proven GIC destination for a wide range of industries. Many of the world’s leading pharmaceutical companies started delivering their global services support operations from India back in the early 1990s. Now, pharma majors like AstraZeneca, Eli Lilly, and Novartis are delivering complex, judgment-intensive services such as product R&D, biostatistics, and clinical trials site management from their India GICs. Hyderabad, Chennai, and Bangalore are the preferred locations, housing more than 80 percent of the pharmaceutical GIC talent.
Skilled Talent Pool
Talent availability, at scale, is one of India’s strongest value propositions. In recent years, many pharma companies have been able to successfully scale their delivery teams supporting diverse functions such as R&D, commercial operations, IT, and finance. For example, a leading pharma GIC houses 2,000+ resources providing IT services for various pharma functions. And multiple other pharma SSCs have scaled teams (400+ resources) that support R&D services, and dedicated resource groups comprised of doctors, PhDs, and biostatisticians, for complex drug R&D processes like development of computational solutions for analyzing clinical trials.
Opportunities for Cross-functional Collaboration
India’s availability of diverse talent profiles at scale allows India-based pharma SSCs to support multiple functions. And because many of them house IT resources with R&D and commercial business teams, they have multiple opportunities to collaborate on and insource IT work for drug R&D (e.g., to build IT platforms for drug development and IT services for lab support), and commercial operations (e.g., IT services for finance). The value of this collaboration? Tighter integration of functions, better understanding of business requirements, and faster execution.
Mature Market for Digital Services Delivery
Leading India-based pharma GICs are working on digital initiatives including analytics and automation, and some are serving as global automation CoEs for their parent enterprises. Many are developing analytical tools for marketing & sales operations, competitive intelligence, and incentive planning. They are also investing heavily in automating less complex and high-volume transactional processes such as expense management, purchase order creation, offer letter generation, résumé screening, and management reporting, and deploying RPA bots to read files, extract data, and report adverse events. As part of the broader digital agenda, some centers have also started exploring the uses of artificial intelligence/machine learning to recruit patients and select sites for clinical trials, and for channel sequencing and optimization in their enterprise’s sales & marketing function.
Going forward, pharma companies expect their India SSCs not only to grow in scale and expand the scope of their process delivery, but also to play a significant role in their digital transformation journeys by leading initiatives across all stages of the product R&D lifecycle. To satisfy these expectations, the GICs need to build deep domain capabilities and acquire or train talent to deliver increasingly complex services higher up the value chain, along with next-generation digital initiatives.
To learn more about why pharma companies consider India their preferred service delivery destination, please read our recently published report, “Healthcare and Life Sciences – GICs in India Fast-tracking Enterprises’ Digital Agenda,” or connect directly with the report authors Anish Agarwal, Bharath M, and Rajeshwaran Pagalam.
Our research suggests that more than 90 percent of enterprises around the world have adopted cloud services in some shape or form. Additionally, 46 percent of them run a multi-cloud environment and 59 percent have adopted more advanced concepts like containers and microservices in their production set-up. As they go deeper into the cloud ecosystem to realize even more value, they need to be careful of two seemingly similar but vastly different concepts: cloud native and native cloud.
What are Cloud Native and Native Cloud?
Cloud native used to refer to more container-centric workloads. However, at Everest Group, we define cloud native as building blocks of digital workloads that are scalable, flexible, resilient, and responsive to business demands for agility. These workloads are developer centric and operationally better than their “non-cloud” brethren.
Earlier, native cloud meant any workload using cloud services. Now – just like in the mobile world, where apps are “native Android or iOS,” meaning specifically built for those operating systems – native cloud refers to leveraging the capabilities of a specific cloud vendor to build a workload that is not available “like-for-like” on other platforms. These are innovative, disruptive offerings such as cloud-based relational database services, serverless instances, developer platforms, AI capabilities, workload monitoring, and cost management. They are not portable across other cloud platforms without a huge amount of rework.
With these evolutions, we recommend that enterprises…
Embrace Cloud Native
Cloud native workloads provide the fundamental flexibility and business agility enterprises are looking for. They thrive on cloud platforms’ core capabilities without getting tied to them. So, if need be, the workloads can easily be ported to other cloud platforms without any meaningful rework or drop in performance. Cloud native workloads also allow enterprises to build hybrid cloud environments.
And Be Cautious of Native Cloud
Most cloud vendors have built meaningfully advanced capabilities into their platforms. Because these capabilities are largely native to their own cloud stack, they are difficult to port across to other environments without considerable investment. And if – more likely, when – the cloud vendor makes changes to its functional, technical, or commercial model, the enterprise finds it tough to move away from the platform…the workloads essentially become prisoners of that platform.
At the same time, native cloud capabilities are fundamentally disruptive and very useful for enterprise workloads. However, to adopt such advanced features in the right manner and still be able to build a multi-cloud strategy, enterprises need the necessary architectural, technical, deployment, and support capabilities. For example, in a serverless application, the architect can put the business logic in a container and the event trigger in serverless code. With that approach, when porting to another platform, the container can be directly ported and only the event trigger needs to change.
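As a minimal sketch of that separation, the example below keeps the business logic in one portable function (deployable in a container) and isolates the cloud-specific part in thin trigger adapters. The handler names and event shapes are illustrative assumptions, not any vendor’s actual API:

```python
# Sketch of the portability pattern described above: portable business logic
# plus thin, cloud-specific trigger adapters. Handler names and event shapes
# are illustrative, not any vendor's real API.

def process_order(order: dict) -> dict:
    """Portable business logic -- identical on any platform or container runtime."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["id"], "total": total}

def aws_style_handler(event, context):
    """Hypothetical AWS-style serverless entry point: adapt the event, delegate."""
    return process_order(event["detail"])

def gcp_style_handler(request):
    """Hypothetical GCP-style entry point for the same logic."""
    return process_order(request.get_json())

sample_order = {"id": "o-1", "items": [{"price": 10.0, "qty": 2},
                                       {"price": 5.0, "qty": 1}]}
print(aws_style_handler({"detail": sample_order}, None))
```

Moving this workload to another platform means rewriting only the few lines of trigger code; the containerized logic ports as-is.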
Overall, enterprise architects need to be cautious of how deep they are going into a cloud stack.
Given that feature parity across cloud platforms is becoming common, cloud vendors will increasingly push enterprises to go deeper into their stacks. The on-premises cloud stack offered by all three large hyperscalers – Amazon Web Services, Azure, and Google Cloud Platform – is an extension of this strategy. Enterprise architects need a well-thought-out plan if they want to go deeper into a cloud stack. They must evaluate the interoperability of their workloads and run simulations at every critical juncture. Although it is unlikely that an enterprise would completely move off a particular cloud platform, enterprise architects should make sure they can compartmentalize workloads to work in unison and interoperate across a multi-cloud environment.
Please share your cloud native and native cloud experiences with me at [email protected].
Do you know anyone who hasn’t had a frustrating experience because the contact center rep they interacted with didn’t speak their native language? We didn’t think so.
The truth is that while enterprises have multiple business reasons for establishing their contact centers in offshore locations in Eastern Europe, Latin America, and Asia Pacific, the reps’ language and communication skills often have a negative impact on the overall customer and brand experience.
And although many companies have developed their own solutions to assess candidates’ language capabilities, they’re plagued with multiple challenges, including:
- Resource intensive: Developing language assessment solutions takes considerable time and resources. They need to be thoughtfully designed, particularly around the local nuances of the markets where they are being leveraged. This can escalate the development budget and timelines, and put an additional burden on L&D teams.
- Lack of standardization: Most language assessment tests are developed by in-house experts in a specific region. This approach can be detrimental to organizations with operations in multiple geographies, because it lacks consistency across regions, and can leave gaps in the evaluation criteria.
- Involvement of human judgment: Because humans are responsible for evaluating candidates, a lot of subjectivity comes into play. And human bias, whether intentional or not, can greatly reduce transparency in the candidate selection process.
- Maintenance issues: The real value of these solutions depends on their ability to test candidates for unprepared scenarios. But regularly updating the assessment materials to keep the content fresh and reflect changing requirements further strains internal resources.
Third-party vendors’ technology-based solutions can help
Commercial language and communication assessment solutions have been around for years. But innovative vendors – such as Pearson, an established player in this market, and Emmersion Learning, which incorporates the latest AI technology into its solution – are increasingly leveraging a combination of linguistic methodologies, technical automation, and advanced statistical processes to deliver a scalable assessment that can predict speaking, listening, and responding proficiency.
For instance, technology-driven solutions may test candidates’ “language chunking” ability, which means their ability to group chunks of semantic meaning. This concept is similar to techniques that are commonly used for memorizing numbers. By linking numbers to concepts, a person can be successful in retaining large sequences of digits in working memory. Without conceptual awareness, memorization is hard.
During an assessment, through automation and AI, the candidate may be asked to repeat sentences of increasing complexity. Success in this exercise relies on the candidate’s ability to memorize complex sentences, which is only possible when they can chunk for meaning. Mastery of this exercise is therefore a strong predictor of the candidate’s language proficiency.
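A toy, text-only sketch of how such a sentence-repetition exercise could be scored is shown below. Real assessment engines use speech recognition and sophisticated statistical models; this simplified word-overlap scorer is purely illustrative:

```python
# Toy sketch of sentence-repetition scoring: accuracy drops sharply once a
# sentence exceeds what the candidate can chunk into meaningful units.

def repetition_accuracy(target: str, response: str) -> float:
    """Fraction of target words reproduced in the same position."""
    t, r = target.lower().split(), response.lower().split()
    matches = sum(1 for a, b in zip(t, r) if a == b)
    return matches / len(t)

def estimate_level(items, responses, threshold=0.8):
    """Highest complexity level at which repetition accuracy stays above threshold."""
    level = 0
    for (complexity, sentence), response in zip(items, responses):
        if repetition_accuracy(sentence, response) >= threshold:
            level = max(level, complexity)
        else:
            break  # stop at the first level the candidate cannot handle
    return level

items = [(1, "the cat sat"),
         (2, "the cat sat on the mat"),
         (3, "the quick brown fox jumps over the lazy dog")]
responses = ["the cat sat",
             "the cat sat on the mat",
             "the quick fox jumps over dog"]  # chunking breaks down here
print(estimate_level(items, responses))  # 2
```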
Organizations that embrace technology-based solutions for language assessments can anticipate multiple benefits: reduced costs, decreased hiring cycle times, improved quality of hires, better role placement, freed time to devote to value-add initiatives, and improved customer experience and satisfaction. Ultimately, it’s a triple win for the organization, its candidates, and its customers.
As we presented in a recent blog, shared services centers (SSCs) – or what we refer to as Global In-House Centers (GICs) – must create their own innovation team to support their parent enterprises’ innovation agenda. But how should you structure your team to yield the desired outcomes?
Innovation maturity and mandate
You should start by determining your SSC’s innovation maturity and mandate. The maturity is determined by the strength of your existing internal capabilities, including talent, technology, and culture; the involvement and support you require from leadership; the primary focus area of the innovation, e.g., generating revenue, reducing costs, or mitigating risks; and the impact generated by your innovation initiatives, e.g., dollar value of costs saved or revenues generated.
The innovation mandate is outlined by the level of ownership and visibility for innovation initiatives; the extent of cross-collaboration between business units / functional teams; and overall alignment of your SSC with the parent enterprise’s structure and business model.
Once you’re armed with that information, you can select one of the three SSC, or GIC, innovation team structures most prevalent today, based on the guidelines we present below.
Types of SSC innovation team structures
SSCs with low-to-medium maturity and innovation mandate
If this describes your SSC, you’ll do best with a centralized structure in which your parent enterprise drives the innovation and you have limited involvement. This structure allows the parent company to have greater control and ownership, and prevents the GIC’s low maturity from being an obstacle. Many organizations prefer this structure, as it enables faster implementation of enterprise-wide and business model-related innovations, promotes standardization, and improves governance of innovation initiatives. However, many SSCs are reluctant to operate in this structure, as it presents limited opportunities for them to breed an in-house culture of innovation and deliver higher-level transformational value.
SSCs with moderate-to-high maturity and innovation mandate in a specific domain
The best fit for these SSCs is a business unit- or functional team-led innovation structure. This allows the parent enterprise to adopt a decentralized innovation approach, enable direct communication and visibility between the SSC and business unit or functional stakeholders, leverage innovation teams placed within the GIC’s business units or functional teams, and provide better alignment on domain-specific end-business objectives. Key success factors include regular mentoring by the parent’s teams to build strong future-ready GIC leadership, and direct communication channels between SSC and business unit stakeholders.
SSCs with high overall maturity and innovation mandate
For GICs that fall into this category, a dedicated innovation team in which responsibility for innovation is fully in its hands works best. This structure allows the GIC to take more ownership of proposing and prototyping new, innovative solutions, and equips it with capabilities to better respond to enterprise-wide requirements.
Achieving the right balance of ownership, accountability, and investment is the key to successfully implementing this structure and making it a win-win for both SSCs and parent enterprises. It enables the SSC to reach its true potential and gain recognition as a thought leadership partner and empowers the parent to implement innovation initiatives with relative ease and replicate best practices across business units and functions.
Because every company’s innovation structure is inherently different, GIC leaders need to thoroughly investigate each of the models and decide on the most appropriate one based on their GICs’ overall maturity and mandate.
If you’d like detailed insights and real-life case studies on how SSCs are driving their enterprises’ innovation agenda, please read our report Leading Innovation and Creating Value: The 2019 Imperative for GICs.
In upcoming blogs, we’ll be discussing ways you can promote innovation and increase its impact in your shared services. Stay tuned!
Shared Services Centers (SSCs) – what we refer to as Global In-house Centers (GICs) – need to achieve breakeven to be financially viable. The breakeven equation is straightforward: the point at which total labor arbitrage (the average difference in labor cost between the SSC and a center at home) is equal to the SSC’s run cost (all non-labor costs such as facility rent, utilities, training, recruitment, travel, and other miscellaneous costs).
Conventional wisdom says that only large centers with a minimum of 1,000 FTEs can achieve breakeven. But that’s old-school thinking, and old-world reality.
We analyzed the breakeven point for 850 GICs in today’s digital world across a variety of factors, including the scope and complexity of services delivered, locations leveraged, and employee profiles. And we found that even an SSC with as few as 25 FTEs can be financially viable if it is delivering high-end, judgment-intensive services.
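As a back-of-the-envelope illustration of that breakeven arithmetic, consider the sketch below. The cost figures are entirely hypothetical, not Everest Group data, and are chosen purely to show the mechanics:

```python
# Illustrative breakeven arithmetic: the center breaks even when total labor
# arbitrage (per-FTE savings x headcount) equals the annual run cost.

def breakeven_ftes(home_cost_per_fte, ssc_cost_per_fte, annual_run_cost):
    """FTE count at which total labor arbitrage equals the SSC's run cost."""
    arbitrage_per_fte = home_cost_per_fte - ssc_cost_per_fte
    return annual_run_cost / arbitrage_per_fte

# E.g., $90k per FTE at home vs. $30k at the SSC, with $1.5M in annual
# run costs: breakeven at just 25 FTEs.
print(breakeven_ftes(90_000, 30_000, 1_500_000))  # 25.0
```

The intuition follows directly: the wider the per-FTE arbitrage (as with high-end, judgment-intensive services), the fewer FTEs are needed to cover a given run cost.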
The rise of small SSCs/GICs
In the last three years, the average SSC scale, as measured by the number of FTEs, has declined by about 60 percent.
Why are we seeing this significant increase in small-scale centers? Several reasons:
- Lower barriers to entry: Technology advancements facilitate better collaboration and knowledge transfer among leadership and peers
- More robust ecosystem: Better infrastructure, access to a large talent pool with relevant technical and functional skills, and multiple professional services firms to provide on-ground support
- Lower cost: Easier access to cost-competitive real estate, and wider availability of talent with the relevant functional and managerial skills.
Today, it’s not about scale…it’s about alignment with the broader sourcing strategy
Ever since the inception of the SSC model, enterprises have been relying on their centers to improve products, processes, and customer and employee experiences; build high-value skills; and drive operational excellence. But in today’s environment, scale no longer matters. Why? Because some of the main levers for SSC success – such as enhancing cultural integration, accelerating the strategic agenda (e.g., innovation, digital transformation), facilitating cross-functional collaboration, and promoting process ownership – are scale-agnostic.
Today, the decision on whether or not to establish a delivery center must be based on how it aligns with the enterprise’s broader sourcing strategy. In particular, enterprises should assess whether the SSC/GIC can help them:
- Retain and strengthen in-house capabilities, especially for core intellectual property intensive work
- Develop tighter integration (better control and governance) and stronger alignment on culture and brand
- Accelerate the adoption of digital and other disruptive technologies such as automation, analytics, and artificial intelligence.
The next time you’re thinking about setting up a new SSC/GIC, don’t let the scale of the center – or lack thereof – stop you from exploring the possibilities!
The RPA world just got a bit more exciting with the early release of Robin, a Domain Specific Language (DSL) designed for RPA and offered via an Open Source Software (OSS) route. A brainchild of Marios Stavropoulos, a tech guru and founder and CEO of Softomotive, Robin is set to disrupt the highly platform-specific RPA market. Robin is not the first attempt to democratize RPA, so will it succeed at this feat?
The Robin advantage
RPA democratization isn’t a new concept. Other OSS frameworks, such as Selenium, have been used for RPA. But they weren’t designed for RPA and are best known for software testing automation. And there are other free options such as WorkFusion’s RPA Express, and Automation Anywhere’s and UiPath’s free community licenses. These have certainly lowered the barrier to RPA adoption but come with limits, for example, the number of bots or servers used.
When demonstrating his software environment, Stavropoulos explains that the principles he has applied to Robin are to make RPA agile, accessible, and free from vendor lock-in. This could be very powerful; for example, an RPA DSL could provide more user functionality. Not having to rip out and replace robots when switching to a different RPA software is tremendously appealing. And the availability of OSS RPA is likely to boost innovation, as it will make it much easier to develop new lightweight programs that simply collect and process data, such as RPA acting as a central data broker for some functions.
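To illustrate the DSL idea itself: the mini-language below is invented purely for this sketch (it is not Robin’s actual syntax) and shows how a domain-specific layer can map readable verbs onto underlying automation actions:

```python
# Invented mini-DSL, NOT Robin's actual syntax -- a sketch of how a
# domain-specific language can expose automation actions as simple verbs.

ACTIONS = {}

def action(name):
    """Register a Python function as a DSL verb."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("set")
def _set(state, key, value):
    state[key] = value

@action("copy")
def _copy(state, src, dest):
    state[dest] = state[src]

def run_script(script):
    """Execute newline-separated 'verb arg arg' commands against a shared state."""
    state = {}
    for line in script.strip().splitlines():
        verb, *args = line.split()
        ACTIONS[verb](state, *args)
    return state

state = run_script("""
set customer Alice
copy customer greeting_target
""")
print(state)
```

Because the verbs are decoupled from any one vendor’s engine, a script written in such a language could, in principle, run on any platform that implements the same verbs, which is exactly the lock-in-free promise an open RPA DSL makes.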
What’s in it for Softomotive?
There are four main reasons for an RPA vendor to invest in an open source offering.
First, Softomotive will become the keeper of the code. And while it will not charge for the software, not even other RPA vendors that start to support it, it will charge for Robin support and maintenance should customers wish to pay for those services.
Second, many other OSS vendors grew on the back of this model and got acquired by bigger companies. For example, JasperSoft, the OSS reporting company, was acquired by Tibco for US$185 million in 2014, and Hitachi Data Systems acquired Pentaho for a rumored US$500 million in 2015. I’m not at all hinting here that Softomotive is looking to be acquired, but these are compelling numbers.
Third, if Robin is successfully adopted, the user community will contribute to the development of the environment and modules to a community library. There will also be community-led support and issue resolution, and so on.
Finally, Softomotive will still have its own products and will continue to generate revenue based on the solutions it wraps around Robin.
Robin success factors
Of course, while Robin is a great idea, Stavropoulos needs to ensure it is quickly and widely adopted. For it to become the de facto language of RPA, other RPA vendors must support it. And the only way to get them to support it is by forcing their hands with widespread adoption.
There are two ways Stavropoulos can make this happen: via free online delivery and through classroom-based training in key RPA developer hubs such as Bangalore. He is lucky to have a lot of existing users in small- to medium-sized companies. The developers in those companies are likely to try out Robin and give Stavropoulos a flying start.
Getting Robin onto a major OSS framework is also very important.
An RPA DSL on an OSS ticket is an exciting proposition that could significantly disrupt the market. But success depends on adoption and on Stavropoulos playing his cards right.