The Internet industries are playing fast and loose with people’s personal data. It’s clear that self-regulation is merely wishful thinking, and governments are being forced to act. The European Union is clearly leading the change with GDPR and other regulations. The Facebook fiasco is a rude wake-up call for all data-driven enterprises, billions of internet users, and governments and regulatory bodies all over the world. Mark Zuckerberg’s apologies seem sincere and heartfelt; however, the problem is large and very complex, so there are no easy fixes, says Peter Bendor-Samuel, CEO and founder of Dallas-based Everest Group, in an exclusive interview with Financial Chronicle given just after Mark Zuckerberg testified on Capitol Hill early this morning.
For 20 years, the Internet has democratized access to information and learning, and allowed us to have a public voice and become part of online communities. Today, we’re at risk of losing out to those who wish to abuse our personal information and create divisions and havoc in the world.
The Cambridge Analytica / Facebook data harvesting case is the latest scandal to make the headlines, with much of the current debate focused on data harvesting for political or marketing purposes. But this debate ignores other serious threats we expose ourselves to by sharing information online.
One of these threats is data harvesting combined with the malicious use of Artificial Intelligence (AI). It’s already here and has significantly increased personal and business risks. But due to a lack of awareness of these threats, people innocently continue to share information on social media.
What are the Threats?
A recent report by 26 risk experts, including researchers from Cambridge and Oxford universities, cited a wide range of serious threats that could result from the malicious use of AI, including:
- Automated hacking
- Speech synthesis for impersonating victims on video and voice recordings
- ID theft
- Exploiting the vulnerabilities of AI systems for adversarial uses and data poisoning, e.g., fake news and Denial of Truth Attacks (DTAs)
- Repurposing of drones and cyber-physical systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles, or holding critical infrastructure for ransom
When the BBC asked me to comment on the report, I could immediately think of several risks that are already possible. By carelessly sharing information about ourselves and our work, we are simply increasing them:
- Digital ID theft of family members and friends, much of which will be based on what is known about us on social media
- Targeting of employees of businesses in key positions for criminal activities
- DTAs that turn truth into lies and vice versa. Much has already been said about fake news, but training AI to do wrong or suggest untruths is already going on. For example, the following graphic illustrates that Google and Bing searches for information about the Welsh language may have been manipulated through frequent use of search strings with negative connotations, which could mislead the young and the naïve
While global government action is being taken to mitigate these risks, each of us needs to take personal responsibility by, for example:
- Questioning what we read online, particularly political ads veiled in community-style messaging
- Being cautious about what and how much we share about ourselves, family, friends, and work online
- Using alternative search engines and social media platforms that share less information with third parties. DuckDuckGo already offers private search, and the issues with Facebook may well give rise to platforms that offer smaller, protected social networks
Implications for RPA and AI-based Process Automation
As organizations increasingly focus on client data protection, we may see a tightening of policies against robots connecting to both web sites and enterprise systems. Some organizations frequently change the URLs of specific web pages for exactly this reason – to make it difficult for robots to find those pages and access the information they contain. Other measures include visual and sound-based checks, such as CAPTCHAs, to separate robots from humans when signing up for online services.
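To make the URL-rotation tactic concrete, here is a minimal sketch of how such a scheme might work; the secret key, path format, and rotation period are all hypothetical illustrations, not taken from any specific organization’s implementation:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"example-secret"  # hypothetical server-side secret


def rotating_path(base: str, period_seconds: int = 3600) -> str:
    """Derive a URL path segment that changes every rotation period.

    A robot that scraped and bookmarked this URL will find it stale
    once the window rolls over; legitimate users follow freshly
    generated links from the live site itself.
    """
    window = int(time.time() // period_seconds)
    token = hmac.new(SECRET_KEY, f"{base}:{window}".encode(),
                     hashlib.sha256).hexdigest()[:12]
    return f"/{base}/{token}"


def is_current(path: str, base: str, period_seconds: int = 3600) -> bool:
    """Accept only requests addressed to the current window's path."""
    return path == rotating_path(base, period_seconds)
```

In practice, the links would be regenerated on every page render, so only visitors navigating through the live site hold valid URLs while scripted scrapers chase dead ones.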
We may well see technology companies make it more difficult for robots to access their software, for example, through human-only user licensing models with checks to ensure that the user is a human and not a robot. However, after decades of effort to make enterprise system integration easier, this would be a serious step backward.
Interestingly, the fight against malicious AI is leading some companies, including Facebook, to hire thousands of people to check for fake or malicious content. Demand for cyber security skills continues to rise as well. These new hires will, in effect, augment the existing AI-based defense systems, which on their own cannot reliably tell fake from real or outsmart malicious AI. Far from AI replacing people, it is creating new roles.
Implications for Customer Contact and Experience Services
A few years ago, service providers in this market segment added social media monitoring and management to their portfolio of services. Today, they will have to add social media truth management to their catalogues. Defending organizations against fake news and media will also expand the range of customer experience (CX), public relations, and marketing requirements. Consequently, there is likely to be a net increase in demand for CX and marketing management services. Service providers that can deliver differentiated authentication solutions – e.g., combinations of AI and human analysts that can identify and separate fake content from real and perform other tasks such as context analysis – will be in demand.
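As a minimal sketch of how such an AI-plus-people combination might be wired together – with purely illustrative threshold values – a triage function could route each content item by a classifier’s fake-content score, sending only the ambiguous middle band to human reviewers:

```python
def route_content(model_score: float,
                  auto_block: float = 0.9,
                  auto_pass: float = 0.2) -> str:
    """Triage a content item by its estimated probability of being fake.

    model_score: a value in [0, 1] produced by an upstream classifier
    (not shown here). High-confidence items are handled automatically;
    the ambiguous middle band is escalated to human reviewers, whose
    decisions can in turn become labeled training data for the model.
    """
    if not 0.0 <= model_score <= 1.0:
        raise ValueError("model_score must be a probability in [0, 1]")
    if model_score >= auto_block:
        return "block"
    if model_score <= auto_pass:
        return "publish"
    return "human_review"
```

Tightening or loosening the two thresholds trades human-review workload against the risk of automated mistakes – exactly the balance that the recent waves of content-review hiring are meant to strike.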
Another likely consequence: as more people opt out of data sharing, social media will shrink as a source of personal data for the customization of products and services.
While we’re a long way from the dystopian futures that have been depicted in many sci-fi movies, people and governments need to act now to mitigate risks and help us keep our freedoms and security.