In today’s ever-evolving threat landscape, generative Artificial Intelligence (GAI) is becoming an increasingly popular technology tool to defend against sophisticated cyberattacks. Read on to learn about the latest investments in GAI-powered security products, the potential benefits and drawbacks, and the ramifications for the cybersecurity workforce and industry.
Learn about the latest pricing trends in cyber security in our webinar, Cyber Resiliency Strategy: Key Themes and Pricing Trends for 2023.
GAI has grabbed worldwide interest with its ability to create unique and realistic images, text, audio, code, simulations, and videos that previously were not thought to be possible. Lately, GAI has been applied in many industries, such as the creative arts, healthcare, entertainment, and advertising. Let’s explore the latest cybersecurity industry trends and how GAI can help security teams stay one step ahead of the latest threats.
Cybersecurity vendors are leaving no stone unturned to deliver the power of GAI
In recent years, advanced Artificial Intelligence (AI)- and Machine Learning (ML)-based technologies have been rapidly adopted across the cyber industry, providing intelligent automation capabilities while augmenting human talent.
The vast use cases of AI/ML in cybersecurity include proactive threat detection, prevention, intelligence, user and entity behavior analytics (UEBA), anomaly detection, vulnerability management, automated incident investigation and response, and more.
With the release of ChatGPT (GPT-3.5/GPT-4), DALL-E, Midjourney AI, Stable Diffusion, and other developments, the hype around GAI is accelerating faster than ever, and vendors are racing to harness its power to develop new products and solutions leveraging this technology.
Key GAI vendor announcements
Here are some examples of suppliers adopting GAI technology in the past four months alone:
- SlashNext launched Generative HumanAI, an email security product aimed at combating business email compromise (BEC), in February
- Microsoft introduced Security Copilot, a solution to help security professionals identify and respond to potential threats using OpenAI’s GPT-4 GAI and Microsoft’s proprietary security-specific model, in March
- Flashpoint expanded its partnership with Google, incorporating GAI into its intelligence solutions for improved threat detection in April
- Among other announcements last month, Recorded Future integrated OpenAI’s GPT model into its AI, Cohesity integrated with Microsoft’s Azure OpenAI for anomaly detection, and Veracode developed a tool utilizing GAI to address security code flaws
Generative AI captured massive attention at RSAC
At the recently concluded RSA Conference 2023 in San Francisco, GAI was a fascinating theme that was widely discussed and showcased in many innovative security products. These include SentinelOne’s announcement of Purple AI, which will leverage GAI and reinforcement learning capabilities to not just detect and thwart attacks but also autonomously remediate them.
Also at the event, Google Cloud launched its Security AI Workbench powered by a security-specific large language model (LLM), Sec-PaLM, aimed at addressing the top three security challenges – threat overload, toilsome tools, and the talent gap. The offering incorporates VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle to augment efforts to analyze incidents and detect and respond to threats.
Foreseeable advantages stemming from GAI in the cybersecurity world
The advantages of using GAI for this industry can include:
- Enhancing threat and vulnerability detection, response, and automated remediation
Its ability to analyze enormous amounts of data and insights from multiple sources enables GAI to detect malicious or anomalous patterns that otherwise might go unnoticed. This can lower alert fatigue; improve mean time to detect or discover (MTTD), mean time to restore (MTTR), and threat coverage; and enhance overall risk management strategies while reducing total security operations costs. GAI can be employed for machine-speed triaging, predictive remediation, and automated response and action for low-risk incidents. Other potential applications include detecting malicious URLs and websites and spotting AI-powered phishing campaigns run against enterprises. Furthermore, it can be utilized in Infrastructure as Code (IaC) security to detect and harden flaws and to auto-remediate security misconfigurations and vulnerabilities in applications.
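As a toy illustration of the kind of pattern scoring such detection automates, the sketch below combines a few hand-written URL heuristics (domain entropy, raw-IP hosts, lure keywords). The thresholds, weights, and keyword list are illustrative assumptions only; a real GAI-backed detector would learn these signals from large datasets rather than hard-code them.

```python
import math
import re
from urllib.parse import urlparse

# Illustrative signals only -- a production system would learn these
# patterns from data, not maintain a hand-written list.
SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "secure", "account"}

def shannon_entropy(s: str) -> float:
    """Character-level entropy; algorithmically generated domains tend to score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def url_risk_score(url: str) -> float:
    """Combine a few simple heuristics into a rough 0..1 risk score."""
    host = urlparse(url).hostname or ""
    score = 0.0
    if shannon_entropy(host) > 3.5:                 # random-looking domain
        score += 0.4
    if re.search(r"\d{1,3}(\.\d{1,3}){3}", host):   # raw IP used as host
        score += 0.3
    if any(k in url.lower() for k in SUSPICIOUS_KEYWORDS):
        score += 0.2
    if host.count(".") > 3:                         # deeply nested subdomains
        score += 0.1
    return min(score, 1.0)
```

A URL such as `http://192.168.0.1/login` trips both the raw-IP and keyword heuristics, while an ordinary corporate domain scores near zero.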
- Bridging the cybersecurity talent gap
The cybersecurity skills shortage is widely recognized, with enterprises finding it daunting to hire and retain talent to effectively run internal programs. More than 3.4 million skilled cybersecurity professionals are currently required globally, according to the 2022 (ISC)² Cybersecurity Workforce Study.
GAI can create simulated phishing campaigns and cyberattacks, and simulate threat environments for security awareness programs, to test security professionals' skills and knowledge, accelerating the learning curve and quickly upskilling and reskilling employees. The technology also can be applied to generate automated workflows, playbooks, use cases, and runbooks for enhanced security delivery capabilities.
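A minimal sketch of what such an awareness exercise might look like in code is below. The template bank and helper are hypothetical, and a GAI model would generate far more varied and convincing lures than fixed templates; this only shows the shape of the workflow.

```python
import random
from string import Template

# Hypothetical template bank for awareness training -- in practice a GAI
# model would produce these lures, not a static list.
LURE_TEMPLATES = [
    Template("Hi $name, your $service password expires today. Reset it here: $link"),
    Template("$name, unusual sign-in detected on $service. Review activity: $link"),
]

def make_simulated_phish(name: str, service: str, link: str, rng=random) -> str:
    """Render one training lure; `link` should point at the internal training page."""
    return rng.choice(LURE_TEMPLATES).substitute(name=name, service=service, link=link)
```

Employees who click the training link would be routed to a teaching page rather than a real credential harvest, which is how most phishing-simulation programs operate.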
- Powering virtual assistance, enhanced collaboration, and knowledge sharing
GAI can lessen the burden on analysts of mundane tasks by analyzing, visualizing, and summarizing complex security data into comprehensive reports and charts that previously were created manually. GAI also can help build robust assistants for coding, chat, security, or investigation. It can potentially facilitate effective communication and serve as a centralized knowledge repository, making it easy to share and manage data from one place. This can help enterprises augment knowledge management and foster a culture of continuous learning and engagement.
Watch out for offensive capabilities of GAI in cybersecurity
Major companies, including Apple, Samsung, Amazon, Accenture, Goldman Sachs, and Verizon, have either banned or restricted employees' use of GAI-powered utilities to safeguard data confidentiality. Data breaches are a primary risk associated with GAI. Models use massive data sets for learning, and that data could contain enterprises' sensitive information, including Personally Identifiable Information (PII) and financial data. If handled carelessly, it could lead to unauthorized access, unintended disclosure, misuse, and even IP or copyright infringement. GAI also exposes enterprises to regulatory compliance risks, especially those subject to strict data protection laws such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA).
The use of GAI for malicious practices in social engineering, spear phishing, and other scams has been on the uptick. Another potential offensive aspect is that GAI can be employed to create advanced malware strains capable of evading signature-based detection measures.
Malicious actors could use GAI to craft sophisticated exploits and evasive code that bypass security systems and exploit vulnerabilities across enterprise touchpoints. Given its power to generate new content, GAI can also facilitate brute-force attacks for password theft, for example by generating likely password candidates at scale.
In addition, hackers can utilize deepfake technology to impersonate individuals, leading to identity theft, financial fraud, and the proliferation of misinformation. The efficiency and accuracy of an ML-based security system can be sabotaged if a hacker automates the creation of false positives, wasting analysts’ time and resources while ignoring the real threat.
GAI – A boon or bane?
In words often attributed to Abraham Lincoln, "The best way to predict the future is to create it." GAI is doing just that. The heavy investments in GAI are a double-edged sword: while the technology can strengthen enterprises' cyber defense arsenal, adversaries can use the same capabilities to undermine those defenses. GAI is here to stay, and its adoption will accelerate despite the security risks, making it pressing for cyber leaders to quickly determine their response and adoption strategies.
Cyber leaders may find a path to expand their roles and become protectors of enterprises by actively taking actions to address GAI’s use. These proactive initiatives can include robust data loss prevention and governance; usage guidelines, policies, and frameworks; workforce education; thorough vulnerability and risk assessments; comprehensive identity and access management; and incident detection and response plans.
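The data loss prevention piece, for instance, can start as simply as masking obvious PII before a prompt leaves the enterprise boundary for an external GAI service. The sketch below uses an assumed, deliberately narrow set of regex detectors; real DLP tooling employs far broader detection (names, addresses, document numbers, ML classifiers), so treat this as an illustration of the control point, not a complete filter.

```python
import re

# Illustrative PII patterns only -- far from exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_for_gai(prompt: str) -> str:
    """Mask obvious PII before a prompt is sent to an external GAI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt
```

Placing a filter like this in a gateway between users and GAI services gives security teams a single enforcement and audit point for usage policies.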
Everest Group will continue to follow this growth area. To discuss cybersecurity industry trends, please contact Prabhjyot Kaur and Kumar Avijit.
Continue learning about cybersecurity industry trends in the blog, Now is the Time to Protect Operational Technology Systems from Cyber Risks.