• Shadow AI
  • Posts
  • 🦾 Shadow AI - 20 June 2024

🦾 Shadow AI - 20 June 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

It’s a jam packed edition of Shadow AI! This week, I cover:

👀 Q2 AI Threat Review

🚩 RedFlag

🔎 garak: LLM Vulnerability Scanner

🔒 RedPoint Ventures 2024 InfraRed Report

💻 AI and Big Tech Jobs

💰 Aim Security Series A

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Q2 AI Threat Review

There’s a lot of hype over how AI will increase the volume and impact of cyber attacks so, at the end of each quarter, I’ll do a data-driven threat review on how cyber actors are using AI. The goal of this quarterly review is to monitor any change in trends that your security programs need to take into account.

What’s the short-term outlook?

In case you missed it, the United Kingdom’s National Cyber Security Center (NCSC) published an excellent, clear-eyed assessment of the AI threat environment in Q1. Key predictions by the NCSC include:

  • Artificial intelligence (AI) will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years. However, the impact on the cyber threat will be uneven 


Highly capable state threat actors

Capable state actors, commercial companies selling to states, organised cyber crime groups

Less-skilled hackers-for-hire, opportunistic cyber criminals, hacktivists

Intent

High

High

Opportunistic

Capability

Highly skilled in AI and cyber, well resourced

Skilled in cyber, some resource constraints

Novice cyber skills, limited resource

Reconnaissance

Moderate uplift

Moderate uplift

Uplift

Social engineering, phishing, passwords

Uplift

Uplift

Significant uplift (from low base)

Tools (malware, exploits)

Realistic possibility of uplift

Minimal uplift

Moderate uplift (from low base)

Lateral movement

Minimal uplift

Minimal uplift

No uplift

Exfiltration

Uplift

Uplift

Uplift

Implications

Best placed to harness AI's potential in advanced cyber operations against networks, for example use in advanced malware generation.

Most capability uplift in reconnaissance, social engineering and exfiltration. Will proliferate AI-enabled tools to novice cyber actors. 

Lower barrier to entry to effective and scalable access operations - increasing volume of successful compromise of devices and accounts.

  • The threat to 2025 comes from evolution and enhancement of existing tactics, techniques and procedures (TTPs).

  • All types of cyber threat actors – state and non-state, skilled and less skilled – are already using AI, to varying degrees.

  • AI provides capability uplift in reconnaissance and social engineering, almost certainly making both more effective, efficient, and harder to detect.

  • More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realized before 2025.

  • AI will almost certainly make cyber attacks more impactful because threat actors will be able to analyze exfiltrated data faster and more effectively, and use it to train AI models.

  • AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.

How have threat actors used AI in Q2 2024?

Although criminals continue to take advantage of the possibilities that ChatGPT and other LLMs offer, advanced AI-powered attacks are not being seen yet. In fact, Verizon’s 2024 DBIR report states that criminal forums appear to be showing very little interest is leveraging AI alongside malware, phishing, or exploitable vulnerabilities. The focus in criminal forums still appears to be on selling account access. We continue to see phishing, malware, and ransomware be highly effective without the need for AI.

However, we are seeing three notable trends emerge:

  • Compared to last year, criminals seem to have abandoned any attempt at training real criminal large language models (LLMs). Instead, they are jailbreaking existing ones. This has led to an increase in jail-breaking as a service where criminals offer:

    • An anonymized connection to a legitimate LLM (usually ChatGPT)

    • Full privacy

    • A jailbroken prompt guaranteed to work [TrendMicro]

  • We are seeing the emergence of actual criminal deepfake services, with some bypassing Know-Your-Customer user verification in financial services. Prices for criminal deepfake services are ranging from $10 per image to $500 per minute of video. Criminals are now offering to take a stolen ID and create a deepfake image to convince the financial services company of a customer’s legitimacy. The person below is not real. The image has been created based on a stolen Spanish ID provided by a criminal and the deepfake artist has shared this image with potential clients to show their skill. [TrendMicro]

  • AI continues to be used for misinformation and influence operations by nation state actors as discussed in previous Shadow AI newsletters.

While the threat of advanced AI-powered cyber attacks still seems to be on the horizon, the criminal underground is steadily adopting and adapting AI technologies to enhance existing tactics. As AI capabilities become more accessible and mainstream, organizations should proactively be looking to strengthen their security posture against the most likely emerging AI-powered threats such as deepfakes and detection of enhanced social engineering.

AI News to Know

RedFlag: Addepar, a multi-product software and data platform for investment professionals, has open-sourced a tool called RedFlag that leverages AI to transform and streamline security scoping and manual testing. Using RedFlag, Addepar’s Offensive Security team is now able to scope an entire Platform release candidate with hundreds of PRs and identify the highest risk code changes to determine what needs to be tested and how it should be tested in just 10 minutes with a 92% pass rate. This has led to measurable security improvements, including the identification of six high severity issues in the last 8 RCs that were fully remediated before the release was deployed. From a cost standpoint, RedFlag, which uses Claude v3 Sonnet, incurs an average cost of $0.02 per commit review, amounting to ~$8 for a 400 PR release candidate.

RedFlag Workflow for Release Candidates

garak: LLM Vulnerability Scanner: Security researchers at NVIDIA and other institutions published a paper discussing the merits of garak, an open-source scanner designed to probe the security vulnerabilities of LLMs in a structured and holistic way. With components like Generators to interface with different LLMs, Probes to test various attack vectors, Detectors to identify insecure outputs, and Buffs to modify prompts, garak facilitates the systematic exploration of LLM failure modes.

The framework incorporates a wide range of existing techniques like prompt injection, false or misleading claims, malware generation, and jailbreaks.

Unlike benchmarking approaches, garak is oriented towards discovery and facilitating informed decisions around alignment and safe deployment policies for LLMs. The researchers argue for a holistic "red-teaming" approach, systematically probing models to uncover vulnerabilities rather than evaluating against fixed benchmarking post-hoc.

With LLMs being rapidly deployed, garak provides a way to help audit their security posture and work towards safer, more robust deployments.

AI on the Market

InfraRed Report: Redpoint Ventures has released its 2024 InfraRed Report and, not surprisingly, AI is a major topic. Three key AI related market highlights that security and IT professionals need to keep in mind are:

  1. AI is accelerating cloud consumption

  2. Training and inference cost for AI is down 10x

  3. AI coding has gone mainstream with 76% of developers using or expecting to use an AI coding assistant

AI and Big Tech Jobs: There are many predictions being made on how AI will reshape the job market over the long-term, but no one really knows what will happen. Today, however, we know that approximately 5,000 jobs were cut between May 2023 and April 2024 where companies cited AI as the reasons. Many companies, however, are less transparent on the drivers behind layoff decisions so it’s unclear about the true impact AI has had on more than 100,000 layoffs in the tech world. As the workforce reshapes for AI, adjacent needs like AI security and AI training will emerge.

Aim Security Series A: Aim Security, a company 4 months out of stealth that has built a 360 degree AI security platform to help enterprises adopt GenAI securely, has raised $18M in Series A funding.

💼 5 Cool AI Security Jobs of the Week 💼

Senior GenAI Security Engineer @ Swish to create a GenAI solution for a Government client | Alexandria, VA | $175k - $195k | 7+ yrs exp.

Senior Managing Security Consultant – Security for AI lead @ IBM to lead the growth and management of its GenAI security business | Remote | $175k-$263k

AI Cybersecurity Architect @ Travelers to create the technology target state for the Cybersecurity Architecture Unit of a global, Dow 30 company | Hartford, CT or Atlanta, GA | $118k-$196k | 4+ yrs exp.

Senior Security Compliance Engineer @ Observe.AI to develop and scale its GRC function | Bangalore | 7+ yrs exp.

Senior Manager AI/ML Risk Guide @ Capital One to assess and mitigate risks associated with the deployment of machine learning models and AI systems | Multiple Locations | $199k-$227k | 5+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington