🦾 Shadow AI - 5 October 2023

Arming Security and IT leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

October is Cybersecurity Awareness Month, but hopefully you haven’t waited until now to incorporate awareness of AI policies and potential risks into your employee cybersecurity training program.

In this week’s issue we cover:

🤔 Why haven’t we seen more AI-enabled attacks (yet)?

👀 Critical PyTorch Vulnerability Discovered

🏗️ Building a Responsible AI Governance Framework

💸 AI Insurance

🏦 Anthropic Fundraise Part Deux

👉 AI Prompt of the Week: Bard’s take on what an AI insurance policy would look like

Let’s dive in!

Demystifying AI - Why Haven’t We Seen More AI-Enabled Attacks?

Over the past 11 months, we’ve seen an explosion of Generative AI tools in the marketplace. We’ve also seen a lot of discussion on Generative AI and the impact it will have on the threat landscape, including:

  1. Sophisticated Phishing: Malicious AI tools like WormGPT and FraudGPT make it possible for cybercriminals to mass-produce personalized phishing emails in their victims' native language, without errors. The Wall Street Journal and Abnormal Security recently provided a great example of just how tailored phishing emails generated by AI can become.

  2. Undetectable Malware: The risk extends beyond increasingly sophisticated and targeted phishing emails to AI-generated malware. Users selling WormGPT on cybercrime forums boasted about how “WormGPT will quite happily create malware capable of infecting a computer and going ‘fully undetectable’ by virtually all of the major antivirus makers.”

Given this, why haven’t we seen more AI-enabled attacks? Here are four theories:

  1. If It Ain’t Broke, Don’t Fix It: Attackers tend to prioritize attacks that have a clear financial or strategic benefit. As we discussed two weeks ago in our ransomware feature, traditional attacks like ransomware and data breaches remain highly profitable, with ransomware expected to net nearly $1B in payments this year. The continued success of traditional attack methods may not be forcing attackers to innovate with AI.

  2. Substantial Initial Investment: Integrating AI into cybercriminal operations requires an upfront investment. This investment includes acquiring or developing AI algorithms, machine learning models, and the necessary computing infrastructure. Moreover, training the AI models to perform malicious tasks effectively demands both time and resources. The costs involved can make it less attractive for cybercriminals who are accustomed to the low-cost, high-reward tactics that are still successful today.

  3. Continuous Maintenance and Adaptation: AI models used for cyberattacks require ongoing maintenance, fine-tuning, and adaptation. Just as legitimate AI applications need to evolve with changing data and environments, malicious AI tools must keep up with evolving security measures and detection methods. Maintaining and updating AI-enabled attacks can add to the complexity of their operations.

  4. Expertise and Skill Gap: Cybercriminals must have an understanding of AI concepts, algorithms, and programming languages, which might not align with their traditional technical skill sets. This potential knowledge gap may present a barrier, especially when considering the risks of their activities being exposed.

The adoption of AI-powered tools and tactics in cyberattacks presents a new paradigm for malicious actors. Even though malicious actors may not be leveraging generative AI in their attacks today, they will inevitably do so to further challenge phishing defenses, antivirus/EDR, intrusion detection systems, attribution, and employee training.

As a community, we need to start developing the tools, techniques, and procedures to improve our defenses now.

I’d love to hear what other theories you have or which ones are most compelling to you. Reply back to this email.

AI News to Know

  • Critical AI Vulnerability Discovered: Oligo, an application security company, discovered a series of vulnerabilities in PyTorch that allow remote code execution with elevated privileges and no authentication. PyTorch is a popular open-source machine learning framework, originally developed by Meta AI, that many companies are leveraging as they build their AI systems in the cloud. The exposure was significant, with “tens of thousands of IP addresses completely exposed to the attack,” according to Oligo. A quick exposure-check sketch for your own hosts follows this list.

  • Responsible AI Framework: Equal AI, a non-profit working with companies and policymakers to reduce bias in AI, released a whitepaper on designing and operationalizing a Responsible AI Governance Framework that includes six key elements.
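
For defenders, a practical first step on the PyTorch news is simply checking whether your own model-serving hosts answer on a management endpoint at all. Below is a minimal sketch in Python; the host list, port 8081, and the /models path are assumptions standing in for a TorchServe-style deployment, not details taken from the Oligo disclosure, so adjust them to your environment and only run it against infrastructure you own.

import socket
from urllib.request import urlopen

HOSTS = ["10.0.0.12", "10.0.0.13"]   # hypothetical internal inference hosts you own
MGMT_PORT = 8081                     # assumed management port for a TorchServe-style server

def management_port_open(host, port, timeout=2.0):
    # Return True if the TCP port accepts connections at all.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def answers_without_credentials(host, port):
    # Send a model-listing request with no credentials; an HTTP 200 suggests
    # the management API is serving unauthenticated callers.
    try:
        with urlopen(f"http://{host}:{port}/models", timeout=2.0) as resp:
            return resp.status == 200
    except OSError:
        return False

for host in HOSTS:
    if management_port_open(host, MGMT_PORT):
        exposed = answers_without_credentials(host, MGMT_PORT)
        label = "UNAUTHENTICATED management API" if exposed else "port open, request failed"
        print(f"{host}:{MGMT_PORT} -> {label}")
    else:
        print(f"{host}:{MGMT_PORT} -> closed")

Anything flagged as answering unauthenticated callers is worth firewalling off or binding to localhost until you’ve confirmed you’re on a patched release.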

AI on the Market

  • AI Insurance: Insurance providers are making moves to offer AI insurance products that help companies transfer risk if AI models fail. Coverage could include copyright infringement, bias and misinformation, and data loss. With a lack of historical data on AI’s financial impact to the business, insurers find themselves in a similar position to the early days of devising cyber insurance policies.

  • Anthropic Fundraise Part Deux: A week after Anthropic raised at least $1.25B from Amazon, it is in discussions with additional investors, including Google, to raise another $2B at a valuation of at least $20B.

AI Prompt of the Week

I like how the output highlights 4 key coverage areas, including the crossover with cybersecurity breaches. The full output also shares some concrete examples of how businesses can transfer risk under such a policy, including:

  • A healthcare company that uses AI to diagnose diseases could purchase AI insurance to cover the cost of defending against a lawsuit if an AI system misdiagnoses a patient.

  • A manufacturing company that uses AI to control its production process could purchase AI insurance to cover the cost of lost revenue if an AI system failure disrupts operations.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington