
🦾 Shadow AI - 21 September 2023

Arming Security and IT leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

We’re one month into the Shadow AI Newsletter. One of my favorite parts of doing this is hearing from you, so reply directly with what you like, what I can do better, or your favorite AI prompt.

I used Ideogram to try a new logo for the newsletter this week. Ideogram is definitely worth checking out for the cool, creative artwork people are generating with it.

This week we cover:

🔮 The Future of Ransomware

🚨 Microsoft AI Repo Exposed

🔎 OpenAI’s Red Team Network

💰 AI Security Fundraising

🏎️ OpenAI vs. Google’s Multimodal Race

Let’s dive in!

Demystifying AI - The Future of Ransomware

According to the U.S. Department of Homeland Security, ransomware attackers extorted at least $450 million globally during the first half of 2023 and are on track to have their second most profitable year ever. The average business needs at least 22 days to recover and resume operations after a ransomware attack. Moreover, although companies are discouraged from paying ransoms, ransomware recovery “frequently costs 50 times more than the ransom demand.”

Recent high-profile breaches reinforce the havoc ransomware attackers are wreaking on companies:

  • Clorox is experiencing an elevated level of product outages from a breach announced on August 14th and has no timeline for fully recovering operations.

  • MGM is likely losing up to $8M per day after a ransomware attack shut down its operations.

Companies are struggling to keep up with the threat of ransomware today, and the complexity of that threat will only increase. Let’s explore five ways ransomware attacks may evolve with AI:

  1. Automated Attack Variants: Ransomware authors can use AI to create more sophisticated and customized attack variants. AI algorithms can analyze the target's vulnerabilities and adapt the ransomware to exploit them effectively. This could lead to an increase in the diversity and complexity of ransomware strains.

  2. Advanced Social Engineering: AI-powered chatbots and natural language processing (NLP) can enhance social engineering tactics in phishing emails or messages. Attackers can use AI to generate convincing and context-aware messages, making it harder for users to discern between legitimate and malicious communications.

  3. Automated Target Selection: AI can help threat actors identify lucrative targets by analyzing data on potential victims. This could involve assessing the target's financial stability, data value, or security weaknesses, allowing attackers to prioritize high-value targets and maximize their resources.

  4. Ransom Note Personalization: AI can generate personalized ransom notes, increasing the psychological pressure on victims. These notes can contain specific information about the victim's files or systems, making the threat seem more credible.

  5. Ransom Negotiation: AI-powered chatbots could be used for automated ransom negotiation, reducing the need for human interaction on the attacker's side and potentially expediting the payment process.

What other ways will AI transform ransomware? In a future issue, we’ll explore how AI can help defenders.

AI News to Know

  • Microsoft AI Repo Exposed: Wiz discovered that a public-facing AI GitHub repository owned by Microsoft commingled sensitive internal data, including secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages from 359 employees. The incident exemplifies the data collection and handling risks researchers face when working with huge amounts of external and internal data, and why strong security guardrails are needed throughout the AI development process.

  • OpenAI Red Teaming Network: OpenAI is looking to build a community of trusted and experienced experts to help red team throughout model and product development. The program builds upon their existing internal red team, Bug Bounty Program, ChatGPT Feedback Contest, and external red teaming efforts. Domain expertise is being sought across a wide variety of areas.

AI on the Market

  • AI Security Companies on 🔥: Consistent with the broader AI fundraising trends we’ve discussed here, AI security startups have raised roughly $130.7 million so far in 2023, according to PitchBook data shared with Axios. HiddenLayer, an AI security company that offers software to monitor the health and attack surface of models, has been a big early winner with a $50M Series A fundraise announced this week.

  • OpenAI vs Google Multimodal Race: OpenAI and Google are in a race to be the first to launch multimodal large language models that can generate content integrating information from various sources, including text, images, audio, and other types of data. Multimodal LLMs could exacerbate AI security challenges, including combating biases, as biases can manifest differently in text, images, and other modalities. For more on this, check out our AI Prompt of the Week 👇

AI Prompt of the Week

I like how the output highlights how multiple modalities can introduce crosscutting risks like cross-modality adversarial attacks or biases. One suggested mitigation I found compelling was segregating the processing of different modalities and then combining their outputs to minimize the impact of adversarial attacks. It will be interesting to see the extent to which multimodal LLMs thoughtfully address these risks, or whether the push to market takes precedence.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington