
🦾 Shadow AI - 15 February 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

My AI-enabled Super Bowl bets didn’t do very well, but the good news is that I have another year to tweak the model. Congrats to the Chiefs on their Super Bowl win. My prayers go out to all those impacted by another senseless act of gun violence.

This week in Shadow AI, I cover:

🛡️ Nation-State Use of AI and Countermeasures to Their TTPs

🇮🇩 AI in the Indonesian Election

🔒 OWASP’s AI Security Overview

👀 Slack Adds AI

🖥️ LLMs as an Operating System

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Nation-State Use of AI and Countermeasures

Microsoft and OpenAI released briefings yesterday about nation-state use of AI. When you look beyond the headlines, there weren’t many newsworthy takeaways, but the briefings did reinforce three viewpoints we’ve already been discussing in Shadow AI:

1) All types of cyber threat actors, including nation-state actors, are using AI to varying degrees.

2) The main areas of current uplift are enhancements to existing tactics, techniques, and procedures in reconnaissance and social engineering.

3) AI has not yet enabled significant attacks, but it has helped threat actors boost their productivity and refine their attack techniques.

Specifically, OpenAI and Microsoft report the following activity:

  • Charcoal Typhoon, a Chinese threat actor, used their services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.

  • Salmon Typhoon, another China-backed actor, used their services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.

  • Crimson Sandstorm, an Iranian threat actor, used their services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.

  • Emerald Sleet, a North Korean threat actor, used their services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.

  • Forest Blizzard, an advanced Russian military intelligence actor, used their services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

As security practitioners threat model LLMs, identifying countermeasures for each tactic, technique, and procedure (TTP) being observed is critical. Potential countermeasures can blend technological solutions, procedural adjustments, and educational initiatives to reduce the risk of exploitation; a sketch of one countermeasure in action follows the table.

| LLM-themed TTPs | Description | Countermeasures |
| --- | --- | --- |
| LLM-informed reconnaissance | Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities. | Implement strict access controls and monitoring on sensitive information. Regularly update and patch systems to minimize vulnerabilities. Employ deception technologies (e.g., honeypots) to mislead attackers. |
| LLM-enhanced scripting techniques | Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies. | Enhance script detection and management capabilities to differentiate between benign and malicious scripts. Use behavior-based detection systems to identify unusual script activity. |
| LLM-aided development | Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware. | Monitor code repositories and development environments for signs of malicious activity. Employ application allowlisting and secure software development practices. |
| LLM-supported social engineering | Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets. | Train employees on the latest social engineering tactics and use verification protocols for identifying genuine communications. Implement advanced email filtering techniques. |
| LLM-assisted vulnerability research | Using LLMs to understand and identify potential vulnerabilities in software and systems that could be targeted for exploitation. | Adopt a proactive vulnerability management program, including regular security assessments and penetration testing. Utilize threat intelligence to stay ahead of new vulnerabilities. |
| LLM-optimized payload crafting | Using LLMs to assist in creating and refining payloads for deployment in cyberattacks. | Implement dynamic analysis and sandboxing to detect and analyze payloads before they execute. Enhance signature-based and behavior-based detection mechanisms. |
| LLM-enhanced anomaly detection evasion | Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems. | Employ advanced machine-learning-based anomaly detection systems that can adapt to evolving tactics. Regularly review and adjust detection thresholds and parameters. |
| LLM-directed security feature bypass | Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls. | Continuously update and diversify security measures to reduce predictability. Implement multi-layered security strategies that do not rely on a single feature for protection. |
| LLM-advised resource development | Using LLMs in tool development, tool modifications, and strategic operational planning. | Strengthen security posture through comprehensive risk assessments and security audits. Foster collaboration within the security community to share insights and countermeasures against emerging threats. |
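To make the “behavior-based detection” countermeasure concrete, here is a minimal sketch of a script-triage heuristic: it flags script text that combines download, decode, execute, and hide primitives, a combination common to loader scripts. The indicator lists and threshold are hypothetical placeholders for illustration, not a production detection rule.

```python
import re

# Hypothetical indicator categories; a real deployment would tune these against telemetry.
SUSPICIOUS_PATTERNS = {
    "download": re.compile(r"curl|wget|Invoke-WebRequest|urllib\.request", re.I),
    "decode":   re.compile(r"base64|FromBase64String|b64decode", re.I),
    "execute":  re.compile(r"eval|exec|Invoke-Expression|os\.system|subprocess", re.I),
    "hide":     re.compile(r"-WindowStyle\s+Hidden|nohup|attrib\s\+h", re.I),
}

def triage_script(text: str, threshold: int = 2) -> tuple[bool, list[str]]:
    """Flag a script when it combines several suspicious behavior categories."""
    hits = [name for name, pattern in SUSPICIOUS_PATTERNS.items() if pattern.search(text)]
    return len(hits) >= threshold, hits

if __name__ == "__main__":
    sample = "powershell -WindowStyle Hidden $s=[Convert]::FromBase64String($d); Invoke-Expression $s"
    flagged, hits = triage_script(sample)
    print(flagged, hits)  # True ['decode', 'execute', 'hide']
```

A single indicator on its own (e.g., one call to base64) is usually benign; requiring several categories to fire together is what keeps a heuristic like this from drowning analysts in false positives.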

AI News to Know

  • AI in the Indonesian Election: As AI companies pledge to combat electoral disinformation, Indonesia provides a good case study in how they will be challenged to meet their commitments. Indonesia, the world’s third-largest democracy, held its national election yesterday. Prabowo Subianto, a 72-year-old Defense Minister and alleged human rights abuser who has claimed victory in the election, used the AI image generator Midjourney to create a “cuddly grandpa” avatar in the lead-up to the vote. In a Council on Foreign Relations blog, Kat Duffy does a phenomenal job breaking down the challenges AI companies face in establishing, and consistently and equitably enforcing, an election policy:

    “Is it problematic to make a cartoon version of a political candidate using an inexpensive American technology platform, as Prabowo’s campaign did? Many would argue it’s not. Would it seem more problematic if the AI avatar wasn’t a cartoon, but looked and sounded exactly like a political candidate, such as Imran Khan’s campaign in Pakistan? Might your decision depend on whether the AI use was disclosed? Or who was creating and disseminating the avatar? Or why they were doing it?”

  • AI Security Overview: OWASP published a very helpful guide on how to address AI security. It distinguishes three types of threats: at development time (when data is obtained and prepared and the model is trained or obtained), through use of the model (providing input and reading the output), and through attacks on the running system in production, and it offers controls to help mitigate each; a minimal sketch of a runtime control follows.
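To illustrate OWASP’s runtime threat category, here is a minimal sketch of wrapping model input and output with lightweight guards. The patterns below are hypothetical placeholders for illustration, not controls specified by OWASP.

```python
import re

# Placeholder patterns; real controls would be far more thorough than these examples.
INJECTION_HINTS = re.compile(r"ignore (all )?previous instructions|reveal your system prompt", re.I)
SECRET_PATTERNS = re.compile(r"AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----")

def guard_input(prompt: str) -> str:
    """Reject prompts carrying obvious injection markers before they reach the model."""
    if INJECTION_HINTS.search(prompt):
        raise ValueError("possible prompt injection; route to human review")
    return prompt

def guard_output(completion: str) -> str:
    """Redact credential-shaped strings before a response leaves the system."""
    return SECRET_PATTERNS.sub("[REDACTED]", completion)
```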

AI on the Market

  • Slack Adds AI: Slack has launched generative AI features as a paid add-on for Enterprise plans that help users search for answers, get channel recaps, and catch up on threads. An internal Slack analysis during the pilot found that customers such as SpotOn, Uber, and Anthropic could save an average of 97 minutes per user each week by using Slack AI to find answers, distill knowledge, and spark ideas.

  • LLMs as an Operating System: NVIDIA launched “Chat with RTX,” an AI chatbot that runs locally on your Windows PC. This is NVIDIA’s first step toward the vision of the “LLM as operating system”: a locally running, heavily optimized AI assistant that integrates deeply with your file system, preserves privacy, and makes full use of your desktop gaming GPU. The localized approach means you don’t have to share your data with cloud-hosted LLM providers like OpenAI or Anthropic, and you can safely point it at your own locally stored files; a sketch of this local-query pattern follows.
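To illustrate the local-inference pattern behind tools like Chat with RTX, here is a minimal sketch that queries a model served on your own machine, so prompts never leave it. It assumes a local server exposing an OpenAI-compatible chat endpoint (llama.cpp’s server and Ollama both offer one); the URL and model name are placeholders, not NVIDIA’s actual API.

```python
import requests

def ask_local_llm(prompt: str, url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Send a chat request to a locally hosted, OpenAI-compatible model server."""
    response = requests.post(
        url,
        json={
            "model": "local-model",  # placeholder; whatever your local server has loaded
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize the notes in my meeting file."))
```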

💼 5 Cool AI Security Jobs of the Week 💼

Director, AI/ML Security @ GSK to help leverage AI and ML to find new medicines | Multiple Locations | $165k-$223k | 10+ yrs exp.

Lead Security Engineer - Governance Risk & Controls @ JPMorgan Chase to apply state of the art AI technologies to a variety of cybersecurity use cases | Tampa, FL | 7+ yrs exp.

Cybersecurity Senior Engineer - SOAR Development @ Truist Bank to integrate and operationalize LLMs within cybersecurity use cases | Orlando, FL | 8+ yrs exp.

Federal Deployment Strategist, IC/USCYBERCOM @ ScaleAI to help NSA and CYBERCOM build their AI/ML capabilities | Washington, DC | $140k-$175k | 3+ yrs exp.

Senior Enterprise Information Risk Manager @ Edward Jones to develop and implement comprehensive AI risk management strategies | St. Louis, MO or Tempe, AZ | $113k-$193k | 10+ yrs exp.

I’m always thinking about ways this newsletter can add more value to you and would love to hear your thoughts (both good and bad). Reply directly to this email with any feedback. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington