• Shadow AI
  • Posts
  • 🦾 Shadow AI - 25 July 2024

🦾 Shadow AI - 25 July 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Wow, a ton has happened since last week’s newsletter dropped…

We’ll save the commentary on Crowdstrike’s global IT outage or Joe Biden announcing he won’t run for re-election for other forums and focus in on all the AI-security news you need to know in 3 minutes.

This week, I cover:

🔎 Unpacking Meta's Open Source AI and Security

🥶 AI Summer Shifting to Data Winter?

🚨 AI Hiring Scams

đź’¸ AI CapEx Return

💸💸 OpenAI Burning Cash

đź’Ľ 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Unpacking Meta’s Open Source AI and Security

As open source AI models rapidly evolve, they're not just catching up to commercial alternatives—they're positioning themselves to reshape the AI landscape. With Meta's release of Llama 3.1 405B, the first frontier-level open source AI model, organization’s now have access to an open source model comparable to GPT-4o and Claude 3.5 Sonnet.

Meta's Vision for Open Source AI Security:

Mark Zuckerberg and Meta outlined a vision for open source AI that emphasizes several key security benefits:

  • Transparency: Open source AI systems allow for widespread scrutiny, potentially making them safer than closed alternatives.

  • Customization: Organizations can train, fine-tune, and distill models with specific data without external parties like closed model vendors accessing it.

  • Avoiding Vendor Lock-in: Open source allows organizations to run and control models themselves in the environment they want (on-prem, cloud, and even on device), avoiding dependence on closed Foundational Model vendors.

New Safety Tools from Meta:

To support their vision, Meta introduced new safety tools for developers:

  1. Llama Guard 3: A high-performance input/output moderation model for detecting violating content across eight languages.

  2. Prompt Guard: A multi-label model to detect and respond to prompt injection and jailbreak attempts.

These tools have been integrated into Llama’s model distribution channels with partners including AWS, NVIDIA, and Databricks.

But how is the security of the actual model???

CyberSecEval 3: Assessing Llama 3's Security

CyberSecEval 3, a new set of security benchmarks for LLMs that Meta released, didn’t receive much attention in the press, but it provides important insights into the empirical measurement of cybersecurity risks associated with Meta’s latest family of LLMs. The evaluation assessed Llama 3 against two broad categories: risks to third parties and risks to application developers and end users.

Risks to Third Parties

Risks to Third Parties

Results

Proposed Mitigations

Automated social engineering

Moderate capabilities in spear-phishing attacks, comparable to GPT-4 Turbo and Qwen 2 72B Instruct

Monitoring LLM usage and implementing protective measures like Llama Guard 3

Scaling manual offensive cyber operations

Did not significantly enhance the success rates of offensive network operations compared to using search engines

No specific mitigation mentioned as capabilities were not significantly enhanced

Autonomous offensive cyber operations

Showed limited capabilities in autonomous hacking challenges

No specific mitigation mentioned due to limited capabilities

Autonomous software vulnerability discovery and exploit generation

More effective in small-scale program vulnerability exploitation challenges than its peers, indicating incremental progress

Continued monitoring and improvements in guardrails such as Meta’s publicly released Code Shield System

Risks to Application Developers and End Users

Risks to Application Developers and End Users

Results

Proposed Mitigations

Prompt injection

Susceptibility with failure rates of 22% for Llama 3 405B and 19% for Llama 3 8B

Partially mitigated through secure application design and the use of protective measures like Meta’s Prompt Guard

Suggesting insecure code

Suggested insecure code at a rate of 31%

Reduced risk with CodeShield

Executing malicious code in code interpreters

Susceptibility to executing malicious code at rates between 1% to 26%

Effective mitigation through monitoring API usage and employing guardrails like Llama Guard 3

Facilitating cyber attacks

Generally refused high-severity attack prompts

Effectiveness improved using Llama Guard 3

So What for Security?

As open source AI models like Llama 3.1 continue to close the performance gap with closed commercial models, security teams must:

  1. Ensure their AI security strategies cover both open and closed AI use cases

  2. Implement and customize native (e.g, Llama Guard 3 and Prompt Guard) or third party tools to enhance security across AI applications

  3. Develop internal LLM security benchmarks informed by those like CyberSecEval 3 and implement mitigation strategies accordingly.

AI News to Know

AI Summer Shifting to Data Winter?: Stefaan Verhulst had a great LinkedIn post summarizing how the data used by AI developers for training their models is rapidly drying up. According to an MIT study, 5% of all data and 25% of high-quality sources in AI training sets are now restricted as publishers and other data holders have taken a number of steps to limit AI scraping by setting up paywalls, changing terms of services, and blocking automated web crawlers with robots.txt restrictions.

AI Hiring Scams: KnowBe4 was hiring a software engineer for their internal IT AI team and they ended up getting fooled by a North Korean actor who used AI during the vetting process. The fake worker passed a background check and all other standard pre-hiring checks by using a valid but stolen US-based identity. He also took a stock image photo and enhanced it with AI to reinforce his legitimacy.

Left: Stock Image Right: AI Enhanced Image

The worker than had their workstation sent to an address that is basically an "IT mule laptop farm". They VPN’d in from where they really physically are (North Korea or over the border in China) and work the night shift so that they seem to be working in US daytime. The scam is that they are actually doing the work, getting paid well, and give a large amount of their salary to North Korea to fund their illegal programs.

On the personal level, LinkedIn’s job feed recommended the below job to me this week that was clearly an AI-generated job description. I reported it and LinkedIn took swift action to take it down, but illegitimate job posts are an issue and are being recommended by the site.

AI on the Market

AI CapEx Return: Microsoft and Google’s earning releases brought AI Capital Expenditures - how much these companies are investing in physical AI data centers - and the return on that investment front in center this week. David Cahn at Sequoia Capital did a great job summarizing how two of the four major players admitted they are essentially in an arms race with the short-term path to revenue monetization unclear.

OpenAI Cash Burn: The Information reports that OpenAI may lose as much as $5B this year and could run out of cash in 12 months if they don’t raise more money. With Meta releasing a comparably performing open source model, investors are likely to start scrutinizing OpenAI’s moat and their route to profitability.

đź’Ľ 5 Cool AI Security Jobs of the Week đź’Ľ

Security Engineering Manager, X-Sec @ Meta to Manage a team in delivering enterprise-scale security platforms and products | Multiple Locations | $213k - $293k | 10+ yrs exp.

AI + Security PhD Residency @ X to build and refine tech prototypes and market analyses at the intersection of advanced AI and industrial cybersecurity | Mountain View, CA | $118k-$145k

Security Engineer, Enterprise AI Protection @ Google to ensure that AI products are safe, trustworthy, and aligned with AI principles | Multiple Locations | $136k-$200k | 2+ years exp

Software Engineer, AI Security @ Robust Intelligence to help build the future of secure, trustworthy AI | San Fran | $150k-$180k | 1+ yrs exp.

Sr. Product Manager, Security @ Robust Intelligence to build products that help reduce AI risk | San Fran | $180k-$220k | 5+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington