• Shadow AI
  • Posts
  • 🦾 Shadow AI - 14 March 2024

🦾 Shadow AI - 14 March 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

After last week’s newsletter, OpenAI revealed they made MFA available for all user accounts. I was excited to see this as it was an issue Shadow AI highlighted last month. It’s Anthropic’s turn now!

This week, I cover:

🇪🇺 The Challenges and Opportunities of EU’s AI Act

🗳️ Election Execution Disconnect

⛓️‍💥 AI Disconnect Part 2

đź”’ AI Safety Beyond the Model Level

đź’» The Latest AI Software Engineer

đź’Ľ 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - The Challenges and Opportunities of EU's AI Act

Demystifying the implications of the European Union's AI Act that passed Parliament yesterday is no easy task. I found a recent episode of the William Fry Podcast hosted by AI lawyer Barry Scannell that featured Kai Zenner, Head of Office and Digital Policy Advisor for Member of European Parliament Axel Voss, to provide a great a deep dive into the EU AI Act and its impact on companies.

Kai Zenner has been working on data privacy and AI regulation since he joined Axel Voss’ office in 2017 and was a key staffer on in developing the EU’s AI Act.

Here are three of my key takeaways on what the EU AI Act means for organizations navigating their AI journey.

Preserving Relevancy Through Flexibility

AI technology is moving so fast that there’s a risk the law will be outdated by the time companies have to implement it. This is not necessarily unique to technology regulation. In fact, some have argued that GDPR was outdated to some extent when it took effect. However, the velocity of change in AI is so high that the authors of the EU’s AI Act actually designed a “regulatory learning process” into it by including delegated acts, regulatory sandboxes, an advisory forum, and a scientific panel. The commission, for example, has flexibility in adding to the list of high risk AI systems so it keeps pace with AI systems that emerge after the Act’s passage.

Legal Uncertainty

One of Zenner and Scannell's main concerns is the high level of legal uncertainty embedded within the Act. They argue that the cost of complying with the EU AI Act could potentially hamper AI innovation, particularly amongst smaller and mid-sized enterprises. Many organizations may lack the resources necessary to navigate these complex regulatory waters, inadvertently leading to costly compliance missteps and stifling creativity. Zenner argues that the EU placed a priority on getting agreement on a law and having it be adopted as soon as possible over quality in the details. The result is a law that is producing a high degree of legal uncertainty and that contradicts the Act's fundamental goal of fostering European AI development and ensuring the safe deployment of AI technologies.

Prohibited AI Systems, High-Risk Measures, and Open Source Systems

The Act does not provide clear guidance around prohibited AI systems, the classification of high-risk AI systems, and the governance of open source systems. While the intent of these mechanisms is clear-cut, their method of application remains vague and complex. The situational judgment upon which certain prohibitions rely on may force businesses to make intricate assessments about their AI systems, leading to legal uncertainty and inconsistent interpretations.

These regulatory requirements, for example, could be a potential minefield for the insurance industry using AI to evaluate the biometric characteristics of an insured and their behaviors. Based on the AI Act, arguments could be made this type of AI system is prohibited or is high risk.

Even if ultimately deemed high-risk, the high-risk self-assessment process is also ambiguous and costly. Companies within any of the eight high-risk categories will need to actively prove their systems do not fall under the high-risk category, a process which also carries administrative obligations.

And if you’re using an open source model like Llama, there is inconsistent guidance not only within the AI Act, but also across the Cyber Resilience Act that was recently passed.

The Way Forward

Despite compliance with the AI Act being too complex, too difficult, and too expensive, Zenner and Scannell maintain a positive outlook. They believe the AI Act provides ample opportunities for public and private sector stakeholders to work together in mitigating legal risks, promoting standardization of procedures, and reducing legal uncertainty.

There is, however, a complex and critical task ahead for the EU to nail implementation of the AI Act. Alex Voss argues that ten additional steps are needed to make implementation of the AI Act a success.

Miguel Valle del Olmo shared this helpful timeline of some of the major implementation milestones for the AI Act:

The bottom line is if you’re a company leveraging LLMs, whether internally built systems or SaaS systems offering AI capabilities, start designing your strategy for inventorying the AI technologies you’re using, how you will assess their risk, and how you will come into compliance with the AI Act.

And, if you’d like to go deeper, give the full podcast a listen.

AI News to Know

Election Execution Disconnect: OpenAI and Anthropic have both publicly committed to route election-related prompts to authoritative sources of election information such as CanIVote.org and TurboVote.org. However, when Proof News ran six seemingly straightforward voting information and election procedure queries through ChatGPT 3.5 and Claude 3 Sonnet, the queries were never routed properly. To see if there’s a disconnect between OpenAI’s public statements on the election and their safety program, I put ChatGPT 4 to the test myself and sure enough there was no redirect. (Note: I do live in NJ, but I am not actually a felon 🙂)

AI Disconnect Part Two: Jason Clinton, the CISO at Anthropic, shared in a Twitter post how Claude 3 Opus can help fully automate vulnerability discovery. Sean Heelan, however, noted that the outputs were not accurate and it actually failed to find the CVE. AI vendors are making grandiose claims of capabilities and how they can transform security programs, but it’s important to rigorously assess whether capabilities truly match the claim.

AI Safety Beyond the Model Level: The AI Snake Oil newsletter outlines a compelling case for how AI safety cannot be addressed at the model level alone. They make four recommendations that challenge the conventional approach to AI safety and red-teaming that security practitioners should be considering:

  1. Defenses Against Misuse Must be Primarily Located Outside Models

  2. Assess the Incremental Risk of Releasing a Model

  3. Refocus Red Teaming Toward Early Warning

  4. Have Third Parties with Aligned Incentives Lead Red Teaming

“Trying to make an AI model that can’t be misused is like trying to make a computer that can’t be used for bad things”

Arvind Narayanan and Sayash Kapoor

AI on the Market

The Latest AI Software Engineer: Cognition AI, which has raised $21M, introduce Devin, a new AI software engineer, that has passed practical engineering interviews from leading AI companies. Instead of only offering coding suggestions and autocompletion of tasks, it can write entire software programs on its own. Could this free developers up to handle a backlog of security issues or will security practitioners be primarily interfacing with AI engineers to govern vulnerabilities in code?

đź’Ľ 5 Cool AI Security Jobs of the Week đź’Ľ

Security Compliance Manager @ Hive to Manage the Compliance Program for a Company Transforming Content Moderation | San Fran | $140k - $180k | 4+ yrs exp.

Cybersecurity GRC Senior Manager @ BigBear.AI to Mature its GRC Program and Support Mission Critical Operations | Remote | 10+ yrs exp.

AppSec Startup Co-Founder/Founding Engineer @ Stealth to Secure Gen AI applications from AI-based attacks

Enterprise Security Architect @ Marvell to Protect the Semiconductor’s cloud infrastructure and AI Systems | Multiple Locations | $113k-$169k | 8+ yrs exp.

Product Security Architect @ Sema4.ai to Help Secure and Define the Future of Knowledge Work | Remote | 5+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington