• Shadow AI
  • Posts
  • 🦾 Shadow AI - 13 June 2024

🦾 Shadow AI - 13 June 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

It’s been a big week of AI security news!

This week, I cover:

🛡️ Safeguarding AI in High-Stakes Environment

 Apple goes All-in on AI

🚨 EmailGPT Vulnerability

🗣️ LLM Threat Taxonomy

đź‘€ Challenges in AI Red Teaming

đź’° Cyberhaven Fundraising

🤑 Mistral AI Series B

đź’Ľ 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Safeguarding AI in High-Stakes Environments

Enterprise adoption of AI is accelerating. As we discussed in last week’s issue of Shadow AI, 65 percent of respondents to a McKinsey Global Survey on AI reported that their organizations are regularly using Generative AI, nearly double the percentage from their last survey ten months ago.

The most common Generative AI business cases for enterprises are:

  • Marketing and Sales

  • Product and/or Service Development

  • IT

The advancement of AI for these use cases and others has brought about transformative opportunities across various sectors, including critical infrastructure and high-risk domains. From healthcare diagnostics to transportation systems and financial services, AI is increasingly being leveraged to enhance efficiency, accuracy, and decision-making capabilities for both customers and employees. However, the deployment of AI in these high-stakes environments raises significant security concerns and organizational AI risk management strategies don’t appear to be evolving concurrently.

McKinsey Survey: The state of AI in early 2024

As AI systems become more advanced and integrated into critical processes, the attack surface and data risk expands, introducing new vulnerabilities that threat actors may exploit.

Here are 5 steps critical infrastructure companies should be taking to ensure their AI risk management strategies keep pace:

  1. Establish a Strong AI Risk Management Framework: Organizations should establish a comprehensive AI risk management framework (the NIST AI Risk Management Framework is a good start) that’s integrated into its enterprise risk management program to map, measure, and manage the risks associated with AI deployments effectively. The framework should include policies, processes, and organizational structures to ensure the responsible development, testing, deployment, and monitoring of AI systems in accordance with the business’ risk appetite and regulatory requirements. AI risk indicators should regularly be reported to the appropriate Executive Risk Management Committee and even the Board. One leading example of this is a $40 billion energy company which has established an Artificial Intelligence Steering Committee to manage end-to-end risk associated with the enterprise’s AI adoption.

  2. Implement Robust Testing and Validation Processes: Before deploying AI systems in critical environments, organizations should implement rigorous testing and validation processes to ensure the accuracy, reliability, and robustness of these systems and the model should go through an established model governance process. Testing should cover various attack scenarios, edge cases, and failure modes.

  3. Establish Continuous Monitoring and Auditing Mechanisms: Continuous monitoring and auditing mechanisms are essential for detecting anomalies, tracking model performance, and promptly addressing any security issues that may arise. Key risk indicators that can be continuously monitored should be created for each control within your AI Risk Framework allowing you to stay ahead of emerging threats and quickly respond to any incidents.

  4. Implement Robust Cybersecurity Measures: Robust cybersecurity measures, such as secure data handling practices, access controls, and encryption, must be implemented to protect AI systems and their underlying data from unauthorized access or manipulation. Close collaboration between AI experts, cybersecurity professionals, privacy teams, and other domain experts is essential to holistically address the security challenges posed by AI in critical infrastructure and high-risk domains.

  5. Adhere to Regulatory Guidelines and Industry Standards: Regulatory bodies and industry organizations play a crucial role in establishing guidelines, standards, and best practices for AI security in critical sectors. Organizations should stay updated on the latest regulatory developments and industry standards, such as the EU AI Act and California’s AI Transparency Act (in-progress), and ensure compliance with guidelines to mitigate AI risks and foster responsible AI development and deployment.

AI News to Know

Apple Goes All-In on AI: By now, you’ve seen the news around Apple’s AI announcement, but there are a couple things worth highlighting given the potential security implications.

1) Apple Intelligence is the personal intelligence system that brings GenAI to iPhone, iPad, and Mac that will be available on iOS 18, iPadOS 18, and macOS Sequoia. As part of their GenAI offering, Apple has launched a partnership with OpenAI to integrate ChatGPT into their products. Apple has said that privacy protections are built in for users who access ChatGPT through obscuring of IP addresses and OpenAI won’t store requests. A user will also have to provide explicit consent before a request goes to ChatGPT.

2) Apple’s Intelligence seeks to prioritize privacy and security via Apple’s new private cloud computing at scale. Their blog post does a great job detailing the risk traditional computing approaches present, what risks were considered as part of their private cloud compute threat model, and how they designed their private cloud compute to address those unique challenges through features like stateless computation and enforceable guarantees, no privileged runtime access, enhanced hardware security, and verifiable transparency,

“Powerful AI hardware in the data center can fulfill a user’s request with large, complex machine learning models — but it requires unencrypted access to the user's request and accompanying personal data. That precludes the use of end-to-end encryption, so cloud AI applications have to date employed traditional approaches to cloud security”

Apple

Overall, Apple Intelligence has taken a very thoughtful approach to security and privacy consistent with their mission. Private Cloud Compute may very well become the GenAI LLM reference architecture for regulatory compliant and Enterprise AI apps.

EmailGPT Vulnerability: A new vulnerability has been identified in EmailGPT, a Google Chrome extension and API service that utilizes OpenAI’s GPT models to assist users writing emails within Gmail. The flaw allows attackers to gain control over the Chrome extension by submitting harmful prompts and can be exploited by anyone with access to the service. Despite multiple reported attempts to contact the developers, the organization who identified the vulnerability, CyRC, has received no response within their 90-day responsible disclosure period. This vulnerability is a good reminder on the importance of having a comprehensive AI inventory, maintaining governance over browser extensions, and enforcing browser security controls. If EmailGPT is used in your environment, you should remove it to mitigate potential risks.

LLM Threat Taxonomy: The Cloud Security Alliance published a LLM Threat Taxonomy to help establish a standard language around LLM risks and threats. As organizations are building their AI Risk Management Framework, it’s critical to coalesce around a common LLM taxonomy and CSA’s paper is a good starting point.

Challenges in AI Red Teaming: Anthropic published a very insightful piece detailing some of the challenges with current AI red team capabilities and arguing for automated and quantitative red team evaluation methods. Depending on the red teaming method, challenges range from manual processes, depth vs. breadth tradeoffs, operational overhead, and lack of scaleability, among others. The post also includes policy recommendations to improve adoption and standardization of red teaming across Frontier AI companies.

AI on the Market

Cyberhaven Fundraising: AI data security company Cyberhaven, has raised $88M in funding at a valuation of $488M. Cyberhaven’s platform finds and follows a company’s sensitive data to protect it across its lifecycle and any form it takes. Its customers include Snowflake, Reddit, and Barbie among others.

Mistral AI Series B: French AI startup Mistral, which is building a leading open source Frontier AI model, raised $640M at a $6B valuation to supercharge their expansion and corporate adoption efforts.

đź’Ľ 5 Cool AI Security Jobs of the Week đź’Ľ

Multiple Roles @ SevenAI to be part of the early R&D or product teams for a security operations startup that just raised $36M | Boston

Security Trust, and Governance Lead @ Weights and Biases to oversee its Information Security Management System | Remote | $158k-$220k

Program Manager - Network/Cyber/AI - Legal Affairs @ T-Mobile to drive day-to-day regulatory guidance, risk management, and compliance functions | Multiple Locations | $88k-$120k | 3+ yrs

Staff Cyber Security Engineer (GenAI) @ NBCUniversal to join the security architecture team and focus on securing AI | Remote | $130k-$170k | 8+ yrs exp.

Incident Response Engineer, AI Red Team @ Walmart to Help Secure GenAI systems | Reston, VA or Bentonville, AK (hybrid) | $110k-$264k

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington