🦾 Shadow AI - 28 September 2023

Arming Security and IT leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Last week, we highlighted how OpenAI and Google are in a race to release multimodal LLMs and the security implications that come with it. It shouldn’t come as a surprise that OpenAI announced this week that it is integrating voice and image capabilities into ChatGPT. There are many new applications security and IT practitioners will need to be aware of, such as developers giving ChatGPT screenshots of product designs and asking it to write the corresponding code.

This week we cover the “multimodal” nature of the regulatory environment as well as…

āš–ļø Assessing Foundational Model Compliance with EU AI Act

šŸ”’ Securing High Risk AI Systems

šŸ•µļø The CIAā€™s new AI tool

šŸ” Metaā€™s Approach to Responsible AI

šŸ«±šŸ½ā€šŸ«²šŸ»AI Partnerships - Microsoft and OpenAI; Amazon and Anthropic; and what could be next

šŸ˜‚ AI Prompt of the Week - Dad Joke!

Let’s dive in!

Demystifying AI - AI Regulation

Will the United States have more success regulating AI than it has had regulating privacy?

Privacy regulation in the United States has been marked by a slow and often fragmented journey. To date, federal legislation for privacy only addresses specific sectors such as healthcare and children’s online activities. States have been looking to fill this void by creating their own legislation like California’s CPRA.

In the absence of federal privacy legislation, many companies have built their privacy programs to comply with the European Union’s (EU) General Data Protection Regulation (GDPR).

We find ourselves in a similar position with AI regulation today.

The EU has taken a more proactive stance than the U.S. on AI regulation. In April 2021, the EU introduced the Artificial Intelligence Act, which aims to create a harmonized regulatory framework for AI across member states. The Act categorizes AI systems into four risk levels, each with its own set of requirements and restrictions. High-risk AI applications, such as facial recognition in public spaces, must comply with strict regulations, including conformity assessments and transparency obligations.
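
To make the tiered structure concrete, here is a minimal Python sketch of how a compliance team might encode the Act’s four risk levels. The tier names track the draft text, but the obligations listed are illustrative examples, not a legal checklist.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels in the draft EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # e.g., facial recognition in public spaces
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Illustrative, non-exhaustive obligations per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before deployment",
        "risk management system",
        "transparency and human-oversight documentation",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory requirements; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

Under the Act, the tier drives everything else: two products built on the same underlying model can face very different obligations depending on where their use case falls.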

While the EU AI Act is on its way to becoming formal law as soon as the end of this year, the United States is moving as slowly on AI legislation as it has on privacy legislation. The U.S. has signaled an interest in regulating AI via the SAFE Innovation Framework, the series of AI Insight Forums, and various White House efforts. Although there’s broad agreement across the public and private sectors that AI should be regulated, the U.S. lacks a clear direction on how to balance innovation with accountability and consumer protection.

As with privacy compliance, U.S.-based AI companies are likely to focus on complying with the EU AI Act once it passes and then account for any U.S.-specific nuances as they emerge.

So how are the U.S.-based AI companies positioned to meet the proposed EU AI Act requirements?

Stanford researchers evaluated foundation model providers like OpenAI and Google against the proposed EU AI Act’s requirements and found significant compliance gaps.

From a risk management standpoint, only a small number of model providers disclosed the risk mitigations they implement and the efficacy of these mitigations. No model providers met the proposed requirement to disclose “non-mitigated risks with an explanation on the reason why they cannot be mitigated.”

ChatGPT’s multimodal release is a good example of risk management in action. OpenAI helpfully disclosed that:

1) they collaborated with the accessibility app Be My Eyes to ensure responsible image usage that assists people’s daily lives without overstepping privacy boundaries;

2) they excluded features like chat transcription; and

3) they conducted tests to identify potential harms in high-risk domains and implemented technical safeguards around image analysis.

OpenAI doesn’t, however, take the next step of disclosing more details about the technical safeguards in place or where non-mitigated risks exist.
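
As a rough illustration of the kind of gap the Stanford study surfaced, here is a sketch of how one might track a provider’s posture against these disclosure duties. The field names and the example entry are hypothetical, not data from the study.

```python
from dataclasses import dataclass

@dataclass
class RiskDisclosure:
    """Tracks a provider's posture against three proposed disclosure duties."""
    provider: str
    mitigations_disclosed: bool = False          # which risk mitigations are in place
    efficacy_disclosed: bool = False             # evidence those mitigations work
    non_mitigated_risks_disclosed: bool = False  # the duty no provider met

    def gaps(self) -> list[str]:
        """Return the disclosure duties this provider has not yet met."""
        missing = []
        if not self.mitigations_disclosed:
            missing.append("risk mitigations")
        if not self.efficacy_disclosed:
            missing.append("mitigation efficacy")
        if not self.non_mitigated_risks_disclosed:
            missing.append("non-mitigated risks")
        return missing

# Hypothetical entry shaped like the ChatGPT multimodal disclosures above:
# mitigations are described, but efficacy and residual risks are not.
example = RiskDisclosure("example-provider", mitigations_disclosed=True)
print(example.gaps())  # ['mitigation efficacy', 'non-mitigated risks']
```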

AI News to Know

  • Securing High-Risk AI Systems: The European Commission released a report detailing four guiding principles for addressing the security of high-risk AI systems (a brief sketch of how the first two might look in practice follows this list):

    1) It’s imperative to secure the “AI System” and not simply the AI Model

    2) A comprehensive cyber risk assessment of the system and its components is critical

    3) Cybersecurity of AI systems should rely on a combination of existing controls for software systems and AI-specific controls on individual models

    4) It’s going to become increasingly complex to safeguard more advanced AI models, and certain AI technology may not be ready for use in high-risk AI systems.
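
As a rough illustration of the first two principles, here is a minimal sketch of a risk assessment that enumerates controls for every component of an AI system rather than the model alone. The component names and controls are hypothetical examples, not taken from the Commission’s report.

```python
# Illustrative risk assessment covering the whole AI system, not just the model.
AI_SYSTEM_CONTROLS = {
    "model": {"adversarial-robustness testing", "model access controls"},
    "training data pipeline": {"data provenance checks", "poisoning detection"},
    "inference API": {"authentication", "rate limiting", "input validation"},
    "monitoring": {"drift detection", "abuse logging and alerting"},
}

def assess(completed: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return the controls still missing for each component of the AI system."""
    return {
        component: controls - completed.get(component, set())
        for component, controls in AI_SYSTEM_CONTROLS.items()
    }

# Example: only the model itself has been reviewed so far, so every other
# component of the system still shows open gaps.
for component, missing in assess({"model": AI_SYSTEM_CONTROLS["model"]}).items():
    if missing:
        print(f"{component}: missing {sorted(missing)}")
```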

  • AI for Spooks?: The CIA is building its own AI tool to help analysts better sift through open-source intelligence. The CIA hasn’t specified which foundational AI model it’s using for its chatbot (hopefully its own!), but the tool is expected to be available to all 18 U.S. intelligence agencies. Hopefully we’ll learn more about the security and privacy guardrails built into it for U.S. citizens.

  • Meta’s Approach to Responsible AI: At yesterday’s Connect 2023 conference, Meta announced several new generative AI features and shared how it is working to build AI responsibly via safety classifiers, human feedback, fine-tuning, and pre-training.

AI on the Market

  • Microsoft and OpenAI: Microsoft, which has integrated OpenAI’s capabilities into its productivity and search products, is exploring ways to reduce its reliance on OpenAI as costs increase. As we discussed in the second issue of Shadow AI, long-tail models that are smaller, more cost-effective, and designed for specific business use cases will continue to emerge, and Microsoft appears to be testing the waters. This is happening as OpenAI seeks to raise additional funds at a $90B valuation.

  • Amazon and Anthropic: Earlier this week, Amazon announced it’s investing up to $4B in Anthropic, the company behind Claude, which is built on AWS. Who will be investing in Cohere? Oracle?

AI Prompt of the Week

We’re keeping it light this week. I did a “bake-off” between ChatGPT and Bard for the best dad joke about cybersecurity, and Bard was the winner.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington