🦾 Shadow AI - 28 September 2023
Arming Security and IT Leaders for the Future
Forwarded this newsletter? Sign up for Shadow AI here.
Hello,
Last week, we highlighted how OpenAI and Google are in a race to release multimodal LLMs and the security implications of that race. It shouldn't come as a surprise, then, that OpenAI announced this week that it is integrating voice and image capabilities into ChatGPT. This unlocks many new use cases security and IT practitioners will need to be aware of, such as developers giving ChatGPT screenshots of product designs and asking it to write the corresponding code.
I gave ChatGPT a screenshot of a SaaS dashboard and it wrote the code for it.
This is the future.
- Mckay Wrigley (@mckaywrigley)
2:59 PM • Sep 27, 2023
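To make that screenshot-to-code workflow concrete, here's a minimal sketch of what the request looks like as an API call. It assumes the current OpenAI Python SDK and a vision-capable model; the model name, file name, and prompt are illustrative, not details from the tweet.

```python
# Hypothetical sketch of a screenshot-to-code request. Assumes the current
# OpenAI Python SDK (pip install openai) and a vision-capable model; the
# model name, file name, and prompt are illustrative assumptions.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the screenshot as a base64 data URL, the format the API accepts
with open("saas_dashboard.png", "rb") as f:
    screenshot_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable chat model works here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write React code that reproduces this dashboard."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Note the data flow: that screenshot, and whatever proprietary design it contains, leaves the developer's machine for a third-party API, which is exactly the kind of shadow usage security teams will want visibility into.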
This week we cover the "multimodal" nature of the regulatory environment as well as...
⚖️ Assessing Foundation Model Compliance with the EU AI Act
🔒 Securing High-Risk AI Systems
🕵️ The CIA's new AI tool
🤖 Meta's Approach to Responsible AI
🫱🏽‍🫲🏻 AI Partnerships - Microsoft and OpenAI; Amazon and Anthropic; and what could be next
😂 AI Prompt of the Week - Dad Joke!
Let's dive in!
Demystifying AI - AI Regulation
Will the United States have more success regulating AI than it has had regulating privacy?
Privacy regulation in the United States has been marked by a slow and often fragmented journey. To date, federal privacy legislation only addresses specific sectors such as healthcare and children's online activities. States have been looking to fill this void by creating their own legislation, like California's CPRA.
In the absence of federal privacy legislation, many companies have built their privacy programs to comply with the European Union's (EU) General Data Protection Regulation (GDPR).
We find ourselves in a similar position with AI regulation today.
The EU has taken a more proactive stance than the U.S. on AI regulation. In April 2021, the EU introduced the Artificial Intelligence Act, which aims to create a harmonized regulatory framework for AI across member states. The Act categorizes AI systems into four risk levels, each with its own set of requirements and restrictions. High-risk AI applications, such as facial recognition in public spaces, must comply with strict regulations, including conformity assessments and transparency obligations.
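For readers who think in code, here's a toy sketch of the Act's four-tier structure as a lookup table. The tier names follow the proposed Act; the obligation summaries are simplified paraphrases for illustration, not legal text.

```python
# Toy sketch of the EU AI Act's proposed four-tier risk model as a lookup
# table. Obligation summaries are simplified paraphrases, not legal text.
EU_AI_ACT_RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g., social scoring by governments).",
    "high": "Conformity assessments, risk management, and transparency "
            "obligations (e.g., facial recognition in public spaces).",
    "limited": "Transparency duties, such as disclosing that users are "
               "interacting with an AI system (e.g., chatbots).",
    "minimal": "No new obligations (e.g., spam filters, AI in video games).",
}

def obligations_for(tier: str) -> str:
    """Return the summarized obligations for a given risk tier."""
    return EU_AI_ACT_RISK_TIERS[tier.lower()]

print(obligations_for("high"))
```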
While the EU AI Act is on track to become formal law as soon as the end of this year, the United States is on the same slow path with AI legislation that it has taken with privacy legislation. The U.S. has signaled an interest in regulating AI via the SAFE Innovation Framework, the series of AI Insight Forums, and various White House efforts. But although there's broad agreement across the public and private sectors that AI should be regulated, the U.S. lacks a clear direction on how to balance innovation with accountability and consumer protection.
Similar to their approach with privacy compliance, U.S.-based AI companies are likely to focus on compliance with the EU AI Act once passed and then account for any U.S. specific nuances as they emerge.
So how are the U.S.-based AI companies positioned to meet the proposed EU AI Act requirements?
Stanford researchers evaluated foundation model providers like OpenAI and Google against the proposed EU AI Act's requirements and found significant compliance gaps.
From a risk management standpoint, only a small number of model providers disclosed the risk mitigations they implement and the efficacy of these mitigations. No model providers met the proposed requirement to disclose "non-mitigated risks with an explanation on the reason why they cannot be mitigated."
ChatGPT's multimodal release is a good example of risk management in action. OpenAI helpfully disclosed that:
1) they collaborated with the accessibility app Be My Eyes to ensure responsible image usage that assists people's daily lives without overstepping privacy boundaries;
2) they excluded features like chat transcription; and
3) they conducted tests to identify potential harms in high-risk domains and implemented technical safeguards around image analysis.
They don't, however, take the next step of disclosing more details on the technical safeguards in place or where non-mitigated risks exist.
AI News to Know
Securing High-Risk AI Systems: The European Commission released a report detailing four guiding principles for addressing the security of high-risk AI systems (a toy checklist sketch follows the list):
1) It's imperative to secure the "AI system" and not simply the AI model
2) A comprehensive cyber risk assessment of the system and its components is critical
3) Cybersecurity of AI systems should rely on a combination of existing controls for software systems and AI-specific controls on individual models
4) It's going to become increasingly complex to safeguard more advanced AI models, and certain AI technology may not be ready for use in high-risk AI systems.
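As referenced above, here's a hypothetical checklist sketch of principle 3: a risk assessment that tracks conventional software-security controls alongside AI-specific ones. The control names are illustrative examples, not taken from the Commission's report.

```python
# Hypothetical sketch of principle 3: combine existing software-security
# controls with AI-specific controls. Control names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AISystemRiskAssessment:
    system_name: str
    # Existing controls that apply to any software system
    software_controls: dict[str, bool] = field(default_factory=lambda: {
        "access_control": False,
        "dependency_scanning": False,
        "logging_and_monitoring": False,
    })
    # Controls specific to the AI model itself
    ai_controls: dict[str, bool] = field(default_factory=lambda: {
        "training_data_provenance": False,
        "adversarial_robustness_testing": False,
        "prompt_injection_filtering": False,
    })

    def gaps(self) -> list[str]:
        """List every control, software or AI-specific, not yet in place."""
        all_controls = {**self.software_controls, **self.ai_controls}
        return [name for name, done in all_controls.items() if not done]

assessment = AISystemRiskAssessment("fraud-detection-model")
assessment.software_controls["access_control"] = True
print(assessment.gaps())  # everything still missing except access_control
```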
AI for Spooks?: The CIA is building its own AI tool to help analysts better sift through open-source intelligence. The CIA hasn't specified which foundation model it's using for its chatbot (hopefully its own!), but the tool is expected to be available to all 18 intelligence agencies. Hopefully we learn more about the security and privacy guardrails built into it for U.S. citizens.
Meta's Approach to Responsible AI: At yesterday's Connect 2023 conference, Meta announced several new generative AI features and shared how it is working to build AI responsibly via safety classifiers, human feedback, fine-tuning, and pre-training.
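For the curious, here's a deliberately minimal sketch of the "safety classifier" pattern Meta mentions: generated text passes through a classifier before it reaches the user. The keyword check below is a toy stand-in; real safety classifiers are learned models, and nothing here reflects Meta's actual implementation.

```python
# Toy sketch of a safety-classifier gate: model output is checked before
# it is shown to the user. The keyword heuristic stands in for what would
# be a learned classifier in a production system.
UNSAFE_MARKERS = {"build a bomb", "credit card dump"}  # illustrative only

def safety_classifier(text: str) -> bool:
    """Return True if the text looks unsafe (toy heuristic)."""
    lowered = text.lower()
    return any(marker in lowered for marker in UNSAFE_MARKERS)

def guarded_reply(model_output: str) -> str:
    """Suppress output the classifier flags; otherwise pass it through."""
    if safety_classifier(model_output):
        return "Sorry, I can't help with that."
    return model_output

print(guarded_reply("Here's how to reset your password."))
```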
AI on the Market
Microsoft and OpenAI: Microsoft, which has integrated OpenAI's capabilities into its productivity and search products, is exploring ways to reduce its reliance on OpenAI as costs increase. As we discussed in the second issue of Shadow AI, long-tail models that are smaller, more cost-effective, and designed for specific business use cases will continue to emerge, and Microsoft appears to be testing the waters. This is happening as OpenAI seeks to raise additional funds at a $90B valuation.
Amazon and Anthropic: Earlier this week, Amazon announced it's investing up to $4B in Anthropic, the company behind Claude, which is built on AWS. Who will be investing in Cohere? Oracle?
AI Prompt of the Week
We're keeping it light this week. I did a "bake-off" between ChatGPT and Bard for the best dad joke about cybersecurity, and Bard was the winner.
Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.
Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.
If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!
Until next Thursday, humans.
-Andrew Heighington