Shadow AI - 14 December 2023
Arming Security And IT Leaders For The Future
Forwarded this newsletter? Sign up for Shadow AI here.
Hello,
I'm excited about this week's issue of Shadow AI. I think it's one of the best so far in uncovering real-world implications of AI for security and IT practitioners. Let me know where you'd like to see me take this newsletter in 2024!
In this weekās issue we cover:
Demystifying the EU AI Act
1600+ Hugging Face API Tokens Exposed
OpenAI Downtime
Anonymous Sudan to Blame?
AI Adoption in the US Government
Open Source Race
Mistral Fundraise
Dropbox AI's "Insecure by Design" feature
A Wildly Inaccurate AI Prompt of the Week
Let's dive in!
Demystifying AI - EU AI Act
By now, you've likely heard that the EU reached a provisional agreement on the Artificial Intelligence Act.
What do we know so far?
The details still need to be ironed out. EU government officials and lawmakers' aides met Tuesday to discuss critical paths to finalizing the law, such as its scope and how it will work. This included discussion of the legal basis for how governments can use AI in biometric surveillance, copyright issues posed by foundation models, and how to regulate major AI systems.
The rules ban certain AI applications that present a potential threat to citizens' rights and democracy, including:
biometric categorization systems that use sensitive characteristics (e.g., political, religious, or philosophical beliefs, sexual orientation, race), with an exception for the use of biometric identification systems in publicly accessible spaces for law enforcement;
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
emotion recognition in the workplace and educational institutions;
social scoring based on social behavior or personal characteristics;
AI systems that manipulate human behavior to circumvent their free will;
AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
The rules establish obligations for AI based on its potential risks and level of impact. High-risk AI systems will face certain obligations, including a mandatory fundamental rights impact assessment.
Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.
What's next?
Once there is a finalized text of the act, formal review, resolution, and publication still need to happen.
So what?
It's premature to gauge the impact the EU AI Act will have, since so many key details are still being determined.
French President Emmanuel Macron has already voiced concerns that the agreement will leave European tech companies behind competitors in the US, UK and China.
We're still at least six months away from a final EU AI Act, and that's assuming all the critical details that really matter in the law can get ironed out.
AI News to Know
1600+ Hugging Face API Tokens Exposed: Lasso Security identified 1,681 valid API tokens exposed across Hugging Face, a popular open-source AI repository, and GitHub. They were able to gain access to 723 organizations' accounts, including Meta, Hugging Face, Microsoft, Google, and VMware. They obtained full access, with both read and write permissions, to Meta (Llama 2), BigScience Workshop (Bloom), and EleutherAI (Pythia), all organizations whose AI models have been downloaded millions of times. Vulnerabilities like this can lead to supply chain risk, model and dataset theft, and poisoning of training data.
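Exposed tokens like these are often caught by simple pattern scanning of repositories. As a rough illustration (not Lasso's actual methodology): Hugging Face user access tokens share a recognizable "hf_" prefix, so a minimal scanner might look like this sketch, where the exact length bounds are an assumption:

```python
import re

# Minimal sketch of a secret scanner for Hugging Face-style tokens.
# Assumption: user access tokens start with "hf_" followed by a long
# alphanumeric body; the {30,40} length bounds here are illustrative.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,40}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings of `text` that look like Hugging Face tokens."""
    return HF_TOKEN_RE.findall(text)
```

Real secret scanners (truffleHog, Gitleaks, and similar tools) combine patterns like this with entropy checks and live validation against the provider's API before reporting a hit.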
OpenAI Downtime: Since November 7th, OpenAI has been facing increased availability issues. Its most recent incident was yesterday, when it experienced a major 31-minute outage. The incident included an outage of ChatGPT on the web and elevated error rates for individual users and some ChatGPT Enterprise users. Two weeks ago, Shadow AI looked at the importance of building business resiliency plans for your third-party AI systems, and this week's outage further emphasizes the need to prioritize this.
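One concrete piece of such a resiliency plan is a failover wrapper that retries a request and then falls back to a backup provider when the primary is down. A minimal sketch, where the provider callables are placeholders for whatever SDKs your stack actually uses:

```python
# Hypothetical failover wrapper for LLM completions. Each entry in
# `providers` is any callable taking a prompt and returning text,
# e.g. thin wrappers around your OpenAI and backup-provider SDKs.
def complete_with_failover(prompt, providers, attempts_per_provider=2):
    last_error = None
    for call in providers:
        for _ in range(attempts_per_provider):
            try:
                return call(prompt)  # first success wins
            except Exception as exc:  # provider outage, timeout, 5xx
                last_error = exc
    raise RuntimeError("all AI providers failed") from last_error
```

In production you would also add backoff between retries, distinguish retryable errors from auth failures, and alert when traffic is being served by the fallback.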
Anonymous Sudan to Blame?: Anonymous Sudan, a pro-Russian hacker group known for its DDoS attacks, claimed responsibility for a November 8th OpenAI outage and posted this week that they will not stop.
AI Adoption in the U.S. Government: A Government Accountability Office (GAO) report released this week found that federal agencies are already using AI in 282 applications today. The report also highlights more than 1,000 additional planned uses for AI in the federal government. The study found that federal agencies have taken initial steps to comply with AI requirements in executive orders and federal law, but more work remains to effectively address AI risk. I'm a little concerned for our kids that the Department of Education has only identified one AI use case!
GAO-24-105980: Artificial Intelligence - Agencies Have Begun Implementation but Need to Complete Key Requirements
AI on the Market
Open Source Race: ARK Invest has released some interesting research on the trajectory of open-source models vs. closed models. Open-source AI model security is going to be critical and, as we've seen in the Hugging Face incident, we still have important areas to focus on.
Mistral Fundraise: In that vein, a16z led a $415M Series A investment in Mistral AI, an open-source LLM creator based in France. The young company is now valued at $2B, after Mistral raised $113M in seed funding at a $260M valuation in June 2023, when it was four weeks old. a16z's investment comes after they took a stake in OpenAI's $300M round in April as they look to diversify across the generative AI value chain. How could the EU AI Act impact the French company?
Dropbox AI's "Insecure by Design" feature: Dropbox released new AI features that were automatically enabled by default for its "Dropbox AI alpha" offering. Dropbox AI is an OpenAI-powered chatbot for exploring file contents using an "Ask something about this file" feature. The release has caused concern among Dropbox's customer base. If you are a team or enterprise user of Dropbox, check your AI settings and make sure you've done a risk assessment on your file content being sent to OpenAI.
AI Prompt of the Week
The output is an important reminder of the current limitations of AI, because much of what is written is inaccurate. It states that Anonymous Sudan specifically targets Sudan, but in reality the group has focused its attacks on countries such as Sweden, Denmark, the United States, and Australia. Cloudflare has a much more helpful write-up on the threat actor.
Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.
Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.
If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!
Until next Thursday, humans.
-Andrew Heighington