🦾 Shadow AI - 27 June 2024
Arming Security and IT Leaders for the Future
Forwarded this newsletter? Sign up for Shadow AI here.
Hello,
Shadow AI has been on a streak of 25 straight weeks with a newsletter, and 41 of the last 42 weeks since it launched last August. The only time I hit pause was the last week of December, but I'm going to do so again over the 4th of July holiday.
Shadow AI will be back in your inbox the following week with a readout from AWS Summit in NYC. If you'll be there, I'd love to meet up.
This week, I cover:
⚖️ Q2 AI Regulation Update
🚨 AI and Zero-Day Vulnerabilities
🛡️ AI to Bolster Cybersecurity
🚨 OpenAI Withdrawing From China
☁️ State of the Cloud in 2024
💼 5 Cool AI Security Jobs of the Week
And, thanks to Gaurav Kulkarni, Co-Founder and CEO at AuditCue, for getting me up to speed on India's Digital Personal Data Protection Act (DPDPA).
Let's dive in!
Demystifying AI - Q2 AI Regulation Update
I received good feedback on last week's Q2 AI threat update, so I thought it'd also be helpful to share an AI regulation update as we wrap up the quarter.
Although U.S. federal AI legislation is moving at a snail's pace (not surprising, given it's an election year), there have been two notable state-level developments in the United States.
Colorado's Consumer Protections for Artificial Intelligence Act
In May, Colorado became the first state to pass AI legislation. Starting February 1, 2026, it requires developers of high-risk AI systems to use reasonable care to protect consumers from risks of algorithmic discrimination in the system. High-risk AI systems are defined as "any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision," though the legislation does include some important exceptions. Colorado's AI Act also requires the deployer of a high-risk system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Reasonable care includes having a risk assessment policy and program, completing an impact assessment, and publicly disclosing the high-risk AI system.
California's AI Safety and Innovation Bill (SB 1047)
SB 1047 is a proposed bill in California that would make large frontier model providers like OpenAI, Anthropic, and Meta liable for any "critical harms" caused by their AI systems. It also effectively requires large frontier model providers to build a kill switch into their AI models for use in an emergency. The bill poses a real risk to open-source AI efforts at companies like Meta, Mistral, and open-source AI startups, because they would be responsible for bad actors who use their AI models. The bill passed the California Senate in May and recently cleared the Assembly Privacy Committee 8-0. It is fast approaching an August vote that, if successful, would send the bill to Governor Newsom's desk for a decision on whether to sign.
Globally, there are a number of key regulations at various stages of development that bear watching to inform your security compliance programs:
| Regulation | Status | Key Q2 Updates |
| --- | --- | --- |
| EU AI Act | Finalization and publication | Final text approved by the European Parliament. The Act is anticipated to be published in the EU Official Journal on July 12th and will enter into force 20 days later. Tiered requirements will then start to kick in, beginning with prohibitions on unacceptable-risk AI in 6 months. |
| American Privacy Rights Act | Draft | A revised draft bill was released that removed protections against AI bias. |
| Canada's AI and Data Act | Legislative review | Revised bill presented to Parliament; includes new provisions for AI impact assessments. |
| India's Digital Personal Data Protection Act (DPDPA) | Proposal | Umbrella legislation that establishes a new framework for processing the personal data of Indian citizens. AI systems capable of making decisions without human intervention are currently considered in scope for the Act. There are some key differences between DPDPA and GDPR that organizations will need to account for. |
| South Korea's AI Act | Proposed | The proposal focuses on several key areas, including classifying AI used in areas directly impacting human life and safety as high risk and creating a legal foundation for "AI ethical principles." |
AI-adjacent regulation is also impacting companies. On the heels of facing charges for violating the EU Digital Markets Act (DMA), Apple recently announced that it may not release its artificial intelligence features in the EU this year due to concerns that the DMA's interoperability requirements could force it to "compromise the integrity of our products in ways that risk user privacy and data security." The interoperability requirement is intended to make it easier for people to leave Big Tech platforms, but there are real challenges in maintaining certain security measures, such as end-to-end encryption, and significant regulatory uncertainty over how the DMA will be enforced. As a result, we're starting to see the DMA affect the release of AI features from Big Tech companies.
What's the bottom line?
We're still in the early stages of AI regulation, but we're headed down a path similar to cybersecurity regulation: global harmonization is lacking, and businesses will need to contend with a significant administrative burden.
AI News to Know
AI and Zero-Day Vulnerabilities: Researchers from the University of Illinois Urbana-Champaign recently published a paper detailing how teams of LLM agents are capable of exploiting real-world zero-day vulnerabilities. They tested a system of agents against 15 real-world, web-based, open-source vulnerabilities and found it could exploit them with an overall success rate of 18%. At an average cost of $4.39 per run, that works out to roughly $24.39 per successful exploit. The research has both positive and negative implications. On the positive side, penetration testers could potentially conduct more frequent and cost-effective testing. On the negative side, even though we have not yet seen attackers broadly leveraging LLMs for vulnerability exploitation, LLM agents exploiting zero-day vulnerabilities may well be on the horizon.
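The per-exploit figure follows directly from the paper's numbers: if only 18% of runs succeed, an attacker needs 1/0.18 attempts per success on average, so the expected spend per working exploit is the per-run cost divided by the success rate. A quick sanity check in Python:

```python
# Expected cost per successful exploit, using the paper's reported numbers.
cost_per_run = 4.39    # average LLM cost per exploitation attempt (USD)
success_rate = 0.18    # fraction of runs yielding a working exploit

# On average, 1 / success_rate attempts are needed per success,
# so the expected spend per success is cost_per_run / success_rate.
cost_per_exploit = cost_per_run / success_rate
print(f"~${cost_per_exploit:.2f} per successful exploit")  # ~$24.39
```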
The bad news first: AI/LLM agents are exploiting zero days, showing a 4.5x improvement from prior agent performance.
The (ok not really) good news: These agents still perform poorly. I am very concerned about this (see my book) but much more concerned about the immediate danger… x.com/i/web/status/1…
Nicole Perlroth (@nicoleperlroth), 5:30 PM • Jun 26, 2024
AI to Bolster Cybersecurity: Google Cloud CISO Phil Venables lays out three promising AI use cases for cybersecurity in June's edition of Google's Cloud CISO Perspectives:
1) AI for malware analysis: Google recently tested the malware analysis effectiveness of Gemini 1.5 Pro and found it was able to analyze malware in 30 to 40 seconds and generate summarized reports. While Google says it was "notably accurate," the exact accuracy level was not revealed.
2) AI to boost SecOps teams: Google highlights how customers like "Pfizer and Fiserv are using natural language queries with Gemini in Security Operations to help new team members onboard faster, enable analysts to find answers more quickly, and improve efficiency of their security operations."
3) Identifying and fixing vulnerabilities: Google successfully instructed AI foundation models to write project-specific code that could improve fuzzing coverage to identify more vulnerabilities. They then built an automated pipeline to analyze vulnerabilities, generate patches, and test the fixes before selecting the best ones for human review.
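To make the shape of that pipeline concrete, here is a minimal, hypothetical sketch of a generate-patch-then-verify loop. The function bodies are stubs of my own invention, not Google's actual tooling; a real pipeline would call a foundation model for patch generation and run builds, fuzzers, and tests in a sandbox.

```python
# Hypothetical sketch of an LLM-assisted find/patch/verify loop.
# The stubs below stand in for real components (model calls, sandboxed
# builds, fuzzers); they are illustrative, not Google's pipeline.
from dataclasses import dataclass

@dataclass
class PatchCandidate:
    vuln_id: str
    diff: str          # proposed source change, unified diff format
    verified: bool = False

def generate_patches(vuln_id: str, crash_report: str, n: int = 3) -> list[PatchCandidate]:
    # Stub: prompt a foundation model with the crash report and the
    # surrounding source, asking for n candidate fixes.
    return [PatchCandidate(vuln_id, f"<candidate diff {i}>") for i in range(n)]

def verify(patch: PatchCandidate) -> bool:
    # Stub: apply the diff in a sandbox, rebuild, replay the original
    # crashing input, and run the project's tests. Return True only if
    # the crash is gone and nothing regresses.
    return False

def triage(vulns: dict[str, str]) -> list[PatchCandidate]:
    """Collect machine-verified candidates and queue them for human review."""
    for_review = []
    for vuln_id, crash_report in vulns.items():
        for candidate in generate_patches(vuln_id, crash_report):
            candidate.verified = verify(candidate)
            if candidate.verified:
                for_review.append(candidate)
                break  # keep the first candidate that passes all checks
    return for_review

if __name__ == "__main__":
    queue = triage({"CVE-XXXX-0001": "heap-buffer-overflow in parse_header()"})
    print(f"{len(queue)} patch(es) ready for human review")
```

The key design point is that every machine-generated patch is machine-tested before a human ever sees it, which keeps the review queue small.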
AI on the Market
OpenAI Withdrawing from China: OpenAI is planning to restrict access to its API platform in China starting July 9th. Chinese companies like Baidu and Alibaba are swarming to fill the void, but perhaps this is a carrot to gain more business from the United States Government? Appointing Paul Nakasone, a well-respected retired leader of the NSA and CYBERCOM, to its Board and then restricting access in China are two big moves by OpenAI in the past two weeks.
State of the Cloud in 2024: Bessemer Venture Partners released its annual State of the Cloud report, which includes 5 key AI trends. For IT security professionals, reports like these are valuable for seeing where the market may be headed, including vertical AI companies eventually taking market share from legacy SaaS products.
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." (Roy Amara)
💼 5 Cool AI Security Jobs of the Week 💼
Sr. Legal Program Manager, Privacy, Cybersecurity, and AI @ OpenTable to oversee and manage legal programs related to privacy, cyber, and AI at KAYAK and OpenTable | Remote | $130k - $160k | 5+ yrs exp.
Security Engineer @ Quora to own and improve the security of cloud infrastructure and Quora services | Remote | $147k-$275k | 2+ yrs exp.
Sr Lead Cybersecurity Architect | AI-ML Security @ JPMorgan Chase to develop high-quality cybersecurity solutions for AI applications | Wilmington, DE or Plano, TX
Platform Security Engineer @ Glean to develop and maintain the security foundation of their AI platform | Palo Alto, CA | $185k-$280k | 5+ yrs exp.
AI and Automation Program Manager @ Okta to lead Customer First efforts in exploring, adopting, and implementing cutting-edge AI and automation solutions | Remote | $114k-$172k | 5+ yrs exp.
If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!
Until the Thursday after next, humans.
-Andrew Heighington