🦾 Shadow AI - 7 September 2023
Arming Security and IT Leaders for the Future
Forwarded this newsletter? Sign up for Shadow AI here.
Hello,
Thanks for all the feedback and support this past week on the launch of the Shadow AI Newsletter.
It’s been exciting to see subscribers from leading financial institutions, VC funds, the U.S. Government, and cutting-edge technology companies.
Although this is a “newsletter,” the goal is to be much more than a news roundup.
AI is moving at breathtaking speed. Shadow AI aims to provide current and future security and IT leaders with curated content to help them keep pace with that change.
I’d love to hear from you on whether these first two issues are hitting the mark and what else you’d like to see.
Let’s dive into this week’s issue!
Demystifying AI: AI vs. Traditional Systems
While AI offers exciting possibilities, its complexity, data dependency, and unique attack vectors require tailored security measures. Traditional software, with its rule-based nature, presents a more predictable security landscape.
There are six key differences to consider when weighing the security implications of AI systems versus traditional systems:

| Dimension | AI Systems | Traditional Systems |
| --- | --- | --- |
| 1. Complexity | Opaque models with millions of learned parameters; behavior is probabilistic and hard to audit | Rule-based, deterministic logic that can be reviewed line by line |
| 2. Data Dependency | Behavior is shaped by training data, so poisoned or biased data changes the system itself | Logic is fixed in code; data is an input, not a determinant of behavior |
| 3. Attack Vectors | Adversarial examples, data poisoning, model extraction, and prompt injection (see the sketch after the table) | Well-understood vectors such as injection flaws, memory corruption, and misconfiguration |
| 4. Explainability | Decisions often emerge from a black box and are hard to justify to auditors and regulators | Decisions trace directly to readable code paths |
| 5. Continuous Learning | Models are retrained and drift over time, so yesterday’s security review can go stale | Behavior changes only through deliberate code releases |
| 6. Ethical Concerns | Bias, fairness, and privacy risks flow from the model and its training data | Ethical exposure is largely limited to how the software is designed and used |
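That attack-vector difference is easier to see concretely. Below is a minimal, hypothetical sketch of an adversarial-example (FGSM-style) perturbation against a toy logistic-regression model. Real attacks target trained neural networks; the weights and data here are random stand-ins to show the mechanics: a tiny, targeted nudge to the input flips the model’s prediction while looking almost unchanged.

```python
# A minimal sketch of an adversarial-example attack (FGSM-style) against a
# toy linear classifier. All weights and inputs are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": logistic regression with fixed, random weights.
w = rng.normal(size=10)
b = 0.1

def predict(x):
    """Return P(class = 1) for input vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A legitimate input.
x = rng.normal(size=10)
score = predict(x)
print(f"clean score:        {score:.3f}")

# FGSM-style perturbation: for this linear model, the gradient of the score
# with respect to x points along w, so epsilon * sign(w) is the most
# score-changing step per feature. Step whichever way flips the prediction.
epsilon = 0.25  # perturbation budget: small enough to look "normal"
direction = -np.sign(w) if score >= 0.5 else np.sign(w)
x_adv = x + epsilon * direction

print(f"adversarial score:  {predict(x_adv):.3f}")
print(f"max feature change: {np.max(np.abs(x_adv - x)):.2f}")
```

There is no analogue of this in traditional software: no code changed, no exploit ran, yet the system’s output was subverted through its inputs alone.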
The Bottom Line: When designing AI security strategies:
1) Avoid simply bolting AI security onto your traditional security strategy.
2) Engage key stakeholders from the Chief Data Office, Privacy Office, Product and Engineering, General Counsel, and business units from the start.
3) Ensure the AI security strategy accounts for the six key elements above.
AI News and Notes
AI and Secure by Design: Ross Haleliuk has a great piece on how the industry is moving so fast to build AI systems that we’re repeating the same mistakes we made when building the Internet and the cloud, and even mistakes the construction industry made before us.
So what? Privacy and security will be playing catch-up if we don’t move quickly to truly build security and privacy into AI from the start through robust threat modeling, secure- and private-by-default configurations, and treating security as a core business requirement.
Short vs. Long-Term Societal AI Risk: There is an ongoing debate within the AI ethics community about whether to prioritize mitigating short-term risks from AI (like bias and discrimination) or long-term risks (like artificial general intelligence posing an existential threat).
So what? We need to balance managing both the short- and long-term societal risks of AI. The United States Government needs to lead the way by building strong public, private, and international AI partnerships. Other countries are moving fast: today, the UK Government announced a Frontier AI Taskforce with experts from academia, national security, and technical organizations and an aggressive mandate.
Companies building LLMs or leveraging third-party LLMs increasingly need to adopt AI risk management best practices, like those outlined in NIST’s AI Risk Management Framework, and integrate them into their enterprise risk management processes.
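To make that concrete, here is a minimal sketch of what mapping a single LLM risk to the RMF’s four core functions (Govern, Map, Measure, Manage) might look like inside an enterprise risk register. The class, field names, and example values are illustrative assumptions, not part of the NIST framework itself.

```python
# A minimal, hypothetical sketch of an AI risk-register entry mapped to the
# four core functions of NIST's AI Risk Management Framework. Field names
# and example values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One AI risk-register entry, aligned to the AI RMF's four functions."""
    system: str        # the AI system or LLM integration at issue
    risk: str          # plain-language risk statement
    govern: str        # policy and accountability owner (GOVERN)
    map_context: str   # where and how the risk arises (MAP)
    measure: str       # metric or test used to quantify it (MEASURE)
    manage: str        # mitigation and residual-risk decision (MANAGE)
    erm_category: str = "technology"  # hook into the enterprise risk taxonomy

register = [
    AIRiskEntry(
        system="customer-support chatbot (third-party LLM)",
        risk="prompt injection exfiltrates data from retrieved documents",
        govern="CISO-approved LLM usage policy; app team owns the risk",
        map_context="untrusted user input is concatenated with retrieval results",
        measure="red-team injection suite pass rate, tracked per release",
        manage="input/output filtering; block high-risk tool calls; accept residual",
    ),
]

for entry in register:
    print(f"[{entry.erm_category}] {entry.system}: {entry.risk}")
```

The point of the structure is the last field: an AI risk entry should land in the same enterprise risk taxonomy as every other risk, not in a separate AI silo.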
AI on the Market
AI Model Evolution and Implications: Lightspeed argues that, despite relative AI model usage concentration today, a wide variety of AI models will emerge as enterprises and developers gradually select models specialized for their use cases. They classify the model landscape into three primary categories:
Big Brain Models (e.g., OpenAI, Anthropic, Cohere): Complex, expensive, and expansive models that are the initial entry point for developers exploring what AI can do for their applications
Challenger Models (e.g., Llama 2, Falcon): Rapidly advancing, high-capability models that, when fine-tuned, can challenge the Big Brain Models on specific tasks
Long Tail Models: Expert models built for a specific use case that are inexpensive and flexible
So what? As companies pursue a mix of Big Brain Models and more specialized Long Tail Models, security and IT teams will need to have tools and guardrails in place to handle compliance with regulatory frameworks, LLM security, and observability and monitoring across their AI stacks.
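As a hypothetical illustration of what such guardrails can look like in code, here is a minimal sketch of a wrapper that screens input, logs calls for observability, and filters output. `call_model`, the patterns, and the policy responses are all assumptions; production systems would use purpose-built classifiers and redaction services rather than regexes.

```python
# A minimal sketch of an LLM guardrail wrapper: input policy checks, call
# logging for observability, and output filtering. `call_model` is a
# hypothetical stand-in for whatever SDK your chosen model exposes.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guardrail")

# Crude, illustrative patterns -- real deployments use dedicated classifiers.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions",
                      r"reveal the system prompt"]
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive SSN-shaped check

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a Big Brain or Long Tail model call."""
    return f"model response to: {prompt}"

def guarded_completion(prompt: str) -> str:
    # 1. Input guardrail: screen for obvious prompt-injection attempts.
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            log.warning("blocked prompt matching %r", pattern)
            return "Request blocked by policy."
    # 2. Observability: log usage before the (billable) model call.
    log.info("model call, prompt_chars=%d", len(prompt))
    output = call_model(prompt)
    # 3. Output guardrail: redact anything shaped like regulated data.
    return PII_PATTERN.sub("[REDACTED]", output)

print(guarded_completion("Summarize our AI security policy."))
```

The same wrapper is also a natural place to enforce compliance controls, such as routing prompts containing regulated data only to approved models.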
AI Prompt of the Week
I like how the output covers requirements for user privacy, fairness, content safety, and overall system integrity. I also like how it emphasizes that this is not a one-time process, but one that requires continuously reassessing and updating requirements as new threats and vulnerabilities emerge and as regulatory and ethical standards evolve.
However, I was surprised data encryption and AI red teaming did not make the top 5.
Have a favorite, funny, or doomsday prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.
Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.
If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!
Until next Thursday, humans.
-Andrew Heighington