🦾 Shadow AI - 7 September 2023

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Thanks for all the feedback and support this past week on the launch of the Shadow AI Newsletter.

It’s been exciting to see subscribers from leading financial institutions, VC funds, the U.S. Government, and cutting-edge technology companies.

Although this is a “newsletter,” the goal is for it to be much more than a news roundup.

AI is moving at breathtaking speed. Shadow AI aims to provide current and future security and IT leaders with curated content to help keep pace with the speed of change.

I’d love to hear whether these first two issues are hitting the mark and what else you’d like to see.

Let’s dive into this week’s issue!

Demystifying AI: AI vs. Traditional Systems

While AI offers exciting possibilities, its complexity, data dependency, and unique attack vectors require tailored security measures. Traditional software, with its rule-based nature, presents a more predictable security landscape.

There are 6 key differences that need to be considered when weighing the security implications of AI systems vs traditional systems:

1. Complexity

AI Systems

  • AI systems are like digital brains; they learn and adapt based on data.

  • Their complexity arises from the need to analyze vast amounts of data and make decisions, making them challenging to secure.

Traditional Systems

  • Traditional software follows predefined rules and logic.

  • Generally, it's easier to predict how traditional software will behave, making security measures more straightforward (a short sketch contrasting the two follows this section).
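
To make the contrast concrete, here is a rough sketch of the same fraud check written two ways. Everything in it (the threshold, the data, the "training" step) is invented for illustration and isn't taken from any specific system:

```python
# Illustrative sketch (not from the newsletter): the same fraud check written
# as a predefined rule versus a behavior learned from data. All numbers,
# thresholds, and the "training" step are made up for the example.
import numpy as np

# Traditional system: the logic is a rule you can read and audit in the code.
def rule_based_flag(amount: float) -> bool:
    return amount > 10_000

# "AI-style" system: the decision boundary comes from historical data.
rng = np.random.default_rng(1)
legit = rng.normal(3_000, 1_000, 500)    # past legitimate transaction amounts
fraud = rng.normal(12_000, 2_000, 50)    # past fraudulent transaction amounts

# Stand-in for model training: pick the midpoint between the class means.
learned_threshold = (legit.mean() + fraud.mean()) / 2

def learned_flag(amount: float) -> bool:
    return amount > learned_threshold    # behavior depends on the training data

# An amount the fixed rule allows but the learned model flags.
print(rule_based_flag(8_500), learned_flag(8_500))
```

The point of the contrast: the rule's behavior is visible in the code, while the learned behavior is only as trustworthy as the data it was trained on.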

2. Data Dependency

AI Systems

  • Depend heavily on data for training and decision-making.

  • Data quality is critical; poor data can lead to inaccurate AI outcomes and potential security vulnerabilities (a basic data-quality screen is sketched after this section).

Traditional Systems

  • Data is used primarily for input and output; the software doesn’t adapt or learn from it.

  • Software vulnerabilities often result from coding errors rather than data issues.
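
As a small, hedged illustration of why data quality is a security concern, here is a sketch of a pre-training sanity screen. The column name, rules, and thresholds are assumptions made up for the example:

```python
# Illustrative sketch (not from the newsletter): a basic sanity screen on a
# training batch, since poisoned or low-quality rows directly shape what the
# model learns. Column name, rules, and thresholds are invented for the example.
import pandas as pd

def screen_training_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that fail simple quality rules before they reach training."""
    bad = (
        df["amount"].isna()            # missing values
        | (df["amount"] < 0)           # impossible values
        | (df["amount"] > 1_000_000)   # extreme outliers worth a human look
    )
    print(f"Rejected {bad.sum()} of {len(df)} rows")
    return df[~bad]

batch = pd.DataFrame({"amount": [120.0, -5.0, None, 250.0, 9_999_999.0]})
clean = screen_training_batch(batch)   # keeps only the 120.0 and 250.0 rows
```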

3. Attack Vectors

AI Systems

  • Vulnerable to data poisoning attacks where malicious data can corrupt the AI's understanding.

  • Adversarial attacks manipulate input data to deceive AI systems (see the sketch after this section).

Traditional Systems

  • Vulnerabilities are typically exploited through code-based attacks.

  • Security focuses on code-level defenses.
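
Here is a minimal sketch of what an adversarial manipulation can look like, using a toy linear classifier with made-up weights. Real attacks such as FGSM follow the gradient of the model's loss, but the core idea is the same: a small, targeted nudge to the input flips the decision.

```python
# Illustrative sketch (not from the newsletter): a tiny adversarial example
# against a toy linear classifier with made-up weights and inputs.
import numpy as np

w = np.array([0.9, -0.5, 0.3])   # toy model: score = w . x + b
b = 0.1

def predict(x: np.ndarray) -> int:
    return 1 if w @ x + b > 0 else -1

x = np.array([0.2, 0.4, 0.1])    # benign input, classified as +1
epsilon = 0.2                    # small perturbation budget

# For a linear model the gradient of the score w.r.t. x is just w, so a small
# step against sign(w) pushes the score across the decision boundary.
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))   # 1 -> -1: barely changed input, flipped decision
```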

4. Explainability

AI Systems

  • Often viewed as "black boxes" due to their complexity.

  • Understanding why an AI makes a particular decision can be challenging, potentially hiding security risks.

Traditional Systems

  • Easier to comprehend the software's behavior and identify security issues through code review.

5. Continuous Learning

AI Systems

  • Can adapt and change their behavior over time.

  • Security must address ongoing learning, as malicious actors could exploit these changes (a simple drift check is sketched after this section).

Traditional Systems

  • By comparison, traditional software remains more static even with regular product updates, making security assessments more stable.
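
One practical guardrail for continuously learning systems is to watch for drift between the data the model was validated on and what it sees in production. The sketch below uses a crude mean-shift test; the statistic, threshold, and data are invented for illustration, and real programs would pair richer tests with human review.

```python
# Illustrative sketch (not from the newsletter): a crude drift alarm comparing
# live inputs against the data the model was validated on.
import numpy as np

def mean_shift_alert(baseline: np.ndarray, recent: np.ndarray, threshold: float = 3.0) -> bool:
    """True if the recent batch's mean sits more than `threshold` standard
    errors from the baseline mean (a rough drift signal)."""
    se = baseline.std(ddof=1) / np.sqrt(len(recent))
    return abs(recent.mean() - baseline.mean()) > threshold * se

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # feature values seen at validation time
recent = rng.normal(0.6, 1.0, 200)       # drifted (or poisoned) live traffic

if mean_shift_alert(baseline, recent):
    print("Drift detected: pause adaptation and re-validate the model")
```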

6. Ethical Concerns

AI Systems

  • AI decisions can raise ethical concerns, leading to privacy and bias issues.

  • Security encompasses not only technical aspects but also ethical considerations (a basic fairness check is sketched after this section).

Traditional Systems

  • Ethical concerns are typically related to how data is handled rather than software behavior itself.
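
Parts of that ethical review can be operationalized. The sketch below checks a stream of AI decisions for large outcome gaps across groups; the data, group labels, and threshold are made up, and a gap should trigger investigation rather than serve as a verdict.

```python
# Illustrative sketch (not from the newsletter): checking an AI decision
# stream for large outcome gaps across groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()   # approval rate per group
gap = rates.max() - rates.min()
print(rates.to_dict(), f"gap={gap:.2f}")

if gap > 0.2:   # illustrative review threshold, not a regulatory standard
    print("Flag for review: approval rates differ substantially across groups")
```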

The Bottom Line: When designing AI security strategies:

1) avoid simply bolting AI security onto your traditional security strategy

2) engage key stakeholders from the Chief Data Office, Privacy Office, Product and Engineering, General Counsel, and business units from the start

3) ensure the AI security strategy accounts for the 6 key elements above.

AI News and Notes

  • AI and Secure by Design: Ross Haleliuk has a great piece on how the industry is moving so fast to build AI systems that we’re repeating the same mistakes we made when building the Internet, the cloud, and even the construction industry.

So what? Privacy and security will be playing catch-up if we don’t move quickly to truly build security and privacy into AI from the start through robust threat modeling, secure- and private-by-default configurations, and security as a core business requirement.

  • Short vs. Long-Term Societal AI Risk: There is an ongoing debate within the AI ethics community about whether to prioritize mitigating short-term risks from AI (like bias and discrimination) or long-term risks (like artificial general intelligence posing an existential threat).

So what? We need to balance managing both the short- and long-term societal risks of AI. The United States Government needs to lead the way by building strong public, private, and international AI partnerships. Other countries are moving fast. Today, the UK Government announced a Frontier AI Taskforce with experts from academia, national security, and technical organizations and an aggressive mandate.

Companies building LLMs or leveraging third-party LLMs increasingly need to adopt AI risk management best practices, like those outlined in NIST’s AI Risk Management Framework, and integrate them into their enterprise risk management processes.

AI on the Market

AI Model Evolution and Implications: Lightspeed argues that, despite the relative concentration of AI model usage today, a wide variety of AI models will emerge as enterprises and developers gradually select models specialized for their use cases. They classify the model landscape into 3 primary categories:

  1. Big Brain Models (e.g., OpenAI, Anthropic, Cohere): Complex, expensive, and expansive models that are the initial entry point for developers exploring what AI can do for their applications

  2. Challenger Models (e.g., Llama 2, Falcon): Rapidly advancing, high-capability models that, when fine-tuned, can challenge the Big Brain models on specific tasks

  3. Long Tail Models: Expert models built for a specific use case that are inexpensive and flexible

So what? As companies pursue a mix of Big Brain Models and more specialized Long Tail Models, security and IT teams will need to have tools and guardrails in place to handle compliance with regulatory frameworks, LLM security, and observability and monitoring across their AI stacks.

AI Prompt of the Week

I like how the output covers requirements for user privacy, fairness, content safety, and overall system integrity. I also like how it emphasizes that this is not a one-time process, but one that requires continuously assessing and updating requirements as new threats and vulnerabilities emerge and as regulatory and ethical standards evolve.

However, I was surprised that data encryption and AI red teaming did not make the top 5.

Have a favorite, funny, or doomsday prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington