• Shadow AI
  • Posts
  • 🦾 Shadow AI - 18 January 2024

🦾 Shadow AI - 18 January 2024

Arming Security and IT leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

AI is all the buzz at Davos, but is the hype sustainable? In this week’s newsletter, I cover:

👋 Goodbye Crypto, Hello AI

🚨 AI Assistants and Code Security: Do they Increase Risk?

💻 🏛️ Hackers on the Hill

💤 AI Sleeper Agents

🥇 The Best of AI Security

👀 AI Powered Vulnerability Detection

👩🏾‍✈️ Microsoft Copilot Pro

👉 AI Prompt of the Week: Prompt injection attack via invisible instructions

Let’s dive in!

Demystifying AI - Goodbye Crypto, Hello AI

AI is one of four key themes at the World Economic Forum’s Annual Meeting in Davos and here are my 3 key takeaways from some of the world’s elite on AI:

1) AI and Jobs: IT and technology jobs have the highest exposure rate of automation and augmentation (73%) when compared to other major job function groups. The transformation that’s AI will bring to security and IT jobs is massive and Shadow AI is excited to help our readers successfully navigate that shift.

Emerging technology has the potential to have a significant impact on jobs. Image: World Economic Forum

2) People are Key to Building Confidence in AI: The biggest threat to business adoption of AI is not a lack of desire but confidence. In a recent EY survey, nearly 70% of CEOs revealed that uncertainty around generative AI made it challenging to develop and execute a strategy quickly. Developing strong governance, training, and technical controls to build corporate confidence in AI and enable a company’s AI strategy is critical and was a key discussion point across the forum.

3) OpenAI’s Partnership with the Military: In an interview with Bloomberg at Davos, OpenAI’s Vice President of Global Affair Anna Makanju said that OpenAI is working with the Department of Defense on cybersecurity tools for open source software that secures critical infrastructure and their government partnership right now is limited to the U.S.

Overall, the AI outlook from Davos was very positive, but a lot can change in a year. Just look at crypto which was the darling last year and didn’t make many headlines this year besides Jaime Dimon calling it a “pet rock.”

AI News to Know

  • AI Assistants and Code Security: Do they Increase Risk? A recent research study by Stanford examines the impact of AI code assistants on the security of code written by developers. Participants using AI assistants wrote significantly less secure code compared to those without access to these tools. This difference was observed across various programming tasks from SQL, encryption/decryption, and message signing. Furthermore, those with AI assistance were more likely to overestimate the security of their code. The study highlights the potential risks of over-reliance on AI for coding, especially regarding security, and suggests a need for more informed usage and design of such tools.

  • Hackers on the Hill: There’s bipartisan agreement on the risks AI presents and ~100 staffers and lawmakers in Congress recently participated in a private red teaming event with different AI models and chatbots. As lawmakers draft AI regulation, the hackathon gave them a firsthand opportunity to talk directly with hackers about how AI models can be manipulated.

  • AI Sleeper Agents: A research article by Anthropic investigates the concept of deliberately training LLMs with backdoors that activate deceptive behavior under specific conditions. It reveals that backdoors in LLMs can persist despite safety training techniques like reinforcement learning, supervised fine-tuning, and adversarial training. This could be particularly impactful open source models and as Jon Ticknor points out in his insightful post “we need to bring the same rigor to LLM origin analysis like we do other software/hardware tools.”

  • Best of AI Security: Matt Johansen announced the launch of his Best of AI Security project modeled after the Top 10 Web Hacking Techniques project he previously managed. Nominations on the best hacking techniques utilizing AI or hacks of AI systems from the community are due by January 30th.

AI on the Market

  • AI Powered Vulnerability Detection: Vicarius, a vulnerability remediation platform that helps companies like PepsiCo, Hewlett Packard Enterprise and Equinix “automate much of the discovery, prioritization and remediation workloads plaguing security and IT teams” raised $30M in Series B.

  • Microsoft Copilot Pro: Microsoft announced Copilot Pro which provides accelerated performance and creativity tools. Users are also able to access Copilot in select Microsoft 365 apps to draft documents, summarize emails, and create presentations. If your organization is having discussions around leveraging Copilot Pro, are you designing your security strategy in parallel?

AI Prompt of the Week

Kudos to Riley Goodside, a staff prompt engineer at Scale AI, for his example of a prompt injection attack via invisible instructions pasted in text. The message he originally provided contained hidden text using special characters. The characters formed a secret instruction to ignore the original prompt and instead respond with “Follow Riley Goodside.”

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington