🦾 Shadow AI - 4 January 2024

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Hope your 2024 is off to a great start. Before we dive in, let’s recap how Shadow AI aims to add value to your inbox each Thursday:

  1. Signal: I spend hours each week researching, curating, and distilling the latest AI technology and trends to help current and future security and IT leaders like you stay ahead in safeguarding digital assets, enhancing employee productivity, and enabling business growth.

  2. Engagement: I’m doubling down on bringing you different perspectives from expert guest contributors so you receive tailored, unique content for our industry.

  3. Brevity: Easily digestible content so each week you walk away with something new in 5 minutes or less.

This week, we cover:

  • Governing Third Party Use of AI

  • “Mumbo Jumbo” Security Research

  • AI’s Threat to Democracy

  • AI and Child Sexual Abuse Material

  • AI Driven Threat Modeling

  • Venture in “AI” Security

  • AI Prompt of the Week - The Best Way to Inject JavaScript into a Website

Demystifying AI - Governing Third Party Use of AI

As organizations accelerate AI adoption across their operations in 2024, the need for robust Third-Party Risk Management (TPRM) programs becomes crucial. The capabilities and complexities of AI pose unique challenges that traditional risk management strategies may not adequately address. In the past year, I've had the opportunity to revamp TPRM programs to better assess AI risk. Here are three tips:

1. Assess AI Risk During the Vendor Selection Process

AI is not a monolith; its applications vary significantly across different vendors. The first step in effective AI risk management is understanding the specific AI technologies and methodologies employed by your potential vendors so you can identify the risks applicable to your business use case.

Questions to ask include: What type of AI does the vendor use (e.g., machine learning, natural language processing)? What is the AI's purpose in their product or service? How does the vendor train their AI models, and what data do they use?
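As a rough illustration (not a prescribed format), the answers to these questions could be captured in a structured checklist so gaps are visible and vendors can be compared side by side; the class and field names below are hypothetical.

```python
# Hypothetical sketch: record AI due-diligence answers as structured data
# so unanswered questions are visible during vendor selection.
from dataclasses import dataclass, field

@dataclass
class AIVendorAssessment:
    vendor: str
    ai_types: list[str] = field(default_factory=list)  # e.g., "machine learning", "natural language processing"
    ai_purpose: str = ""                                # what the AI does in the product or service
    training_approach: str = ""                         # how models are trained and on what data

    def open_questions(self) -> list[str]:
        """Questions still unanswered before selection can proceed."""
        missing = []
        if not self.ai_types:
            missing.append("What type of AI does the vendor use?")
        if not self.ai_purpose:
            missing.append("What is the AI's purpose in their product or service?")
        if not self.training_approach:
            missing.append("How does the vendor train their AI models, and what data do they use?")
        return missing

# Example usage
assessment = AIVendorAssessment(vendor="ExampleCo", ai_types=["natural language processing"])
print(assessment.open_questions())
```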

2. Create a Plan for Secure Usage and Train Employees

Once a vendor has been selected, develop a comprehensive plan that covers data governance, security configuration, and compliance with relevant regulations.

Establish strict controls over data access, encryption, and storage. Ensure that the vendor’s AI solutions comply with data protection laws such as GDPR or CCPA, depending on your geographic location and business scope.

Understand the enterprise security settings available and configure them appropriately to mitigate the risks of your use case. Ensure your AI adoption follows industry standards like the NIST AI Risk Management Framework or ISO 42001.

It’s equally important to train employees on AI’s limitations, risks, and secure use. As AI adoption increases, build a comprehensive training plan to help manage the risk.

3. Monitor Sub-processor Notifications of Existing Vendors for New AI Use Cases and Evaluate Based on Risk

I’ve found that ensuring your security team has visibility into sub-processor notifications is critical to staying on top of any new AI capabilities your vendors might deploy. Companies typically provide 30 days’ notice of new sub-processors, including AI ones, and customers often have the option to opt out. Work with your privacy and IT admin teams to get in the loop on these notifications. If you miss them, your company will likely continue using the vendor’s services, which typically constitutes agreement to the new sub-processor even without explicit approval.
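To make that timeline concrete, here is a loose sketch of how notifications could be tracked against their response deadlines; the 30-day window is an assumption that varies by vendor contract, and the names are illustrative.

```python
# Rough sketch: track sub-processor notifications so the opt-out window
# (assumed here to be 30 days; check each vendor's terms) isn't missed.
from dataclasses import dataclass
from datetime import date, timedelta

OPT_OUT_WINDOW_DAYS = 30  # assumption; actual notice periods vary by contract

@dataclass
class SubprocessorNotice:
    vendor: str
    subprocessor: str
    received: date
    involves_ai: bool = False

    @property
    def respond_by(self) -> date:
        return self.received + timedelta(days=OPT_OUT_WINDOW_DAYS)

def triage(notices: list[SubprocessorNotice], today: date) -> list[SubprocessorNotice]:
    """Return AI-related notices still inside the opt-out window, soonest deadline first."""
    open_ai_notices = [n for n in notices if n.involves_ai and today <= n.respond_by]
    return sorted(open_ai_notices, key=lambda n: n.respond_by)

# Example usage
notices = [SubprocessorNotice("ExampleSaaS", "ExampleLLMProvider", date(2024, 1, 2), involves_ai=True)]
for n in triage(notices, date(2024, 1, 4)):
    print(f"{n.vendor}: review {n.subprocessor} by {n.respond_by}")
```

Even a lightweight tracker like this makes it harder for an opt-out deadline to pass by default.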

AI News to Know

  • “Mumbo Jumbo” Security Research: Bug bounty platforms and the customers that use them need a plan for dealing with security researchers raising unsubstantiated vulnerabilities identified by LLMs. Daniel Stenberg at curl, an open source project providing a library and command-line tool for internet transfers, shared his engagement with a security researcher on HackerOne who reported a buffer overflow vulnerability that was completely bunk. Security teams risk being overwhelmed with AI-driven bug reports that amount to far more noise than signal, potentially slowing the triage and fixing of real vulnerabilities. As a start, perhaps bug bounty programs should automatically flag posts with “Certainly!” in them.

  • AI’s Threat to Democracy: One of the big risks the world faces this year is AI’s impact on elections across all facets of the electoral process - voter registration, campaigning, casting of votes, and reporting of results. Over 2 billion people are expected to vote in various elections, and Generative AI is almost certainly going to exacerbate election disinformation challenges. Companies like Google and OpenAI have announced plans to watermark AI-generated content. In the article, the U.S. Department of Homeland Security highlights many ways state and local election officials can help mitigate AI-enhanced threats, but I would like to hear more about how the Federal Government is going to support them via grants, technology, and training. The Election Security Rumor vs. Reality website and threat intelligence sharing aren’t enough.

  • AI and Child Sexual Abuse Material: The Stanford Internet Observatory released a report finding over 1,000 known instances of child sexual abuse material (CSAM) in a dataset used to train popular AI image models, including Stable Diffusion 1.5 and Midjourney. Organizations that have downloaded LAION-5B or its derivatives and trained models on its datasets should sanitize their models and retrain from scratch. The model build process should also leverage free detection tools like Microsoft’s PhotoDNA to prevent the collection of known CSAM.

  • AI Driven Threat Modeling: Sandesh Mysore Anand had a great piece in his Boring AppSec Newsletter last month on how “GenAI can supercharge your AppSec program.” One area I’m keen to explore is leveraging Generative AI for Threat Modeling and Secure Design Patterns. As Sandesh notes, “we can use APIs (e.g., GPT 4T) to read documents, interpret architecture diagrams, and come up with a list of recommendations in line with a company’s security standards.” We are entering a world where a developer can provide details of what they’re building and get immediate feedback on threats and secure design patterns to address them. Security could assist in validating the outputs rather than guiding the end-to-end process.
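As a minimal sketch of the workflow Sandesh describes, assuming the OpenAI Python client (openai>=1.0): the model name, prompt, and function below are illustrative placeholders rather than a vetted implementation, and the output would still be validated by the security team rather than accepted verbatim.

```python
# Illustrative sketch only: ask an LLM for threat-model feedback on a design doc.
# Assumes the OpenAI Python client (openai>=1.0); the model name and prompt are
# placeholders, and recommendations should be reviewed by security before use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_threat_model(design_doc: str, security_standards: str) -> str:
    """Return candidate threats and secure-design recommendations for review."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an application security engineer. Identify threats "
                        "and recommend mitigations that align with the provided standards."},
            {"role": "user",
             "content": f"Security standards:\n{security_standards}\n\nDesign document:\n{design_doc}"},
        ],
    )
    return response.choices[0].message.content

# Example usage (contents are illustrative)
print(draft_threat_model("Internet-facing API that stores user uploads in S3.",
                         "All data encrypted at rest; authn via SSO; least-privilege IAM."))
```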

AI on the Market

  • Venture in “AI” Security: Ross Haleliuk, author of one of my favorite newsletters, Venture in Security, covers a ton of ground on Security for AI and AI in Security in his latest newsletter. Ross compares the adoption of AI in security to the trajectory of cloud adoption where major platform providers have emerged. He speculates that the first generation of AI security companies might not be where the most value will be created given the current uncertainty in AI infrastructure that needs to be secured.


AI Prompt of the Week

This is an example of the fine line between prompts used for legitimate purposes and those used for illegal ones. The output identifies six ways to inject JavaScript into a website but cautions that doing so could be illegal if you don’t own the website.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington