• Shadow AI
  • Posts
  • 🦾 Shadow AI - 16 November 2023

🦾 Shadow AI - 16 November 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

I’ve been building several Custom GPTs this past week and this week we’ll preview how easy it was to create one that’s accelerating completion of security due diligence questionnaires.

There will be no newsletter next week due to the Thanksgiving holiday. I’m grateful for your readership and hope you get to enjoy the day with family and friends.

Let’s dive in!

Demystifying AI - Custom GPTs

I developed 4 Custom GPTs on OpenAI this past week:

  • Magical Planner: Your go-to guide for Disney trip planning! [Because we’re surprising our kids with a trip to Disney next month and I wanted a free and easy way to plan]

  • Coaches Corner: Your assistant for coaching youth sports teams. [Because I run a girls recreation basketball league and want to help youth sports coaches create fun and instructional practices]

  • Gluten Free Guide: I find gluten-free eateries for you. [Because my two daughters and I have celiac disease and many people need easier ways to find gluten free restaurants]

  • Security Companion: Accelerate completion of your third party security questionnaires. [Because the third party due diligence process is broken and security practitioners need an easier way to consistently answer customer questionnaires]

Let’s break down the one you’re probably most interested in, Security Companion.

  • I uploaded mocked up answers to every typical security question I receive based on SIG Lite and others standard questionnaires so the GPT can use this as its knowledge base.

  • I provided prompt instructions for how the GPT should help security teams swiftly and accurately answer queries.

  • I refined the capabilities I wanted to deploy in my GPT (code interpreter, web browsing, or DALL-E generation)

  • I deselected using conversation data in my GPT to improve OpenAI models

I asked away and here’s an example result:

One of the areas I’m focused on now is assessing how I can ensure the GPT does not fall for a prompt injection attack and allow an unauthorized individual to access my GPT instructions. There is no easy way to do this today. Walter Haydock has a great overview of the challenges and strategies for managing this risk in his Deploy Securely newsletter.

I continue to refine all of these Custom GPTs and welcome your feedback.

What custom security or IT GPTs could you benefit from?

AI News to Know

  • UnitedHealthcare’s AI Lawsuit: Fortune 10 company UnitedHealthcare is facing a lawsuit that its post-acute care algorithm, which it acquired by purchasing Navihealth in 2020, repeatedly and wrongfully refused to pay healthcare claims of senior patients. It’s AI model allegedly had a 90% error rate and conflicted with Medicare Advantage coverage rules while helping UnitedHealthcare save money.

  • Google’s Lawsuit against Bard Scammers: Google announce it’s suing fraudsters who created fake social media pages and ran ads that encouraged people to “download” Bard.” Downloading resulted in the users installing malware on their device that compromised their social media accounts. 

  • AI Election in Argentina: Shadow AI has examined the emerging impact AI is having on elections and the latest example of this is in Argentina where both presidential candidates are using AI to create promotional content and attack their opponent. Buckle up in the U.S. for 2024.

  • Best Practices for Securing LLMs: Rich Harang, Principal Security Architect at NVIDIA, wrote an excellent piece on best practices for securing LLM-enabled applications covering defending against prompt injection, information leaks, and LLM reliability. His guidance on establishing a trust boundaries is especially useful.

AI on the Market

  1. Responsible AI Commitments: The VC community engaged in a spirited debate this week as 40 technology investors, including General Atlantic, Insight Partners, and Softbank Investment Advisors, committed to 5 key Responsible AI focus areas with the startups they invest in:

    1) Secure organizational buy-in

    2) Foster trust through transparency

    3) Forecast AI risks and benefits

    4) Audit and test to ensure product safety regularly

    5) Make regular and ongoing improvements

    Detractors argued that signing onto this commitment would detract from innovation and slow down AI building.

AI Prompt of the Week

The BlackCat ransomware gang is leveraging a new tactic with MeridianLink, a public digital lending company by reporting them to the SEC for failing to disclose a material breach in 4 days. I like how the output suggests a number of steps including immediate legal and compliance review (the new SEC rules technically don’t take effect until December 15th), outreach to the SEC, and establishing more robust reporting mechanisms. If you’re a public company, ensure your incident response plan accounts for this type of scenario and exercise it with key stakeholders.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until November 30th, humans.

-Andrew Heighington