
🦾 Shadow AI - 11 January 2024

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

In this week’s newsletter, we cover OpenAI’s much anticipated GPT store launch, 5 security GPTs that caught my attention, and:

📈 World Economic Forum’s Global Risk Report

📧 Real-World Attacks (Likely) Generated by AI

🤨 AI Trust Gap

💰 Pay Up for Training Data

👩🏽‍✈️ AI Prompt of the Week - SOC Copilot

Let’s dive in!

Demystifying AI - Custom GPTs

OpenAI’s custom GPT store opened yesterday to paid users. The floodgates have opened and I’ll be watching how OpenAI governs the GPT store and compliance with its usage policies.

Here are 5 security-inspired GPTs that caught my eye so far (for good or bad):

  1. PenTestGPT is a cybersecurity expert designed to aid in penetration testing. You can ask it to “Guide you on generating exploitations to a target system” and it will walk you through a step-by-step process for exploiting a vulnerable system.

  2. LLM Security Oracle is an expert in AI and ML cybersecurity analysis that can discuss the latest strategies to protect against LLM adversarial attacks or analyze vulnerabilities in current ML models.

  3. SOC Copilot serves as an in-depth security operations assistant that can look for indicators of compromise related to a threat, summarize details on a threat actor, or help remediate a vulnerability.

  4. Cyber Security Ninja is an incident assistant that can analyze network logs, assess code for vulnerabilities, and provide tips on responding to a security incident.

  5. CyberGPT is an all-purpose cybersecurity advisor that can do everything from providing tips on securing social media accounts (ahem, SEC!) to creating a plan for SOC 2 compliance and summarizing today’s top cybersecurity threats.

I’d love to hear which ones you’ve found most interesting or helpful in your day-to-day work.

AI News to Know

  • Global Risks Report: Reinforcing a common theme for Shadow AI, the World Economic Forum’s “Global Risks Report 2024” ranked AI-derived misinformation and disinformation, and its potential to influence the up to 3 billion people voting in elections this year, as the top global risk, ahead of climate change, war, and economic weakness. In the U.S., our current strategy appears to rely heavily on under-resourced state and local election officials.

  • Real-World Attacks (Likely) Generated by AI: Abnormal Security released a paper sharing 5 examples of phishing attacks likely enabled by AI. The biggest advancement so far appears to be how AI has eliminated (not surprisingly) the grammatical errors and typos that were often key indicators of phishing attacks. In fact, NSA Cybersecurity Director Rob Joyce reiterated this trend at a conference at Fordham this week, stating that attackers are “generating better English language outreach to their victims.” Does your security awareness training for 2024 include material on AI security and how phishing attacks are evolving? Have you assessed your technical email security controls and their effectiveness against the increasingly complex (and targeted) phishing attacks we’ll see?

    Uncovering AI-Generated Email Attacks: Real-World Examples from 2023 (Abnormal Security)

AI on the Market

  • AI Trust Gap: In a Workday survey of over 5,000 employees and C-suite executives, only one in five workers said their employer has guidelines for using AI, and only 52% of employees are in favor of using AI in their organization. The data reiterates the need for strong corporate AI governance programs, which are still nascent in many enterprises.

  • Pay Up for Training Data: AI companies are facing a wave of copyright lawsuits, and lawmakers took up the issue at a Senate hearing yesterday. Members of both parties agreed that AI companies should pay media outlets for their content. Setting the legal issues aside, in practical terms, licensing data would actually favor big firms like OpenAI, which have deep pockets, and put huge pressure on AI startups looking to challenge the incumbents.

AI Prompt of the Week

The output provides a good executive summary of the cyber threat actor associated with China’s Ministry of State Security: the regions and entities it targets, the tactics it uses, and its operational methods. Streamlining the threat intelligence briefing process for executives is an area where I can see AI making a positive impact.
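For readers who want to experiment programmatically rather than in the ChatGPT UI, here is a minimal sketch of how a SOC Copilot-style briefing prompt might be assembled as chat messages. The role text, briefing fields, and function name are my own assumptions for illustration, not the actual prompt behind this week’s output.

```python
# Hypothetical sketch: assembling a SOC-analyst briefing prompt as chat
# messages. The wording below is an assumption, not the actual SOC Copilot
# prompt referenced in the newsletter.

def build_threat_briefing_prompt(actor: str) -> list[dict]:
    """Build chat messages requesting an executive-level threat-actor summary."""
    system = (
        "You are a SOC analyst assistant. Produce concise, "
        "executive-level threat intelligence briefings."
    )
    user = (
        f"Summarize the threat actor '{actor}' for executives: "
        "target regions and entities, common tactics (mapped to "
        "MITRE ATT&CK where possible), and operational methods."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: a briefing request for a (real) actor name of your choosing.
messages = build_threat_briefing_prompt("APT41")
```

The resulting `messages` list can be passed to any chat-completion API; keeping the system role fixed and varying only the actor name makes the briefings consistent and easy to compare week over week.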

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington