
🦾 Shadow AI - 12 October 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

MLB playoffs are in full swing. NBA and NHL seasons are starting to kick off. The WNBA finals are here. My New England Patriots are a dumpster fire.

And, I’m finally watching Ted Lasso.

Will professional coaches eventually start using AI chatbots in-game to help guide their decisions? Maybe we’ll unpack that in a future issue.

This week we cover:

🎭 AI Deepfake Trends and Strategies to Combat

⚖️ AI Executive Order?

👮🏽 Snap’s AI Chatbot - Potential Privacy Violation

🚩 Capture the Flag (CTF) Competition

🎧 AI Cybersecurity Podcast Launch

📉 AI Investment Falling Back to Earth?

💸 GitHub Copilot Losing Money

👉 AI Prompt of the Week: ChatGPT Stumped!

Let’s dive in!

Demystifying AI - AI Deepfakes

Last week, we discussed four theories about why we have not yet seen more AI-enabled cyber breaches. We are, however, experiencing an apparent uptick in AI deepfakes. Deepfakes are audio, images, or videos that appear to realistically depict a person’s voice and actions but are actually synthetic media generated with artificial intelligence. Deepfakes themselves are not new, but their applications are starting to evolve.

Celebrities such as Tom Hanks, Gayle King, and MrBeast all recently warned that deepfake videos of them were being used for sales or scams without their permission.

The integrity of elections is also being affected. Slovak parliamentary candidates recently had to contend with disinformation spread through AI deepfakes. Two days before the September 30th election, an audio recording was posted on Facebook purporting to capture the leader of Slovakia’s progressive liberal party discussing buying votes from the country’s marginalized minority population.

The recording was strategically released during a 48-hour moratorium ahead of the election, when media outlets and politicians are supposed to stay silent. As a result, the targeted party struggled to discredit it. The recording also exploited a loophole in Meta’s manipulated-media policy, which states that only faked videos (and not audio recordings) violate its terms.

So what can security and IT professionals do to combat a rise in AI deepfakes?

  1. Digital Watermarking of Content: Watermarking can be used to embed hidden information in digital media to verify its authenticity.

  2. Deepfake Detection: Tools are emerging that can analyze the facial features of people in videos or look for inconsistencies in the lighting or background to help rapidly identify deepfakes.

  3. Build a Playbook and Exercise it: What if you work for a Fortune 100 company and your CEO is deepfaked prior to an earnings release? How would you respond? This is just one of many potential scenarios that could emerge. Assess what the most likely and impactful deepfake scenarios are to your company, build a response plan, and test it.

  4. Train Executives: Ensure your security awareness program includes training your most likely targeted executives on your plans for detecting, defending against, and responding to deepfakes.
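
To make the first item concrete, here is a minimal sketch of one classic watermarking approach, least-significant-bit (LSB) embedding, over a toy raw 8-bit pixel buffer. The payload `b"acme"` and the function names are illustrative assumptions, not from any particular product; production watermarking schemes are far more robust and imperceptible.

```python
# Minimal LSB watermarking sketch: hide a short byte payload in the lowest
# bit of successive 8-bit pixel values, then read it back out to verify.

def embed_watermark(pixels, message: bytes):
    """Return a copy of pixels with message bits written into the LSBs."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear lowest bit, set payload bit
    return out

def extract_watermark(pixels, length: int) -> bytes:
    """Recover length bytes from the LSBs of the pixel buffer."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pixels = [120, 43, 200, 17] * 20            # toy 8-bit pixel buffer
marked = embed_watermark(pixels, b"acme")   # hypothetical watermark payload
assert extract_watermark(marked, 4) == b"acme"
```

Real-world efforts (e.g., content-provenance standards) pair embedding like this with cryptographic signing so the watermark itself can’t be trivially forged.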

Nationwide, we also need an urgent strategy to effectively combat deepfakes in preparation for the 2024 election in the United States.

AI News to Know

  • AI Executive Order?: Congressional Democrats wrote a letter to President Biden urging him to turn the non-binding safeguards of the AI Bill of Rights into policy through an upcoming AI executive order. They also advocate for applying the AI principles across the Federal Government wherever possible, which is not the case today.

  • Snap’s AI Chatbot - Potential Privacy Violation: The British Information Commissioner’s Office issued a provisional enforcement notice against Snap for failing to properly assess the privacy risks its “My AI” chatbot poses to users, particularly children aged 13 to 17. The “My AI” chatbot is built on OpenAI’s GPT offering and, although a final enforcement notice has not yet been issued, it’s a reminder that AI systems need to be built with data security and privacy requirements in mind from the outset.

  • Capture the Flag (CTF) Competition: AI Village has launched a CTF competition where competitors tackle 27 hand-crafted machine learning security challenges to find flags and gain hands-on experience with the concepts of AI security and safety. There’s $50,000 in total prizes!

  • AI Cybersecurity Podcast Launch: Ashish Rajan, Host of the Cloud Security Podcast, and Caleb Sima, Chair of the AI Safety Initiative for the Cloud Security Alliance, are launching a new podcast on the CISO viewpoint on AI and associated risks. I’m excited to subscribe as it seems to be a great complement to what we’re building at Shadow AI. The first episode hits Oct 18th.

AI on the Market

  • AI Investment Falling Back to Earth?: The number of Q3 2023 venture capital deals in the generative AI sector was 29% lower than in the preceding quarter, per PitchBook, as investors begin to more closely scrutinize the retention and monetization prospects of AI companies. Deal value was higher, but that was driven largely by Amazon’s investment in Anthropic, worth up to $4B.

  • GitHub Copilot Losing Money: GitHub Copilot is a prime example of the struggle to monetize new AI capabilities. Copilot exceeds $100M in ARR, but Microsoft is losing a minimum of $10 a month on each of its users, according to the Wall Street Journal. Copilot costs $10/user/month and, in some cases, power users have cost Microsoft up to $80 per month.
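
The Copilot arithmetic above sketches out as follows. Note the assumption that the reported $80 figure is Microsoft’s serving cost for heavy users (not the net loss), and that the $20 implied minimum cost is derived from the other two reported numbers rather than reported directly.

```python
# Back-of-envelope unit economics for a flat-rate AI subscription,
# using the figures reported above (all in $/user/month).
price = 10              # Copilot subscription price
min_loss = 10           # reported minimum loss per user
power_user_cost = 80    # assumed serving cost of the heaviest users

implied_min_cost = price + min_loss        # at least $20 to serve a typical user
power_user_loss = power_user_cost - price  # up to $70 lost on a power user

print(implied_min_cost, power_user_loss)   # 20 70
```

Flat per-seat pricing with usage-driven inference costs is exactly the mismatch that makes these losses scale with adoption.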

AI Prompt of the Week

ChatGPT apparently does not like it when you ask questions in the persona of a Board Director, because this is all it provides in response. I would have expected it to ask questions such as:

1) What is the plan to build or deploy AI responsibly?

2) What types of AI systems is the company using?

3) What are the estimated costs of implementing and not implementing such a system?

4) What is the company’s plan to design its AI systems securely from the start?

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington