
🦾 Shadow AI - 8 February 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Have you used AI to make any Super Bowl picks? I don’t bet on sports often, but I always bet a little on the Super Bowl. This year I’ll be using a site my buddy created called Pine-sports, which lets users build and share AI sports models and leverage advanced analytics. Tune in next week to see how I did.

This week in Shadow AI, I cover:

🤑 Deepfakes Making Deep Pockets

🗣️ The Importance of Linguistic Diversity in AI Safety

🦺 AI Safety Institute

🏆 AI Safety Leaderboard

🚀 Security for AI Market Map

🧑🏽‍💼 DHS Launches AI Corps

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Deepfakes Making Deep Pockets

There’s been a lot of commentary about the successful deepfake attack on the Hong Kong office of a multinational firm, but deepfakes targeting corporations have been lining scammers’ pockets for several years.

A Brief History of Profitable Deepfake Attacks

  • In 2020, a branch manager of a Japanese company in Hong Kong sent $35 million to scammers after they used AI to clone the voice of the parent company’s director in a phone call.

  • In 2021, fraudsters in China stole $75 million via fake tax invoices by fooling government-run facial recognition systems with deepfakes.

  • In May 2023, an unknown malicious actor targeted a company for financial gain using a combination of synthetic audio, video, and text messages. The actor, impersonating the voice of a company executive, reached an employee via a poor-quality audio call over WhatsApp. The actor then suggested a Teams meeting, where the screen appeared to show the executive in their office. The connection was very poor, so the actor recommended switching to text and proceeded to urge the target to wire money. The attempt failed because the target grew suspicious.

The most recent deepfake scam appears to be a more advanced attempt that follows the May 2023 playbook, combining synthetic audio, video, and text.

Anatomy of the $25M AI-Enabled Scam

There’s still a lot of incomplete information on the most recent deepfake scam, but we know this was a sophisticated and targeted attack that incorporated deepfake technology in combination with reconnaissance and social engineering. Here’s the anatomy of the attack:

  1. An employee, who works in the finance department in the Hong Kong office, received a message in January – from someone who appeared to be the company’s Britain-based Chief Financial Officer – asking for a “secret transaction” to be made.

  2. Although the employee was initially doubtful, the victim was fooled after being invited to a video conference call and seeing the company’s CFO and other employees in attendance. Every attendee except the victim was a deepfake based on publicly available footage.

  3. On the video call, the scammers asked the finance employee to do a self-introduction. Then, they immediately gave the employee wiring orders and abruptly ended the call. This limited the opportunity for interaction and the likelihood of getting caught.

  4. The scammers followed up with the victim via instant messaging, emails, and one-on-one video calls using deepfakes.

  5. The finance employee ultimately made 15 transfers totaling $25.6 million to five different bank accounts.

What Can Be Done to Prevent Attacks Like This?

Organizations should take a layered approach to identifying, defending against, and responding to deepfake threats. Here are some recommendations:

  1. Assess your identity verification and financial authorization procedures. Given the rapid improvements in generative AI and real-time rendering, testing for liveness during real-time communications is key, and mandatory real-time multi-factor authentication can ensure that anyone entering sensitive communication channels or authorizing sensitive activities proves their identity. (A minimal sketch of this pattern follows this list.)

  2. Invest in digital content authenticity and provenance to prevent media featuring executives from being reused or repurposed for disinformation.

  3. Refresh employee training programs so they cover potential uses of deepfakes, how deepfakes are deployed across audio, video, and text channels, and how they are coupled with phishing to meet an attacker’s objectives.

  4. Develop playbooks covering the deepfake scenarios most likely and most impactful to your company, and exercise them.

  5. Build a culture that encourages questioning and openness at all levels.

  6. Continually test and strengthen email security controls to protect against increasingly sophisticated social engineering tactics.
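Recommendation 1 lends itself to a concrete pattern: the channel a request arrives on should never authorize the request by itself. Here’s a minimal Python sketch of an out-of-band challenge gate for high-value transfers. The threshold, the approver directory, and the send_code helper are hypothetical placeholders for your own payment workflow and messaging provider, not a production control.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Approver:
    """Directory entry sourced from a system of record (e.g., HR),
    never from contact details supplied in the request itself."""
    name: str
    verified_phone: str

APPROVAL_THRESHOLD_USD = 10_000  # illustrative threshold

def send_code(phone: str, code: str) -> None:
    # Stub: in practice, deliver via your SMS/push provider.
    print(f"[out-of-band to {phone}] transfer code: {code}")

def authorize_transfer(amount_usd: float, approver_id: str,
                       directory: dict[str, Approver]) -> bool:
    """Gate large transfers behind a challenge sent over a second,
    pre-registered channel. Calls, video, and chat can all be
    synthesized, so none of them alone should move money."""
    if amount_usd < APPROVAL_THRESHOLD_USD:
        return True  # below threshold: normal controls apply
    approver = directory.get(approver_id)
    if approver is None:
        return False  # unknown approver: deny
    challenge = secrets.token_hex(4)
    send_code(approver.verified_phone, challenge)
    response = input(f"Enter the code sent to {approver.name}'s phone on file: ")
    return secrets.compare_digest(response.strip(), challenge)
```

The design point is channel separation: the one-time code travels over contact details held in a system of record, so even a flawless deepfake on a video call cannot, by itself, authorize a payment.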

AI News to Know - (AI Safety Week!)

  • The Importance of Linguistic Diversity in AI Safety: Researchers from Brown University dive into how existing AI safety mechanisms, particularly in GPT-4, fail to generalize to low-resource languages, leaving them susceptible to translation-based attacks. By translating unsafe prompts from English into a low-resource language like Zulu through Google Translate, attackers can raise their chances of bypassing GPT-4’s safety filters from <1% to 79%. The study highlights the importance of linguistic diversity in AI safety training to mitigate risks across all language domains and the need for more inclusive and robust multilingual safety measures. (A minimal sketch of one mitigation follows this list.)

  • AI Safety Institute: President Biden named Elizabeth Kelly, a top White House aide who was integral to the AI Executive Order (EO), as the director of the new US AI Safety Institute at the National Institute of Standards and Technology. Elham Tabassi was appointed Chief Technology Officer. A key priority for the two leaders will be securing adequate funding to meet their significant responsibilities under the EO, including developing AI red-teaming guidelines.

  • AI Safety Leaderboard: The authors of DecodingTrust released a new LLM Safety Leaderboard that aims to provide a unified evaluation of LLMs to help practitioners better understand potential risks across key factors like toxicity, stereotype and bias, adversarial robustness, ethics and fairness. Claude 2.0 currently has the championship belt.

    LLM Safety Leaderboard
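One pragmatic takeaway from the Brown study above is to screen an English pivot translation in addition to the raw prompt. Here’s a minimal Python sketch of that idea; translate_to_english and is_unsafe are hypothetical stubs standing in for whatever machine-translation and moderation services you actually use.

```python
def translate_to_english(text: str) -> str:
    # Stub: in practice, call your machine-translation service here.
    return text

def is_unsafe(text: str) -> bool:
    # Stub: in practice, call your content-moderation classifier here.
    return "how to build a weapon" in text.lower()

def moderate_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked.

    Screening only the raw prompt lets prompts translated into
    low-resource languages slip past classifiers trained mostly on
    English. Screening the English pivot as well closes that gap
    for translation-based attacks like those in the Brown study.
    """
    if is_unsafe(prompt):
        return True
    pivot = translate_to_english(prompt)
    return is_unsafe(pivot)
```

This doesn’t fix the underlying model, but it narrows the attack surface while safety training catches up across languages.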

AI on the Market

  • Security for AI Market Map: Menlo Ventures breaks down the emerging AI security stack that aims to assure enterprises AI can be adopted safely at scale. Menlo defines the opportunity across three categories (governance, observability, and security) and believes adoption will follow in that order. As you build your company’s AI security strategy, make sure it covers all three areas.

  • DHS Launches AI Corps: In an effort to attract private-sector talent to help DHS use AI safely and responsibly across its broad mission, DHS has launched an AI Corps program. Similar programs for other critical areas like cybersecurity have gotten off to slow starts, so it’ll be interesting to watch whether this one finds success. The jobs are remote and pay between $143,000 and $191,000.

    Credit: Matthew Ferraro, Sr. Counselor to Secretary of Homeland Security

💼 5 Cool AI Security Jobs of the Week 💼

Senior Software Security Engineer, AI @ Asana to co-own the security of Asana’s AI feature set | San Fran Hybrid | $202k-$316k | 6+ yrs exp.

IT Responsible AI Manager @ Nestle to ensure AI systems are developed and implemented in a responsible manner for the world’s largest nutrition, health, and wellness company | Arlington, VA | $120k-$172k | 5+ yrs exp.

Sr. Manager, Software Development, AI Security @ Amazon to lead its software engineering team in delivering secure GenAI based experiences | Seattle | $176k-$342k | 10+ yrs exp.

PhD Residency - AI and Cybersecurity @ SandboxAQ to solve global challenges with AI and Quantum | Remote | Masters or PhD program

Principal Software Security Engineer @ Anthropic to build reliable, interpretable, and steerable AI systems | San Fran Hybrid | $485k-$560k | 8+ yrs exp.

Reply directly to this email with any feedback. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington