🦾 Shadow AI - 25 April 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

It’s here! The NFL Draft (and your weekly dose of Shadow AI).

This week, I cover:

🤖 AI’s Looming Threat to Trust

🏛️ The Future of AI Innovation Act

⚖️ AI Board Governance

💰 Early Bets on Security Operations

📈 GenAI in the Cybersecurity Market

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - AI’s Looming Threat to Trust

We are currently living in a crisis of trust.

According to FinCEN, impersonation scams, in which fraudsters pose as trusted people or services for financial gain, are a $200 billion global problem.

The impact of impersonation scams is only going to get worse as generative AI accelerates these trust challenges. Attackers now have a virtually unlimited supply of fake personas and content factories at their fingertips, enabling them to churn out misinformation, social media narratives, deepfakes, phishing emails, and fraudulent documents on demand.

These AI capabilities render many of our current security defenses obsolete. How can you implement secure authentication or content filtering when fake identities and content look just as real as the genuine article? Traditional methods for detecting deception break down.

This is a profound challenge, as entire sectors of the economy and pillars of society rely on the trustworthiness of information and identities. Just think about legal contracts, journalism, scientific research, medical information, communications from authorities - anything of vital importance where authenticity is key.

For security professionals, the implications are significant:

• Automated phishing and social engineering at an unprecedented, hyper-personalized scale

• Mass production of fake but credible misinformation, conspiracy theories, and propaganda

• Ability to rapidly manufacture fraudulent legal/financial documents and evidence

• Creation of trusted fake personas to infiltrate organizations and manipulate insiders

And this may just be the beginning. As language models are trained on more data sources, they can begin exhibiting shockingly personal knowledge and contextual abilities.

Security teams need to build a comprehensive strategy for resilience against AI impersonation that covers these seven areas:

  • Identity Resilience

    • Implement advanced biometric authentication using multiple modalities (facial, voice, behavioral)

    • Leverage liveness detection to prevent replay/deepfake attacks

    • Use device binding and continuous risk assessment

    • Deploy decentralized identity solutions with verifiable credentials

    • Maintain robust identity proofing and corroboration processes

  • Content/Data Provenance

    • Implement digital watermarking and blockchain provenance for high-stakes content

    • Use cryptographic signing for verifying document authenticity (see the sketch after this list)

    • Deploy data monitoring for unauthorized replication/tampering

    • Maintain verifiable audit logs and data lineage tracking

  • Communication Security

    • Leverage out-of-band confirmation for high-risk communications

    • Maintain approved communication channel allowlists

  • Adversarial AI Defenses

    • Deploy defensive AI tuned to detect generative AI artifacts

    • Implement AI output verification controls

    • Leverage human cross-verification for critical AI outputs

  • Threat Intelligence

    • Build AI impersonation-specific threat intel capabilities

    • Maintain updated impersonation threat models

    • Incorporate AI forensics into incident response

  • Workforce Readiness

    • Upskill security staff on AI impersonation threats

    • Train personnel on detecting signs of AI deception

    • Maintain human processes for high-trust decisions

    • Foster critical thinking to combat AI manipulation

  • Resilient Architecture

    • Design systems assuming AI impersonation will occur

    • Reduce attack surfaces through segmentation

Combating the growing risk of AI impersonation requires a multi-layered defense spanning people, processes, technology and governance.
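
To make one of these controls concrete, here is a minimal sketch of the cryptographic signing step from the Content/Data Provenance pillar, using Ed25519 signatures from Python's open-source "cryptography" package. The document text and key handling are illustrative assumptions, not a production design; a real deployment would also need key distribution, rotation, and revocation.

    # Minimal sketch: sign a document at creation, verify it before trusting it.
    # Requires the open-source "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer generates a keypair once and publishes the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Hypothetical high-stakes document.
    document = b"Wire authorization: pay $250,000 to Acme Corp by May 1."

    # Sign at creation time; the signature travels with the document.
    signature = private_key.sign(document)

    # Any recipient verifies before acting; forgery or tampering fails here.
    try:
        public_key.verify(signature, document)
        print("Signature valid: document is authentic and unmodified.")
    except InvalidSignature:
        print("Signature invalid: treat the document as untrusted.")

The same sign-at-creation, verify-at-every-trust-boundary pattern extends naturally to the audit logs and watermark manifests listed above.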

AI News to Know

The Future of AI Innovation Act: A bipartisan bill called the "Future of Artificial Intelligence Innovation Act of 2024" was introduced in Congress this week. It aims to establish a comprehensive framework for AI by setting standards, metrics, and evaluation tools. I’m still skeptical the U.S. government will pass any AI legislation in an election year, but key components and objectives of the bill include:

  1. AI Standards and Metrics: The Act proposes the creation of voluntary AI standards, metrics, and evaluation tools to ensure safety, reliability, and interoperability of AI technologies across different sectors.

  2. AI Safety Institute and Testbeds: A new Artificial Intelligence Safety Institute is proposed, along with AI testbed programs to facilitate innovation and testing of AI technologies in collaboration with national labs and the public and private sectors.

  3. International Cooperation: The Act emphasizes the need for international coalitions to harmonize AI standards globally, suggesting the U.S. should work closely with international partners to ensure that AI technologies are safe and beneficial globally.

  4. Regulatory Barriers and Innovation: The Comptroller General is tasked with identifying and addressing regulatory barriers to AI innovation, aiming to streamline the development and deployment of AI across industries.

  5. Privacy and Security: The bill addresses the privacy and security concerns associated with AI, emphasizing the development of secure AI systems that protect user data and ensure the integrity of AI applications.

AI Board Governance: Institutional Shareholder Services examined S&P 500 company proxy statements filed from September 2022 through September 2023 for mentions of board oversight and director skills related to AI. They found that:

  • Only about 15% of companies in the S&P 500 provide some disclosure in proxy statements about board oversight of AI.

  • Disclosure of board oversight of AI and directors’ AI expertise is primarily found in the information technology sector, with 38% of companies providing some level of board oversight disclosure.

  • 13% of S&P 500 companies have at least one director with AI-related expertise.

In more recent annual 10-K report filings, we’re seeing a rapid increase in AI references as emerging risks and business opportunities. I expect proxy statements will follow suit, but I don’t anticipate a major overhaul in AI committee governance since we haven’t seen one in cyber committee governance.

AI on the Market

Early Bets on Security Operations: AI-powered security operations platforms have been early AI winners. Last week, StrikeReady raised $12M, and other AI-enabled SOC platforms that have raised money include Radiant Security and Reach Security. This week, Prophet Security launched out of stealth with $11M in seed funding.

GenAI in the Cybersecurity Market: Looking beyond security operations investments, Dimension Market Research projects that the global market for generative AI across all cybersecurity domains will surge from $17.8B to $146.9B by 2032. With this trajectory, AI will increasingly permeate our security jobs from both a defensive and offensive standpoint, and now is the time to proactively determine what it means for your career.
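
For context, if the $17.8B figure is the report's baseline for 2023 (the base year isn't stated here), that implies a compound annual growth rate of roughly 26%: (146.9 / 17.8)^(1/9) - 1 ≈ 0.26, i.e., the market more than doubling about every three years.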

💼 5 Cool AI Security Jobs of the Week 💼

AI Prompt Security Engineer @ Allstate to identify potential security risks and vulnerabilities in Allstate’s use and integration of GenAI | Remote | $84k-$145k | 5+ yrs exp.

Security Engineer, GuardDuty Security Analytics and AI Research @ AWS to research and develop core data mining and machine learning algorithms for Amazon GuardDuty | Seattle, WA | $135k-$212k | 3+ yrs exp.

Sr. Principal Security Researcher - GenAI @ Palo Alto Networks to conduct advanced research into GenAI security risks | Santa Clara, CA | $170k-$275k

AI/ML Lead @ Prophet Security to lead the integration and optimization of large language models into its security product | Remote / Palo Alto

Principal Security AI Architect @ Microsoft to cultivate extensive knowledge of attacker tactics and leverage cutting-edge technology to combat them | Multiple Locations | 7+ yrs exp. | $133k-$256k

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington