🦾 Shadow AI - 9 May 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

I hope everyone is having a great week and that RSA was a success if you attended. I definitely had FOMO this week!

This week, I cover:

🇨🇳 Emerging AI Adversary Tactics from East Asia

👮🏽‍♀️ FBI’s AI Warning

💡Reimagining Infrastructure Security for AI

✋ Secure by Design Pledge

🤖 RSA Check-In

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Emerging AI Adversary Tactics from East Asia

A recent report by Microsoft Threat Intelligence highlights the evolving tactics, techniques, and procedures (TTPs) used by threat actors from China and North Korea, with a focus on their innovative use of artificial intelligence (AI).

China's Cyber Operations

Chinese threat actors have been observed sharpening their cyber operations with a notable pivot towards AI-enhanced influence campaigns. One example involves the group identified as Storm-1376, which used AI-generated audio and video to manipulate perceptions during Taiwan's elections. This group notably created fake endorsements using AI-generated voices of prominent figures like Foxconn owner Terry Gou, falsely portraying his political stance. This was the first time Microsoft Threat Intelligence witnessed a nation-state actor using AI content in an attempt to influence a foreign election.

Moreover, Chinese actors have also been seen leveraging AI to generate dynamic and engaging online content aimed at stoking divisions within the United States and other regions. This includes AI-generated memes and social media content that amplify divisive issues, including misinformation about incidents such as the train derailment in Kentucky and the Maui wildfires. Despite these efforts, there is little evidence to suggest that these campaigns have successfully swayed public opinion to date.

Chinese Communist Party-affiliated actors are also leveraging inauthentic “sockpuppet” social media accounts to pose polarizing questions on divisive U.S. issues. This strategy is aimed at understanding and exploiting the divisions within U.S. society, potentially to influence voter behavior and outcomes in the U.S. presidential election, a tactic we’ll likely see more of in the coming months.

North Korea Cyber Operations

North Korean cyber activities have also shown an increasing integration of AI technologies, though with a different focus compared to China. Notably, groups like Emerald Sleet are using large language models (LLMs) to research potential targets with expertise in DPRK defense and nuclear issues and to generate content for phishing campaigns.

Additionally, North Korean actors have been prolific in cyber attacks aimed at cryptocurrency theft, funding significant portions of their national defense budget through such illicit activities. This includes sophisticated attacks on software supply chains and crypto exchanges, where AI tools have likely played a role in identifying vulnerabilities and optimizing attack vectors.

Outlook and Implications

Early nation-state usage of AI shows how another layer is being added to the evolving cyber battleground, one that will become even more pronounced as the U.S. election season heats up. In fact, Sam Altman shared his concerns about AI and misinformation leading up to the presidential election at a Brookings Institution event this week. More capabilities to combat disinformation and restore trust are needed, delivered through a well-coordinated public-private strategy. OpenAI’s release of an image detection classifier that can evaluate whether images were created by OpenAI tools and assist disinformation researchers in spotting deepfakes is a good and timely step in that direction.

AI News to Know

FBI’s AI Warning: As we’re seeing AI gradually enable nation-state hacking, the FBI is also reporting that other hacking groups are broadly starting to use AI to attack US corporations and government agencies. AI, as we’ve discussed here and the FBI has validated, is democratizing sophisticated attack capabilities that were typically reserved for more advanced state actors. As these cyber attack tools make their way down the food chain, the FBI also acknowledged that it is using AI for defensive and investigative purposes.

Reimagining Infrastructure Security for AI: OpenAI released a paper highlighting six key security measures necessary to protect advanced AI infrastructure:

  1. Trusted Computing for AI Accelerators: Emphasizes integrating trusted computing with AI accelerators like GPUs, using emerging encryption and hardware security technologies such as confidential computing to protect model weights and inference data.

  2. Network and Tenant Isolation Guarantees: Focuses on strong isolation mechanisms to shield AI infrastructure from threats, including network segmentation and robust tenant isolation to prevent cross-tenant access and data exfiltration.

  3. Innovation in Operational and Physical Security for Datacenters: Proposes stringent operational and physical security controls in AI datacenters to protect against insider threats, including enhanced fortification, access controls, and novel security methods like remote “kill switches” and tamper-evident systems.

  4. AI-Specific Audit and Compliance Programs: Tailors audit and compliance frameworks to the unique needs of AI systems, ensuring they meet existing and emerging AI-specific security and regulatory standards.

  5. AI for Cyber Defense: Discusses the potential of AI to transform cyber defense, enhancing the ability of security programs to detect and respond to threats by integrating AI into security workflows and automating key processes.

  6. Resilience, Redundancy, and Research: Emphasizes continuous security research and the development of redundant systems to enhance resilience, recognizing that no system is flawless and that security must evolve to counter emerging threats.
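To make the first measure concrete: protecting model weights ultimately means a loader should refuse to use a checkpoint it cannot authenticate. The paper doesn't prescribe a mechanism, so here is a loose, hypothetical sketch (not OpenAI's actual approach) of the integrity half of that idea, using a keyed HMAC tag over serialized weights:

```python
import hashlib
import hmac
import os

def sign_weights(weights: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over serialized model weights."""
    return hmac.new(key, weights, hashlib.sha256).hexdigest()

def verify_weights(weights: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time before loading."""
    return hmac.compare_digest(sign_weights(weights, key), tag)

# Hypothetical checkpoint; in practice the key would live inside
# trusted hardware (e.g. a confidential-computing enclave), not in code.
key = os.urandom(32)
weights = b"\x00" * 1024  # stand-in for a serialized checkpoint
tag = sign_weights(weights, key)

print(verify_weights(weights, key, tag))         # intact weights verify
print(verify_weights(weights + b"!", key, tag))  # tampered weights do not
```

In a confidential-computing setup, the verification (and the key) would sit inside the hardware-protected boundary, so tampered or swapped weights are rejected before they ever reach an accelerator.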

Secure by Design Pledge: More than 60 companies committed to the Secure by Design (SbD) Pledge led by the Cybersecurity and Infrastructure Security Agency (CISA), which encourages software manufacturers to adopt principles that enhance cybersecurity from the ground up. Many of the companies that signed the pledge are enterprise security and technology companies, but it was good to see some smaller AI security companies like HiddenLayer, Lasso Security, and Scale AI join the fold. It’d be nice to see CISA pursue a parallel LLM-security-focused pledge with foundation model providers as a next step.

AI on the Market

RSA Check-In: Not surprisingly, AI was a hot topic at RSA. Enterprise security vendors like Palo Alto, Google, and CrowdStrike (to name a few) all strategically announced new AI capabilities. Smaller vendors promoted their AI capabilities on the expo floor, while keynotes explored the power and possibilities of AI. For those of you who attended RSA, what were your biggest AI takeaways?

💼 5 Cool AI Security Jobs of the Week 💼

Senior Manager, GenAI Risk Management @ Capital One to Drive Risk Assessment Programs forward with a key focus on GenAI | Richmond, VA and NYC | $199k - $227k | 5+ yrs exp.

Lead Product Manager @ Rapid7 to lead identifying, defining, and delivering large scale, enterprise grade AI solutions | Arlington, VA | 6+ yrs exp.

Software Development Manager, AI Security @ Amazon to build security tooling and paved path solutions for secure GenAI | Seattle or Austin | $147k - $287k | 8+ yrs exp. 

Head of Security @ Rocket Lawyer to elevate the security program and champion AI security | Multiple Locations | 10+ yrs exp.

AI Penetration Tester @ Microsoft to identify security risks in enterprise AI systems | Remote | $145k-$238k | 4+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington