🦾 Shadow AI - 7 March 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

This week’s issue of Shadow AI breaks down the Department of Justice’s indictment of Leon Ding, a Google software engineer accused of stealing AI secrets from Google. It goes well beyond the headlines and is a fascinating, must-read case for every security practitioner.

I also cover:

🔒 Restrictions on Independent AI Safety Research

🕸️ LLM API Offerings

📈 Claude 3 Model Family Release

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - A Harrowing Case of Stolen AI Secrets at Google

Yesterday, the Department of Justice charged Linwei Ding, a.k.a. Leon Ding, with multiple counts related to the theft of AI trade secrets from Google.

The indictment details how Ding, employed by Google as a software engineer, allegedly uploaded over 500 unique files containing Google Confidential Information to a personal cloud storage account between May 2022 and May 2023. This information included details about Google's supercomputing data centers, the architecture and software of GPU and TPU chips, and systems management software.

During this time, Ding maintained unauthorized affiliations with PRC-based companies in the AI industry and allegedly used Google's confidential information to further his own ventures and those of the companies he was involved with.

Unauthorized Affiliations:

  1. Affiliation with Rongshu Technology: Ding began uploading Google Confidential Information to a personal cloud account in May 2022 and continued until May 2023. During this period, he received several communications from the CEO of Rongshu, an early-stage technology company based in the PRC, which offered him the position of Chief Technology Officer (CTO) with a significant salary plus bonuses and company stock. Rongshu's business goals included the development of acceleration software for machine learning on GPU chips and the creation of AI federated learning platforms. Ding traveled to the PRC in October 2022 and remained there until March 2023, still on Google's payroll, while participating in investor meetings to raise capital for Rongshu.

  2. Founding Shanghai Zhisuan Technology Co.: By May 30, 2023, Ding had founded Shanghai Zhisuan Technology Co., Ltd. and served as its CEO. Zhisuan aimed to develop a CMS (Cluster Management System) capable of accelerating machine learning workloads, including training large AI models with supercomputing chips. Ding applied to MiraclePlus, a PRC-based startup incubation program, on behalf of Zhisuan, and the application was accepted. An agreement was signed on November 20, 2023, granting a 7% ownership interest in Zhisuan to a company affiliated with MiraclePlus in exchange for investment capital.

  3. Travel and Pitching to Investors: Ding traveled to the PRC and presented Zhisuan to investors at the MiraclePlus venture capital investor conference in Beijing on November 24, 2023. A document circulated by Ding to Zhisuan WeChat group members boasted about the company's experience with Google's computational power platform and its plans to replicate and upgrade it for China's national conditions.

Google’s Security Measures

The indictment says that Google took reasonable measures to safeguard its technology, including:

Physical Security Measures:

  • Deployment of campus-wide security guards and installation of cameras at most building entry points.

  • Restriction of building access, requiring employees to badge in at front entrances.

  • Additional access restrictions to certain floors or areas within buildings, limited to a subset of employees through badge access.

Network Security Measures:

  • Implementation of a data loss prevention system that monitored and logged certain data transfers to and from the network.

  • Unique identification and authentication of each device before accessing the Google corporate network.

  • Mandatory two-factor authentication for all Google employees for their work-related Google accounts.

  • Logging of employee activity on the network, including file transfers to platforms such as Google Drive or Dropbox.

Monitoring and Analysis:

  • Collection of physical and network access information, including badge access times and locations, Internet Protocol (IP) addresses for employee logins, and two-factor authentication logs. This data was analyzed for potential risks.

  • Automated tools and human analysts assessed this data regularly to detect potential malicious activity. Alerts would be generated for anomalies, such as discrepancies between network access and physical badge-ins (a sketch of this kind of check follows this list).

  • Restriction of network access for employees traveling to certain countries known for cybersecurity risks (e.g., China, North Korea, and Iran).
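
To make the badge/login correlation above concrete, here is a minimal sketch of that kind of check. It is an illustration only, not Google's implementation: the event fields, site codes, and 12-hour matching window are all assumptions.

```python
# Hypothetical sketch: flag logins whose IP geolocation does not match the
# country of the office where the employee badged in around the same time.
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed site-code-to-country mapping (illustrative only).
OFFICE_COUNTRY = {"US-SVL": "US", "US-NYC": "US"}

@dataclass
class BadgeEvent:
    employee_id: str
    office: str        # e.g. "US-SVL"
    timestamp: datetime

@dataclass
class LoginEvent:
    employee_id: str
    geo_country: str   # country resolved from the login IP
    timestamp: datetime

def badge_login_discrepancies(badges: list[BadgeEvent],
                              logins: list[LoginEvent],
                              window: timedelta = timedelta(hours=12)) -> list[str]:
    """Return alerts for logins that occur close in time to a badge-in
    but originate from a different country than that office."""
    alerts = []
    for login in logins:
        for badge in badges:
            if badge.employee_id != login.employee_id:
                continue
            if abs(badge.timestamp - login.timestamp) > window:
                continue
            if OFFICE_COUNTRY.get(badge.office) != login.geo_country:
                alerts.append(
                    f"{login.employee_id}: badged into {badge.office} but logged in "
                    f"from {login.geo_country} at {login.timestamp:%Y-%m-%d %H:%M}"
                )
    return alerts
```

A production system would also want the inverse check: badge-ins with no matching local network activity (or logins with no badge-in at all), which is exactly the pattern a colleague scanning someone else's badge would create.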

Access Control to Sensitive Information:

  • Further restriction of access to sensitive information, including trade secrets, to a subset of employees whose job duties related to the subject matter.

Employee Agreements and Training:

  • Requirement for every Google employee to sign an Employment Agreement, agreeing to protect Google Confidential Information, use it only for the benefit of Google, not retain or disseminate it, and not engage in competing businesses.

  • Mandatory adherence to Google's Code of Conduct, which emphasized the importance of protecting trade secrets and other confidential intellectual property. This was supplemented by security training, particularly for employees working on sensitive technology projects.

What went wrong?

  1. Exploitation of Authorized Access:

    Ding exploited his authorized access to sensitive information for personal gain. Although Google restricted access to certain sensitive information to employees whose job duties required it, Ding was able to misuse his legitimate access to upload confidential and proprietary information to a personal cloud account.

  2. Ineffective Detection of Unauthorized Data Exfiltration:

    Ding copied data from Google source files into the Apple Notes application on his Google-issued MacBook, converted those notes into PDF files, and then uploaded the PDFs from the Google network to his personal cloud account, which helped him evade immediate detection. This suggests that Google's data loss prevention (DLP) systems and monitoring may not have been fully effective against unconventional methods of data exfiltration (a sketch of one possible detection heuristic follows this list).

  3. Insufficient Monitoring of Conflict of Interest and External Affiliations:

    Ding's involvement with PRC-based companies Rongshu and Shanghai Zhisuan Technology Co., where he held significant roles, was not detected by Google until after substantial confidential information had been exfiltrated. Google's policies regarding the reporting of external affiliations or potential conflicts of interest may not have been strictly enforced or monitored, allowing Ding to engage in activities that directly competed with Google's interests.

  4. Reliance on Employee Self-Reporting for Security Compliance:

    After Google detected some of Ding's unauthorized uploads, Ding signed a Self-Deletion Affidavit claiming he had deleted from his personal possessions all non-public information obtained from his job at Google. This incident underscores a reliance on employee honesty and self-reporting for compliance with security policies, which can be a significant vulnerability when dealing with malicious insiders.

  5. Physical Security Bypass Through Collusion:

    Ding colluded with another employee, who scanned Ding's badge at a U.S. Google office while Ding was in the PRC, creating false records that he was present on site. This apparently was never picked up, even though Ding stayed in the PRC from October 2022 until March 2023 while on Google’s payroll.
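
To make the DLP gap in item 2 concrete, here is a minimal, hypothetical sketch of one behavioral heuristic a DLP pipeline could add: flag a burst of locally created PDFs followed shortly by an upload to a personal cloud account. The event fields, threshold, and the "corporate vs. personal account" signal are assumptions for illustration, not a description of Google's actual tooling.

```python
# Hypothetical heuristic: many PDFs created locally in a short window,
# followed by an upload to a personal (non-corporate) cloud account.
from datetime import datetime, timedelta

def pdf_export_then_personal_upload(pdf_created_at: list[datetime],
                                    uploads: list[tuple[datetime, str, str]],
                                    window: timedelta = timedelta(hours=24),
                                    threshold: int = 20) -> list[str]:
    """uploads: (timestamp, destination_domain, account_type), where the
    proxy/DLP layer resolves account_type to 'corporate' or 'personal'."""
    alerts = []
    for ts, domain, account_type in uploads:
        if account_type != "personal":
            continue
        recent_pdfs = [t for t in pdf_created_at if ts - window <= t <= ts]
        if len(recent_pdfs) >= threshold:
            alerts.append(
                f"{len(recent_pdfs)} locally created PDFs preceded an upload to a "
                f"personal account on {domain} at {ts:%Y-%m-%d %H:%M}"
            )
    return alerts
```

The specific thresholds matter less than the broader point: content-matching DLP rules keyed to source-file formats miss data that has been retyped or re-rendered, so behavioral signals like export bursts and personal-account destinations are needed alongside them.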

A Wake-Up Call for the AI Industry

The indictment of Leon Ding for stealing AI secrets from Google underscores a critical issue facing the tech industry: insider threats.

The case emphasizes the need for AI firms to continuously evolve their security strategies, integrating robust technological defenses with stringent monitoring and ethical workplace cultures. Insider threat strategies also need to extend beyond employees to contractors.

If you’re in the AI industry, this situation should immediately kick off a reassessment of your insider threat program.

Google also has some additional answering to do and will likely put its HR and managerial guardrails under the microscope. How was Ding not caught as he presumably worked for months from the PRC while an accomplice badged him in at Google? Where were his manager and colleagues to report not seeing him in the office? Goodbye remote work…

AI News to Know

Restrictions on Independent AI Safety Research: Internal AI red teaming of models is important for reducing misuse, bias, and safety issues, but it’s critical to complement internal efforts with external, independent safety research. A recent study reveals that AI companies’ policies often discourage independent safety research on their models, in part because independent research can increase their legal liability. While some companies run bug bounties and offer legal protections for security research, protections for safety research remain limited.

AI on the Market

LLM API Offerings: API companies are rapidly increasing their offerings for LLM use cases. As Ankita Gupta, Co-founder and CEO of Akto.io, notes, Cloudflare, Kong, and Akto have all released API offerings to secure LLMs within the last month.

“When we work with AI, we're essentially working with APIs. Whether it's about consuming AI or making adjustments to it, APIs are the tools we use. So, the more we use AI, the more we're going to rely on APIs.”

Ankita Gupta

Claude 3 Model Family Release: While its competitors, Google (see above) and OpenAI (the Musk lawsuit), face tough weeks, Anthropic released its latest family of models. Opus, its most intelligent model, outperforms peers on most of the common evaluation benchmarks and “exhibits near-human levels of comprehension and fluency on complex tasks.” Anthropic's newest model also demonstrated a striking level of meta-awareness by recognizing it was being tested with an artificial "needle in a haystack" scenario. This raises a significant risk question: could an AI behave safely during testing but pursue misaligned goals once in production?

💼 5 Cool AI Security Jobs of the Week 💼

AI Policy, Program Manager @ Mozilla to advance Mozilla’s vision in the AI Policy Space | Remote | $86k-$97k | 3+ yrs exp.

Head of Corporate Engineering and IT Security @ Anthropic to own the strategy and execution of Anthropic’s software engineering, security, and IT operations | Multiple Locations | $560k | 15+ yrs exp.

Principal AI Red Team Researcher @ Verizon to conduct comprehensive analysis and testing of AI/ML threats and vulnerabilities | Multiple Locations / Remote | 4+ yrs exp.

Staff ML and AI Security Researcher @ ServiceNow to conduct security audits of ML/AI systems within ServiceNow’s products and systems | Multiple Locations / Remote | 3+ yrs exp.

Sr. Manager Security Architecture (SOC and VM) @ Vanguard to define new AI/ML architectures for driving cyber and fraud outcomes | Multiple Locations

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington