🦾 Shadow AI - 2 May 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

This week, I cover:

🔎 Assessing DHS’s New AI Safety and Security Board

🤖 Extending the AI Risk Management Framework

🔒 Securing the AI Software Supply Chain

🙃 NVD Overhaul for AI?

⏫ The USG’s AI Hiring Push

💰 Microsoft Earnings

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Assessing DHS’s New AI Safety and Security Board

The Department of Homeland Security (DHS) announced the establishment of the AI Safety and Security Board, a significant step in the national strategy for the secure and responsible use of AI across our nation’s critical infrastructure. The board comprises experts from the private sector, academia, and government to guide the deployment of AI technologies in ways that safeguard public interests while fostering innovation.

Points of Concern

1. Lack of Transparency in Member Selection

Critics have raised concerns about the transparency of the criteria used to select board members. Particularly noteworthy is the inclusion of representatives from aligned Foundation Model Providers and Cloud Service Providers, such as Microsoft and OpenAI or Anthropic and Amazon Web Services (AWS), while other significant players in the tech industry, such as Meta, Databricks, and X, were left off. Why do OpenAI and Microsoft each get a seat on the Board? Same for Anthropic and AWS?

2. Membership Bias Towards Big Tech

The board's composition is heavily skewed towards big tech companies, which can lead to a bias in favor of proprietary technologies and strategies. This overrepresentation of big tech may influence the board’s recommendations and decisions in ways that prioritize the interests of these corporations over broader societal needs.

3. Lack of Open Source LLM Representation

There is a notable absence of experts from the open-source large language model (LLM) community on the board. This underrepresentation is significant because open-source models are crucial for promoting transparency, accountability, and the democratization of AI technology, in contrast to the often closed nature of proprietary AI systems developed by major corporations.

4. Lack of Clear Objectives and Concrete Priorities

Another significant critique of the board is the absence of clear, measurable objectives and concrete priorities at this stage. Without these, it will be challenging to gauge the board's progress and effectiveness in guiding AI deployment across critical sectors. This lack of clarity can hinder the board's ability to address specific AI-related challenges and risks effectively.

Strategic Suggestions

To address the concerns raised, the following recommendations are proposed to enhance the board's effectiveness and credibility:

Enhancing Transparency:

  • Criteria for Selection: DHS should publicly disclose the criteria used to select board members. This transparency would help build public trust and clarify why certain corporations or individuals are chosen over others.

Balanced Representation:

  • Diverse Expertise: Including more independent and open-source AI specialists would diversify the board’s expertise and enhance its ability to oversee AI deployment in a manner that aligns with public interest and ethical standards.

Community Engagement:

  • Strengthening Ties: Engaging with communities affected by AI deployment through public forums and partnerships would allow the board to gather diverse insights and foster public trust.

Setting Clear Objectives:

  • SMART Goals: The board should define specific, measurable, achievable, relevant, and time-bound objectives for their initiatives, providing clear benchmarks for success and accountability.

DHS’s AI Safety and Security Board is a good start, but, as it holds its first meeting this week, it should address these recommendations to ensure the Board operates transparently and inclusively, harnessing AI technologies to benefit society while mitigating potential harms.

AI News to Know

Extending the AI Risk Management Framework: NIST released an initial draft of a companion document to its AI Risk Management Framework titled “NIST AI 600-1: Generative Artificial Intelligence Profile.” It provides a framework for managing the risks associated with GenAI, including:

  1. Risk Identification and Categorization: The document highlights several risks unique to or exacerbated by GenAI technologies, including data privacy issues, confabulation, dangerous recommendations, environmental impacts, and more. These risks are grouped under categories such as technical or model risks, misuse by humans, and ecosystem or societal risks.

  2. Actions to Manage GenAI Risks: A series of actions is recommended for organizations to govern, map, measure, and manage the identified risks effectively. These include establishing policies for legal compliance, ensuring the security of AI systems, and managing data privacy rigorously.

  3. Sector-Specific Guidance: The framework is designed to be adaptable across various sectors, providing organizations with the flexibility to apply the guidelines according to their specific needs and risk profiles.

  4. Cross-Sectoral Profile: The document functions as a cross-sectoral profile, meaning its guidance can be applied broadly across different industries and sectors, facilitating a unified approach to GenAI risk management.

The framework provides a useful starting point for security teams looking to maintain high standards of trust and security as their organizations adopt GenAI.
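For security teams that want to operationalize the profile, below is a minimal, hypothetical sketch of a risk-register entry that maps a GenAI risk to the AI RMF’s four functions (Govern, Map, Measure, Manage). The risk names and actions are illustrative examples of my own, not quotations from NIST AI 600-1.

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative risk-register entry; the fields mirror the AI RMF
# functions, but the contents are examples, not an official NIST mapping.
@dataclass
class GenAIRisk:
    name: str                                          # e.g., "Confabulation"
    category: str                                      # e.g., "Technical / model risk"
    govern: list[str] = field(default_factory=list)    # policies and accountability
    map: list[str] = field(default_factory=list)       # where and how the risk arises
    measure: list[str] = field(default_factory=list)   # how it is tested or monitored
    manage: list[str] = field(default_factory=list)    # mitigations and response

risk_register = [
    GenAIRisk(
        name="Confabulation",
        category="Technical / model risk",
        govern=["Require human review of customer-facing GenAI output"],
        map=["Inventory every workflow that surfaces model output to users"],
        measure=["Track hallucination rate on a held-out evaluation set"],
        manage=["Ground responses with retrieval and cite sources"],
    ),
    GenAIRisk(
        name="Data privacy leakage",
        category="Data risk",
        govern=["Define which data classes may be sent to third-party models"],
        map=["Log which systems feed prompts and fine-tuning data"],
        measure=["Run periodic probes for memorized PII in model output"],
        manage=["Redact or tokenize sensitive fields before inference"],
    ),
]

for risk in risk_register:
    print(f"{risk.name} ({risk.category}): {len(risk.manage)} mitigation(s) on file")
```

Even a lightweight structure like this makes it easier to show which GenAI risks have no mitigation on file as adoption scales.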

Securing the AI Software Supply Chain: A new paper from Google Research examines how the AI software supply chain threat landscape continues to expand and highlights unique supply chain security challenges that need to be addressed, such as:

  1. Data and Model Provenance: A central theme is the importance of maintaining robust data and model provenance: keeping a secure, verifiable record of where data comes from, how it is processed, and how models are trained. This is crucial for preventing issues like data poisoning and ensuring the integrity of AI models (see the sketch after this list).

  2. Tampering and Data Integrity: AI introduces specific risks related to tampering with data and models. The paper discusses the need for systems that can verify the integrity of data and models throughout their lifecycle, from development to deployment. This includes protections against unauthorized changes that could affect the behavior of AI systems.

  3. Dependency and Artifact Management: Similar to traditional software, AI systems rely on a variety of dependencies and artifacts. However, the complexity and scale of AI models make managing these components more challenging. The paper emphasizes the need for close tracking of these dependencies to prevent security vulnerabilities.

  4. Model Serialization and Storage: Properly handling model serialization and securely storing models are pointed out as critical concerns. AI models often contain complex data structures that need to be safely serialized and stored to prevent tampering or unauthorized access.

  5. Training and Evaluation Environments: The security of the environments in which AI models are trained and evaluated is a significant concern. There is a risk of introducing vulnerabilities during the training process or through the evaluation data itself. Ensuring these environments are secure is essential to maintaining the overall security of AI systems.

  6. Human Factors in AI Security: The paper also touches on the role of human oversight in AI security, noting that human error or malice can introduce risks. Strict controls and oversight are therefore needed over human involvement in the data labeling and model training processes.

These specific risks underline the need for a comprehensive and tailored approach to supply chain security in the AI domain, focusing not just on the technology but also on the processes and human factors involved.
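To make the provenance and tampering points concrete, here is a minimal sketch that assumes artifacts are plain files on disk and uses a hypothetical JSON manifest format (not anything prescribed in the Google paper): record cryptographic digests for datasets and model weights when they are produced, then verify them before they are loaded.

```python
import hashlib
import json
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(artifacts: list[Path], manifest_path: Path) -> None:
    """Record a digest for each artifact (datasets, model weights) at build time."""
    manifest = {str(p): sha256_digest(p) for p in artifacts}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the artifacts whose current digest no longer matches the manifest."""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, digest in manifest.items()
            if sha256_digest(Path(name)) != digest]

# Example usage (file names are hypothetical):
#   write_manifest([Path("train.parquet"), Path("model.safetensors")], Path("manifest.json"))
#   tampered = verify_manifest(Path("manifest.json"))
#   if tampered:
#       raise RuntimeError(f"Refusing to load tampered artifacts: {tampered}")
```

On the serialization point, preferring formats like safetensors over pickle-based checkpoints also removes the arbitrary code execution risk that comes with unpickling untrusted model files.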

NVD Overhaul for AI?: Senators Mark Warner and Thom Tillis have proposed legislation that would require changes to the National Vulnerability Database (NVD) to account for the tracking and processing of AI security incidents. The NVD, however, is cracking under pressure: there is a huge backlog of vulnerabilities awaiting processing, and analysis by Jay Jacobs shows a significant drop in analyzed vulnerabilities in 2024. To succeed, any overhaul of the NVD for AI security incidents needs to first address these emerging systemic cracks.

AI on the Market

The USG’s AI Hiring Push: President Biden’s recent executive order on AI aims to strengthen AI safety and compliance across federal agencies. This includes a strategic hiring push to build a capable AI workforce, which has seen “unprecedented” levels of interest in the government’s tech talent programs. Federal agencies have hired more than 150 people into AI roles since October and plan to hire more than 500 additional AI workers through the end of 2025.

Microsoft Earnings: Microsoft reported strong quarterly earnings, and the call included some interesting AI nuggets emphasizing how quickly enterprises are starting to adopt AI and the urgency with which security teams need to build strategies to enable secure adoption. Some key stats:

  • More than 65% of the Fortune 500 use Azure OpenAI

  • Nearly 60% of Fortune 500 use M365 Copilot

  • GitHub Copilot has 1.8M paid subscribers, with growth accelerating to 35% q/q

  • More than 90% of the Fortune 100 are now GitHub customers

  • Hundreds of paid customers are using Microsoft to access third-party models (Cohere, Meta, Mistral)

💼 5 Cool AI Security Jobs of the Week 💼

Senior Security Engineer @ Lakera to lead its security program maturation and protect model providers and users from adversarial misalignment | San Fran, Zurich, or London | $140k - $180k | 5+ yrs exp.

Principal Information Security Specialist @ Torch to secure a high-impact and scalable leadership development platform and address AI risks | Remote | $190k-$230k | 5+ yrs exp.

Privacy/AI Specialist, Privacy and Legal Information Security @ Comcast to support Comcast Cable’s global privacy and AI programs | Philadelphia | 2+ yrs exp.

Director of Architecture IT Risk Management and Security @ Merck to design and implement security solutions across on-premise, cloud, OT, and AI environments | Rahway, NJ | $164k-$259k | 10+ yrs exp.

Cyber Security Engineer @ Latitude AI to help secure Ford’s automated driving technology | Pittsburgh or Palo Alto | $126k-$174k | 5+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington