
🦾 Shadow AI - 2 November 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

It’s been a big AI week with the release of the White House’s AI Executive Order and the UK’s AI Safety Summit.

Shadow AI breaks it down for you in this week’s issue.

Let’s dive in!

Demystifying AI - The AI Executive Order

Shadow AI has been previewing the AI Executive Order (EO) and now that it has been released, here are a few of my key takeaways.

  • Ensuring Safe and Reliable AI: The scope of ensuring safe and reliable AI is broad and hinges around four key areas:

  1. Dual-use foundation models that could pose risks to “national security, national economic security, or national public health and safety”: Like many technologies, such as drones and thermal imaging, AI is dual-use in that it can serve civilian or military purposes. Invoking the Defense Production Act, which is usually reserved for war or a national emergency like the Covid-19 response, for AI security is a significant measure. Implementation guidance to help AI companies determine whether their models are dual-use and present a risk to national security will be critical.

  2. AI that meets certain technical thresholds based on compute power and capacity, which excludes nearly all AI services today: Today’s AI capabilities, however, are akin to what we experienced in the early 1990s with the birth of the web browser. It’s hard to forecast how big and how fast AI models will become, and the long-term impact of these thresholds is unclear. A more prudent approach may have been to regulate at the application level based on risk. This would have required high-risk AI applications, such as HR software, underwriting, healthcare, and self-driving systems, to meet the highest level of safety requirements, rather than sweeping in every foundation model that eventually crosses a technical threshold.

  3. Critical Infrastructure: By January 30, 2024 and annually thereafter, Agencies with regulatory authority over critical infrastructure will need to provide a report to DHS on the risks related to AI use in their sector. Critical infrastructure owners will need to follow guidance from an updated NIST AI Risk Management Framework.

  4. Chemical, Biological, Radiological, and Nuclear (CBRN) Threats: AI has the potential to lower the barrier of entry for non-experts to design and acquire CBRN weapons. The main focus is on mitigating AI model risks that could exacerbate these threats, with particular emphasis on biotechnology. Interestingly, explosives (CBRNE), which were a core focus of the U.S. after 9/11 and during the wars in Iraq and Afghanistan, were excluded from the EO.

  • Red Team Safety Test Sharing: NIST will develop standards for AI red-team testing, and companies with dual-use foundation models will need to submit the results of their red-team assessments to the Department of Commerce. The assessment results, which may capture unexpected system behaviors, vulnerabilities, or potential misuses, would be a treasure trove of valuable data for competitors and hackers and will need to be well secured. Earlier this summer, Chinese hackers breached the email of Gina Raimondo, the Secretary of Commerce. In 2020, Russia had unfettered access to the email systems at the Commerce Department and Treasury Department.

  • Devil is in the Details: OMB released draft implementation guidance on the AI EO for Federal Agencies yesterday, but details on certain private-sector foundation model requirements, such as the sharing of red-team safety test assessments and the framework for determining which models are dual-use, are not yet available. A strong public-private partnership will be critical to developing effective processes and programs that have the buy-in of AI foundation model companies.

Overall, the AI EO is a step towards establishing a framework for governing AI in the United States, but legislation is still needed to fully address the challenges AI presents.

I’m not overly optimistic, but the AI EO may have lit a fire under Congress to get moving on legislation. After being on pace for one AI Insight Forum a month, Senator Schumer hosted his third and fourth AI Insight Forums yesterday, focusing on impacts to the workforce and how to address AI’s biggest societal impacts.

AI News to Know

  • UK’s AI Safety Summit: China and 28 other countries signed the “Bletchley Declaration,” agreeing to cooperate on two main areas of focus:

    1. “identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.

    2. building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks.”

  • Generative AI’s Role in Israel-Hamas Disinformation: While some feared the Israel-Hamas conflict would bring a surge of fake content generated by AI tools, Wired reports that the “technology has had a more complex and subtle impact.” To date, AI has mainly been used to bolster support for one side or the other, with limited harmful real-world impacts.

AI on the Market

  • Microsoft’s Unfortunate AI Poll: Microsoft has suspended its AI-generated polls on news articles and launched an investigation after a poll asking about the reason behind a woman’s death appeared alongside the article.

  • AI Engineers: Hiring for AI tech talent has increased by 22% in the last three months with AI engineers averaging a 25% salary premium compared to security engineers.

    Axios AI

AI Prompt of the Week

The SEC’s complaint against SolarWinds and its former CISO rocked the cybersecurity community this week. I uploaded the SEC charges to Claude and asked it to summarize the role of other key C-level executives. I think it did a good job summarizing their involvement.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington