
šŸ¦¾ Shadow AI - 30 May 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Big personal news this week. I have moved on from my full-time CISO role.

Iā€™m excited to focus on consulting work for security-minded companies while I find the right long-term opportunity. Iā€™d love to connect more with Shadow AI subscribers to hear about your perspectives on AI and security and explore areas to collaborate. If youā€™re interested in chatting, please feel free to reply to this email or set up some time via my calendar.

This week, I cover:

āš–ļø Transforming AI Governance at OpenAI

ā—¼ļø Peering into the AI Black Box

šŸ˜¬ Googleā€™s AI Search Results

šŸ¤ OpenAI and PwCā€™s Partnership

šŸŽ Appleā€™s Deal with OpenAI

šŸ”Œ Data Center Power Surge

šŸ’¼ 5 Cool AI Security Jobs of the Week

Letā€™s dive in!

Demystifying AI - Transforming AI Governance at OpenAI

OpenAIā€™s governance, safety, and security practices continue to be a hot topic. This week alone:

1) OpenAI announced a new Safety and Security Committee that will be responsible for making recommendations to the full Board on critical safety and security decisions for all OpenAI projects and operations. The committee is led by four directors, including Sam Altman, with support from internal technical and policy experts and former high-ranking U.S. government cybersecurity officials Rob Joyce and John Carlin.

2) Jan Leike, OpenAIā€™s former top safety executive, joined rival Anthropic.

3) Helen Toner, a former OpenAI board member, shared her perspective on the TED AI Show about why Sam Altman was fired. She said:

  • the board learned about ChatGPTā€™s launch on Twitter and was not informed in advance

  • Altman never disclosed that he owned the OpenAI Startup Fund

  • Altman gave inaccurate information about the companyā€™s safety processes ā€œon multiple occasions,ā€ allegedly making it impossible for the board to understand how well those processes were working and which areas needed more attention

This scrutiny isnā€™t surprising given the power and potential impact of the AI systems the company is developing. As OpenAI restructures its approach to AI governance, here are five key areas that I hope will be closely scrutinized over the next 90 days:

  • Robust Safety and Security Metrics: OpenAI must establish a comprehensive set of Key Risk Indicators (KRIs) and Key Performance Indicators (KPIs) to proactively monitor and measure the safety and security of its AI systems. These metrics should cover technical aspects like robustness, alignment, and information hazards, as well as operational risks such as data breaches or misuse. Clear thresholds and escalation protocols must be defined, with mandatory board review when risk tolerance is breached (a rough sketch of what such checks could look like follows this list).

  • Holistic Enterprise Risk Management: An effective risk management framework is crucial for balancing innovation with responsible development. OpenAI should clearly articulate its risk appetite, factoring in corporate values, strategic objectives, stakeholder concerns, resource constraints, financial implications, and measurable outcomes. It should also consider establishing an independent enterprise risk function that has board representation to provide crucial oversight and challenge to frontline operators.

  • Strengthened Board Governance: The newly formed Safety and Security Committee should have a well-defined, publicly available charter reviewed annually. Regular board-level tabletop exercises should stress-test processes and controls for strategic safety and security risks. Mechanisms for timely and transparent communication between the board and executive leadership are essential.

  • Independent External Audits: To rebuild trust and credibility, OpenAI should commission regular independent audits of its safety and security practices by respected third-party experts. These audits should evaluate technical safeguards, operational processes, and governance mechanisms.

  • Inclusive Stakeholder Engagement: With John Carlin and Rob Joyce supporting the new Safety and Security Committee, OpenAI must ensure they are not mere figureheads and that they have a genuine voice at the table to inform risk assessments, priority-setting, and the development of robust governance frameworks.
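
To make the first bullet concrete, here is a minimal, hypothetical sketch of automated KRI threshold checks with escalation tiers. The metric names, values, and thresholds are all illustrative assumptions on my part, not anything OpenAI has published:

```python
from dataclasses import dataclass

# Hypothetical KRIs with escalation thresholds -- illustrative values
# only, not actual OpenAI metrics.
@dataclass
class KRI:
    name: str
    value: float            # current measurement
    warn_at: float          # triggers review by the risk function
    board_review_at: float  # risk-tolerance breach -> mandatory board review

KRIS = [
    KRI("jailbreak_success_rate", value=0.031, warn_at=0.02, board_review_at=0.05),
    KRI("unresolved_critical_vulns", value=1, warn_at=1, board_review_at=3),
    KRI("data_exfil_incidents_90d", value=0, warn_at=1, board_review_at=2),
]

def escalation_level(kri: KRI) -> str:
    if kri.value >= kri.board_review_at:
        return "BOARD_REVIEW"   # mandatory committee/board review
    if kri.value >= kri.warn_at:
        return "RISK_FUNCTION"  # independent risk function investigates
    return "OK"

for kri in KRIS:
    print(f"{kri.name}: {kri.value} -> {escalation_level(kri)}")
```

The point of writing it down this way: every metric has a pre-agreed owner and escalation path before an incident, so a breach triggers board visibility by default rather than by discretion.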

Transforming AI governance at OpenAI is a big task, but one that is imperative for ensuring the beneficial and trustworthy development of its foundational AI models. OpenAI should use the Safety and Security Committee launch as a reset to build a proactive risk management program that effectively balances safety and security with growth and profitability.

AI News to Know

Peering into the AI Black Box: One of the reasons safety and security is such a big issue with AI is that even the foundational model companies themselves donā€™t fully understand how their models work. Anthropic, however, has made advances in understanding the internal workings of large language models (LLMs) like Claude 3 Sonnet, helping illuminate the relationship between input and output. Anthropicā€™s breakthrough uses sparse autoencoders (a form of dictionary learning) to identify ā€œfeaturesā€ within the AIā€™s neural network. These features represent specific concepts or topics. By mapping them, researchers can see how different sets of neurons correspond to particular ideas or behaviors within the model. Ultimately, by understanding and controlling these features, Anthropic hopes to make AI systems more transparent and less prone to safety risks.
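
For readers who want a feel for the mechanics, below is a minimal sketch of the sparse-autoencoder idea behind this kind of feature mapping. It trains on random vectors standing in for captured LLM activations; the dimensions, loss coefficients, and training setup are toy assumptions, not Anthropicā€™s actual code:

```python
import torch
import torch.nn as nn

# Minimal sparse autoencoder: the decoder acts as a learned "dictionary"
# of directions (features) in activation space.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))  # sparse, non-negative feature activations
        return self.decoder(feats), feats

d_model, d_features = 64, 512                # toy sizes; real SAEs are far larger
sae = SparseAutoencoder(d_model, d_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3                              # sparsity pressure on the features

acts = torch.randn(1024, d_model)            # stand-in for captured LLM activations
for step in range(200):
    recon, feats = sae(acts)
    # Reconstruction loss keeps features faithful; L1 keeps them sparse,
    # which is what makes individual features interpretable.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The most active features for a given input hint at which learned
# "concepts" that input engages.
_, feats = sae(acts[:1])
print(feats[0].topk(5).indices)
```

On real models, the activations come from a chosen layer of the LLM rather than random noise, and the interesting work is labeling what each highly-activating feature corresponds to.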

Googleā€™s AI Search Results: As Google pivots its search to leverage AI, examples of responses with wrong or misleading results have been making the rounds on social media. Itā€™s difficult to assess how often false answers are actually produced, and whatā€™s real vs. fake on social media, but itā€™s a useful reminder of how critical good data is for AI models.

[Image: an ā€œideaā€ that came from The Onion and made it into an AI Overview]

AI on the Market

OpenAI and PwCā€™s Partnership: OpenAI has landed an enterprise agreement with PwC that will cover 100k of its 328k employees, making PwC OpenAIā€™s largest customer to date. Last month, OpenAI disclosed that ChatGPTā€™s enterprise tier had about 600,000 users, so this is a huge deal for the company. In turn, PwC will become OpenAIā€™s first partner for reselling its enterprise offerings to other businesses. Generative AI will not only boost consulting revenue but also change the way consulting businesses are run.
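
A quick bit of arithmetic on why ā€œhuge dealā€ is warranted, using only the two figures above:

```python
# Rough scale of the PwC deal against OpenAI's disclosed enterprise base.
pwc_seats = 100_000
enterprise_users_last_month = 600_000  # figure OpenAI disclosed last month

growth = pwc_seats / enterprise_users_last_month
print(f"One deal grows the enterprise tier by ~{growth:.0%}")  # ~17%
```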

Appleā€™s Deal with OpenAI: Apple has reportedly inked a deal with OpenAI to bring ChatGPT functionality to iOS 18. Apple plans to use ChatGPT to answer user questions rather than develop its own LLM to replace Siri. Microsoft CEO Satya Nadella is said to have concerns about the deal, specifically around Appleā€™s AI competing with Microsoftā€™s and the additional server demand.

Data Center Power Surge: A new report from the Electric Power Research Institute contends that data centers could consume up to 9% of total U.S. electricity generation by 2030, more than double their current rate of consumption. While AI applications are estimated to use only 10%-20% of data center electricity today, that percentage is growing rapidly and will be a major driver of data center load growth. Eighty percent of data center power demand is also concentrated in just 15 states.
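
A back-of-the-envelope on those figures: if 9% in 2030 is more than double todayā€™s share, data centers draw at most roughly 4.5% of U.S. electricity now, and AIā€™s 10%-20% slice of that works out to well under 1% of total U.S. generation today:

```python
# Back-of-the-envelope from the EPRI figures cited above.
share_2030 = 0.09               # projected data center share of US electricity by 2030
current_share = share_2030 / 2  # "more than double" implies <= ~4.5% today

ai_lo, ai_hi = 0.10, 0.20       # AI's share of data center electricity today

print(f"Data centers today: <= {current_share:.1%} of US electricity")
print(f"AI today: roughly {current_share * ai_lo:.2%} - {current_share * ai_hi:.2%}")
```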

šŸ’¼ 5 Cool AI Security Jobs of the Week šŸ’¼

Information Security Engineer @xAI to establish and lead information security from the ground up | San Francisco or Palo Alto | $180k-$370k | 8+ yrs exp.

Senior Security Engineer, AWS Gen AI Security @AWS to secure all things Gen AI coming from AWS | Multiple Locations | $136k-$247k | 5+ yrs exp.

Principal Cybersecurity Engineer (AI/ML Open Source Security) @Discover Financial Services to manage risks in AI/ML open source security | Riverwoods, IL | $104k-$175k | 6+ yrs exp.

Security Analyst @Notable to manage GRC efforts for the leading intelligence automation company for healthcare | Remote / San Mateo

Information Security Manager, AI Offensive Security @AMD to develop the companyā€™s AI red teaming capability | $164k-$246k | San Jose or Austin (Hybrid)

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington