🦾 Shadow AI - 30 November 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Wow, I took one week off from the newsletter for Thanksgiving, and OpenAI became a case study in corporate governance, leadership, and communications.

Last night, Sam and the OpenAI Board released statements aimed at rebuilding trust with their customers, employees, partners, and community members. Three things stood out to me:

1) Sam Altman highlights increased investment in OpenAI’s “full-stack safety efforts” as a top priority, but there’s still no clarity on the misunderstanding between the prior board and Sam that led to his firing.

2) Bret Taylor, the new chair of OpenAI’s board, emphasized they are committed to building a diverse board (right now it’s composed of three white men) that covers the depth of the work OpenAI does, from “technology to safety to policy.” Microsoft won’t be caught off-guard again, as it will hold a non-voting board observer seat.

3) Ilya’s role at OpenAI is unclear, although Sam says they hope to continue working together.

It’s good to be back… Let’s dive in!

Demystifying AI - Artificial General Intelligence

As we venture closer to the development of Artificial General Intelligence (AGI) – machines with human-like cognitive abilities – and see board-level disagreements over the safety of AI systems, the conversation around a responsible safety framework for AGI becomes increasingly urgent.

What is AGI?

Many AI experts have different definitions of AGI, and Google DeepMind recently published a paper that defines various levels of AGI based on performance and generality. We’re currently at Level 1: Emerging, though there have been reports that OpenAI’s Q* is a breakthrough development on the quest for AGI.

The leap to AGI carries a wide range of serious risks - from misuse to misalignment to the mass displacement of labor - and makes it imperative to have a Responsible AGI Safety Framework.

Towards a “Whole of Community” AGI Safety Framework

Early in my career, I had the opportunity to author the National Prevention Framework for how the United States Government would prevent an imminent terrorist or cyber attack on the homeland. We built a “whole of community” framework recognizing the important role all levels - international, federal, state, local, and the public - had to play in preventing attackers from achieving their objectives.

Creating a responsible safety framework for AGI requires a similar comprehensive, “whole of community” approach that encompasses various domains such as ethics, technology, governance, and public engagement.

  1. Ethical Foundation and Value Alignment: We must ensure that AGI's decision-making aligns with human values and ethics. Ethical guidelines should be established internationally, taking into account diverse cultural and moral values.

  2. Transparency and Explainability: AGI systems must be transparent in their decision-making processes. Their actions should be explainable to developers, users, and regulators, yet we have seen that AI companies are not demonstrating strong transparency today.

  3. Robust and Reliable Design: AGI should be designed with robustness and reliability in mind. This includes building systems that can handle unexpected situations and are hardened against manipulation or errors. Regular audits and updates should be part of the lifecycle of an AGI system to maintain its integrity and safety.

  4. Human Oversight and Control: Despite their advanced capabilities, AGI systems should remain under human oversight. This involves developing control mechanisms to ensure that AGI systems do not act outside their intended boundaries and can be overridden or shut down by human operators when necessary (see the sketch after this list).

  5. Privacy and Data Security: As AGI systems will process vast amounts of data, ensuring privacy and data security is paramount. We’ve yet to build a system that is resistant to cyber attacks and we’ll need to develop new ways of securing critical systems on the path to AGI.

  6. Collaboration and Global Governance: International bodies should be involved in setting standards and regulations for AGI development and deployment. We’re already seeing the challenges of developing international regulatory frameworks with the EU AI Act on the ropes.

  7. Public Engagement and Education: Educating the public about AGI and its implications is crucial to ensure an understanding of the risks and opportunities.
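
To make point 4 concrete, here’s a minimal sketch of what a human-oversight gate could look like in practice, assuming a simple allowlist of action types. The action names and the `oversight_gate` interface are illustrative assumptions, not a real framework.

```python
# A minimal sketch of a human-oversight gate: every action an AI system
# proposes is checked against declared boundaries, and anything sensitive
# is held for explicit human sign-off. All names here are hypothetical.

ALLOWED_ACTIONS = {"read_file", "summarize", "draft_email"}   # within boundaries
REQUIRES_APPROVAL = {"send_email", "execute_code"}            # human-in-the-loop

def human_approves(action: str, detail: str) -> bool:
    """Route the proposed action to a human operator for sign-off."""
    answer = input(f"Approve '{action}' ({detail})? [y/N] ")
    return answer.strip().lower() == "y"

def oversight_gate(action: str, detail: str) -> bool:
    """Return True only if the proposed action may proceed."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in REQUIRES_APPROVAL:
        return human_approves(action, detail)
    # Anything outside declared boundaries fails closed: the override
    # and shutdown decision stays with the human operator.
    return False

if __name__ == "__main__":
    for action, detail in [("summarize", "Q3 report"), ("send_email", "to the board")]:
        status = "proceed" if oversight_gate(action, detail) else "blocked"
        print(f"{action}: {status}")
```

The key design choice is default-deny: an action the system was never granted fails closed rather than open, which is the property you want as capabilities grow.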

As the performance and generality of AI expand, we need a comprehensive, robust, and scalable framework in place to manage the risk.

What else would you include?

AI News to Know

  • Improving Cancer Detection Through AI: One of the areas of AI I’m most excited about is improving health outcomes. A new AI cancer detection model trained to spot pancreatic malignancies outperformed expert radiologists.

  • Leaked GPT Prompt Instructions: A GitHub repo has been created that collects the leaked prompt instructions of over 165 custom GPTs. Prompt privacy, as we covered in the last issue of Shadow AI, is an issue, but this repo is also a great study in prompt engineering.

  • OpenAI Red Teaming Network Application Deadline Approaching: The deadline for applications to join OpenAI’s Red Team network and improve the safety of OpenAI’s models is tomorrow, December 1st!

AI on the Market

  • 2 Million New Materials?: Google DeepMind has used AI to predict the structures of more than 2 million new materials that could lead to better-performing batteries, solar panels, and computer chips. It took 20 years to make lithium-ion batteries commercially available, and AI has the potential to significantly accelerate the development of new materials.

  • AI Business Resiliency Plan: It’s AWS re:Invent and, not surprisingly, AI is the hot topic at the conference. One of the benefits of AWS Bedrock is that enterprises have the option to pick and choose their AI models and aren’t locked in to a single vendor; Bedrock has integrations with AI21 Labs, Anthropic, Cohere, Meta, and Stability AI. As enterprises increasingly adopt AI, make sure you have a business resiliency plan in place. What is your plan if your foundational model provider folds? (See the failover sketch after this list.)

  • Stability AI’s Instability: Speaking of AI business resiliency plans, what if your company has hitched its wagon to Stability AI? Investor Coatue is concerned about Stability AI’s financial position as the company explores a potential sale.
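
To make the resiliency point concrete, here’s a minimal sketch, assuming boto3 access to Bedrock, of a failover loop that walks a preference-ordered list of model IDs when a provider is unavailable. The model IDs and per-provider request bodies are illustrative; check the Bedrock documentation for the exact schema each model expects.

```python
import json

import boto3
from botocore.exceptions import ClientError

# Preference-ordered fallback chain. These model IDs are illustrative;
# list whichever Bedrock models your resiliency plan actually approves.
MODEL_CHAIN = ["anthropic.claude-v2", "ai21.j2-ultra-v1"]

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def build_body(model_id: str, prompt: str) -> str:
    """Each provider expects its own request schema; this stub is a
    placeholder, so consult the Bedrock docs for the real formats."""
    if model_id.startswith("anthropic."):
        return json.dumps({"prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
                           "max_tokens_to_sample": 300})
    return json.dumps({"prompt": prompt, "maxTokens": 300})

def invoke_with_failover(prompt: str):
    """Try each model in order and return the first successful response."""
    last_error = None
    for model_id in MODEL_CHAIN:
        try:
            response = bedrock.invoke_model(
                modelId=model_id,
                body=build_body(model_id, prompt),
                contentType="application/json",
                accept="application/json",
            )
            return model_id, json.loads(response["body"].read())
        except ClientError as err:
            last_error = err  # outage, throttling, or revoked model access
    raise RuntimeError(f"All models in the fallback chain failed: {last_error}")
```

The code matters less than the posture: decide in advance which alternate model you would cut over to, and exercise that failover path before you actually need it.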

AI Prompt of the Week

I like how the output highlights the importance of conducting an impact assessment, having a robust data management and backup strategy, and having infrastructure redundancy. It doesn’t, however, mention strategies for building resiliency in foundational models themselves.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington