
🦾 Shadow AI - 22 December 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

I'm a day late on the newsletter due to holiday commitments. Hope you didn't miss it too much!

This week, we cover:

💭 Reflections and Predictions

🦾 Google's take on Shadow AI

📝 OpenAI's Responsible Scaling Framework

🚨 Microsoft Chatbot Election Inaccuracies

🚨🚨 NIST Budget Woes

📱 Apple's AI Breakthrough

🤑 Anthropic's Growing War Chest

👉 AI Prompt of the Week - Comparing OpenAI's and Anthropic's Responsible Scaling Policies

Thanks for supporting Shadow AI this year. It means a lot, and I'm looking forward to an even bigger 2024.

Let's dive in!

Demystifying AI - Reflections and Predictions

This is the 16th and final issue of Shadow AI for 2023, and I've learned a lot over the past 4 months since I launched the newsletter.

I created Shadow AI because I was fascinated by how AI is transforming the security and IT space, and I wanted to share my learnings with the community. The newsletter is time-consuming, but I've really appreciated that it holds me accountable to dig into AI and its implications for security and IT each week, no matter how busy life gets.

The next phase of this newsletter's growth is finding ways to differentiate it from the many other newsletters out there. I originally aimed to do that by focusing on:

  1. Signal: I spend hours each week researching, curating, and distilling the latest AI technology and trends to help current and future security and IT leaders like you stay ahead in safeguarding digital assets, enhancing employee productivity, and enabling business growth.

  2. Engagement: You'll hear different perspectives from expert guest contributors and receive tailored, unique content for our industry.

  3. Brevity: Easily digestible content so each week you walk away with something new in 5 minutes or less.

I don't think I've succeeded on #2 yet, but I consciously haven't pursued it, even though I have a bunch of great former colleagues to reach out to. I wanted to build the subscriber base first so their learnings reach as many people as possible. If you have feedback on what's worked well and what hasn't in the infancy of this newsletter, I'd love to hear it.

'Tis the season for predictions, so I'll throw out my top 3 AI predictions for 2024…

  1. A major AI-enabled attack will materially impact a public company, and CISOs and vendors will be scrambling to catch up.

  2. AI regulation in the US won't pass. As a wise (and amazing) boss from the USG once said, "nothing gets done in an election year."

  3. Open-source AI models augmented by proprietary corporate data sets will continue to pick up steam as enterprises look to increase AI adoption for specific use cases that deliver game-changing efficiency.

AI News to Know

  • Google's take on 'Shadow AI': Google published a recent article highlighting the risk of shadow AI and arguing that banning generative AI in the workplace is "security theater." Security and IT professionals are far better off developing a risk-based strategy that guides their organization's secure use of AI and protects sensitive data than pushing employee AI usage further into the shadows.

  • OpenAI's Responsible Scaling Framework: OpenAI laid out a new beta "Preparedness Framework" for how they plan to systematically "track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models." As part of the framework, OpenAI has defined risk thresholds aligned to minimum safety measures. For cybersecurity, the defined thresholds are:

    Models with a post-mitigation score of "medium" or below can be deployed. In other words, a model that gives attackers a marked efficiency gain in developing known exploits would still fall within the acceptable threshold for deployment.

    I'm looking forward to seeing this built out. At the high-risk threshold, it's going to be critical to understand:
    1) How "high value exploits" are defined,
    2) How "hardened targets" are defined, and
    3) How we can help defend smaller organizations with limited resources that may be disproportionately impacted by the risk criteria.
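    The deployment rule above can be pictured as a simple gate: score each tracked risk category after mitigations, and deploy only if nothing exceeds "medium." This is a minimal sketch for illustration; the category names and the ordered scale below are my assumptions, not the framework's actual schema.

```python
from enum import IntEnum

class Risk(IntEnum):
    # Illustrative ordered scale; not OpenAI's actual taxonomy.
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Per the framework summary above: "medium" or below is deployable.
DEPLOY_THRESHOLD = Risk.MEDIUM

def can_deploy(post_mitigation_scores: dict) -> bool:
    """Deployable only if every tracked category scores at or below the threshold."""
    return all(score <= DEPLOY_THRESHOLD
               for score in post_mitigation_scores.values())

# Hypothetical scorecards:
print(can_deploy({"cybersecurity": Risk.MEDIUM, "persuasion": Risk.LOW}))   # True
print(can_deploy({"cybersecurity": Risk.HIGH, "persuasion": Risk.LOW}))     # False
```

    Note that under this gate, a "medium" cybersecurity score — marked attacker efficiency on known exploits — still passes, which is exactly the tension flagged above.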

  • Microsoft Chatbot Election Inaccuracies: Wired discovered that Microsoft's Copilot, which is based on OpenAI's GPT-4, was responding to political questions "with conspiracies, misinformation, and out-of-date or incorrect information," including incorrect polling numbers, inaccurate election dates, outdated candidates, and made-up controversies about candidates.

  • NIST Budget Woes: President Biden's Executive Order on AI charged NIST with developing new AI standards, but there's growing concern that NIST lacks the resources needed to complete the work by July 2024. In fact, at a conference last week, NIST's Associate Director of Emerging Technologies confessed it's "an almost impossible deadline" for the agency to meet.

AI on the Market

  • Apple's AI Breakthrough: Apple researchers have made a breakthrough in running LLMs on iPhones and other Apple devices with limited memory through an innovative flash-memory utilization technique. The development opens new possibilities for future iPhones, such as more advanced Siri capabilities, real-time language translation, and sophisticated AI assistants and chatbots.

  • Anthropic's Growing War Chest: Anthropic is in negotiations to raise another $750M at an $18.4B valuation. This comes after Anthropic raised up to $2B from Google and $4B from Amazon in the past few months. While OpenAI's Preparedness Framework has received recent publicity, Anthropic was actually the first company to publish a Responsible Scaling Policy.

AI Prompt of the Week

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

I'm taking next week off, so see you in 2024, humans.

-Andrew Heighington