🦾 Shadow AI - 22 December 2023
Arming Security And IT Leaders For The Future
Forwarded this newsletter? Sign up for Shadow AI here.
Hello,
I'm a day late on the newsletter due to holiday commitments. Hope you didn't miss it too much!
This week, we cover:
Reflections and Predictions
Google's take on Shadow AI
OpenAI's Responsible Scaling Framework
Microsoft Chatbot Election Inaccuracies
NIST Budget Woes
Apple's AI Breakthrough
Anthropic's Growing War Chest
AI Prompt of the Week - Comparing OpenAI's and Anthropic's Responsible Scaling Policies
Thanks for supporting Shadow AI this year. It means a lot, and I'm looking forward to an even bigger 2024.
Letās dive in!
Demystifying AI - Reflections and Predictions
This is the 16th and final issue of Shadow AI for 2023, and I've learned a lot in the 4 months since I launched the newsletter.
I created Shadow AI because I was fascinated by how AI is transforming the security and IT space, and I wanted to share my learnings with the community. The newsletter is time-consuming, but I've really liked that it holds me accountable to dig into AI and its implications for security and IT each week, no matter how busy life gets.
The next phase of this newsletter's growth is finding ways to differentiate it from the many other newsletters out there. I originally aimed to do that by focusing on:
Signal: I spend hours each week researching, curating, and distilling the latest AI technology and trends to help current and future security and IT leaders like you stay ahead in safeguarding digital assets, enhancing employee productivity, and enabling business growth.
Engagement: You'll hear different perspectives from expert guest contributors and receive tailored, unique content for our industry.
Brevity: Easily digestible content so each week you walk away with something new in 5 minutes or less.
I don't think I've succeeded on #2 yet, but I consciously haven't pursued it even though I have a bunch of great former colleagues to reach out to. I wanted to build the subscriber base first so their insights reach as many people as possible. If you have feedback on what's worked well and what hasn't in the infancy of this newsletter, I'd love to hear it.
'Tis the season for predictions, so I'll throw out my top 3 AI predictions for 2024…
A major AI-enabled attack will materially impact a public company, and CISOs and vendors will be scrambling to catch up.
AI regulation in the US won't pass. As a wise (and amazing) boss from the USG once said, "nothing gets done in an election year."
Open source AI models augmented by proprietary corporate data sets will continue to pick up steam as enterprises look to increase AI adoption for specific use cases that deliver game-changing efficiency.
AI News to Know
Google's take on "Shadow AI": Google published a recent article highlighting the risk of shadow AI and arguing that banning generative AI in the workplace is "security theater." Security and IT professionals are far better off developing a risk-based strategy that guides their organization's secure use of AI and protects sensitive data than pushing employee AI usage further into the shadows.
OpenAI's Responsible Scaling Framework: OpenAI laid out a new beta "Preparedness Framework" for how it plans to systematically "track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models." As part of the framework, OpenAI has defined risk thresholds aligned to minimum safety measures. For cybersecurity, the defined threshold is:
Models with a post-mitigation score of "medium" or below can be deployed. This means a model remains within the acceptable threshold for deployment even if attackers could use it to gain a marked efficiency in developing a known exploit.
I'm looking forward to seeing this built out. At the high-risk threshold, it's going to be critical to understand:
1) How "high value exploits" are defined,
2) How "hardened targets" are defined, and
3) How we can help defend smaller organizations with limited resources that may be disproportionately impacted by the risk criteria.
Microsoft Chatbot Election Inaccuracies: Wired discovered that Microsoft's Copilot, which is based on OpenAI's GPT-4, was responding to political questions "with conspiracies, misinformation, and out-of-date or incorrect information," including incorrect polling numbers, inaccurate election dates, outdated candidates, and made-up controversies about candidates.
NIST Budget Woes: President Biden's Executive Order on AI charged NIST with developing new AI standards, but there's growing concern that NIST lacks the resources needed to complete the work by July 2024. In fact, at a conference last week, NIST's Associate Director of Emerging Technologies conceded it's "an almost impossible deadline" for the agency to meet.
AI on the Market
Apple's AI Breakthrough: Apple researchers have made a breakthrough in running LLMs on iPhones and other Apple devices with limited memory through an innovative flash memory utilization technique. The development opens new possibilities for future iPhones, such as more advanced Siri capabilities, real-time language translation, and sophisticated AI assistants and chatbots.
Anthropic's Growing War Chest: Anthropic is in negotiations to raise another $750M at an $18.4B valuation. This comes after Anthropic already raised up to $2B from Google and $4B from Amazon in the past few months. While OpenAI's Preparedness Framework has received recent publicity, Anthropic was actually the first company to publish a Responsible Scaling Policy.
AI Prompt of the Week
Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.
Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.
If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!
I'm taking next week off, so see you in 2024, humans.
-Andrew Heighington