🦾 Shadow AI - 23 May 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

This week, I cover:

🧠 Cognitive “DissonAInce”

💨 Move Fast and Break Things Deja Vu

💸 Does AI have a gross margin problem?

🔎 Foundational Model Transparency Index Update

⚔️ Foundational Model Wars

💰 Scale AI’s Series F

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Cognitive “DissonAInce”

As AI companies charge ahead, it’s important that security practitioners can navigate the cognitive dissonance between what some AI companies are saying and what they are actually doing, so they can effectively assess and manage AI risk.

Let’s break down some recent developments in this space:

Microsoft

What they are saying: In early May, Satya Nadella, the CEO of Microsoft, said the company would prioritize security above all else in response to the Department of Homeland Security’s Cyber Safety Review Board report. He pledged to double down on Secure by Design and Secure by Default.

What they are doing: Microsoft introduced its AI-optimized Copilot+ PCs, which include a new feature called “Recall.” Recall takes snapshots of a user’s screen every few seconds and stores the data locally on the device. Users will reportedly have the ability to “Opt Out” of Recall, but it should be an “Opt In” feature in keeping with Microsoft’s Secure by Default commitment.

Additionally, Microsoft’s FAQ states that “Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers.”

What’s the risk? Regular screenshots of a user’s PC create a treasure trove of sensitive information, including financial, health, and personal data.

If a bad actor gains privileged access to your device, they could have all of this information at their fingertips, as the sketch below illustrates.

Moreover, it’s unclear how sensitive software such as password managers, encrypted messaging apps, and HIPAA-compliant applications will be able to run safely on Copilot+ PCs.

To read more on the risks, check out Kevin Beaumont’s write-up on how Recall fundamentally undermines Microsoft security.
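To make that exposure concrete, below is a minimal sketch of the kind of scan that any code running under the user’s account could perform against a locally stored snapshot history. The folder path, file layout, and patterns are purely hypothetical illustrations, not Recall’s actual on-disk format.

```python
import re
from pathlib import Path

# Hypothetical location and layout: a folder of OCR'd snapshot text files.
SNAPSHOT_DIR = Path.home() / "AppData" / "Local" / "ExampleSnapshotStore"

PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}


def scan_snapshots(folder: Path) -> dict[str, int]:
    """Count how many snapshot files contain each category of sensitive data."""
    hits = {name: 0 for name in PATTERNS}
    for snapshot in folder.glob("*.txt"):
        text = snapshot.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                hits[name] += 1
    return hits


if SNAPSHOT_DIR.exists():
    print(scan_snapshots(SNAPSHOT_DIR))
```

Nothing in this sketch requires elevation or exotic tooling, which is why unmoderated, always-on local capture changes the blast radius of an ordinary endpoint compromise.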

OpenAI

What they are saying: As part of the AI Seoul Summit this week, OpenAI released a safety update that includes 10 practices it is using to safely develop and deploy frontier AI models, from red teaming to alignment and safety research to monitoring for abuse.

What they are doing: OpenAI dissolved its Superalignment AI safety team, which was focused on managing the long-term risks of AI.

Jan Leike, Head of the Superalignment team at OpenAI, recently announced his departure, and publicly noted that “over the past years, safety culture and processes have taken a back seat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI.”

What’s the risk? Over the long term, AI development could produce machines that are smarter than humans. Leike’s public comments raise concern that OpenAI is not doing enough to prioritize safety in its foundational models.

Slack

What they are saying: Slack overhauled its AI privacy principles in 2023 with text indicating that its AI models were being trained on customer data, including messages, content, and files submitted to Slack.

The change was only recently picked up by the security and privacy community, and Slack has since updated its privacy principles to clarify that it only analyzes customer data to better parse queries, help with autocomplete, and come up with emoji suggestions.

What they are doing: Slack does not offer an opt-in option for this feature. Instead, to turn it off, Slack requires the workspace owner to email its customer experience team and request an opt-out.

What’s the risk? Privacy policies are constantly evolving. It’s critical that security teams work closely with privacy teams to maintain ongoing monitoring and visibility into how vendors are using your company’s data for AI purposes and the associated risks.

AI News to Know

Move Fast and Break Things Deja Vu: Zoe Kleinman breaks down the clash between Scarlett Johansson and OpenAI and how it reminded her that the Silicon Valley approach of “seeking forgiveness rather than permission as an unofficial business plan” still rings true today.

Does AI have a gross margin problem?: CJ Gustafson dives into an important question about the long-term profitability of AI companies and their gross margin challenges, largely driven by huge datacenter costs. Gross margins at Anthropic, for example, are noticeably lower than at major software companies. This reiterates a trend we’ve highlighted in Shadow AI: purpose-built, smaller generative AI models are increasingly in focus at enterprises to more effectively manage costs and address specific business use cases.
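As a back-of-the-envelope illustration of the mechanics, the sketch below shows how heavy GPU and datacenter costs compress gross margin relative to a traditional SaaS cost structure. The figures are invented for illustration and are not any company’s actual financials.

```python
def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    """Gross margin = (revenue - cost of revenue) / revenue."""
    return (revenue - cost_of_revenue) / revenue


# Hypothetical figures per $100 of revenue, purely for illustration.
classic_saas = gross_margin(100, 20)  # mostly hosting and support costs
genai_vendor = gross_margin(100, 55)  # GPU and datacenter costs incurred on every query
print(f"SaaS: {classic_saas:.0%} | GenAI: {genai_vendor:.0%}")  # SaaS: 80% | GenAI: 45%
```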

Foundational Model Transparency Index Update: In October 2023, Shadow AI highlighted the first version of Stanford’s Foundational Model Transparency Index. This week, Stanford’s Institute for Human-Centered AI published a follow-up study assessing 100 transparency indicators. The report finds that “while there is substantial room for improvement, transparency has increased in some areas. The average score rose from 37 to 58 out of 100, with improved transparency related to risks and how companies enforce their policies to prevent such risks.” Opportunities remain for improved transparency on data practices, including copyright status and the presence of PII in the data.

AI on the Market

Foundational Model Wars: The Elo rating system has been used by Anthropic and others to benchmark AI chatbot performance. Using those ratings, @chiefaioffice has pulled together an insightful visual breakdown of how the race to build the leading foundational model for chatbots has evolved over the last year.
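For readers less familiar with Elo, here is a minimal sketch of how an arena-style Elo update works: two chatbots answer the same prompt, a human picks the winner, and both ratings shift based on how surprising that result was. The ratings and K-factor below are illustrative, not taken from any published leaderboard.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def update_elo(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Shift both ratings toward the observed head-to-head result."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))


# Illustrative ratings only: an upset by the lower-rated model moves both scores.
model_a, model_b = 1200.0, 1300.0
model_a, model_b = update_elo(model_a, model_b, a_won=True)
print(round(model_a), round(model_b))  # roughly 1220 and 1280
```

Repeating this update over thousands of human votes is what produces the leaderboard-style rankings visualized in the breakdown above.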

Scale AI’s Series F: Accel, along with Amazon, Meta, and others, participated in a $1B funding round for Scale AI that values the company at $13.8B, nearly double its last valuation. Scale AI is building the “data foundry for AI” and helps enterprises improve and fine-tune the massive datasets critical for AI models.

💼 5 Cool AI Security Jobs of the Week 💼

Security Tech Lead Manager @ Anyscale so that any developer or data scientist can scale an ML application from their laptop | San Fran | $238k - $285k

Compliance Analyst @ Weights and Biases to complete customer requests, audit tasks and security initiatives | San Fran | $89k-$125k | 1-2 yrs exp.

Technical Program Manager, Security @ CoreWeave to lead cross-functional teams to implement security measures, navigate regulatory requirements | Roseland, NJ | $145k-$165k | 3+ yrs exp.

Staff Software Engineer, Security @ Abridge to serve as an application security expert in responsibly deploying AI across health systems | Multiple Locations | 7+ yrs exp.

Sr. Staff Security Engineer @ Databricks to secure non-engineering applications and products for Databricks and its customers | San Fran | $176k-$311k | 15+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington