
šŸ¦¾ Shadow AI - 1 February 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

The number of subscribers to Shadow AI has nearly doubled in the last few weeks. Thank you for your support and for getting the word out.

This week, I cover:

šŸ”“ How Are OpenAI and Anthropic Doing at Following Secure-by-Design Principles?

šŸ‡¬šŸ‡§ UKā€™s AI Cyber Assessment

šŸ“ˆ AI Risk Factors in Annual Reports

šŸ‘€ Googleā€™s AI Powered Vulnerability Fixing

ā˜ļø Microsoft Earnings

šŸ’° Aim Security Seed Round

šŸ’¼ 5 Cool AI Security Jobs of the Week

Letā€™s dive in!

Demystifying AI - Secure by Design

The Department of Homeland Securityā€™s Cybersecurity and Infrastructure Security Agency (CISA) and 17 U.S. and international partners published the Secure-by-Design initiative last year, aiming to provide software manufacturers with guidance on how to build products securely for their customers.

How are two of the biggest AI companies doing at following secure-by-design principles in their consumer-facing products?

Letā€™s take a look at what security features we know OpenAI and Anthropicā€™s enterprise plans offer:

| Security Feature | OpenAI Enterprise | Anthropic Claude 2 |
| --- | --- | --- |
| Privacy | Customer prompts and company data are not used to train OpenAI models | Customer prompts and company data are not used to train Anthropic models |
| Data Security | Data encryption at rest and in transit | Data encryption at rest and in transit |
| Compliance | SOC 2 Type II | SOC 2 Type I |
| Administration | Admin console with bulk user management | - |
| SSO | Yes | - |
| Domain Verification | Yes | - |

Off the bat, there is one glaring omissionā€¦


Neither platform offers multi-factor authentication for its users or administrators!

The lack of MFA persists even as the number of compromised ChatGPT accounts on the dark web grows. Group-IB found over 100,000 compromised ChatGPT accounts for sale between June 2022 and March 2023 alone.

(Source: Group-IB Threat Intelligence)

Beyond the MFA concern, there are a few other areas security practitioners should understand:

  • While OpenAI offers SSO, it imposes an ā€œSSO taxā€ by making it available only on Enterprise accounts, not Teams accounts.

  • OpenAI has a SOC 2 Type II whereas Anthropic currently has a SOC 2 Type I. Anthropic is in the observation period for its Type II.

  • OpenAI offers domain restrictions whereas Anthropic does not.

  • Anthropic has a nifty Trust Center, but my request for access as a prospective customer was not granted.

  • Neither AI company makes security event logs available to its customers by default, though OpenAI offers an analytics dashboard for usage insights.

LLM security through red teaming, content filtering, federated learning, and the like is important, but some of the leading AI companies appear to be missing an opportunity to incorporate secure-by-design principles into the last mile of their product offerings.

Agree or disagree? Iā€™d love to hear from you on why. Simply reply directly to this email.

AI News to Know

  • AIā€™s Impact on Cyber: The UKā€™s National Cyber Security Centre published an assessment on how AI will impact the efficacy of cyber operations and the implications of the cyber threat over the next two years. It includes a nuanced breakdown of how AI will uplift all types of attackers at varying degrees.

    The report also highlights how:

    • AI will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.

    • This will largely be spurred by immediate attacker capability uplift in reconnaissance and social engineering which will make attacker operations more effective, efficient, and harder to detect.

    • AI also lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to an increased global ransomware threat over the next two years.

  • AI Risk Factors: AI is becoming a growing theme in public company annual report filings. The disclosures provide insights into the AI challenges enterprises are facing, including:

    • Netflix acknowledging that new technological developments like the development and use of generative artificial intelligence by their competitors could adversely impact their business.

    • Jefferies and Schlumberger acknowledging that AI tools may bring increasingly sophisticated cyber attacks.

    • ServiceNow acknowledging how AI regulations could impact their ability to efficiently and cost-effectively offer their services.

    • Adobe acknowledging that they face a risk of not keeping pace with the rapid evolution of AI to meet their customer needs. They also discuss risks related to the development and deployment of AI that could cause harm to individuals, customers, or societies.

    It also presents a big opportunity for forward leaning security professionals to help address the challenges these enterprises are facing.

  • AI Powered Vulnerability Fixing: Googleā€™s Machine Learning Security Team released an update on how they are scaling vulnerability detection and remediation by leveraging AI. One area that excites me is their experiment with an automated pipeline that intakes vulnerabilities and prompts LLMs to generate fixes and test them before selecting the best one for human review. The AI-enabled remediation approach resolved 15% of targeted bugs and saved engineers significant time. It has the potential to significantly reduce the friction between security teams and engineering teams when it comes to patching vulnerabilities.
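For readers curious what such a generate-test-review loop looks like in practice, hereā€™s a minimal sketch. This is my own illustration, not Googleā€™s actual pipeline: the LLM call and test harness are stand-in stubs you would replace with a real model API and your projectā€™s test suite.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FixCandidate:
    patch: str
    tests_passed: bool

def triage_vulnerability(
    vuln_report: str,
    source: str,
    generate_fix: Callable[[str, str], str],  # stand-in for the LLM call
    run_tests: Callable[[str], bool],         # stand-in for the test harness
    n_candidates: int = 5,
) -> List[FixCandidate]:
    """Generate several candidate patches, test each one, and return
    only the test-passing candidates for human review."""
    candidates = []
    for _ in range(n_candidates):
        patch = generate_fix(vuln_report, source)
        candidates.append(FixCandidate(patch, run_tests(patch)))
    # Only patches that survive automated testing reach a human reviewer.
    return [c for c in candidates if c.tests_passed]

# Illustrative stubs: the "LLM" swaps unsafe strcpy for strncpy, and the
# "test suite" passes any patch that no longer contains strcpy.
fake_llm = lambda report, src: src.replace("strcpy", "strncpy")
fake_tests = lambda patch: "strcpy" not in patch

survivors = triage_vulnerability(
    "CWE-120: buffer overflow", "strcpy(dst, src);",
    fake_llm, fake_tests, n_candidates=3,
)
print(len(survivors))  # 3
```

The key design choice is that automated tests act as a filter before any human sees a patch, which is what keeps the 15% auto-fix rate from creating extra review burden for engineers.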

AI on the Market

  • Microsoft Earnings: Microsoftā€™s Cloud AI and GitHub Copilot offerings are growing quickly. Microsoft announced it has over 53,000 Azure AI customers, one-third of whom are new. It also has 1.3 million paid GitHub Copilot subscribers and more than 50,000 organizations using GitHub Copilot Business. However, uptake of Microsoft 365 Copilot is slower given its focus on drafting and summarizing.

  • Aim Security Seed Round: Aim Security raised a $10M seed round for its holistic AI SaaS platform to secure public AI apps, enterprise AI apps, and self-built AI apps.

šŸ’¼ 5 Cool AI Jobs of the Week šŸ’¼

Principal Offensive Security Engineer - AI Red Team @ Microsoft to Secure Microsoftā€™s Biggest AI Systems | Remote | 7+ yrs exp.

AppSec AI Security @ Amazon Stores to Ensure the Security, Integrity and Ethical Use of AI | Multiple Locations | $136k-$247k | 3+ yrs exp.

Director AI/ML Platform (Prisma Cloud) @ Palo Alto Networks to Deliver AI Solutions to Protect Applications from Code to Cloud | Santa Clara, CA | 8+ yrs exp.

Research Security Manager @ Google DeepMind to Protect Googleā€™s Most Sensitive AI Research Assets | NYC, hybrid | $96k-$147k | No minimum yrs

Sr. Director, Product Marketing - Copilot for Security @ Microsoft to Help Security Teams Understand the Value of Copilot | Remote | $152k-$292k | 8+ yrs exp.

Reply directly to this email with any feedback. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington