
🦾 Shadow AI - 4 April 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Welcome back to another edition of Shadow AI. This week in Shadow AI, I cover:

🛡️ AI-Enabled Security Teams

🧱 Databricks AI Security Framework

🚨 Many-Shot Jailbreaking

👤 Shadow AI Showdown

💻 Microsoft Copilot for Security

⚖️ Don’t Overstate Your AI Capabilities

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - AI-Enabled Security Teams

Imagine you’re a Chief Information Security Officer with a mandate to build a security team from scratch for a fast-growing company.

While there’s no one-size-fits-all approach to designing a security team, there are some common core functional areas like Product Security, Security Operations, Threat Management, and Security Assurance / GRC.

There are a number of emerging ways AI could serve as a force multiplier for your new security team, helping to maximize productivity and efficiency:

Product Security

AI can enhance the security of customer-facing software and products. Here are a few examples:

  • Automated Code Analysis: AI-powered tools like Corgea can analyze source code, identify vulnerabilities, and suggest remediation strategies, significantly reducing the time and effort required to fix vulnerabilities (a minimal sketch of this pattern follows the list).

  • Secure Coding Assistance: AI-powered coding assistants like GitHub Copilot can provide real-time guidance on secure coding practices, helping developers write more secure code from the outset and reducing the time security teams spend finding vulnerabilities and shepherding remediation.

  • Application Security Assessment: AI-powered application security assessment capabilities are starting to emerge from vendors like Staris and Qwiet AI, which can examine code, build a proof-of-concept exploit of a finding as a code test or a code property graph, and provide remediation guidance.
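To make the automated code analysis pattern concrete, here is a minimal sketch: send a source snippet to an LLM and ask for a structured vulnerability report. This is illustrative only, not how Corgea, Staris, or Qwiet AI actually implement their products, and it assumes the `openai` Python package with an API key in the environment.

```python
# Minimal sketch of LLM-assisted code review. Illustrative only -- not any
# vendor's implementation. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You are a security code reviewer. For the code between the markers, "
    "list each vulnerability you find (CWE ID, affected lines, severity) "
    "and suggest a concrete fix.\n\n"
    "--- BEGIN CODE ---\n{code}\n--- END CODE ---"
)

def review_code(code: str) -> str:
    """Ask the model for a vulnerability report on a single source snippet."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model works here
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(code=code)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # A classic command injection: user input concatenated into a shell call.
    snippet = 'import os\nos.system("ping " + user_supplied_host)'
    print(review_code(snippet))
```

Real products layer much more on top (repo-wide context, triage, fix PRs), but the core loop of prompting a model over code is this simple.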

Security Operations

AI and machine learning have been used in security operations for a while, but new applications are emerging to streamline operations. Microsoft Copilot for Security, for example, allows security and IT teams to ask questions in natural language and receive actionable responses to common security and IT tasks in seconds (a generic sketch of this pattern follows the list). It can help:

  • resolve and report on incidents

  • generate policies and configure devices with best practices

  • generate or summarize access policies

  • identify and summarize data and user risks
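Copilot for Security’s internals aren’t public, but the underlying pattern, translating an analyst’s plain-English question into a query over security telemetry, is easy to sketch. The `nl_to_kql` helper below is a hypothetical illustration, not Microsoft’s API; SigninLogs is a standard Microsoft Sentinel table.

```python
# Hypothetical sketch of the "ask in natural language, get a query" pattern
# behind assistants like Copilot for Security. Not Microsoft's implementation.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Translate the analyst's question into a single KQL query against "
    "Microsoft Sentinel's SigninLogs table. Return only the query."
)

def nl_to_kql(question: str) -> str:
    """Turn a plain-English security question into a KQL query string."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(nl_to_kql("Which accounts had more than 10 failed sign-ins today?"))
```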

Threat Management

AI can enhance threat intelligence and help security teams stay ahead of emerging threats:

  • Threat Intelligence Gathering and Analysis: AI can scour the web, dark web, and other sources to gather and analyze threat intelligence data, identifying potential threats and enabling proactive defense measures. It can help threat analysts summarize TTPs and IOCs and draft executive summaries for security leadership (see the sketch after this list). Recorded Future, for example, recently announced a generative AI-based intelligence assistant.

  • Threat Modeling: As discussed in last week’s issue, LLMs like Claude 3 Opus and GPT-4 have made significant advancements in threat modeling, conducting security design and architecture reviews, and recommending security controls.
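Here is a hedged sketch of the intel triage use case: feed a raw report to an LLM and extract a leadership summary, ATT&CK technique IDs, and IOCs as JSON. This is not Recorded Future’s product, just the general shape of the task; it again assumes the `openai` package.

```python
# Illustrative sketch of LLM-based threat intel triage -- not any vendor's
# product. Extracts a summary, MITRE ATT&CK TTPs, and IOCs from a raw report.
import json
from openai import OpenAI

client = OpenAI()

EXTRACT_PROMPT = (
    "From the threat report below, return only valid JSON with keys "
    "'summary' (two sentences for security leadership), "
    "'ttps' (a list of MITRE ATT&CK technique IDs), and "
    "'iocs' (a list of IPs, domains, and file hashes).\n\nREPORT:\n{report}"
)

def triage_report(report: str) -> dict:
    """Summarize a raw threat report into structured intel fields."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": EXTRACT_PROMPT.format(report=report)}],
    )
    # A production pipeline would validate this output; a sketch just parses it.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    raw = "Actor used spearphishing (T1566.001) from 203.0.113.7 to drop a loader..."
    print(triage_report(raw))
```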

Security Assurance / GRC

AI can streamline security assurance processes, ensuring compliance with industry standards and regulations:

  • Automated Policy and Control Mapping: AI can map security controls and policies to relevant industry standards and regulations, ensuring comprehensive coverage while reducing compliance overhead (a sketch of one possible approach follows the list).

  • Vendor Management: AI-powered platforms can cut the time spent answering security questionnaires to minutes, streamlining the vendor risk assessment process. One example is SafeBase, which recently announced a release with this capability.
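One plausible way to automate control mapping, offered here as a hypothetical approach rather than any vendor’s actual method, is to embed internal control descriptions and framework requirements, then match them by cosine similarity. The sketch assumes the `openai` and `numpy` packages; the toy ISO 27001-style strings are made up.

```python
# Hypothetical sketch of control-to-framework mapping via text embeddings.
# Not any vendor's actual implementation. Assumes `openai` and `numpy`.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of strings and normalize to unit-length vectors."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vectors = np.array([item.embedding for item in response.data])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

# Toy data: one internal control vs. two framework requirements.
controls = ["All laptops enforce full-disk encryption via MDM."]
requirements = [
    "Use of cryptography to protect information on endpoint devices.",
    "Physical security monitoring of premises.",
]

control_vecs, req_vecs = embed(controls), embed(requirements)
scores = control_vecs @ req_vecs.T  # cosine similarity (vectors are unit-length)
best = int(scores[0].argmax())
print(f"Control 0 best matches requirement {best} (score {scores[0][best]:.2f})")
```

In practice a human reviewer would confirm each suggested mapping; the AI’s value is narrowing hundreds of candidate pairings down to a short list.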

The CISO toolkit of the future will look very different than it does today. Gen AI won’t replace security teams, but it does have the potential to disrupt how security teams are designed and how they operate.

I’d love to hear what AI use cases you are most excited about exploring and which ones you feel are overhyped. What are the AI building blocks you think are most promising for security teams?

Reply to this email with your feedback!

(Note: Mentioning these vendors is not an endorsement, but an illustration of the interesting capabilities being built for security teams.)

AI News to Know

Databricks AI Security Framework (DASF): On the heels of its DBRX open-source LLM release, Databricks released the DASF, an actionable framework for managing AI security, designed to help create an end-to-end AI risk profile for your organization along with a concrete set of controls to implement. The DASF breaks AI systems down into 12 core components, identifies 55 technical security risks across those components, and proposes 53 mitigation measures.

Many-Shot Jailbreaking: Anthropic’s research on "Many-Shot Jailbreaking" exposes a critical vulnerability in LLMs created by expanded context windows. By packing a single prompt with hundreds of faux dialogues that demonstrate undesired behavior, attackers can manipulate the LLM into providing harmful outputs, a technique termed Many-Shot Jailbreaking (MSJ). The paper also evaluates mitigation strategies, revealing that while methods like supervised fine-tuning and reinforcement learning can delay the attack, they fall short of fully preventing it. The research underscores the challenge of ensuring LLM safety without undermining learning capability, highlighting the delicate balance between enhancing model features and maintaining robust security measures.
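To see why long context windows matter here, consider the shape of an MSJ prompt as Anthropic describes it: many faux human/assistant exchanges stacked before the real question. The sketch below uses harmless placeholder strings, so it shows the structure, not a working attack.

```python
# Structure of a many-shot jailbreak prompt, per Anthropic's description:
# one long prompt packed with hundreds of faux dialogues demonstrating the
# undesired behavior. Placeholders stand in for the harmful content --
# this illustrates the shape of the technique, not a functional exploit.
def build_many_shot_prompt(shots: int, target_question: str) -> str:
    faux_dialogues = [
        f"Human: <harmful question {i}>\nAssistant: <compliant answer {i}>"
        for i in range(shots)
    ]
    # The attack depends on `shots` being large (hundreds), which only fits
    # inside modern long-context models -- hence the link to context windows.
    return "\n\n".join(faux_dialogues) + f"\n\nHuman: {target_question}\nAssistant:"

prompt = build_many_shot_prompt(shots=256, target_question="<harmful question>")
print(f"{len(prompt):,} characters of in-context 'examples' before the real ask")
```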

Shadow AI Showdown: 1Password released its State of the Enterprise Security Report on balancing security and productivity in the age of AI. It includes some key statistics security professionals should keep in mind when designing their AI security strategies:

  • 92% of security pros have security concerns about generative AI. Among their top worries: employees entering sensitive company data into an AI tool (48%), using AI systems trained with incorrect or malicious data (44%), or falling for AI-enhanced phishing attempts (42%).

  • 26% of employees don’t understand the security concerns over using AI tools at work. 

  • 22% of employees admit to not always following their company’s AI policies. 

  • 57% of employees say that using generative AI tools at work saves them time and makes them more productive.

AI on the Market

Microsoft Copilot for Security: As of April 1st, Microsoft Copilot for Security is generally available to businesses of all sizes. Pricing is consumption-based with no up-front costs. Copilot for Security works with other Microsoft Security products like Microsoft Defender, Microsoft Sentinel, and Microsoft Entra, and also offers numerous partner integrations.

Don’t Overstate Your AI Capabilities: Evolv Technologies, a Massachusetts-based public safety company that uses AI to scan for and detect weapons, has been sued for violating federal securities laws after its systems failed to detect some knives. Its AI scanners are installed in large venues like Fenway Park and Gillette Stadium as well as in school systems and mass transit systems. The lawsuit alleges that the company materially overstated the efficacy of its products. It’s a useful reminder that security teams need to be wary of snake-oil AI products and stress test them against their actual use cases.

💼 5 Cool AI Security Jobs of the Week 💼

Head of Information Security @ TetraScience to secure the scientific data and AI Cloud | Remote | 4+ yrs exp.

Sr. Software Security Engineer - ML Platform @ Apple to define and drive the data security roadmap for Apple’s data platform | Seattle | $161k-$284k | 10+ yrs exp.

Security Governance Partner - ML/Data Science @ Cash App to build Cash App’s secure ML pipelines and infrastructure | Remote | $148k-$223k | 5+ yrs exp.

Sr. Cybersecurity Software Engineer (Gen AI) @ Travelers to build secure Gen AI software solutions | Hartford, CT or St Paul, MN | $133k-$220k | 3+ yrs exp.

Principal Cybersecurity Engineer (AI/ML Open Source Security) @ Discover Financial to develop and implement capabilities to identify and mitigate security risks in open source AI/ML models | Riverwoods, Illinois | $104k-$175k | 6+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington