• Shadow AI
  • Posts
  • 🦾 Shadow AI - 26 October 2023

🦾 Shadow AI - 26 October 2023

Arming Security And IT Leaders For The Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

We are two months into the Shadow AI newsletter and I really appreciate your support. We have readers from Fortune 25 banks, Fortune 100 insurance and technology companies, venture capital firms, cybersecurity vendors, and many others.

With such diverse backgrounds, I’d love to hear what your enjoying in the newsletter and what else you’d like to see.

If you’re finding value in this, please spread the word to others who may enjoy it too.

Let’s dive into this week’s issue!

Demystifying AI - Generative AI for Defenders

I’ve spent a lot of time over these past 8 weeks writing about some of the emerging challenges Generative AI presents for security and IT professionals, but there are also some emerging advantages for security and IT professionals. With Microsoft’s announcement of Security Copilot’s Early Access Program and contention that Security Copilot can “save up to 40 percent of time on core security operations tasks,” let’s explore 5 key ways Generative AI can boost the productivity of defenders and reduce operational toil.

  1. Streamlined Incident Management: Generative AI can assist in automating the detection and response to security incidents, reducing the time and effort required to investigate and mitigate threats. It can produce incident reports that include relevant information, such as the root cause, impact, and recommended remediation steps. It can also analyze historical incident data and best practices to recommend the most effective remediation and containment steps for each incident.

  2. Threat Hunting: Generative AI can be used to automate the process of gathering and analyzing data from a variety of sources, such as security logs, network traffic, and threat intelligence feeds. It can generate a list of the most likely threats to an organization, based on the organization's industry, assets, and threat intelligence. The analyst can then use the list to focus threat hunting efforts on the most critical threats.

  3. Malware analysis: Generative AI can automate the analysis of malware samples. By generating dynamic analysis reports and identifying behavioral patterns, it can help defenders understand malware's functionality without manual reverse engineering.

  4. Secure Software Development: Generative AI can assist in writing secure code by producing secure code templates that developers can use to write code more easily and prevent common vulnerabilities. A generative AI model, for example, could be used to generate a secure code template for a financial services application. The template would include all of the necessary steps to comply with financial industry regulations, such as the Payment Card Industry Data Security Standard.

  5. Secure System Configuration: Generative AI can ensure configurations are automatically created and maintained. A security administrator could use a generative AI-powered tool to generate a secure system configuration template for a new web application. The template could include recommendations for firewall rules, access control settings, data security, and other security controls. The administrator could then use the template to automatically configure the web application's servers.

AI/ML in security is not new, but Generative AI offers some important emerging use cases to help protect enterprises and enhance the productivity of their defenders.

What other blue team use cases are you excited about?

AI News to Know

  • AI Executive Order: The White House is expected to release its planned Executive Order on Monday. It is expected to require “advanced AI models to undergo assessments before they can be used by federal workers” and make it easier for highly skilled workers to immigrate to the United States. The Wall Street Journal reports it will also focus on developing AI as a national security tool. We’ll be watching if it takes the non-binding safeguards from the AI Bill of Rights and makes them requirements for the Executive Branch.

  • 2nd AI Insight Forum: Senator Schumer’s second AI Insight Forum was on Tuesday and it focused on AI innovation. Tech Policy Press reports that the discussion covered topics on transformational and sustainable innovation, government research and development funding that incentivizes equitable and responsible AI innovation, and balancing national security concerns with open source AI models.

  • AI-enabled Phishing - Good but not Great (yet): IBM researchers published the results of an experiment they ran across 1,600 employees at a global healthcare company. Half the employees received a phishing email crafted by IBM’s X-Force team which took about 16 hours to develop. The other half received a phishing email crafted by ChatGPT which took 5 minutes to develop. 14% of employees clicked on the malicious link in the email crafted by IBM whereas 11% of employees clicked on the malicious link in the email crafted by ChatGPT.

An 11% click rate on an email that took 5 minutes to develop is still a successful campaign and more than double the industry average click rate of 5%.

ChatGPT Generated Phishing Email in IBM X-Force Study

AI on the Market

  • AI Shared Responsibility Model: Similar to cloud deployments, companies have choices on how to implement AI capabilities in their organizations. Microsoft summarizes the shared responsibility model for IaaS, PaaS, and SaaS across three AI layers - AI usage, AI application, and AI platform.

    • AI Usage Security: Implement similar controls to any computer system, including identity and access controls, device security, monitoring, data protection. Training users is critical as their behavior can influence the output of the models.

    • AI Application Security: Build an application safety system that inspects the prompting content being sent to the AI model and any other integrations.

    • AI Platform: Protect against harmful inputs and outputs to reduce the likelihood that harmful content may be generated.

 

  • Large Action Models: Silvio Savarese with Salesforce Research writes about Large Action Models, “a more active, autonomous variation on LLMs that don’t merely generate content like text or images but accomplish entire tasks and even participate in workflows, whether alongside people or on their own.” He argues that multiple smaller, purpose-built LLMs trained on high fidelity proprietary datasets can be orchestrated to do hyper-specialized steps in a larger process. With these types of actionable and orchestrated LLMs, security of each model and across each model will be critical.

AI Prompt of the Week

ChatGPT hasn’t sold me on any of these, but my son is being a Ninja for Halloween so maybe I’ll be a Cyber Ninja.

Have a favorite, funny, or doomsday security or IT prompt to share with the Shadow AI community? Submit it here and you could make the newsletter.

Reply directly to this email with any feedback, including your thoughts on the AI Prompt of the Week. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington