🦾 Shadow AI - 18 April 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

I hope you’re having a great week. My goal is for each issue of this newsletter to give you at least one useful insight you can’t find anywhere else in your inbox. Thank you for subscribing!

This week, I cover:

🔎 Evaluating the ROI of AI Tools in Cybersecurity: A Practical Framework for Going Beyond the Buzz

☝️AI Index Report

🔝 AI 50 List

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Evaluating the ROI of AI Tools in Cybersecurity: A Practical Framework for Going Beyond the Buzz

Two weeks ago, Shadow AI explored the ways AI can reshape and enable cybersecurity teams in the future. Nowadays, nearly every cybersecurity vendor incorporates AI into its pitch, and security practitioners must be able to distinguish genuine utility from mere hype. Assessing the return on investment (ROI) of AI tools for cybersecurity, particularly those built on large language models (LLMs), is crucial. Here’s the framework I use to evaluate the ROI of AI tools in cybersecurity.

1. Define Clear Objectives

Before diving into any ROI calculation, it’s essential to establish clear objectives for the AI tool. What specific cybersecurity challenges is it intended to address? Objectives could range from reducing the time to detect and respond to threats, to minimizing false positives, or improving compliance with regulatory requirements. Clearly defining these goals will not only guide the choice of the AI solution but also provide a benchmark against which to measure its effectiveness.

2. Consider the Full Range of Costs

To accurately assess ROI, understand the end-to-end costs and the full financial commitment required. These include:

  • Initial Investment: The upfront cost of the AI tool, whether you’re building or buying.

  • Implementation Costs: Expenses related to integrating the AI tool into existing systems, including any necessary infrastructure upgrades or software customizations.

  • Training Costs: Investment in training staff to effectively utilize the new tool.

  • Operational Costs: Ongoing expenses such as maintenance, updates, and support services.

  • Indirect Costs: Potential downtime or reduced productivity during the transition period.

3. Quantify the Benefits

Quantifying the benefits of AI in cybersecurity can be challenging, but it’s achievable by focusing on several key areas (a worked example follows this list):

  • Efficiency Gains: Measure improvements in speed and accuracy. For example, in threat detection and response, this could be calculated from the reduction in hours spent analyzing false positives or the improvement in mean time to respond to incidents.

  • Cost Savings: Assess cost reductions resulting from the AI tool, such as streamlining security operations, reducing tool sprawl, or right-sizing headcount.

  • Risk Mitigation: Evaluate how specific AI use cases will enhance risk management practices. This could be demonstrated in a number of ways: more effectively mitigating third-party risk, streamlining secure code development, or more easily complying with global regulations.
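To make the arithmetic concrete, here’s a minimal sketch of how the cost categories from step 2 and the benefit categories above roll up into a single ROI figure. Every dollar amount is a hypothetical placeholder, not a benchmark; substitute your organization’s own annualized estimates.

```python
# Illustrative ROI roll-up -- every figure below is a hypothetical
# placeholder; substitute your own annualized estimates.

# Step 2: full range of costs (USD/year)
costs = {
    "initial_investment": 120_000,  # upfront build-or-buy cost
    "implementation": 40_000,       # integration, infrastructure upgrades
    "training": 15_000,             # staff enablement
    "operational": 30_000,          # maintenance, updates, support
    "indirect": 10_000,             # transition-period productivity loss
}

# Step 3: quantified benefits (USD/year)
benefits = {
    "efficiency_gains": 1_500 * 95,  # analyst hours saved x loaded hourly rate
    "cost_savings": 60_000,          # retired overlapping tooling
    "risk_mitigation": 75_000,       # estimated reduction in expected loss
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
roi = (total_benefit - total_cost) / total_cost  # net benefit relative to cost

print(f"Total cost:    ${total_cost:,}")
print(f"Total benefit: ${total_benefit:,}")
print(f"ROI: {roi:.1%}")
```

The core formula is simply (total benefit − total cost) ÷ total cost; the real value of the exercise is forcing each category into an explicit, defensible number.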

4. Understand the Nuts and Bolts

If you’re considering a vendor solution, go deep in understanding the design of the AI tool and underlying LLM. Consider it a red flag if the vendor doesn’t transparently answer questions like:

  • What foundational model is used?

  • What type of training data does the AI model use?

  • Will company data entered into the model be used to further train it?

  • How do you check the provenance of the data in the training set?

  • How does the AI system mitigate inaccurate, biased, and underrepresented outputs? Can you provide evidence?

  • Is the data retention period configurable?

  • Are penetration tests or other types of security assessments conducted on the model?

5. Conduct a Proof of Concept

Conducting a proof of concept (POC) is a critical step in evaluating the ROI of an AI tool in cybersecurity. During the POC, you can:

  • Test the AI tool's capabilities against your specific use cases and security challenges. Assess how it compares to your existing tools and workflows.

  • Gather feedback from your security teams on the usability, effectiveness, and efficiency of the AI tool.

  • Analyze the quality and accuracy of the AI tool's outputs, such as threat detections or vulnerability assessments.

  • Measure the time and resources required to deploy, configure, and maintain the AI tool.

6. Use Metrics to Monitor ROI

If you’ve proven the value of the use case and onboarded a new AI tool, your work isn’t over. ROI is not a one-time calculation but a continuous process. Establish specific, measurable metrics and regularly review the AI tool’s performance against the initial objectives.
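As one way to operationalize that review, here’s a minimal sketch of a periodic metrics check. The metric names, baselines, and targets are hypothetical examples; tie yours back to the objectives you defined in step 1.

```python
# Illustrative metrics review -- metric names, baselines, and targets are
# hypothetical examples tied to the objectives from step 1.

metrics = [
    # (name, baseline, target, current, lower_is_better)
    ("Mean time to respond (hrs)", 8.0, 4.0, 5.5, True),
    ("False positives per week", 120, 60, 70, True),
    ("Alerts triaged per analyst-day", 25, 40, 38, False),
]

for name, baseline, target, current, lower_is_better in metrics:
    # Fraction of the distance from baseline to target achieved so far
    if lower_is_better:
        progress = (baseline - current) / (baseline - target)
    else:
        progress = (current - baseline) / (target - baseline)
    status = "on track" if progress >= 0.8 else "needs attention"
    print(f"{name}: {current} vs. target {target} -- {progress:.0%} of goal ({status})")
```

Running a check like this on a fixed cadence, monthly or quarterly, turns ROI from a procurement-time story into an operational signal.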

7. Review, Adjust, and Learn

Oftentimes, cybersecurity tools are not configured in a way that maximizes their value. By continually monitoring and measuring, you can make adjustments to maximize ROI, such as refining the tool’s configurations, expanding its deployment, or providing additional personnel training. It also yields useful lessons learned to feed back into your evaluation process for potential future AI deployments.

Conclusion

Evaluating the ROI of AI tools in cybersecurity involves a balanced approach of understanding costs, quantifying direct and strategic benefits, and continuously monitoring performance. By implementing this structured framework, cybersecurity practitioners can make informed decisions that align with their organization's security needs and financial constraints, ensuring that their investment in AI technology drives meaningful value.

AI News to Know

Artificial Intelligence Index Report: Stanford University’s Institute for Human-Centered AI released the seventh edition of its AI Index Report. There weren’t many surprising security implications in it, but it did reinforce several key security areas we’ve covered:

  1. Lack of Standardized Evaluations for AI Safety: The report underscores a significant absence of standardized methods to evaluate the safety and responsibility of AI models. This fragmentation in safety benchmarks makes it challenging to systematically assess and compare the security risks associated with different AI systems.

  2. Vulnerability to Political Deepfakes: AI's capacity to generate political deepfakes, which are increasingly influencing elections, is noted as a major security concern. The report points out the varying levels of detection success, indicating ongoing challenges in effectively mitigating this threat.

  3. Emergence of New AI Vulnerabilities: The discovery of new and complex vulnerabilities in language models, particularly through less obvious adversarial strategies, poses increased security risks. This highlights the evolving nature of AI threats that require continuous and adaptive security measures.

  4. Risks to Businesses and Global Operations: Businesses worldwide express concerns about AI-related security issues such as data privacy, system reliability, and the potential misuse of AI technology. The global perspective on these risks emphasizes the need for international cooperation and standards in AI security.

  5. Copyright Issues with AI Outputs: There is an ongoing legal and security debate over whether outputs generated by AI systems, which may include copyrighted material, constitute copyright violations. This legal uncertainty further complicates the security landscape surrounding the use of AI technologies.

  6. Transparency and Disclosure Issues: AI developers often score low on transparency, particularly concerning the disclosure of training data and methodologies. This lack of transparency can impede efforts to evaluate and ensure the security and robustness of AI systems.

  7. Rising Number of AI-related Incidents: The report cites a significant increase in AI-related incidents, including misuse cases, which underscores the growing impact of AI on security and the need for robust mechanisms to track and mitigate these risks.

AI on the Market

AI 50 List: Forbes, in partnership with Sequoia and Meritech, released its annual list of the top AI companies transforming enterprises at both the application and infrastructure levels. Sequoia highlights the impact AI is starting to have across enterprises:

“Workflow automation platform ServiceNow is achieving case avoidance rates of nearly 20% with their AI-powered Now Assist. Palo Alto Networks has reduced the cost of processing expenses with AI. Hubspot has scaled customer support with AI. And Swedish fintech Klarna recently announced over $40 million in run-rate savings by building AI into their customer support.”

💼 5 Cool AI Security Jobs of the Week 💼

Cloud Security Engineer @ Harvey (one of the AI 50) to build a secure GenAI product that helps law firms tackle complex legal challenges | San Francisco | $200k-$280k | 4+ yrs exp.

Cloud Security Risk Manager @ ASML to solve security challenges for the world’s leading chipmakers | San Diego | $115k-$191k | 4+ yrs exp.

Security Engineer @ Hebbia to be the first security engineer responsible for securing a leading AI platform that shows its work, letting users verify, trust, and collaborate with AI | New York | 10+ yrs exp.

Data and AI Security Senior Manager @ Accenture to grow Accenture's security offerings related to data and AI security | Multiple Locations | $121k-$336k | 7+ yrs exp.

VP, Information Security @ SOPHiA Genetics to secure data that is driving the democratization of data-driven medicine for the ultimate benefit of cancer and rare disease patients across the globe | Multiple Locations | 10+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington