šŸ¦¾ Shadow AI - 16 May 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

This was a huge week for AI announcements, spanning major product launches, Congressional activity, and personnel moves. Read on for coverage of:

šŸ§—ā€ā™‚ļø AIā€™s Potential to Tackle Core CISO Challenges

šŸ‘‰šŸ‘ˆ OpenAI and Googleā€™s Dueling AI Announcements

šŸ›ļø The Senateā€™s AI Policy Roadmap

šŸ‘‹ Major Departures at OpenAI

šŸ‘‹ Major Hire at Anthropic

šŸ’¼ 5 Cool AI Security Jobs of the Week

Letā€™s dive in!

Demystifying AI - AIā€™s Potential to Tackle Core CISO Challenges

I wasnā€™t able to attend RSA this year, but I have been closely monitoring the biggest AI takeaways from the week. One keynote recap that really resonated with me was by Caleb Sima, Chair of the Cloud Security Alliance AI Safety Initiative. He spoke about why core security challenges persist despite the market being flooded with vendors and technologies. His hypothesis is that existing security products suffer from a fundamental flaw: they are unable to reach the root of the three core problems CISOs face - context, coverage, and communication.

Caleb argues that large language models (LLMs) are well positioned to address these challenges with advancements in context windows, automated fine-tuning, specialization, and localization.

Core Challenges and AI Solutions:

  1. Context:

    • Challenge: Determining the severity of vulnerabilities, like public AWS S3 buckets, and effectively managing risk requires comprehensive context.

    • AI Solution: AI can analyze and provide context, such as tracking changes in data exposure and correlating them with potential risks to ensure accurate vulnerability assessments.

  2. Coverage:

    • Challenge: Breaches often result from coverage gaps, such as unmonitored exceptions to security policies or incomplete control implementation. Change Healthcare's lack of MFA on its remote access portal, despite a policy requiring it, is a prime example.

    • AI Solution: AI can provide extensive coverage, monitoring all access rights, logs, and communications to identify security gaps and anomalies in real-time.

  3. Communication:

    • Challenge: Inefficient communication and reporting processes consume significant time and resources.

    • AI Solution: AI can streamline communication by integrating data from multiple security management systems, providing concise and comprehensive reports for the CISO and the executive team.

Caleb argues that AI's ability to automate and scale coverage, context, and communication will eventually transform cybersecurity workflows, enhancing situational awareness and decision-making while reducing manual labor.
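
To make the ā€œcontextā€ idea concrete, here is a minimal sketch of what an LLM-assisted triage step could look like. Nothing below comes from Caleb's talk: the finding fields, asset context, prompt wording, and model name are all illustrative assumptions.

```python
# Minimal sketch: enrich a raw cloud finding with business context before a severity call.
# The finding fields, asset context, prompt wording, and model name are illustrative assumptions.
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

finding = {
    "type": "public_s3_bucket",
    "bucket": "acme-marketing-assets",
    "region": "us-east-1",
}

# Context the security team usually has, but rarely joins with the raw finding:
asset_context = {
    "data_classification": "public marketing images",
    "owner_team": "growth",
    "recent_change": "bucket policy opened last week via an infrastructure-as-code change",
}

prompt = (
    "You are assisting a security analyst. Given the finding and asset context, "
    "rate the risk (low/medium/high) and explain the key factors in two sentences.\n"
    f"Finding: {json.dumps(finding)}\n"
    f"Context: {json.dumps(asset_context)}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice the asset context would come from a CMDB or cloud inventory rather than a hard-coded dict; the takeaway is that the model only helps with the ā€œcontextā€ problem when it is actually handed that context.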

So Whatā€™s the Roadmap for CISOs?

1. Embrace Generative AI and LLMs:

  • Identify Use Cases: Determine specific areas where AI can provide the most value, such as vulnerability management, incident response, threat intelligence, and policy/regulatory management.

  • Invest in Technology: Allocate budget for acquiring, piloting, and implementing AI technologies.

2. Integrate AI into your Cyber Strategy:

  • Set Clear Objectives: Define the goals and expected outcomes of integrating AI into your cybersecurity framework.

  • Create an AI Roadmap: Outline the steps for AI adoption, including pilot projects, scaling efforts, and continuous improvement initiatives.

3. Harness your Security Data:

  • Integrate Data Sources: Use AI to correlate data from various sources, such as cloud security posture management, data security posture management, identity and access management, network, and email (a small sketch of this correlation step follows the 7-step plan).

4. Foster a Culture of Continuous Learning and Adaptation:

  • Training and Education: Invest in training programs to upskill security teams on AI technologies and their applications.

5. Implement a Pilot Program:

  • Select a Pilot Project: Choose a specific area to implement AI on a small scale.

  • Evaluate and Iterate: Assess the performance of the AI implementation, gather feedback, and make necessary adjustments before scaling.

6. Establish Governance and Ethical Guidelines:

  • Develop Policies: Create policies and guidelines for the ethical use of AI in cybersecurity, ensuring compliance with regulations and standards.

  • Monitor and Audit: Regularly monitor and audit AI systems to ensure they operate within defined ethical boundaries and maintain transparency.

7. Measure Success and ROI:

  • Define Metrics: Establish key performance indicators (KPIs) to measure the effectiveness of AI implementations in improving security outcomes.

  • Analyze ROI: Continuously assess the return on investment (ROI) of AI technologies to ensure they deliver value and justify further investment.

By following this 7-step plan, CISOs can start addressing the root causes of the three core challenges - context, coverage, and communication - that existing vendors struggle to solve.
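
To ground step 3 (Harness your Security Data), here is a minimal sketch of the correlation idea, assuming hypothetical CSPM and IAM exports. The file names, schemas, and fields are invented for illustration and do not reflect any vendor's format.

```python
# Minimal sketch of step 3: join findings from two hypothetical exports
# (CSPM and IAM) by asset so a single, correlated view can be handed to an LLM.
# File names, schemas, and fields are illustrative assumptions.
import json
from collections import defaultdict

with open("cspm_findings.json") as f:
    cspm = json.load(f)   # e.g. [{"asset": "s3://acme-logs", "issue": "public read"}]
with open("iam_findings.json") as f:
    iam = json.load(f)    # e.g. [{"asset": "s3://acme-logs", "issue": "wildcard role access"}]

by_asset = defaultdict(list)
for source, findings in (("cspm", cspm), ("iam", iam)):
    for item in findings:
        by_asset[item["asset"]].append(f'{source}: {item["issue"]}')

# Assets flagged by more than one source are usually the ones worth escalating;
# this correlated view is what you would feed to the model for triage.
correlated = {asset: issues for asset, issues in by_asset.items() if len(issues) > 1}
print(json.dumps(correlated, indent=2))
```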

AI News to Know

Dueling AI Announcements: OpenAI announced a faster GPT-4o model thatā€™s free for all ChatGPT users and can reason across audio, vision, and text in real time. The real-time translation capabilities have significant implications for security practitioners: they will let threat actors engage in vishing (phishing via voice calls) and chat-based attacks in the targetā€™s preferred language.

At the same time, Google made a series of big announcements, including integrating Generative AI in search and a scam call detector using their on-device Gemini Nano AI model. The scam call detector will look for fraudulent language and other conversation patterns typically associated with scams, such as gift card requests and urgent money transfers. Users will receive real-time alerts during calls where these red flags are present and have the option to end the call immediately.
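
Google hasnā€™t published implementation details, but the underlying idea - scanning a live transcript for known scam patterns and alerting once enough red flags accumulate - can be sketched in a few lines. The indicator phrases and threshold below are purely illustrative, not Gemini Nano's actual logic.

```python
# Toy sketch of scam-pattern flagging on a rolling call transcript.
# The indicator phrases and alert threshold are illustrative assumptions,
# not Google's actual detection logic.
SCAM_INDICATORS = [
    "gift card",
    "wire the money today",
    "keep this call a secret",
    "your account has been compromised",
    "verification code",
]

def scam_score(transcript_window: str) -> int:
    """Count how many known scam patterns appear in the recent transcript."""
    text = transcript_window.lower()
    return sum(1 for phrase in SCAM_INDICATORS if phrase in text)

def should_alert(transcript_window: str, threshold: int = 2) -> bool:
    """Raise a real-time alert once enough red flags accumulate."""
    return scam_score(transcript_window) >= threshold

# Example: two indicators present, so the user would be alerted mid-call.
window = "Your account has been compromised. Buy a gift card and read me the code."
print(should_alert(window))  # True
```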

AI Policy Roadmap: Shadow AI has been following the Senateā€™s AI Insight Forums since their first meeting in September 2023, and yesterday the bipartisan working group released its findings. The report identifies areas of consensus that merit bipartisan consideration to harness the full potential of AI while minimizing short-term and long-term AI risks. A few security- and privacy-related takeaways stood out:

  • Privacy: The working group supports a comprehensive federal data privacy law that addresses issues related to data minimization, data security, consumer data rights, consent and disclosure, and data brokers.

  • Security: The working group advocates for the development and standardization of risk testing and evaluation methodologies, including red-teaming, commercial AI auditing standards, and bug bounty programs.

  • AI ISAC: The working group suggests exploring whether thereā€™s a need for an AI-focused Information Sharing and Analysis Center to serve as an interface between commercial AI entities and the United States Government.

I remain skeptical any major legislation will pass in an election year, but hopefully the AI Insight Forums and this roadmap will build a bipartisan coalition motivated to get something done after the election.

AI on the Market

Major Departures at OpenAI: Ilya Sutskever, OpenAI co-founder and chief scientist who architected the attempted ouster of Sam Altman, announced he was leaving the company. This felt inevitable given the drama in late 2023. Interestingly, and a bit undercovered, Jan Leike, who was running the ā€œSuperalignmentā€ team Sutskever established to ā€œsteer and controlā€ advanced AI, has also resigned. His responsibilities are going to be assumed by John Schulman, another OpenAI co-founder who supported Altman during the failed coup.

Major Hire at Anthropic: Anthropic announced that Mike Krieger, the former CTO and co-founder of Instagram, is joining them as Chief Product Officer. With AI product releases picking up steam, Jason Clinton and the security team at Anthropic will need to be closely aligned with Mike to ensure new product features are threat modeled and designed securely. I hope security features will also make the product roadmap, including offering MFA for all Anthropic users šŸ™‚

šŸ’¼ 5 Cool AI Security Jobs of the Week šŸ’¼

Account Executive @ Series A AI Security Company to serve as their first AE hire | San Fran | 8+ yrs exp.

AI Security Researcher @ Robust Intelligence to help build the future of secure, trustworthy AI | Remote | $160k-$220k | 3+ yrs exp.

Principal Software Engineer @ Microsoft to ensure the safety of all Microsoftā€™s generative AI systems | Redmond, Washington | $133k-$256k | 5+ yrs exp.

Senior AI and Information Security Analyst @ RAND to inform government and industry AI Policy | Multiple Locations | $137k-$209k | 8+ yrs exp.

Manager, Fraud and Risk @ OpenAI to lead a team to ensure safety and compliance on OpenAIā€™s platforms | San Fran | $285k | 10+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington