
šŸ¦¾ Shadow AI - 11 July 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Itā€™s good to be back writing this week after taking last week off for the 4th of July holiday.

This week, I cover:

ā˜ļø Building Generative AI on AWS

āš–ļø OpenAIā€™s Board Governance Adjustment

šŸ›”ļø Defending against AI-powered Scams

šŸŒ¦ļø Goldmanā€™s Mixed AI Outlook

šŸ’¼ 5 Cool AI Security Jobs of the Week

Letā€™s dive in!

Demystifying AI - Building Generative AI on AWS

I attended the AWS Summit in NYC this week, and Generative AI (GenAI) was the major theme of the conference. Here are my biggest takeaways:

1. AI is a Huge Revenue Driver for AWS

  • Hundreds of thousands of customers are running AI/ML workloads on AWS

  • AWS Bedrock is one of the fastest-growing AWS services of the last decade

  • AWSā€™ GenAI products are already a multibillion-dollar Annual Recurring Revenue (ARR) business

  • 96% of AI/ML unicorns run on AWS

2. Regulated Industries are Moving Fast to Adopt AI

At first blush, you might expect regulated industries to be slow adopters of GenAI, but AWS is actually seeing customers in regulated industries, such as NYSE, SunLife, and Pfizer, move faster.

Why?

Over the years, regulatory compliance has driven these companies to adopt the right behaviors - strong data governance, model governance, and security controls - that enable GenAI. The incremental lift for them to start using GenAI for specific business use cases is smaller than for companies without strong data practices.

3. AWS Offers Solutions across the GenAI Tech Stack

The GenAI tech stack is composed of three key layers that AWS customers leverage depending on their use cases:

Bottom Layer: Consists of the compute required to build and train your own foundational models and generate inferences (or predictions) through AWS Sagemaker.

Middle Layer: Consists of tools to build with a variety of existing or custom Foundational Models and LLMs through AWS Bedrock. Bedrock allows customers to mix and match models, including models from Anthropic, Cohere, Meta, Mistral, and open source, based on their needs.

Top Layer: Applications that leverage Foundational Models and LLMs, such as Q, Amazonā€™s GenAI agent, which can connect to enterprise data sources and coding repositories to streamline business and developer processes.
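To make the middle layer concrete, hereā€™s a minimal sketch of invoking a foundation model through Bedrock with boto3. The region, model ID, and prompt are illustrative; any model your account has enabled follows the same pattern, though request body formats vary by model family.

```python
# Minimal sketch: calling a foundation model via Amazon Bedrock.
# Region, model ID, and prompt are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic models on Bedrock use the Messages API request format.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize our Q2 incident trends."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    body=body,
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```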

4. AWS GenAI Security Services are Expanding

AWS offers Guardrails for Bedrock, which it says reduces harmful content in model outputs by up to 85%. Using Guardrails, teams can configure word filters, topic filters, harmful content filters, and PII filters, as well as run security checks for malicious prompts.
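Hereā€™s a hedged sketch of what configuring those filters looks like with boto3ā€™s create_guardrail call. The guardrail name, topic, filter strengths, and messages are illustrative placeholders, not a recommended baseline.

```python
# Sketch: configuring a Bedrock guardrail with word, topic, harmful
# content, and PII filters. All names and settings are illustrative.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="example-guardrail",  # hypothetical name
    # Deny an off-limits topic for this assistant.
    topicPolicyConfig={"topicsConfig": [{
        "name": "Financial advice",
        "definition": "Recommendations about investments or trading.",
        "type": "DENY",
    }]},
    # Harmful-content filters, tuned to your risk appetite.
    contentPolicyConfig={"filtersConfig": [
        {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
    ]},
    # Block specific words or phrases.
    wordPolicyConfig={"wordsConfig": [{"text": "project-codename-x"}]},
    # Mask or block PII in model outputs.
    sensitiveInformationPolicyConfig={"piiEntitiesConfig": [
        {"type": "EMAIL", "action": "ANONYMIZE"},
        {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
    ]},
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't share that response.",
)
print(guardrail["guardrailId"])
```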

At the summit, AWS announced the release of Contextual Grounding Checks within Guardrails so enterprises can better detect and block hallucinations. Contextual Grounding Checks validate two key questions:

  • Is the result found in the source material?

  • Is the result relevant to the query?

Although Guardrails has been limited to Bedrock, AWS announced a standalone Guardrails API so the same functionality can extend to the bottom layer of the GenAI tech stack, including models running on Sagemaker and EC2.
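Based on that announcement, a standalone check might look like the sketch below, which runs an existing guardrail against text generated outside Bedrock. The guardrail identifier and version are hypothetical placeholders for a guardrail youā€™ve already created.

```python
# Sketch: applying a guardrail to text produced outside Bedrock
# (e.g., a model hosted on SageMaker or EC2). The guardrail ID and
# version below are hypothetical placeholders.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

candidate_output = "Model response generated on SageMaker..."

resp = runtime.apply_guardrail(
    guardrailIdentifier="example-guardrail-id",  # hypothetical
    guardrailVersion="1",
    source="OUTPUT",  # evaluate model output; use "INPUT" for user prompts
    content=[{"text": {"text": candidate_output}}],
)

if resp["action"] == "GUARDRAIL_INTERVENED":
    # Serve the configured blocked-output message instead of the raw text.
    print(resp["outputs"][0]["text"])
else:
    print(candidate_output)
```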

So What for Security?

Security practitioners at organizations venturing into GenAI with AWS should incorporate the following 10 elements into their GenAI security strategy:

  1. Leverage AWS Guardrails: Implement AWS Guardrails or a similar solution as a foundational security measure.

    • Take advantage of the configurable word, topic, and harmful content filters. Tailor these to your industry-specific needs and organizational risk appetite.

    • Configure PII filters to prevent inadvertent exposure of sensitive information in AI outputs.

    • Utilize the new Contextual Grounding Check feature to mitigate hallucinations. Develop processes to regularly review and refine these checks based on your use cases.

  2. Extend Security Across the AI Stack: Ensure you implement consistent security measures across all layers of your GenAI implementation - from the top to the bottom.

  3. Develop AI-Specific Security Policies: Create or update security policies to address AI-specific risks, including prompt injection, data poisoning, and model theft.

  4. Establish Monitoring and Auditing: Set up robust monitoring and auditing processes for your GenAI systems, and regularly review logs and AI outputs for potential security issues (see the sketch after this list).

  5. Conduct Regular Security Assessments: Perform periodic security assessments of your GenAI implementation, including penetration testing and vulnerability assessments.

  6. Train End Users: Update security training for end users leveraging GenAI so they understand the risks and how to use GenAI systems securely.

  7. Collaborate with Data Governance and Privacy: Work closely with your data governance and privacy teams to ensure that data used for training or inference meets compliance requirements and ethical standards.

  8. Plan for Incident Response: Update your incident response plans to account for AI-related security incidents, such as the generation of harmful content or data breaches.

  9. Consider Supply Chain Risk: Develop continuity plans that detail how you would reduce business impact if a foundational model supplier you rely upon is breached.

  10. Compliance Mapping: Map AWS's GenAI security features to your compliance requirements (e.g., GDPR, SOC2, HIPAA) to ensure regulatory alignment.
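As a starting point for element 4, hereā€™s a rough sketch that pulls recent Bedrock model-invocation logs from CloudWatch and flags guardrail interventions. It assumes youā€™ve enabled Bedrockā€™s model invocation logging to CloudWatch; the log group name and the string being matched are assumptions to adjust for your environment.

```python
# Rough sketch: scan the last hour of Bedrock model-invocation logs
# for guardrail interventions. Log group name is an assumption based
# on having enabled Bedrock invocation logging to CloudWatch.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

events = logs.filter_log_events(
    logGroupName="/aws/bedrock/modelinvocations",  # assumed log group name
    startTime=int((time.time() - 3600) * 1000),    # last hour, in ms
)

for event in events["events"]:
    # Log payload schemas vary; string-matching the raw message is a
    # crude but schema-agnostic first pass before parsing fields.
    if "GUARDRAIL_INTERVENED" in event["message"]:
        print("Guardrail intervention at", event["timestamp"])
```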

AI News to Know

OpenAIā€™s Board Governance Adjustment: Microsoft has dropped its OpenAI board observer seat as it faces regulatory scrutiny. Microsoft pushed for the spot in November after Sam Altman was temporarily ousted. Microsoft, however, said they have ā€œwitnessed significant progress from the newly formed board and are confident in the company's direction.ā€

Microsoftā€™s move has a downstream impact on Apple, which was recently granted an OpenAI board observer seat as part of the companiesā€™ iPhone and Mac partnership announcement. Rather than board observer seats, OpenAI will stand up a new forum to inform and engage key strategic partners — such as Microsoft and Apple — and investors.

The changes to OpenAIā€™s board come as Microsoft and Apple face continued regulatory scrutiny. UK regulators began seeking views on Microsoftā€™s partnership with OpenAI in December. EU regulators are looking into the partnership as well, alongside other Big Tech AI deals, and the FTC is investigating Microsoftā€™s, Amazonā€™s, and Googleā€™s investments in OpenAI and Anthropic.

Russian AI-Powered ā€˜Bot Farmā€™ Takedown: The FBI, in coordination with the Cyber National Mission Force and allies, disrupted a Russian state-sponsored media organization that used covert AI to create fictitious online personas and post content on X at scale. Attacker TTPs included:

  • Operators sought to avoid detection by using backend code designed to auto-assign a proxy IP address to each AI-generated persona based on its assumed location

  • Code was inserted into the project to let the server bypass the verification methods X uses for bot prevention. When X sent an authentication code to an account, the email arrived directly at the attackersā€™ server (because the email address associated with the account was hosted on that same server); the attackersā€™ code then scraped the verification code and supplied it to X

Defending against AI-powered Scams: TechCrunch wrote a practical guide to todayā€™s AI-powered scams and how individuals can protect themselves. Although we lack good data on how prevalent AI-powered scams are, share this article with family and friends to raise awareness of how scams are evolving through voice cloning, personalized phishing emails, and deepfakes.

AI on the Market

Goldmanā€™s Mixed AI Outlook: Goldman Sachs released a research report that throws some cold water on the AI hype cycle. GS Head of Global Equity Research Jim Covello argues that to earn an adequate return on the ~$1tn estimated cost of developing and running AI technology, AI must be able to solve complex problems, which, he says, it isnā€™t built to do today. While the emergence of GenAI is often compared to the transformational advent of the Internet, Covello points out that truly life-changing inventions like the internet enabled low-cost solutions to disrupt high-cost solutions even in their infancy, unlike todayā€™s costly AI tech. Other research analysts at Goldman, however, remain more optimistic about AIā€™s economic potential and its ability to ultimately generate returns.

šŸ’¼ 5 Cool AI Security Jobs of the Week šŸ’¼

SVP, Technology Risk Control Management / AI @ BNY to set the strategy for identifying and managing IT and AI risks to the Clearing Markets business units | New York, NY | 10+ yrs exp.

Director, AI Products - Cybersecurity @ SimSpace to deliver successful AI cyber security products to market that support the overall company long term vision | Remote | $195k-$280k | 7+ yrs exp.

Lead Security Governance and AI Analyst @ Solventum to establish AI governance frameworks and mitigate AI risk | Remote | $183k-$224k | 5+ yrs exp.

Sr. AI/ML Engineer, Security @ Optum to control GenAI usage across the company | Remote | $88k-$173k | 5+ yrs exp.

Full Stack Engineer, Security @ Abridge to serve as an application security subject matter expert for a GenAI healthcare startup | NYC or San Fran | $180k-$250k | 7+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington