🦾 Shadow AI - 18 July 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Last month, I joined Tribe AI, the premier network of global AI experts helping businesses unlock the full potential of AI. It's been great so far learning from fellow Tribe members, including:

  • how enterprises are building innovative GenAI solutions with Tribe AI support

  • the AI infrastructure and software best meeting customer needs

  • the different facets of LLM guardrails

So I was excited to see Tribe AI announce this week that it raised $3.25M, led by Bryce Roberts at INDIE ENTERPRISES. Looking forward to seeing what's next!

This week, I cover:

ā˜ļø What Companies Can Learn from an AI Data Cloud Breach

šŸ‡ŖšŸ‡ŗ EU AI Act

šŸŒŽ Patagonia in Hot Water

šŸšØ SAP AI Core Vulnerabilities

šŸ“ˆ AI Funding Leads the Way

šŸ’°Pindropā€™s $100M Financing

šŸ’¼ 5 Cool AI Security Jobs of the Week

Letā€™s dive in!

Demystifying AI - What Companies Can Learn from an AI Data Cloud Breach

Snowflake, which markets itself as an "AI Data Cloud," is used by nearly 8,000 companies, including Disney, Canva, and Mastercard, to simplify their data foundations and scale their AI applications.

Recently, 165 Snowflake customers, including AT&T, Santander Bank, and Ticketmaster, had massive amounts of customer data exposed due to poorly configured Snowflake environments. Although Snowflake customers hold the ultimate responsibility for securing access to their cloud environments, there are some lessons in the breaches for both businesses adopting GenAI and AI infrastructure and software providers.

Anatomy of the Breach

April 2024: Mandiant received threat intelligence on database records that were subsequently determined to have originated from a victim's Snowflake instance. Mandiant notified the victim, who then engaged Mandiant to investigate suspected data theft involving their Snowflake instance. During this investigation, Mandiant determined that the organization's Snowflake instance had been compromised by a threat actor using credentials previously stolen via infostealer malware. The threat actor used these stolen credentials to access the customer's Snowflake instance and ultimately exfiltrate valuable data. At the time of the compromise, the account did not have multi-factor authentication (MFA) enabled.

May 2024: On May 22, 2024, Mandiant obtained additional intelligence identifying a broader campaign targeting additional Snowflake customer instances. Mandiant immediately contacted Snowflake and began notifying potential victims.

The threat actor obtained access to ~165 Snowflake customer instances via stolen customer credentials. Per Mandiant, these credentials were primarily obtained from multiple infostealer malware campaigns that infected non-Snowflake-owned systems. This allowed the threat actor to gain access to the affected customer accounts and export large volumes of customer data from the respective Snowflake customer instances. The threat actor subsequently began to extort many of the victims directly and actively attempted to sell the stolen customer data on cybercriminal forums.
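
The common thread across the victims was credential hygiene: infostealer-harvested passwords that still worked, on accounts without MFA. As a minimal illustration (not from Mandiant's report), the Python sketch below checks whether a password already circulates in known breach corpora using Have I Been Pwned's k-anonymity range API, which only ever sees the first five characters of the password's SHA-1 hash:

```python
import hashlib
import urllib.request

def password_breach_count(password: str) -> int:
    """Check a password against Have I Been Pwned's k-anonymity range API.

    Only the first five hex characters of the SHA-1 hash ever leave the
    machine; the API returns every suffix sharing that prefix plus a count
    of how often it appears in known breaches.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    n = password_breach_count("hunter2")
    print(f"Seen in {n:,} breaches" if n else "Not found in known breach corpora")
```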

Lessons Learned for Businesses Consuming Generative AI

Snowflake was built to disrupt legacy, on-premises data warehousing technology. Its cloud platform was architected to separate storage and compute resources, a major departure from traditional data warehouse design, where the two are intrinsically linked. This let companies take advantage of improved scalability, performance, cost, and flexibility. It also meant, however, that customers were migrating huge volumes of critical business data to a third-party provider. Securing that data warehouse should have been a critical priority, yet some major companies did not leverage basic security controls like multi-factor authentication (MFA).

This is a prime example of how traditional vendor risk management is failing enterprises. Under no circumstances should companies have allowed user accounts to access their data warehouse without MFA. The breach emphasizes four important third-party risk management takeaways that are increasingly critical as companies explore taking the next step in building generative AI solutions on top of their data.

1. Companies must assess 'secure by design' and 'secure by default' features during vendor selection. Does a foundation model provider, for example, allow companies to enforce MFA for user accounts? Make the availability of security features a core requirement of the selection process.

2. Companies must create a secure deployment plan and partner with the IT admin team to implement it. The plan must cover the basics, like requiring MFA for all user accounts, and the environment should be continuously monitored for configuration drift (a monitoring sketch follows this list).

3. Companies must integrate business resiliency and third-party risk management. Develop tailored business resiliency playbooks for critical third-party systems (e.g., data warehouses, identity providers, cloud providers) in the event they suffer a breach or become unavailable.

4. Companies must leverage buying power to push for security features. Many of these rapidly scaling tech companies prioritize feature releases over security. If you're a customer with buying power, push them to release the security features you need to make it easier to secure your environment.
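
On the configuration drift point in takeaway #2, here is a minimal Python sketch of what continuous monitoring can look like. The baseline keys and the fetch stub are hypothetical, not any vendor's actual schema; in practice you would pull live settings from the vendor's admin API and run the check on a schedule:

```python
from typing import Any

# Hypothetical security baseline for a data-warehouse tenant;
# the keys are illustrative, not an actual vendor schema.
SECURITY_BASELINE: dict[str, Any] = {
    "mfa_required_for_all_users": True,
    "network_policy_enabled": True,
    "min_password_length": 14,
    "session_timeout_minutes": 60,
}

def fetch_current_settings() -> dict[str, Any]:
    """Stub: in practice, pull these from the vendor's admin API."""
    return {
        "mfa_required_for_all_users": False,  # drift!
        "network_policy_enabled": True,
        "min_password_length": 8,             # drift!
        "session_timeout_minutes": 60,
    }

def detect_drift(baseline: dict[str, Any], current: dict[str, Any]) -> list[str]:
    """Return a human-readable finding for each setting that deviates."""
    findings = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    for finding in detect_drift(SECURITY_BASELINE, fetch_current_settings()):
        print("DRIFT:", finding)
```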

The Snowflake customer breaches are an important reminder for every enterprise, even those that weren't impacted, that they need to move beyond vendor security questionnaires and periodic reassessment and implement measures to better manage risk across the entire vendor lifecycle.

Lessons Learned for Generative AI Providers

Although Snowflake customers hold most of the culpability here, Snowflake is not blameless. They could've made it easier for customers to secure their environments by allowing admins to enforce MFA and offering a tool to measure and monitor compliance against a Snowflake security baseline.

In the wake of the breach, Snowflake scrambled to make many of these security enhancements available to customers.

Generative AI providers can learn from Snowflake's example and should bring a secure-by-design and secure-by-default strategy to customers. For major players in this space, it should cover questions like the following (a scoring sketch follows the checklist):

  • Do you require MFA for all new users or allow admins to enforce MFA?

  • What types of MFA methods do you allow?

  • Do you provide granular authentication policies?

  • Can customers control data retention policies?

  • What monitoring and alerting of admin activity is available?

  • What monitoring and alerting of data exfiltration is available?

  • Is data encrypted at rest? Can customers bring their own encryption key?

  • How are API calls authenticated and authorized?

  • How is customer data isolated from other customers' data?

  • Are models trained on customer data by default?
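
One way to put a checklist like this to work is to encode it as structured data that feeds vendor selection scoring, so "secure by default" becomes a measurable requirement rather than a questionnaire answer. A minimal sketch; the questions, weights, and answers below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ControlCheck:
    question: str
    weight: int      # relative importance; values are illustrative
    satisfied: bool  # filled in during vendor due diligence

def score_vendor(checks: list[ControlCheck]) -> float:
    """Return the weighted fraction of secure-by-default controls met."""
    total = sum(c.weight for c in checks)
    earned = sum(c.weight for c in checks if c.satisfied)
    return earned / total if total else 0.0

checks = [
    ControlCheck("Admins can enforce MFA for all users", 5, False),
    ControlCheck("Customer-controlled data retention policies", 3, True),
    ControlCheck("Data encrypted at rest with customer-managed keys", 4, True),
    ControlCheck("Models not trained on customer data by default", 5, True),
]

print(f"Secure-by-default score: {score_vendor(checks):.0%}")
```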

So What for Security?

The Snowflake customer breaches serve as an important wake-up call for both AI data consumers and providers. For businesses, they underscore the critical need to prioritize security in vendor selection, deployment, and ongoing management. Basic security measures are still the most important! For AI providers, they highlight the imperative to build robust security features into their platforms from the ground up. As we rush to harness the power of AI, we can't forget that the foundation of innovation is built on trust and security.

AI News to Know

EU AI Act: After last week's issue of Shadow AI went out, the EU AI Act was published in the Official Journal of the EU. The Act officially comes into force August 1 and applies to providers of AI systems located within the EU or in another country serving the EU market. Here are some key dates to know:

  • February 2, 2025: The list of prohibited uses goes into effect (these include social scoring, manipulation, exploiting vulnerabilities, risk assessment/prediction of criminal offenses, compiling facial recognition databases from public/CCTV sources, and real-time remote biometric identification)

  • May 2, 2025: Codes of practice must be made available for providers of general-purpose AI (GPAI) models

  • August 2, 2025: GPAI providers must be in compliance with the provisions of the Act

  • February 2, 2026: The Commission must provide guidelines specifying the practical implementation of the Act, along with a comprehensive list of practical examples of high-risk and non-high-risk AI use cases.

  • August 2, 2026: The Act fully applies.

Patagonia in Hot Water: Outdoor apparel retailer Patagonia is being sued for allegedly breaking California privacy law through its partnership with Talkdesk, an AI-powered contact center software company. The plaintiffs say the partnership led to their communications with Patagonia being intercepted, recorded, and analyzed by Talkdesk without their permission. The lawsuit is not without precedent: in May, plaintiffs filed a similar class action against Navy Federal Credit Union for allegedly using AI provided by the software company Verint to intercept, record, and assess customer calls without providing proper notice to clients or obtaining their consent. The cases are a reminder of the importance of customer notice and consent when sharing personal information with a vendor that uses AI.

SAP AI Core Vulnerabilities: Wiz Research (which may be Google Research soon?) discovered vulnerabilities in SAP AI Core that allowed malicious actors to take over the service and access customer data. By executing legitimate AI training procedures, which by definition require running arbitrary code, Wiz was able to use SAP's infrastructure to move laterally and take over the service, gaining access to customers' private files along with credentials to customers' cloud environments: AWS, Azure, SAP HANA Cloud, and more. The vulnerabilities Wiz found could have allowed attackers to access customers' data and contaminate internal artifacts, spreading to related services and other customers' environments. Wiz's research is an important reminder that appropriate guardrails should be in place to ensure that untrusted code is properly separated from internal assets and other tenants.
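
For platform builders, the underlying guardrail is isolating untrusted tenant code from platform credentials and from other tenants. Real platforms rely on container or VM sandboxes, network policy, and scoped identities; the Python sketch below shows only the narrowest slice of that idea, running tenant-supplied code in a subprocess whose environment has been scrubbed so the parent's cloud credentials are never inherited:

```python
import subprocess
import sys

# Minimal environment for tenant code: no cloud credentials, no tokens.
# (Illustrative only; real isolation needs containers/VMs, seccomp, and
# network policy, not just environment scrubbing.)
SAFE_ENV = {"PATH": "/usr/bin:/bin", "HOME": "/tmp"}

def run_untrusted_training(script_path: str, timeout_s: int = 300) -> int:
    """Run tenant-supplied training code with a scrubbed env and hard timeout."""
    result = subprocess.run(
        [sys.executable, "-I", script_path],  # -I: Python isolated mode
        env=SAFE_ENV,         # parent's AWS/Azure/SAP credentials are not inherited
        timeout=timeout_s,    # bound runaway or malicious jobs
        capture_output=True,  # tenant output never mixes with platform logs
    )
    return result.returncode
```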

AI on the Market

AI Funding Leads the Way: According to CB Insights, AI startups received nearly 30% of all venture funding in Q2 with $18.3B raised, the highest share ever recorded.

Pindrop's $100M Financing: Pindrop, a voice security and authentication solution provider that helps protect against deepfakes, raised $100M in debt financing from Hercules Capital. Pindrop solutions help detect fraudsters and authenticate genuine customers, reducing fraud and operational costs while improving customer experience and protecting brand reputation.

💼 5 Cool AI Security Jobs of the Week 💼

Deepfake Cyber Strategist @ WWT to accelerate capabilities in identifying and mitigating deepfake threats | Remote | 10+ yrs exp.

Sr. AI Security Engineer @ MetLife to focus on securing cloud and AI platforms | Multiple Locations | 2+ yrs exp.

GenAI Cloud Security Architect @ S&P Global to develop and implement comprehensive AI/ML security strategies | Hybrid or Remote | $155k-$270k | 7+ yrs exp.

Sr. Security Engineer, Vulnerability Management @ CoreWeave to build and improve vulnerability management tools, processes, and procedures | Remote | $175k-$210k | 3+ yrs exp.

Security Operations Center Analyst @ Shield AI to monitor and respond to threats on Shield AI's network | Washington, DC or Dallas | $158k-$237k | 5+ yrs exp.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington