🦾 Shadow AI - 25 January 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

This week in Shadow AI, I cover:

🎈 3 Risks that Can Pop the AI Hype

🧩 Using Multi-modal LLMs to Solve Captchas

🤖 Primary Election Deepfake of Biden

💰 Prompt Security Fundraise

🔒 Protect AI’s Secure Gateway

💼 5 Cool AI Security Jobs of the Week - NEW!

Let’s dive in!

Demystifying AI - What Can Pop the Hype

One of my goals each week is to ensure Shadow AI provides a balanced view of AI by unpacking both the opportunities and the risks. We’ve explored how AI is changing the threat landscape while also offering cybersecurity defenders new tools and productivity gains. We’ve also discussed the challenges of guarding against deepfakes, protecting elections from AI-driven influence, and governing AI. And, last week, we covered the prevailing optimism about AI among the world’s elite at Davos.

Today, however, I want to take a step back and look at three risks that can slow down AI adoption across enterprises.

1. Copyright Infringement

Generative AI has raised new questions about intellectual property and where to draw the lines. The New York Times, for example, filed suit against Microsoft and OpenAI, challenging how their models train on its content and “engage in widescale copying.” Other copyright holders, such as major music companies, authors, and comedians, have also sued AI companies on similar grounds. Complex legal questions about fair use, especially when AI-generated content closely mirrors or incorporates elements of copyrighted works without clear attribution or licensing, will be resolved in court. In the interim, enterprises must navigate these legal waters carefully to avoid costly litigation and reputational damage. Depending on how the courts rule, AI tools could also become more expensive as vendors pass along the cost of licensing deals brokered to address copyright claims.

2. Energy Consumption

Training a 1.5 billion parameter Large Language Model (LLM) can consume as much energy as 170 homes use in a year. GPT-4 is said to have 1.76 trillion parameters today, which suggests that training that model alone consumed as much energy as 1,750+ homes use in a year. Training is just one piece of the overall energy pie, and demands will only increase as models advance. The significant energy requirements and carbon footprint of AI models not only conflict with global sustainability goals, but also raise operational costs for enterprises. As enterprises set their environmental sustainability goals, achieving them while using power-hungry AI models is going to be a challenge, and a major energy breakthrough, whether in nuclear fusion, nuclear fission, or solar plus storage, may be needed to power more advanced AI models. Enterprises face tough decisions on balancing their AI usage with sustainability goals and could opt for smaller, more tailored LLMs that meet specific business use cases more efficiently.

3. Misalignment

Misalignment of AI systems with human values and organizational goals is a subtle yet key risk that could impede AI adoption. AI misalignment can manifest in various forms, from AI systems that optimize for incorrect or harmful outcomes to those that fail to understand nuanced human ethics and social norms. For example, an AI system designed to maximize user engagement might inadvertently promote polarizing or extremist content, leading to societal harm. Ensuring that AI systems act in ways that are aligned with broader human values and ethical standards is crucial for their acceptance and integration into enterprise operations. To manage this risk effectively, enterprises will need to implement strong AI governance, ethical AI design, and continuous monitoring to ensure alignment over time.

For security leaders, these risks underscore the importance of adopting a holistic, forward-thinking approach to AI integration within their organizations. We should not only focus on the immediate cybersecurity threats and vulnerabilities associated with AI, but also consider the broader operational, legal, and ethical implications. Taking a multidisciplinary approach to AI adoption can help your organization leverage AI to enhance security and operational efficiency while navigating the other complex challenges that come with it.

AI News to Know

  • Using Multi-Modal LLMs to Solve Captchas: AI is accelerating the decline of captchas as a useful bot-prevention control. As Aashiq Ramachandran covers in his recent article, there are four major types of captchas in use today: 1) text, 2) math, 3) image selection, and 4) puzzle. He finds that models can already break the simpler varieties, such as plain text captchas and text rendered over complicated backgrounds, while they still fall short on the more complex puzzle styles. Those harder captchas, however, are among the ones that create the most customer friction. A minimal sketch of the technique follows this list.

  • Primary Election Deepfake: Up to 25,000 New Hampshire residents received a robocall from an AI-generated fake of President Joe Biden telling them not to vote in the state’s Democratic presidential primary on Tuesday. The robocall showed up on caller ID as a local New Hampshire number belonging to Kathy Sullivan, a former New Hampshire Democratic Party chair and treasurer for the Granite for America PAC. As we regularly discuss here, this is just the tip of the iceberg of what’s to come.

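As promised, here is a minimal sketch of the captcha-solving technique, assuming the OpenAI Python SDK and its vision-capable chat endpoint. The model name, prompt wording, and captcha.png path are illustrative assumptions on my part, not code from Ramachandran’s article:

```python
# A minimal sketch: send a captcha image to a multi-modal LLM and ask it
# to transcribe the characters. Illustrative only; not production code.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def solve_text_captcha(image_path: str) -> str:
    """Ask a vision-capable model to read the characters in a captcha image."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model as of Jan 2024
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe the characters in this image. "
                         "Reply with the characters only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        max_tokens=20,
    )
    return response.choices[0].message.content.strip()

print(solve_text_captcha("captcha.png"))  # hypothetical local image file
```

Swapping the instruction lets the same call attempt other captcha styles, which is what makes this trend so hard for defenders to reverse.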

AI on the Market

  • Prompt Security Fundraise: Prompt Security raised a $5M seed round to help enterprises protect against the full range of AI risks to their applications, employees, and customers. Prompt inspects each prompt and model response to prevent the exposure of sensitive data, block harmful content, and secure against Gen AI-specific attacks. It also provides security leaders with visibility and governance to address Shadow AI 🙂. A toy sketch of that inspection pattern appears after this list.

  • Protect AI’s Secure Gateway: Protect AI announced its secure gateway, which enables organizations to enforce security policies on open source AI/ML models and keep malicious code out of their environment. If your organization downloads open source models from hubs like Hugging Face, Protect AI’s gateway will scan each model for potential attacks and execution of dangerous code. A sketch of why that scanning matters also appears below.
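
To make the inspection pattern concrete, here is a toy illustration, emphatically not Prompt Security’s product, of a gateway that redacts sensitive strings and flags disallowed content before a prompt or response leaves the organization. The patterns and blocked topics are hypothetical placeholders:

```python
# Toy illustration of the prompt/response inspection pattern: redact
# sensitive data and flag policy violations before text reaches a model.
import re

# Hypothetical detectors; a real deployment would use far richer ones.
SECRET_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
BLOCKED_TOPICS = ("credential dump", "exploit code")

def inspect(text: str) -> tuple[str, list[str]]:
    """Redact sensitive matches and report any policy violations."""
    violations = [t for t in BLOCKED_TOPICS if t in text.lower()]
    for label, pattern in SECRET_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, violations

clean, violations = inspect("Summarize this: key sk-abc123def456ghi789jkl0")
print(clean)       # secrets replaced before the prompt reaches the model
print(violations)  # non-empty means the request should be blocked or logged
```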
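
To see why model scanning matters at all, here is a minimal sketch, again my own illustration rather than Protect AI’s implementation, of checking a pickle-based model file for opcodes that can execute code at load time; model.bin is a hypothetical downloaded file:

```python
# A minimal sketch of pickle scanning: many model checkpoints are (or
# contain) pickle streams, and pickle's GLOBAL/STACK_GLOBAL/REDUCE opcodes
# can import modules and call functions the moment a file is loaded.
import pickletools

# Modules whose appearance in a pickle almost always signals code execution.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def scan_pickle(path: str) -> list[str]:
    """Return findings for opcodes that could execute code during unpickling."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            module = str(arg).split()[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"byte {pos}: imports {arg!r} (code execution risk)")
        elif opcode.name == "STACK_GLOBAL":
            findings.append(f"byte {pos}: dynamic import during unpickling")
        elif opcode.name == "REDUCE":
            findings.append(f"byte {pos}: REDUCE calls a function at load time")
    return findings

# "model.bin" is a hypothetical downloaded file; a real gateway would also
# unpack zip-based checkpoint formats before scanning the inner pickle.
for finding in scan_pickle("model.bin"):
    print(finding)
```

Real scanners also allowlist benign globals, such as framework tensor-rebuild helpers, so that ordinary checkpoints don’t trip an alert on every REDUCE.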

5 Cool AI Security Jobs of the Week

You’ve expressed an interest in jobs at the intersection of AI and security, so I’m launching a new section of the newsletter to highlight some of the most interesting AI security jobs available.

Security Operations Engineer @ Scale AI to Accelerate the Development of AI Applications | Hybrid San Fran, NYC, or Seattle | $154k-$184k | 4+ yrs exp.

Senior Software Security Engineer @ Anthropic to Build Reliable, Interpretable, and Steerable AI systems | San Fran | $320k-$405k | 8+ yrs exp.

Sr. Technical PM - Cybersecurity @ Nvidia to Define the Next Era of Computing | Santa Clara, CA | $152k-$287k | 7+ yrs exp.

AVP, AI Security @ MetLife to Safeguard the Use of AI Across the Organization | Remote | 5+ yrs leadership exp.

Elections Program Manager @ OpenAI to Coordinate Efforts around Election Security and Integrity | San Fran | $190k-$280k | 10+ yrs exp.

Reply directly to this email with any feedback, including your thoughts on the new jobs section. I look forward to hearing from you.

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington