
🦾 Shadow AI - 22 February 2024

Arming Security and IT Leaders for the Future

Forwarded this newsletter? Sign up for Shadow AI here.

Hello,

Hope you’re having a great week!

Shadow AI is here to help curate the most relevant news for security and IT professionals as AI hits a “tipping point” according to NVIDIA’s CEO. This week, we cover:

🔓 Three Ways AI can Transform GRC Functions

👮 How Trust and Safety is Changing with AI

👀 Hugging Face’s Security Vulnerability

🛡️ Google’s AI Cyber Defense Initiative

🪖 Scale AI’s Partnership with the Pentagon

🤑 NVIDIA’s Mindblowing Earnings

💼 5 Cool AI Security Jobs of the Week

Let’s dive in!

Demystifying AI - Transforming GRC Functions with AI

Last week, it caught my eye that CISOs at major financial institutions are hiring for AI roles. It’s a signal that cybersecurity practitioners need to be prepared to offer solutions for how enterprises, including their own security teams, can use AI securely. In prior issues of Shadow AI, we’ve unpacked how AI can enable security operations and secure development. Today, we explore how AI can enhance an organization’s ability to manage risk, ensure compliance, and streamline governance processes. Here are three practical ways AI can be utilized to bolster cyber GRC functions:

1. Streamlining Governance Processes

Level of Effort: 🦾 

AI can help accelerate routine governance tasks, such as updating company policies, crafting executive reporting, or responding to third-party security questionnaires. I tested this out by creating Security Companion, a GPT that answers third-party security questionnaires based on a sample company’s self-assessment. It reduced response time while keeping answers consistent. I can also see AI helping companies document and maintain their risk registers and bring efficiency to their issue management processes.
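As a rough illustration of the questionnaire use case, here is a minimal Python sketch of grounding a draft answer in a company’s own self-assessment before handing it to an LLM (such as a custom GPT) for drafting. All names and self-assessment entries below are hypothetical:

```python
# Hypothetical self-assessment library; in practice this would be the
# company's real answers, loaded from a document store.
SELF_ASSESSMENT = {
    "encryption at rest": "All customer data is encrypted at rest with AES-256.",
    "access reviews": "User access is reviewed quarterly by system owners.",
}

def build_prompt(question: str) -> str:
    """Pick the most relevant self-assessment entries (naive keyword match)
    and wrap them in a grounded prompt for the LLM."""
    q = question.lower()
    context = [
        answer for topic, answer in SELF_ASSESSMENT.items()
        if any(word in q for word in topic.split())
    ]
    return (
        "Answer the vendor security question using ONLY the context below.\n"
        f"Context: {' '.join(context)}\n"
        f"Question: {question}"
    )

prompt = build_prompt("Is data encrypted at rest?")
# The prompt would then be sent to an LLM to draft the response; a human
# reviewer validates the answer before it goes back to the vendor.
```

The key design choice is grounding: the model answers only from the company’s own assessment, which is what kept Security Companion’s responses consistent.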

2. Enhanced Compliance Monitoring

Level of Effort: 🦾 🦾

Global businesses face a web of regulations and standards, and it takes an army of people to ensure compliance. AI can streamline this process by automatically monitoring compliance with relevant laws and regulations. Gen AI can read and interpret regulatory documents, identify relevant control requirements, and assess an organization’s compliance status. This not only reduces the workload on compliance teams but also minimizes the risk of non-compliance fines.
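To make the idea concrete, here is a tiny hypothetical sketch. In practice, a Gen AI model would extract the control requirements from regulatory text; here they are hard-coded, and simple logic then flags where the organization has no passing control (all requirements and statuses below are invented):

```python
# Requirements a Gen AI model might extract from a regulation (illustrative).
requirements = [
    "mfa for remote access",
    "annual penetration test",
    "documented data retention policy",
]

# Current status from the organization's control library (illustrative).
control_status = {
    "mfa for remote access": "PASS",
    "annual penetration test": "FAIL",
}

# Flag requirements with a failing or missing control for the compliance team.
gaps = [req for req in requirements if control_status.get(req) != "PASS"]
print(gaps)
```

The deterministic gap check is deliberately kept outside the model: the LLM does the reading and extraction, while the pass/fail comparison stays auditable.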

3. Automated Risk Assessments

Level of Effort: 🦾 🦾 🦾

Traditional risk assessment methods like Risk and Control Self Assessments (RCSAs) are time-consuming, point in time assessments that may not always identify all potential risks. Generative AI can analyze vast amounts of data from various sources to identify potential risks quickly and continuously, allowing organizations to take proactive measures to mitigate them. It can also generate executive level risk reports summarizing findings.
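One way to picture continuous assessment is as a stream of risk signals scored and ranked for the executive summary. The sketch below (sources, risks, and thresholds are all hypothetical) uses the classic likelihood-times-impact product from a risk matrix:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str
    risk: str
    likelihood: float  # 0.0 - 1.0
    impact: float      # 0.0 - 1.0

# Signals continuously collected from various sources (illustrative).
signals = [
    Signal("vuln-scanner", "Unpatched internet-facing host", 0.8, 0.9),
    Signal("hr-system", "High attrition in SOC team", 0.5, 0.6),
    Signal("vendor-monitor", "Critical vendor breach reported", 0.3, 0.9),
]

def top_risks(items, n=2):
    """Rank signals by likelihood x impact and keep the top n for reporting."""
    return sorted(items, key=lambda s: s.likelihood * s.impact, reverse=True)[:n]

for s in top_risks(signals):
    print(f"{s.risk}: score={s.likelihood * s.impact:.2f}")
```

In a real deployment, Gen AI would sit on either side of this scoring step: extracting candidate risks from unstructured sources upstream, and summarizing the ranked list into an executive report downstream.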

Conclusion

We’re still in the early stages of enterprise AI adoption, but the possibilities for integrating AI into cyber GRC functions are exciting. Start with an easy use case to prove out the concept (e.g., third-party security questionnaires, policy creation and updates) and build from there. And, it goes without saying, Gen AI is far from infallible: you still need a human in the loop to test and tune the model and validate its outputs.

What other ways can Generative AI supercharge a GRC function?

AI News to Know

How Trust and Safety is Changing with AI: Katie Harbath, founder and CEO of Anchor Change, reflected in her newsletter on how AI is requiring trust and safety teams to think much more broadly than traditional content moderation. Holistic trust and safety strategies need to consider safety in terms of “how models are trained, what questions people can ask, how content is summarized, and how it is stress tested.” Last night, Google apologized for “missing the mark” after Gemini produced “inaccuracies in some historical image generation depictions,” including racially diverse Nazis, and this morning it paused image generation.

Hugging Face Security Vulnerability: Hugging Face is home to over 500,000 machine learning models, and Hidden Layer published a detailed post on a vulnerability that could increase the risk of targeted supply chain attacks. Hugging Face created a Safetensors conversion bot to help reduce the risk that machine learning models uploaded to the platform are vulnerable to malicious code injection through insecure file formats. The conversion bot allows users to convert their model into a safer, malware-free alternative. However, Hidden Layer’s research shows how the process can be hijacked with a backdoor to trigger malicious behavior. Hidden Layer recommends investigating any repositories you leverage on Hugging Face to determine if there has been any illicit tampering with model weights and biases as a result of the insecure conversion process.

Google’s AI Cyber Defense Initiative: Security practitioners are constantly struggling to keep up with the latest threats and Google has launched an AI Cyber Defense Initiative to help reverse the “defender’s dilemma.” Phil Venables and Royal Hansen argue that AI can disrupt cybersecurity and tilt the advantage to defenders. The report is well worth diving into, especially the roadmap for reversing the Defender’s Dilemma which breaks down the current state and future state capabilities of defenders and attackers across core cybersecurity functions.

Scale AI’s Partnership with the Pentagon: DoD’s Chief Digital and Artificial Intelligence Office (CDAO) entered into a one-year contract with Scale AI to develop the framework, methods, and technology CDAO can use to test and evaluate Generative AI so it can be deployed safely for military support applications.

AI on the Market

NVIDIA’s Mindblowing Earnings: NVIDIA, the chipmaker at the center of the AI boom, reported 256% year-over-year revenue growth and a 22% increase from Q3. Their 2023 growth is staggering and the best snapshot one can provide of the state of the AI market.

💼 5 Cool AI Security Jobs of the Week 💼

Information Security Manager, AI Offensive Security @ AMD to accelerate and secure next generation computing experiences | San Jose or Austin | $152k-$228k

Engineering Manager, Risk and Attack Defense @ Okta to build a next generation security detection and risk platform for customers | San Fran | $186k-$280k | 10+ yrs exp.

AI/ML Sr. Security Engineer @ Apple to secure the most sophisticated and scalable consumer-facing virtual assistant and AI systems | Cupertino, CA | $170k-$300k | 5+ yrs exp.

AI/ML Product Security Engineer @ Boeing to develop enterprise security AI/ML applications | Arlington, VA | $91k-$162k | 3+ yrs exp.

Multiple Roles @ Centre for Governance of AI to help humanity navigate the transition to a world with advanced AI | Remote | Various levels

If you enjoyed this newsletter and know someone else who might like Shadow AI, please share it!

Until next Thursday, humans.

-Andrew Heighington