Is Your AI Deployment a Risk Waiting to Happen?

HackerOne Team
Image
staying ahead of AI risk

As generative AI continues to shape the way organizations operate, innovate, and scale, the pressure to deploy these tools responsibly is intensifying. But with opportunity comes risk — and for many leaders, navigating AI safety and security remains uncharted territory.

In a recent webinar, Is Your AI Deployment Risky? How to Proactively Address AI Safety,” HackerOne co-founder Michiel Prins and Haize Labs co-founder Leonard Tang broke down the risks of AI deployment and laid out actionable strategies to get ahead of them.

Here are the top takeaways from the conversation:

1. AI Risk Is Already at Your Doorstep

According to McKinsey, 74% of organizations are already deploying generative AI, but only 18% are embedding risk assessments into their systems. The result? A gap that’s leaving many unprepared for the reputational, legal, and operational risks tied to AI failures — like the well-known Air Canada case, where a chatbot hallucination led to a court ruling against the company.

The key message: AI risk isn’t theoretical. It’s already happening.

2. Understand the Trifecta: Reliability, Safety, and Security

Prins and Tang outlined a simple but powerful framework for understanding AI risk:

  • AI Security: Protecting your AI system from threats from the outside world.
     
  • AI Safety: Protecting the outside world from your AI system (think toxic outputs).
     
  • AI Reliability: Ensuring the AI system behaves as expected.

Each element is distinct but interconnected. Tang emphasized that unexpected outputs are the root cause of many failures across all three categories.

“The underlying issue that leads to reliability, safety, and security problems is that AI is extremely sensitive. Slightly different inputs can lead to wildly different outputs.” – Leonard Tang, Co-founder of Haize Labs

3. Own the Risk Together

AI systems are like modern cars: made up of many complex parts, from models and APIs to user interfaces and backend storage. Because AI risk spans so many layers, no single team can manage it alone.

That’s why more companies are forming AI risk councils — cross-functional teams spanning security, legal, engineering, product, and privacy. This trend reflects a growing realization: managing AI risk isn’t just an IT or compliance issue; it’s a business-critical concern.

4. Start With a Code of Conduct and Threat Model

One of the most practical recommendations from the session was to create an AI code of conduct: a custom rulebook for your AI system that defines acceptable behaviors and red lines based on your use case.

This should be paired with an AI threat model, a cybersecurity technique adapted for AI that maps out potential entry points and attack scenarios, from prompt injections to data leakage.

Together, these tools help define what safe and unsafe behavior looks like and set the foundation for proactive testing.

5. AI Testing Isn’t a One-and-Done Exercise

Prins and Tang highlighted the importance of continuous testing. They advocate for a human + AI hybrid approach:

  • HackerOne’s AI red teamers bring creativity and persistence to uncover novel threats.
     
  • Haize Labs’ automated tools scale up those insights into repeatable, ongoing testing.

It’s a continuous feedback loop that adapts as systems evolve and new risks emerge.

“AI systems aren’t static. Every change — a new model, a new system prompt, a new tool integration — introduces new risk. You need to continuously test for the past, present, and unknown.” – Michiel Prins, Co-founder of HackerOne

6. Scale and Creativity Matter

AI systems are constantly changing, and so are the threats. In-house testing teams can’t always keep up. The power of scale, creativity, and automation from platforms like HackerOne and Haize Labs allows organizations to move fast without sacrificing safety.

As HackerOne customer Snap put it: “I was surprised that many of the researchers did not know much about AI but were able to use creativity and persistence to get around our safety filters.” 

That’s the power of the hacker mindset.

Final Advice: Do the Homework

Both speakers left attendees with the same message: There’s no silver bullet for AI risk. Every organization needs to invest the time to define what risk looks like in their context — and then embed testing and oversight across the AI lifecycle.

“You really don’t want to be caught not doing your homework.” – Michiel Prins, Co-founder of HackerOne

Want to learn more?
Watch the full webinar replay and take the first step toward proactive AI safety.