HackerOne Comments on the U.S. AI Action Plan

Ilona Cohen
Chief Legal and Policy Officer
Image
U.S. AI Action Plan

Artificial Intelligence isn’t just changing the world—it’s accelerating change at a pace we’ve never seen before. From transforming healthcare to reshaping national security, AI is powering the next wave of innovation. But with this extraordinary potential comes a critical challenge: securing these systems before they become targets.

The reality is stark: according to IBM, the average cost of a data breach in 2024 was a staggering $4.88 million. As AI continues to integrate more deeply into every facet of society, the risk of breaches seems likely to grow exponentially—yet we are still in the early stages of securing these powerful systems. The question we face now is not if AI will impact our future, but can we secure it before the risk outpaces the reward?

At HackerOne, we believe that the time to act is now. We’ve spent years working at the intersection of cybersecurity and innovation, leveraging the world’s largest community of ethical hackers to identify vulnerabilities before malicious actors can exploit them. Our response to the Office of Science and Technology Policy’s Request for Information outlines key steps to help secure AI systems and protect the incredible potential they bring.

Encourage and Prioritize AI Red Teaming 

Testing AI systems for vulnerabilities is not merely a best practice – it is a necessity to secure critical infrastructure and protect public safety. We recommend that the government incentivize red teaming for high-risk AI systems, both pre-deployment and periodic testing to identify potential vulnerabilities. Additionally, updating legal frameworks to encourage the sharing of red teaming results and protecting good-faith researchers from legal risks is essential. 

Strengthen the Framework for AI Security and Flaw Disclosures 

A robust system for reporting and addressing AI flaws is vital to maintaining trust and security. We advocate for the establishment of standardized processes for disclosing vulnerabilities in AI models, ensuring that independent researchers can do so without fear of legal repercussions. An AI Action Plan should prioritize creating safe harbors for these disclosures, promoting transparency and collaboration across the AI ecosystem. 

Support Research and Development in AI Red Teaming 

Continued investment in research and development is crucial for advancing AI red teaming methodologies and developing tools that can more effectively assess the security of AI systems. This includes promoting AI red teaming test beds, developing more sophisticated tools for testing AI models and validating results, and expanding the availability of skilled professionals who can conduct AI red teaming. These steps will foster innovation in AI red teaming and strengthen the resilience of AI systems through a competitive edge. 

At HackerOne, we believe that the successful and secure development of AI technologies will be a cornerstone of future progress. By adopting these recommendations, we can work together to ensure AI systems are not only innovative but also secure and trustworthy. We remain committed to collaborating with the federal government to support the effective implementation of the AI Action Plan.