02

Whether You Think AI is a Threat or an Opportunity, You're Right

Security for AI

We asked 500 security professionals their opinions about AI

48%

of security professionals said that GenAI was one of the most significant risks they saw impacting their organization

Security professionals rate the top threats to their organization

“The downside of AI is that it introduces more vulnerabilities. If a company uses it, we’ll find bugs in it. AI is even hacking other AI models. It’s going so fast and security is struggling to catch up."

Jasmin Landry,
@jr0ch17

Security Researcher and HackerOne Pentester

Of the threats to GenAI, security professionals said they were worried about:

64%

of respondents believe GenAI will have a major impact on their organization

62%

are confident in their ability to secure its use

70%

believe that AI legislation will help enhance safety and security

are concerned about the reputational risks tied to AI

highlight that basic security practices are being overlooked in the rush to implement GenAI

67% believe that an external, unbiased review of GenAI implementations is the most effective way to uncover AI safety and security issues.

Forward-thinking organizations are taking proactive steps to avoid AI-related security incidents. AI red teaming—where organizations invite security researchers to identify safety and security flaws in their AI products—is gaining traction as a best practice for testing GenAI deployments.

The number of AI assets included in HackerOne programs has surged by 171% over the past year.

The 5 Most Commonly Reported Vulnerabilities on AI Programs

AI Safety vs. AI Security

What’s the difference between AI safety and AI security?

AI Safety

AI safety focuses on preventing AI systems from generating harmful content, from instructions for creating weapons to offensive language and inappropriate imagery. It aims to ensure responsible use of AI and adherence to ethical standards.

AI Security

AI security involves testing AI systems with the goal of preventing bad actors from abusing the AI to, for example, compromise the confidentiality, integrity, or availability of the systems the AI is embedded in.

The reduced barriers to entry for AI safety reports means bounties for these reports are slightly lower

AI Safety Programs

$401

average payout

AI Security Programs

$689

average payout

Recommendation

Establish continuous testing, evaluation, verification, and validation throughout the AI model life cycle.

Establish an AI governance framework outlining roles, responsibilities, and ethical considerations, including incident response planning and risk management.

Train all users on ethics, responsibility, legal issues, AI security risks, and best practices such as warranty, license, and copyright.

Download the Full 8th Annual Hacker-Powered Security Report

Get researcher insights, customer testimonials, industry data, analysis and advice, and more.

AI for Security

AI and automation are powerful efficiency tools.

$2.2M

average saved by organizations per breach1

Companies without AI and automation face longer response times and higher breach costs.

We surveyed 2000 members of our security researcher community about their use of AI

20%

of security researchers see AI as an essential part of their work

38%

of security researchers reported using AI in some capacity

Security researchers are using AI to:

“I leverage AI-powered vulnerability scanners to quickly identify potential weak points in a system, allowing me to focus on more complex and nuanced aspects of security testing. I also use AI for reporting. Previously, I spent 30-40 minutes writing reports to ensure all details were included, the tone was appropriate, and there were no grammatical mistakes. AI has streamlined this process, reducing the time to an average of 7-10 minutes per report."

Hazem Elsayed

@hacktus

Accelerate Vulnerability Remediation with Hai

33% of security researchers are using AI to summarize information and write reports. You can also use AI to streamline and enhance your vulnerability management process via HackerOne’s GenAI copilot, Hai.

Recommendations

Use Hai’s tailored advice to quickly interpret complex vulnerability reports with concise summaries and deeper insights for faster decision-making.

Use Hai to also craft clear and succinct messages for effective communication between security, development teams, and researchers.

Automate tasks by integrating Hai for to assist with writing assistance, generating custom vulnerability scanner templates, and managing large reports, reducing manual effort.

“Hai is a game-changer for our communication with researchers. Managing relationships and keeping messages clear and concise can be challenging, especially with high expectations from both researchers and managers. Short replies can be misunderstood, while longer responses are time-consuming. Hai helps us craft more precise and neutral messages, proofreads our communications, and maintains a consistent tone. This efficiency allows us to engage with more researchers and allocate time to other critical tasks."

Cybersecurity Consultant, Enterprise, Financial Services

In This Report

01

To Beat Cyber Threats, You Need Smarter Tools, Not Just Stronger Ones

Read More
02

Whether You Think AI is a Threat or an Opportunity, You’re Right

Read More
03

Automation Can’t Compete: Security Researchers Prove Their Edge

Read More
04

Beyond Bounties: What Makes a High-Performance Program

Read More
05

Explore Your Top Ten Vulnerabilities

Read More
06

The Best Defense Has Layers of Depth

Read More
07

A New Success Metric: Return on Mitigation

Read More

Download the 8th Annual Hacker-Powered Security Report

X