Technology

Curl Creator Considers Ending Bug Bounty Amid AI Report Deluge

2025-07-15

Author: Nur

Daniel Stenberg, the creator and lead maintainer of the widely used curl command line utility, is fed up with the flood of misleading bug reports generated by AI tools. The situation, which he and other curl maintainers have dubbed "AI slop," has become a significant challenge since early 2024.

Over the past year and a half, the volume of unhelpful AI-generated submissions has risen sharply, making it increasingly difficult to separate genuine reports from both AI slop and so-called "human slop"—low-quality submissions whose origin is unclear. Stenberg recently revealed that in 2025, nearly 20% of all bug submissions have been AI-generated.

"This year, we’ve seen a drastic decrease in the validity of submissions, with only about 5% turning out to be true vulnerabilities since July," he wrote in a recent blog post. "Our security team is facing an uphill battle against this increasing tide of AI-generated garbage."

Rethinking the Bug Bounty Program

Stenberg is now contemplating whether to scrap curl's bug bounty program, which has awarded over $90,000 since its launch in 2019. With the volume of submissions rising, he’s forced to rethink how to manage this pressing issue.

Currently, the bug bounty program, managed through HackerOne, requires reporters to disclose if they utilized generative AI. While it does not ban AI-assisted reports outright, it strongly advises against them. "You should check and double-check all facts and claims any AI told you before you pass on such reports to us," the policy states. "You’re generally much better off avoiding AI altogether."

A Heavy Toll on Resources

Though an average of two submissions per week may not sound overwhelming, it becomes a significant burden given that the curl security team consists of only seven members. Three to four reviewers analyze each report, a process that can take between 30 minutes and three hours per submission.
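A quick back-of-the-envelope calculation shows why this adds up. The sketch below uses only the figures quoted above (two reports per week, three to four reviewers each, 30 minutes to three hours per review); since the inputs are ranges, the result is a band, not a measurement:

```python
# Rough estimate of the curl security team's weekly triage load,
# using the figures quoted in the article (illustrative only).

SUBMISSIONS_PER_WEEK = 2
REVIEWERS_PER_REPORT = (3, 4)      # three to four people read each report
HOURS_PER_REVIEW = (0.5, 3.0)     # 30 minutes to three hours per reviewer

low = SUBMISSIONS_PER_WEEK * REVIEWERS_PER_REPORT[0] * HOURS_PER_REVIEW[0]
high = SUBMISSIONS_PER_WEEK * REVIEWERS_PER_REPORT[1] * HOURS_PER_REVIEW[1]

print(f"Weekly reviewer-hours: {low:.0f} to {high:.0f}")
# Weekly reviewer-hours: 3 to 24
```

At the high end, that is roughly 24 reviewer-hours a week spread across seven people, several of whom, as Stenberg notes below, can only spare a few hours each.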

Stenberg expressed frustration over the inefficiency: "I already dedicate an insane amount of time to curl. Wasting even three hours on these erroneous reports limits my capacity for other work. My colleagues, who can only spare a few hours each week, are feeling the strain as well."

Learning from Others' Woes

The issue isn’t unique to curl. In December 2024, Seth Larson of the Python Software Foundation raised similar concerns, criticizing the influx of poor security reports affecting the Python community. In May 2025, Benjamin Piouffle of Open Collective echoed these sentiments, lamenting his team's own struggle against a tide of "AI garbage" in their systems.

Searching for Solutions

Stenberg recognizes the need for change and is exploring potential solutions. He’s even considered charging a submission fee or doing away with the bounty entirely, though he’s apprehensive about both options. "Removing monetary incentives might not eliminate the flood of reports, especially since many of these reporters genuinely believe they are contributing positively, misled by AI hype," he concluded.

As the situation evolves, one thing is clear: swift action is needed to protect the integrity of open-source projects like curl from the chaos of unchecked AI contributions.