
ZDNET’s key takeaways
- Used correctly, as with Anthropic and Mozilla, AI can help open source.
- Used badly, as with Google and FFmpeg, AI hurts open source.
- Linux is using AI to handle many boring but necessary tasks.
Recently, there was some great news about AI and open source: Anthropic’s Claude Opus 4.6 AI is helping clean up Firefox’s open-source code. According to Mozilla, the parent company of Firefox, Anthropic’s Frontier Red Team found more high-severity bugs in Firefox in just two weeks than people typically report in two months. Mozilla proclaimed: “This is clear evidence that large-scale, AI-assisted analysis is a powerful new addition in security engineers’ toolbox.”
That’s great, right? Right!? Well, not so fast. There’s a darker side to the use of AI in open-source software. Daniel Stenberg, creator of the popular open-source data transfer program cURL, has pointed out that his project has been flooded with bogus, AI‑written security reports that drown maintainers in pointless busywork.
Mozilla knows about this issue. Brian Grinstead, a Mozilla distinguished engineer, and Christian Holler, a Mozilla principal software engineer, wrote, “AI-assisted bug reports have a mixed track record, and skepticism is earned. Too many submissions have meant false positives and an extra burden for open-source projects.”
You can say that again. At FOSDEM 2026 in Brussels, Belgium, Stenberg said that, until early 2025, roughly one in six security reports to cURL were valid. That’s because, “in the old days, you know, someone actually invested a lot of time [in] the security report. There was a built-in friction here, but now there’s no effort at all in doing this. The floodgates are open. Send it over.”
Stenberg said: “The rate has gone up too now; it’s more like one in 20 or one in 30, that is accurate.” This rise has turned security bug report triage into “terror reporting,” draining time, attention — and the “will to live” — from the project’s seven‑person security team. He warned that this AI‑amplified noise doesn’t just waste volunteer effort but also risks the broader software supply chain: if maintainers become numb to these junk reports, real vulnerabilities in code will be missed.
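Put concretely, Stenberg's figures mean the junk burden per real bug has roughly quadrupled or worse. A quick back-of-envelope calculation:

```python
# Back-of-envelope from Stenberg's numbers: if 1 in N reports is valid,
# each real bug arrives buried under N - 1 bogus ones.
def junk_per_valid(one_in_n: int) -> int:
    """Bogus reports received per valid report, given a 1-in-N valid rate."""
    return one_in_n - 1

before = junk_per_valid(6)    # early 2025: 5 junk reports per real bug
worst = junk_per_valid(30)    # now: up to 29 junk reports per real bug
```

For a seven-person volunteer security team, that is the difference between triage as a chore and triage as a denial-of-service.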
Indeed, last summer, Stenberg wrote, “We need to reduce the amount of sand in the machine. We must do something to drastically reduce the temptation for users to submit low-quality reports.” The result? More slop than ever kept coming in, so he decided to close down cURL’s bounty for security bug reports: “A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.”
This trend cannot continue. Volunteers run most open-source projects, even mission-critical ones, largely on a shoestring. They don’t have the time or resources to dig through hundreds of AI slop bug reports.
Thankfully, Anthropic took a different approach, Mozilla reported: “Anthropic’s team got in touch with Firefox engineers after using Claude to identify security bugs in our JavaScript engine. Critically, their bug reports included minimal test cases that allowed our security team to quickly verify and reproduce each issue. Within hours, our platform engineers began landing fixes, and we kicked off a tight collaboration with Anthropic to apply the same technique across the rest of the browser codebase.”
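Mozilla hasn't published its verification tooling, but the reason minimal test cases matter is that they make a report mechanically checkable. A hypothetical sketch (the function and crash-code set here are illustrative, not Mozilla's actual code) of the kind of harness a security team could use:

```python
# Hypothetical sketch, not Mozilla's tooling: run a reproducer that came
# attached to a bug report and classify the outcome automatically.
import subprocess

CRASH_CODES = {-11, -6}  # SIGSEGV and SIGABRT as POSIX negative return codes

def run_testcase(cmd, timeout=10):
    """Run a minimal reproducer command; return 'crash', 'hang', or 'ok'."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "hang"
    return "crash" if result.returncode in CRASH_CODES else "ok"
```

A report that ships with a reproducer can be verified in minutes, which is exactly why Firefox engineers could start landing fixes "within hours." A prose-only AI report offers nothing to run.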
That’s how AI and open source should work together. However, my concern is that this approach will be the exception rather than the rule. You see, this collaborative approach required real work from the people using AI. All too often, open-source fixes are produced by inexperienced or lazy developers trying to vibe code their way into open-source projects. Sorry, people, it doesn’t work that way.
Worse, some companies are using AI to dump accurate, but trivial, bug reports on tiny projects. For example, Google recently discovered numerous minor security problems in FFmpeg. This project is used everywhere, from your TV to the web and beyond, to play video and audio files and streams.
So, how small are these bugs? One is a playback bug in the first 10 to 20 frames of Rebel Assault 2, a 1995 game. The FFmpeg team relies on volunteer efforts and doesn’t have the resources to deal with this kind of nonsense. Most importantly, Google isn’t fixing the problems or paying for the fixes, either.
AI and Linux
Now, that’s not to say that, in the right hands, AI can’t be a big help to open source. As Linus Torvalds, creator of Linux and Git, said at the Linux Foundation’s Open Source Summit Korea 2025: “We have people who are doing a lot of work in using AI, to help maintainers deal with the flow of patches and backporting patches to stable versions and things like that.”
A few weeks later, Torvalds said that, while he hates AI hype, he’s “a huge believer in AI as a tool.” Specifically, he’s “much less interested in AI for writing code” and far more excited about AI “as the tool to help maintain code,” including automated patch checking and code review before changes ever reach him.
That’s not to say Torvalds won’t use AI for writing code. In fact, he’s used Google’s Antigravity coding tool to vibe code his toy program AudioNoise, which he uses to create “random digital audio effects” using his “random guitar pedal board design.”
In the Linux community as a whole, there’s already agreement on some ways that AI should be used. Sasha Levin, an Nvidia distinguished engineer and stable-kernel maintainer, declared that human accountability is non-negotiable. Some form of disclosure is needed when AI is used, and maintainers will decide for themselves how to use AI tools.
Additionally, Levin revealed he’d already wired LLMs into two of the most thankless jobs in the project: identifying backports and security fixes. AI is now used in AUTOSEL, the system that identifies kernel patches for backporting to stable releases and Linux’s in-house CVE workflow. This linkup eliminates a lot of tedious scut work.
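The kernel's actual AUTOSEL pipeline runs an LLM over patches and their history; as a toy illustration of the triage idea only (the keyword heuristic and function name below are stand-ins, not the kernel's code), the decision it automates looks roughly like this:

```python
# Toy stand-in for AUTOSEL-style triage: decide whether a commit message
# reads like a fix worth backporting to a stable kernel tree. The real
# system uses an LLM; this heuristic only illustrates the workflow.
FIX_HINTS = ("fix", "leak", "overflow", "use-after-free", "race")
FEATURE_HINTS = ("add support", "new driver", "refactor", "cleanup")

def backport_candidate(commit_message: str) -> bool:
    """True if the message looks like a stable-tree fix, not a feature."""
    msg = commit_message.lower()
    if any(hint in msg for hint in FEATURE_HINTS):
        return False  # stable trees take fixes, not new features
    return any(hint in msg for hint in FIX_HINTS)
```

Scanning tens of thousands of commits per release for backport candidates is precisely the boring-but-necessary work where an AI reviewer, with a human maintainer making the final call, earns its keep.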
Torvalds also said he believes LLMs should be treated as the next step in compiler evolution rather than as replacements for humans. He compared AI’s adoption to the shift from assembly to higher-level languages. That shift was initially controversial, but was eventually accepted as a way to free developers from drudge work, such as writing boilerplate or meticulously drafting commit messages in a second language.
Coding responsibly
Dan Williams, an Intel senior principal engineer and kernel maintainer, agreed that AI has proven useful for reviewing code and improving productivity. However, he warned, “I do career talks at high schools, and I tell them the most important thing you can learn in school, and you will use it, is to ‘show your work.’ And I feel like AI is the ultimate, ‘I don’t have to show my work because the AI told me it is correct.'”
Williams is right, and that lack of responsibility is unhelpful. As IBM distinguished engineer Phaedra Boinodiris and Rachel Levy, North Carolina State University’s executive director of the Data Science and AI Academy, observed recently, AI literacy is a must going forward, and that means far more than just knowing how to write LLM prompts. Students must learn the basics, and everyone must be welcome at the table when determining how to use AI successfully in open source or elsewhere.
One important reference comes from Stormy Peters, AWS head of open source strategy, who said in a speech at the recent Linux Foundation Members Summit, “I was worried that AI would kill open-source software because I would generate this code or this pull request so quickly that I wouldn’t see any value in it. Why would I spend my time pushing it upstream when anyone could just generate it on demand?”
That hasn’t proven to be the case. As Peters explained, “What has actually happened is that people are submitting all of the slop that they’re generating out of AI.”
While the AI-aided coders might have wanted to do good — “it’s really quick, so I should, and it’s useful, so I should contribute it” — there’s no follow‑through because these people don’t understand what the AI produced: “What happens is, it’s not mine, and I don’t know how to maintain it. So if anybody asked me to simplify it or defend it, I can’t, and probably the maintainer of the project also can’t easily figure out what’s going on.”
This state of affairs is not good. Worse, one study found developers were 19% slower with AI-enabled coding because of the time spent revisiting and analyzing code. Meanwhile, other research suggests that AI-generated code tends to have 1.7 times more issues.
Nevertheless, Peters and the other open-source leaders I’ve been speaking to (yes, even Stenberg) think AI can be very useful to open source.
We must use AI carefully and consider how it’s changing open-source technology. Used intelligently and with real effort, as Anthropic and Mozilla have shown, AI and open source can form a beautiful friendship. But if we don’t pay that level of attention, we’re in for a real mess.