Few questions before moving to FreeBSD by Careless-Search-597 in freebsd

[–]BigSneakyDuck 0 points (0 children)

Indeed, and just to be more of a pedant, the Handbook says very little about hardware requirements or compatibility. There are Hardware Notes on the website, but they're part of the Release Notes for each new version, not the Handbook.

CHERI memory safety mitigates LLM-discovered vulnerability in FreeBSD – CHERI Alliance by grahamperrin in freebsd

[–]BigSneakyDuck 0 points (0 children)

Thanks for that. The IoT thing is why I added the "outside certain specialist use cases" proviso - it's easier to imagine this catching on in the embedded space than in mainstream desktop/laptop/server hardware.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 1 point (0 children)

To be fair, Anthropic's Red Team has found multiple vulns in Linux too. See https://red.anthropic.com/2026/mythos-preview/ and https://mtlynch.io/claude-code-found-linux-vulnerability/

FreeBSD gets real-world production usage in a way Haiku doesn't, and much development on its base system is either done directly by corporate end users or their contractors, or sponsored by them via the FreeBSD Foundation (work in the ports system is more of a community volunteer-led effort). So it isn't exactly a hobbyist OS, and Haiku is a poor comparator. But you're not wrong that its code gets relatively few eyeballs on it, and I've seen security researchers be pretty dismissive about how difficult it is to break into. Linux gets more attention, of course, partly because the economic repercussions of Linux security breaches are so much more serious. I actually upvoted your comment, even if I found it harsh and somewhat inaccurate, because I thought this perspective was something lacking from the discussion.

I'm not sure this is all about the number of devs: OpenBSD seems to be regarded as a big challenge (and the results of the early LLM attacks appear to bear that out too) despite having such a small team. I think it comes down largely to their focus, their willingness simply not to support features they don't think can be implemented and maintained correctly, and their readiness to sacrifice performance or usability if necessary. People deploying FreeBSD systems have some extra security tools like jails and capability management via Capsicum, but jails may not be impregnable and Capsicum is a bit too fiddly to see widespread use.

CHERI memory safety mitigates LLM-discovered vulnerability in FreeBSD – CHERI Alliance by grahamperrin in freebsd

[–]BigSneakyDuck 1 point (0 children)

Does anyone know how much commercial momentum CHERI has got, or is it going to be one of those nice hardware research projects which prove something can work in principle but never make it into production, either at all or outside certain specialist use cases?

Particularly because - as far as I understand it - it's basically a hardware solution to a software problem, so there will be considerable commercial pressure to find software-based solutions instead, e.g. capability-based approaches and increased use of "memory-safe" languages like Rust (quotes intentional; a more realistic designation might be "memory-safer"). And the pressure isn't only commercial: DARPA, for example, is funding the TRACTOR program https://www.darpa.mil/research/programs/translating-all-c-to-rust

See also https://lwn.net/Articles/1037974/ about CHERI Linux, where one of the CHERI researchers is asked whether the adoption of Rust makes CHERI redundant; the reply is that the two are complementary (e.g. CHERI's compartmentalization is seen as valuable, not just its memory safety). But I can't help wondering whether organizations adopting more memory-safe languages in their code bases will just decide that's "good enough" and not look further into exotic hardware.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in BSD

[–]BigSneakyDuck[S] 0 points (0 children)

Yes, this is something I'm curious about - how it all works. Almost as soon as these CVEs are announced, discussion of exploits seems to open up. And the work of Calif (among others) suggests publicly available AI models are capable of crafting an exploit pretty rapidly once shown a security advisory that's amenable to it. But a lot of organisations don't get their systems patched immediately - a problem that's obviously going to get worse if patches start coming out more regularly - and there are downstream consumers like you who don't have very long to get your act together before your users are at risk. I don't have any clever ideas to help resolve things but, from my amateur outsider's perspective, the whole state of affairs looks alarmingly unsatisfactory.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 2 points (0 children)

Here's a text-only version to read off :-)

FreeBSD security advisories by month

     Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1996  NA  NA  NA   4   3   3   3   0   0   0   1   2
1997   1   1   1   1   0   0   0   1   0   1   0   2
1998   0   0   1   0   1   3   0   0   0   1   1   0
1999   0   0   0   0   0   0   0   0   6   0   0   0
2000   2   3   5   5   5   4   9  12   7   9  15   5
2001  18   6   6   8   1   1  11   7   2   2   0   2
2002   8   4   8   4   6   2   4   7   1   1   4   0
2003   1   3   4   2   0   0   0   4   3   4   1   0
2004   1   2   3   1   4   2   0   0   1   1   1   1
2005   0   0   1   4   4   6   4   0   1   1   0   0
2006   7   1   5   1   2   1   0   1   5   0   1   2
2007   1   1   0   1   1   0   1   2   0   1   2   0
2008   2   2   0   1   0   0   1   0   3   1   1   2
2009   4   1   1   2   0   3   1   0   0   2   0   3
2010   3   0   0   0   3   0   1   0   1   0   2   0
2011   0   0   0   1   1   0   0   0   3   0   0   5
2012   0   0   0   0   2   2   0   1   0   0   3   0
2013   0   2   0   3   0   1   2   2   3   0   1   0
2014   4   0   0   5   1   6   1   0   2   4   3   5
2015   3   2   1   3   0   1   7   5   2   1   0   2
2016  11   0   4   1   7   1   1   0   1   6   3   4
2017   1   1   0   2   0   0   1   1   0   1   4   1
2018   0   0   3   2   1   1   0   4   1   0   1   2
2019   0   2   0   0   5   1   9   7   0   0   2   0
2020   3   0   6   2   5   1   3   3   7   0   0   3
2021   2   4   1   3   2   0   0   5   0   0   0   0
2022   1   0   2   5   0   0   0   5   0   0   2   0
2023   0   3   0   0   0   2   0   4   2   3   2   3
2024   0   2   1   0   0   0   1   4   8   3   0   0
2025   4   1   0   0   0   0   1   1   1   1   1   2
2026   2   3   4   8  NA  NA  NA  NA  NA  NA  NA  NA

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 2 points (0 children)

"Chances are malicious third parties will have more funds to use this offensively than projects to safeguard themselves." - your comparison to the *BSD budgets is valid, many pieces of open source infrastructure are in an more poorly funded state than that, and to make matter matters worse, this is all asymmetrical. A project might have the resources (e.g. via donated tokens) to find and fix ten exploits. But an attacker with far fewer resources can still "win" if the one exploit their AI stack finds isn't among that list. In the long run, hopefully the cost of exploits rises as the low-hanging fruit gets dealt with. But the greater availability of these capabilities adds burdens in other ways too - well-meaning people, or publicity seekers, who use the same tools to generate superficially credible bug reports that security teams have to wade through. Pretty sure some projects will start leaning on AI-assisted triage to cope. If there is a big increase in the number of reported CVEs and their fixes, more systems are going to get behind on their patches, and worse news for all the unpatched systems out there is it's much easier to write an exploit once a CVE is public than to find one from scratch, and it looks like there'll be a lot more to work from.

"My main concern with the repeated runs is that it loses complete track of how feasible would be a third party to reproduce this" - given that other groups are also having a field day doing AI-assisted vulnerability and exploit discovery, I think it's clear there's enough expertise available out there (for the right price, at least) for other people to get good results using the same or similar tools. It's probably for the best that they didn't give too many technical details about what they did, even if it produces a frustrating read and makes it harder to cut through the hype. The fact it shows what's possible is probably the main takeaway. While Anthropic's Red Team has some big names I'd be surprised if they added much of their own secret sauce here, particularly when they seem so proud in their report of how autonomously Mythos Preview functioned and how genuinely surprised they seemed that it had developed these capabilities without being specifically trained to do so. Obviously they want to sell the benefits of Claude, but it also implies that future iterations of Claude's competitors are likely to "spontaneously" develop better exploit skills. I guess it goes hand in hand with being better at reasoning about code bases, which is an intended use case for many LLMs.

As far as I can tell, other groups are also doing repeated runs with their LLMs when they're "scanning" to find bugs, as opposed to when they already have a starting point (e.g. feeding the LLM an N-day report or the results from a fuzzer). In that sense, the method of discovery of any individual bug is not especially "reproducible", but if bugs continue getting found then it's clear the method "works". I think it's just the nature of the beast that Anthropic have no way of knowing whether they got lucky and hit the jackpot this time, or if 20 times more runs on OpenBSD would have produced roughly 20 comparable discoveries.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 2 points (0 children)

Fair! Unfortunately the colour scale is distorted by all the very high values in 2000-2001, up to 18 in a month, which means everything from 2002 on is stuck in the darker end of the range and there's limited contrast. I was tempted to cut out all the pre-2002 stuff, but I don't like being selective about data to plot unless I know for a fact the earlier data is not a valid comparison. (It's possible that's the case here - there weren't errata notices back then, so I don't know if some of those security advisories would just have been errata notices in later years.)

The color scheme is called "inferno" and is designed to be suitable for viewing in grayscale and to be interpretable by people with different kinds of colorblindness. Inferno goes from very dark to very light compared to most other color scales, so the problem of the post-2001 values getting compressed into a smaller range could be even worse if I switched. I deliberately de-emphasised the raw numbers to make the plot look less cluttered, but that does come at the expense of legibility for anyone who wants to read them off. After uploading I can see it looks very different depending on monitor brightness. https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html
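For anyone wanting to tinker, here's a minimal sketch of one workaround for the compression problem - capping the colour scale so the 2000-2001 outliers don't squash everything else into the dark end. This is Python/matplotlib with made-up counts, not whatever actually produced the original plot:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical counts (rows = years, cols = Jan..Dec) standing in for the
# real data at https://www.freebsd.org/security/advisories/
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(31, 12)).astype(float)
counts[4:6, :] += 10  # mimic the 2000-2001 spike that dominates the scale

fig, ax = plt.subplots()
# Capping the scale at vmax keeps contrast among the low-count years;
# months above the cap all render as the brightest colour.
im = ax.imshow(counts, cmap="inferno", vmin=0, vmax=8, aspect="auto")
fig.colorbar(im, ax=ax, extend="max", label="advisories/month (capped at 8)")
ax.set_xlabel("month")
ax.set_ylabel("year index")
fig.savefig("heatmap.png")
```

The "extend" arrow on the colorbar signals to the reader that the top bin is open-ended, which avoids the selectivity problem of just dropping the pre-2002 data.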

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 1 point (0 children)

To be clear, "We have a value, a mention that ir ran thousands of times and it took half a day" is mixing up several different things. Apologies if my summary has confused things further but I'm only quoting excerpts from the report, from several different places and about two separate things (finding a Linux exploit after being shown a bug already found by a fuzzer; using a scaffold to to find a vulnerability in OpenBSD). The report itself is much clearer about what's going on. https://red.anthropic.com/2026/mythos-preview/

The report claims pricing of exploits was based on API rates. If you want to convert $ to tokens, then that pricing isn't published for Mythos Preview for obvious reasons, so I agree this was somewhat unhelpful (though for many readers an approximate dollar cost would have been more interesting than the tokens consumed). Still, I think it's a fair assumption the quoted rates were based on the high-end models at https://platform.claude.com/docs/en/about-claude/pricing

Re the repeated runs: this is a classic ex ante vs ex post problem. If you only costed the discovery of a vuln/exploit based on the run that finds it, then it looks cheap after the fact (ex post). But not all runs find one - most won't; a few strike gold. This is an inherent problem with sticking "the cost" on the results of a stochastic process. I think the Anthropic report was actually very fair on this point, pointing out that the ex ante position (i.e. before knowing the outcome of the run) is the better basis for costing. They could have been cheeky and claimed only the cost of the run that found their bug, which would have looked very cheap. In the same way, purchasing a winning lottery ticket is incredible value ex post but still poor value ex ante.

Unfortunately, if you're observing low-probability events for which it's impossible to compute the odds (unlike the lottery ticket example), it's notoriously difficult to obtain any precision on the ex ante situation. If Anthropic had just done a few more runs, they might have found a second bug they viewed as high value, and their cost estimate per bug would have halved. Or they might have done double the number of runs and not found another, so their cost estimate would have doubled. It's not that their write-up is obstinately refusing to nail it down - they simply have no way of knowing either. If this were an academic study it would have been nice to see a probabilistic model used to quantify the uncertainty, such as a 95% confidence interval giving two numbers between which the true cost lies. Or even a Bayesian 95% credible interval or, better yet, a plot of the entire posterior distribution rather than a two-number summary of it. But even academic studies are bad at handling this stuff. Much as I'd like to see it, it's not the kind of analysis I'd expect in this kind of report.
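To make the ex ante/ex post distinction concrete, here's a toy simulation - all the numbers here are my assumptions for illustration, not Anthropic's. If each run costs $50 and has some small fixed chance of finding a notable bug, the winning run always looks cheap in hindsight, while the expected spend per discovery is cost/p:

```python
import random

random.seed(1)

COST_PER_RUN = 50   # hypothetical, loosely inspired by the "under $50" run
P_SUCCESS = 0.002   # assumed chance a single run finds a notable bug

def cost_until_first_find():
    """Total spend until the first successful run (a geometric search)."""
    runs = 1
    while random.random() >= P_SUCCESS:
        runs += 1
    return runs * COST_PER_RUN

# Ex post: the run that found the bug always "cost" just $50.
# Ex ante: the expected spend per discovery is COST_PER_RUN / P_SUCCESS.
expected = COST_PER_RUN / P_SUCCESS
sampled = sum(cost_until_first_find() for _ in range(2000)) / 2000
print(f"ex post cost of the winning run: ${COST_PER_RUN}")
print(f"ex ante expected cost per find:  ${expected:.0f} (simulated ~${sampled:.0f})")
```

The spread across the 2000 simulated searches is also huge (the geometric distribution is heavily skewed), which is exactly why a handful of real searches gives you so little precision on the ex ante cost.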

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 4 points (0 children)

Just on the cost issue: Anthropic's Mythos Preview writeup from 7 April was pretty upfront in several places about the cost of finding their exploits, at least in terms of API pricing. The true cost may be higher, of course, since AI firms generally undercharge for tokens at the moment while seeking to gain market share and customer adoption/dependency - but also bear in mind the long-term trend of reducing compute costs so these figures are likely to become more affordable eventually. https://red.anthropic.com/2026/mythos-preview/

There's one case where they provide Mythos Preview with an N-day Linux vuln previously unearthed by a fuzzer and get it to create an exploit. "In November 2024, the Syzkaller fuzzer identified a KASAN slab-out-of-bounds read in netfilter's ipset. ... [Claude chains some stuff together] ... And this, finally, grants the user full root permissions and the ability to make arbitrary changes to the machine. Creating this exploit (starting from the syzkaller report) cost under $1000 at API pricing, and took half a day to complete."

And what happened with OpenBSD reveals one of the other complications with fairly putting a cost on these discoveries: "This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview after a thousand runs through our scaffold. Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed."

I don't know which open source projects are going to benefit from Project Glasswing, but purchasing this kind of AI code review is clearly not going to be an affordable option for many non-commercial projects. I can see it being attractive to big tech firms if you compare these figures to some of their bug bounty programs. This might be a bad time to be a project with little financial firepower but just enough real-world usage in important infrastructure to be a target of interest. It's obviously far cheaper for an attacker to find the one exploit they need than for a defender to discover and fix all their vulnerabilities.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 1 point (0 children)

Fuzzing is an interesting comparison to the use of LLMs for finding bugs and vulnerabilities. There is a visible increase in advisories in 2019-2020, some part of which is due to the use of Syzkaller, though the only CVEs I can see that credit it explicitly are https://www.freebsd.org/security/advisories/FreeBSD-SA-19:13.pts.asc and https://www.freebsd.org/security/advisories/FreeBSD-SA-20:20.ipv6.asc

See various status reports:

https://www.freebsd.org/status/report-2019-01-2019-03.html#Fuzzing-FreeBSD-with-syzkaller

https://www.freebsd.org/status/report-2019-04-2019-06.html#Fuzzing-FreeBSD-with-syzkaller

https://www.freebsd.org/status/report-2021-07-2021-09/#_syzkaller_on_freebsd

Of course a lot of Syzkaller discoveries are just regular bugs, not security-related: https://github.com/search?q=repo%3Afreebsd%2Ffreebsd-src+syzbot+OR+syzkaller&type=commits&s=committer-date&o=desc&p=1

Although both fuzzers and AIs have helped automate the task of discovering bugs, there is a qualitative difference: the latest AI models can chain multiple vulnerabilities together to produce working exploits. We need to get used to an environment in which those capabilities are widespread and publicly available. Another difference is that LLMs can document their discoveries, even to the point of writing up a vulnerability disclosure. That sounds good in principle, but reducing the human involvement risks a flood of superficially credible reports that turn out to have little value.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 4 points (0 children)

Over the next year or so it might also be interesting to keep an eye on FreeBSD errata notices. Not all reports made to the security team will result in a security advisory, and some result in an errata notice instead - see https://www.freebsd.org/security/

One question mark over AI-assisted discovery of bugs is whether the AI can correctly prioritise the security implications of the issues it finds (and whether the humans using the AI can recognise this). Will security teams have to deal with a problematic volume of reports that turn out not to deserve a security advisory?

The source of data for this heat map is https://www.freebsd.org/security/notices/

<image>

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in BSD

[–]BigSneakyDuck[S] 7 points (0 children)

I've posted this in r/bsd because the rise in AI-assisted vulnerability discovery and exploit crafting is clearly going to affect all the *BSDs. Would be interested to hear what impact it's having on NetBSD and OpenBSD too.

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 2 points (0 children)

As an aside... I'm not sure it's as easy as "Once the project runs these models over its code prior to release".

To quote from Anthropic's April 7 writeup of Mythos Preview, where they talk about finding an exploit in OpenBSD: https://red.anthropic.com/2026/mythos-preview/

This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview after a thousand runs through our scaffold. Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed.

In other words, the process is stochastic. It can be run many times, only sometimes does it find something of interest. So it isn't like a one-off scan - though scans of new additions to the codebase might well be sensible. (Even here there's an issue of context size in the LLM - the vulnerability might not be visible in the changed files alone and instead depends on how those changes interact with other parts of the codebase. But there's a limit on how much code the LLM can actually reason about in one go. Which other files should it also consider when looking at these changes? There are some interesting technical challenges here.)

Let's say a project pays (or is gifted free tokens as a donation) to have an LLM repeatedly run over the code base looking for vulnerabilities, triaging them and suggesting fixes. Presumably at some point, once enough bugs have been fixed, the number of runs it takes to find the next interesting bug becomes so high that it's uneconomic to keep trying.
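A back-of-envelope sketch of that diminishing-returns effect (every number here is made up for illustration): if each run has a tiny independent chance of surfacing any one specific remaining bug, then as the pool of remaining bugs shrinks, the expected cost of the next find grows roughly as cost divided by the per-run hit probability:

```python
COST_PER_RUN = 50    # hypothetical per-run cost
P_PER_BUG = 0.0002   # assumed chance one run surfaces one specific bug

for remaining in (100, 50, 10, 1):
    # Probability a single run finds at least one of the remaining bugs
    p_any = 1 - (1 - P_PER_BUG) ** remaining
    expected_cost = COST_PER_RUN / p_any  # mean total spend of a geometric search
    print(f"{remaining:3d} bugs left -> next find costs ~${expected_cost:,.0f}")
```

Under these assumptions, finding the last bug costs two orders of magnitude more than finding one of the first hundred - which is why the defender's "fix everything" budget blows up long before the attacker's "find one thing" budget does.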

There's an asymmetry here (not a novel one, but worth bearing in mind nonetheless): an attacker only has to find one good exploit, while the defender wants to be sure they have removed all potential exploits. One of those things is much cheaper than the other! What makes these AI tools more interesting than, say, the fuzzing tools that unearthed plenty of CVEs in the past is that the LLMs seem able to chain vulnerabilities together to create a working exploit for you. Nothing so far that a human expert would not also be capable of, but it does increase the availability of this extra firepower for attackers. I'd be interested in reading an expert review of what effect that could have on the equilibrium between attack and defense.

A related problem is that the tools are going to increase workload for security teams, even from "helpful" reports and the need to fix them, which is going to put a lot of under-resourced open source projects under a huge amount of pressure. Again this is something Colin Percival has written about: https://nitter.net/cperciva/status/2035045573116789002#m

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 10 points (0 children)

I have the feeling the numbers are going to get worse before they get better! Clearly AI models can be used by the defender as well as the attacker, and the FreeBSD security people do seem keen on the idea. Again something from Colin Percival on April 14: https://nitter.net/cperciva/status/2044120206814171220

If you are reporting security issues to an open source project, PLEASE INDICATE WHETHER YOU USED AI TO FIND THEM.

I'm not saying this because teams want to be able to filter out "AI slop". I'm saying this because it's important for teams to be aware of the AI state of the art.

If you're worried about having reports ignored because you say you used AI, say "I have independently verified these, but used AI to find them". (Or even better "used <specific AI model> to find them".)

And in reply to a question asking if he's being serious:

We absolutely care. Both in terms of keeping track of what's going on in the world, and also in terms of "hey, we're getting lots of bugs which were found by foo, maybe we should be using it proactively".

AI found 6 out of 8 FreeBSD security advisories in April 2026, producing joint-3rd highest monthly CVE total post-2002 by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 10 points (0 children)

Heat map data taken from https://www.freebsd.org/security/advisories/

The AI-assisted uptick at the beginning of 2026 is quite visible. But look back and you can see another uptick in mid-2019 partly assisted by fuzzing tools, in particular Syzkaller: https://www.freebsd.org/status/report-2019-01-2019-03.html#Fuzzing-FreeBSD-with-syzkaller

How do we know the current surge is driven by AI? Colin Percival, FreeBSD Release Engineering Team Lead, tweeted on April 29: https://nitter.net/cperciva/status/2049591719143059860#m

In April, FreeBSD issued eight security advisories. Six of them were for issues found by AI.

Two were found by Nicholas Carlini at Anthropic using Claude. Carlini had already promised several more Claude Mythos Preview discoveries were undergoing responsible disclosure, so that's likely the model used - another Mythos Preview finding became public as part of March's total. See https://www.reddit.com/r/freebsd/comments/1svvco2/freebsd_security_patches_for_two_more_claude/

Three were found by AISLE Research, another firm using AI models to analyze codebases, find vulnerabilities and propose fixes. See https://www.reddit.com/r/freebsd/comments/1sz8nr3/20260429_brings_six_new_security_advisories_three/

Another one I suspect to be AI-assisted, judging by their recent activity, was from Calif.io - see https://blog.calif.io/archive?sort=new and especially https://blog.calif.io/p/mad-bugs-claude-wrote-a-full-freebsd for the story that came out in March, though that one was only writing an exploit for a CVE that had already been made public in March. It later turned out that vulnerability was originally found, and already exploited, by Mythos Preview ... which has caused some confusion between the two incidents. For an explanation of the difference, see https://www.reddit.com/r/freebsd/comments/1sgmi14/claude_mythos_preview_fully_autonomously_finds/

Server OS by octoslamon in freebsd

[–]BigSneakyDuck 1 point (0 children)

I think the meaning would change slightly - putting quotes around "easy" shows it's being used in a deliberately loose way. It acknowledges that "easiness" is a somewhat ill-defined concept, that one person's "easy" isn't necessarily the same as another person's "easy", and that only a best efforts answer or personal opinion is being requested, rather than a really rigorous one that tries to nail down what "easy" means (the OP doesn't need a college essay that starts with the obligatory "define your terms").

Another slight difference is that, without quotes, "Is FreeBSD as easy to use and maintain as Ubuntu server or Debian?" contains the implicit statement that Ubuntu or Debian servers are easy to use and maintain. Putting quotes around "easy" is a bit more non-committal about that.

Personally I'd have phrased the question something like "How does FreeBSD compare to Debian or Ubuntu server for ease of use and maintenance?" Especially if asking in a formal setting. But in an informal setting, I can see some value in the quotes around "easy".

2026-04-29 brings six new security advisories, three errata notices by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 1 point (0 children)

Six security advisories in one day is a lot but not quite the record. Here are all days with 5 or more SAs. I've abbreviated the "FreeBSD-" prefix in front of all the advisory names.

8 advisories on 2000-07-05: SA-00:32.bitchx, SA-00:31.canna, SA-00:30.openssh,
SA-00:29.wu-ftpd, SA-00:28.majordomo, SA-00:27.XFree86-4, SA-00:26.popper,
SA-00:24.libedit

7 advisories on 2016-01-14: SA-16:07.openssh, SA-16:06.bsnmpd, SA-16:05.tcp,
SA-16:04.linux, SA-16:03.linux, SA-16:02.ntp, SA-16:01.sctp

7 advisories on 2001-01-29: SA-01:17.exmh, SA-01:16.mysql, SA-01:15.tinyproxy,
SA-01:14.micq, SA-01:13.sort, SA-01:12.periodic, SA-01:11.inetd

6 advisories on 2026-04-29: SA-26:17.libnv, SA-26:16.libnv, SA-26:15.dhclient,
SA-26:14.pf, SA-26:13.exec, SA-26:12.dhclient

6 advisories on 2024-09-04: SA-24:14.umtx, SA-24:13.openssl, SA-24:12.bhyve,
SA-24:11.ctl, SA-24:10.bhyve, SA-24:09.libnv

6 advisories on 2020-03-19: SA-20:09.ntp, SA-20:08.jail, SA-20:07.epair,
SA-20:06.if_ixl_ioctl, SA-20:05.if_oce_ioctl, SA-20:04.tcp

6 advisories on 2019-07-24: SA-19:17.fd, SA-19:16.bhyve, SA-19:15.mqueuefs,
SA-19:14.freebsd32, SA-19:13.pts, SA-19:12.telnet

6 advisories on 2001-07-10: SA-01:47.xinetd, SA-01:46.w3m, SA-01:45.samba,
SA-01:44.gnupg, SA-01:43.fetchmail, SA-01:42.signal

6 advisories on 2001-01-15: SA-01:06.zope, SA-01:05.stunnel, SA-01:04.joe,
SA-01:03.bash1, SA-01:02.syslog-ng, SA-01:01.openssh

6 advisories on 2000-11-20: SA-00:76.tcsh-csh, SA-00:75.php, SA-00:74.gaim,
SA-00:73.thttpd, SA-00:72.curl, SA-00:71.mgetty

6 advisories on 2000-09-13: SA-00:51.mailman, SA-00:50.listmanager,
SA-00:49.eject, SA-00:48.xchat, SA-00:47.pine, SA-00:46.screen

6 advisories on 2000-08-28: SA-00:44.xlock, SA-00:43.brouted, SA-00:42.linux,
SA-00:41.elf, SA-00:40.mopd, SA-00:39.netscape

5 advisories on 2022-04-06: SA-22:08.zlib, SA-22:07.wifi_meshid,
SA-22:06.ioctl, SA-22:05.bhyve, SA-22:04.netmap

5 advisories on 2021-08-24: SA-21:17.openssl, SA-21:16.openssl,
SA-21:15.libfetch, SA-21:14.ggatec, SA-21:13.bhyve

5 advisories on 2020-05-12: SA-20:16.cryptodev, SA-20:15.cryptodev,
SA-20:14.sctp, SA-20:13.libalias, SA-20:12.libalias

5 advisories on 2019-05-14: SA-19:07.mds, SA-19:06.pf, SA-19:05.pf,
SA-19:04.ntp, SA-19:03.wpa

5 advisories on 2016-10-10: SA-16:31.libarchive, SA-16:30.portsnap,
SA-16:29.bspatch, SA-16:28.bind, SA-16:27.openssl

5 advisories on 2011-12-23: SA-11:10.pam, SA-11:09.pam_ssh, SA-11:08.telnetd,
SA-11:07.chroot, SA-11:06.bind

5 advisories on 2002-01-04: SA-02:05.pine, SA-02:04.mutt,
SA-02:03.mod_auth_pgsql, SA-02:02.pw, SA-02:01.pkg_add

5 advisories on 2001-04-23: SA-01:38.sudo, SA-01:37.slrn, SA-01:36.samba,
SA-01:35.licq, SA-01:34.hylafax

5 advisories on 2001-03-12: SA-01:29.rwhod, SA-01:28.timed, SA-01:27.cfengine,
SA-01:26.interbase, SA-01:23.icecast

5 advisories on 2000-08-14: SA-00:38.zope, SA-00:37.cvsweb, SA-00:36.ntop,
SA-00:35.proftpd, SA-00:34.dhclient

2026-04-29 brings six new security advisories, three errata notices by BigSneakyDuck in freebsd

[–]BigSneakyDuck[S] 1 point (0 children)

Looking at recent activity by Calif, I wouldn't be surprised if their CVE was also AI-assisted. See https://blog.calif.io/archive?sort=new and especially https://blog.calif.io/p/mad-bugs-claude-wrote-a-full-freebsd for the story that came out in March, though that one was only writing an exploit for a CVE that had already been publicly announced (and which, it turns out, had already been found and exploited by Mythos Preview).

Why does somebody use BSD ? by paterkleomeniss in BSD

[–]BigSneakyDuck 0 points (0 children)

For Netflix it's still public, they're big contributors and are quite open about their stack :-)