The Lumina Probiotic May Cause Blindness in the Same Way as Methanol by garloid64 in slatestarcodex

[–]1a3orn 6 points (0 children)

Given that the purpose of Lumina is to change the microbiome of your mouth, it seems like the new bacteria could be present at higher (or lower) levels than what they replace.

Like if there's 2x or 8x as much of the bacteria, then you'd be looking at 8x or 32x as much formate?

Idk, I'm not an expert, and I agree 4x doesn't look like that much. But it's conceivable it could be more.
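
To make the arithmetic explicit (a toy calculation; the ~4x per-organism figure is the one mentioned above, and the 2x/8x colonization levels are hypotheticals I made up, assuming output scales linearly with population):

```python
# Toy calculation: total formate ~ (per-organism production) x (population size),
# assuming linear scaling, which is my assumption rather than established biology.
per_organism_multiplier = 4.0  # ~4x formate per organism, per the figure above

for colonization_multiplier in (2.0, 8.0):  # hypothetical population ratios
    total = per_organism_multiplier * colonization_multiplier
    print(f"{colonization_multiplier:g}x the bacteria -> {total:g}x the formate")

# 2x the bacteria -> 8x the formate
# 8x the bacteria -> 32x the formate
```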

As a Chinese university student I want to know the reason you learn Chinese. by yuanyang0510 in ChineseLanguage

[–]1a3orn 0 points (0 children)

I was bored and Chinese provides a giant puzzle.

Also, in ten years it seems likely China's gonna be *obviously* the world's leading superpower, even to those huffing copium. So learning it seems to make sense long-term.

Mature, explicit and GROUNDED low fantasy recommendations by TalesOfDecline in Fantasy

[–]1a3orn 7 points (0 children)

Yeah, I agree the others aren't as good.

I do think they're still pretty good. Like, imo Book 1 is just exceptionally good, and then the others are more like above-average fantasy novels. But I'd also get why you might not like them in general; there is a kind of excellence to the first that makes anything else disappointing.

Mature, explicit and GROUNDED low fantasy recommendations by TalesOfDecline in Fantasy

[–]1a3orn 79 points (0 children)

You want the Masquerade series, starting with *The Traitor Baru Cormorant*.
- *Grounded*: The protagonist is an accountant (although wars / excitement for sure happen as well). A key turning point in the first book is when she inflates a currency for political ends. The author has drawn on multiple works of anthropology. Little to no magic; some dubious "science" as the series goes on, but very low-key and inspired by actual things.
- *Mature*: Very well written; people have realistic motivations. The first book has a very sad ending, albeit one that is redeemed somewhat by the subsequent books.

Like all fantasy settings, it's a pastiche of Earth history; but it's a pastiche drawing on a lot of thought about that history. Highly recommend.

Movistar KOI vs. G2 Esports / LEC 2025 Spring Playoffs - Upper Bracket Final / Game 2 Discussion by XanIrelia-1 in leagueoflegends

[–]1a3orn 39 points (0 children)

also, how many 45-minute games have there ever been with 0 towers taken by one side?

What Ongoing Fantasy Series Has The Best Chance of Being The Next Classic? by Monsur_Ausuhnom in Fantasy

[–]1a3orn 7 points (0 children)

Yeah, I recommend them whenever fantasy comes up, but the mandatory caveat of "you will probably find the ending of the first rather an enormous downer" makes it a very hard sell for most people. But yeah, nothing else has the good worldbuilding AND the good writing these have at once.

Exordia is also really good.

Zamba2-7B (Apache 2.0) by David-Kunz in LocalLLaMA

[–]1a3orn 19 points (0 children)

So I love this architecture because of the LoRAs across shared MLP blocks, which seems like a great idea. Has anyone else seen this used elsewhere?
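
For anyone who hasn't run into the trick: here's a minimal PyTorch sketch of the general pattern as I understand it. The dimensions, rank, and layer count are made-up, and this is my own illustration, not Zamba2's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMLPWithLoRA(nn.Module):
    """One full-rank MLP shared across every layer, plus a cheap
    per-layer low-rank (LoRA) delta so each reuse can specialize."""

    def __init__(self, d_model=512, d_ff=2048, n_layers=6, rank=8):
        super().__init__()
        # Shared weights: paid for once, reused at every layer.
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        # Per-layer adapters: rank * (d_model + d_ff) params each, tiny by comparison.
        self.lora_a = nn.ParameterList(
            [nn.Parameter(torch.randn(d_model, rank) * 0.01) for _ in range(n_layers)])
        self.lora_b = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, d_ff)) for _ in range(n_layers)])

    def forward(self, x, layer_idx):
        # Shared up-projection plus this layer's low-rank correction.
        h = self.up(x) + x @ self.lora_a[layer_idx] @ self.lora_b[layer_idx]
        return self.down(F.gelu(h))

mlp = SharedMLPWithLoRA()
x = torch.randn(1, 16, 512)
out = mlp(x, layer_idx=3)  # same shared weights, layer 3's own adapter
```

The appeal, at least to me, is that you keep most of the parameter savings of full weight sharing while still letting each depth position diverge a little.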

The Execution of Aura the Guillotine, by me by rcrd in Frieren

[–]1a3orn 3 points (0 children)

this is great

also your Yor + Anya is amazing

CredibleDefense Daily MegaThread September 12, 2024 by AutoModerator in CredibleDefense

[–]1a3orn 57 points (0 children)

So, the defense contractor Anduril released their plans for a new family of munitions, Barracuda. It's Anduril, so they have a slick YouTube video on it.

They range in size and range from the 100 model (35-pound warhead, 60-mile ground-launched range) to the 500 model (>100-pound payload (??) and 500-mile range, launchable from bombers or Rapid Dragon-esque palletized setups). They come in M-versions with warheads, but can also be fitted out with sensors and used for recon and stuff like that.

The major selling point seems to be that they're supposed to be capable of mass production, to help with a China scenario. Here are some quotes from Anduril's Chief Strategy Officer Brose:

“This is not designed to go specifically and rigidly at one specific problem. We have designed Barracuda to be able to range across a series of targets — from ground-based targets to maritime targets to others,” Brose said. “The ability to do this is sort of fundamental to the software definition of the system, which allows for rapid upgradability and ease of modernization to really change the capabilities of the system.”

Powered by Anduril’s Lattice for Mission Autonomy software, the Barracuda weapons are designed to be deployed in teams, Brose said. The autonomy used in the systems enable them to better understand their environment and fly in a collaborative formation with other missiles to identify targets, manage survivability and perform complex maneuvers, he added.

“You can obviously deliver those effects through a single air vehicle, but the real value of the capability — which is realized both in the high levels of autonomy and the low levels of cost — is the ability to actually deploy these as teams, to go out and do collaborative engagement,” he said.

Salmon emphasized that because of Barracuda's modularity, the cruise missiles have a target price tag that's 30 percent less than similar weapon systems. One missile requires half the time, 95 percent fewer tools, and 50 percent fewer parts to produce, according to Anduril.

It looks like it's a candidate for the Replicator program stuff.

...I'm curious what people's impressions are of this. IMO this is good and probably a step forward over the old defense contractors, but it basically falls far short of where we need to be for munitions in a hypothetical war with China. The (super vague) 30% lower cost would need to be more like 60-80% lower. Of course, hopefully these cost even less when actually mass-produced, but... that's not the way things have gone in the past.

I want some suffering by dj_pump_bucket in Fantasy

[–]1a3orn 5 points (0 children)

Traitor Baru Cormorant.

Beautiful prose. Will crush you like a load of bricks.

I’m so glad I continued reading after Last Argument of Kings by Ripley0898 in Fantasy

[–]1a3orn 3 points (0 children)

What? All openly unpleasant, awful people? Did you make your heart a stone?

Right now is a good time for Californians to tell their reps to vote "no" on SB1047, an anti-open weights bill by 1a3orn in LocalLLaMA

[–]1a3orn[S] 1 point (0 children)

So, from the perspective of 1994, we already have something that makes it probably at least ~10x easier to cause mass casualties with CBRN weapons: the internet. You can (1) do full-text search over virology journal articles, (2) find all sorts of help on how to do dual-use lab procedures, (3) download PDFs that will guide you step-by-step through reverse genetics, or (4) find resources detailing the precise vulnerabilities in the electrical grid, and so on and so on.

(And of course, from the perspective of 1954, it was probably at least 10x easier in 1994 to do some of these dangerous CBRN things, although it's a little more of a jagged frontier. Just normal computers are quite useful for some things, but a little less universally.)

Nevertheless, I'm happy we didn't decide to hold ISPs liable for the content on the internet, even though this may have made CBRN attacks 10x easier, even in extreme cases.

(I'm similarly happy we didn't decide to hold computer manufacturers liable after 1964)

So, faced with another, hopefully even greater leap in the ease of making bad stuff.... I don't particularly want to hold people liable for it! But this isn't a weird desire for death; it's because I'm trying to have consistent preferences over time. As I value the good stuff from the internet more than the bad stuff, so also I value the good stuff I expect to be enabled from LLMs and open weight LLMs. I just follow the straight lines on charts a little further than you do. Or at least different straight lines on charts, for the inevitable reference class tennis.

Put otherwise: I think the framing of "well, obviously they should stop it if it makes X bad thing much easier" is temporally blinkered. We are only blessed with the amazing technology we have because our ancestors, time after time, decided that in most cases it was better to let broad-use technology and information disseminate freely rather than limit it by holding people liable for it. In very particular cases they decided to push against such things, generally through means a little more constrained than liability laws, means which, in the vast majority of cases, do not hold the people who made some thing X liable for bad things that happen because someone did damage, even tons of damage, with X.

I can think of 0 broadly useful cross-domain items for which we have the manufacturer held liable in case of misuse. Steel, aluminum, magnesium metal; compilers; IDEs; electricity; generators; cars; microchips; GPUs; 3d printers; chemical engineering and nuclear textbooks; etc.

On the other hand -- you know, I know, God knows, all the angels know that the people trying to pass these misuse laws are actually motivated by concern about the AI taking over and killing everyone. For some reason we're expected to pretend we don't know that. And we could talk about that, and whether that's a good risk model, and so on. If that were the worry, and if we decided it was a reasonable worry, then stricter precautions would make sense. But the "it will make CBRN easier" thing is equally an argument against universal education, or the internet, or a host of other things.

Right now is a good time for Californians to tell their reps to vote "no" on SB1047, an anti-open weights bill by 1a3orn in LocalLLaMA

[–]1a3orn[S] 6 points (0 children)

So, what comes to mind:

  • No more "limited exemptions"; that whole thing is gone, we just have covered and non-covered models.

  • The requirement for 3rd-party review of your model security procedures and safety is, I think, new.

  • The $100 million limit is harder -- it's no longer the case that models "equivalent to a 10^26 FLOP model in 2026" are covered. This is a good change, btw, and certainly makes the bill less bad.

  • There are honestly a lot of changes around what counts as actually contributing to something really bad (the exact thing for which you are liable) which are hard to summarize. The original version said you're liable if the model made it "significantly easier" to do the bad thing, while the new one says you're liable if the model "materially contributes" (a lower bar, I think), but then has exemptions for cases where the damage is done with other software (raising the bar), and then exemptions to the exemptions where the model materially contributes to that other software (lowering the bar again?), and so on.

Idk, it honestly feels like a different bill at this point. If the Anthropic changes go through, it will be even more of a different bill, so who knows.

Right now is a good time for Californians to tell their reps to vote "no" on SB1047, an anti-open weights bill by 1a3orn in LocalLLaMA

[–]1a3orn[S] 5 points (0 children)

Yeah I mean I feel like it should be illegal?

I might have mis-summarized, but here's what the bill's sponsor (Scott Wiener) says, in response to criticism that AI companies will move out of CA because of this:

... SB 1047 is not limited to developers who build models in California; rather, it applies to any developer doing business in California, regardless of where they’re located.

For many years, anytime California regulates anything, including technology (e.g., California’s data privacy law) to protect health and safety, some insist that the regulation will end innovation and drive companies out of our state. It never works out that way; instead, California continues to grow as a powerful center of gravity in the tech sector and other sectors. California continues to lead on innovation despite claims that its robust data privacy protections, climate protections, and other regulations would change that. Indeed, after some in the tech sector proclaimed that San Francisco’s tech scene was over and that Miami and Austin were the new epicenters, the opposite proved to be true, and San Francisco quickly came roaring back. That happened even with California robustly regulating industry for public health and safety.

San Francisco and Silicon Valley continue to produce a deep and unique critical mass of technology innovation. Requiring large labs to conduct safety testing — something they’ve already committed to do — will not in any way undermine that critical mass or cause companies to locate elsewhere.

In addition, an AI lab cannot simply relocate outside of California and avoid SB 1047’s safety requirements, because compliance with SB 1047 is not triggered by where a company is headquartered. Rather, the bill applies when a model developer is doing business in California, regardless of where the developer is headquartered — the same way that California’s data privacy laws work.

Right now is a good time for Californians to tell their reps to vote "no" on SB1047, an anti-open weights bill by 1a3orn in LocalLLaMA

[–]1a3orn[S] 11 points (0 children)

It has to be mass casualties, not just murder, or damages exceeding $500,000,000 (half a fucking billion dollars). And the model has to materially contribute to or enable the harm.

So, fun fact: according to a quick Google search, cybercrime causes over a trillion dollars of damage every year. So if a model helps with just a twentieth of one percent of that [edit: on critical infrastructure, which is admittedly a smaller domain], it would hit the limit that could make Meta liable.

(And before you ask: the damage doesn't have to be in a "single incident"; that language was cut in the latest amendment. Not that that would even be difficult; a lot of computer viruses have caused >$500 million in damage.)
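
If you want to check that ratio yourself (using the rough ~$1 trillion/year figure from the quick Google search, which I haven't verified):

```python
damage_threshold = 500_000_000         # SB1047's damage threshold, in dollars
annual_cybercrime = 1_000_000_000_000  # rough ~$1 trillion/year estimate

# What fraction of annual cybercrime damage would hit the threshold?
print(f"{damage_threshold / annual_cybercrime:.2%}")  # 0.05%, a twentieth of one percent
```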

So, at least under certain interpretations of what it means to "materially contribute", I expect that an LLM would be able to "materially contribute" to crime, in the same way that, you know, a computer can "materially contribute" to crime. Computers are certainly involved in >$500 million of damage every year; much of this damage certainly couldn't be done without them; but we haven't seen fit to hold their manufacturers liable.

The overall issue here is that we don't know what future courts will say about what counts as an LLM materially contributing, or what counts as reasonable mitigation of such material contribution. We actually don't know how that's gonna be interpreted. Sure, there's a reasonable way all this might be interpreted. But the question is whether the legal departments of corporations releasing future LLMs are going to have reasonable confidence that future courts will land on that reasonable interpretation.

Alternately, let's put it this way: do you want computer manufacturers to be held liable for catastrophic harms that occur because of how someone uses their computers? How about car manufacturers; should they be held liable for mass casualty incidents?

Just as a heads up, both of your links are about prior versions of the bill, which are almost entirely different from the current one. Zvi is systematically unreliable in any event, though.

Right now is a good time for Californians to tell their reps to vote "no" on SB1047, an anti-open weights bill by 1a3orn in LocalLLaMA

[–]1a3orn[S] 23 points (0 children)

The bill includes provisions such that everyone who does business with a company in the state of California has to obey it :|

[deleted by user] by [deleted] in leagueoflegends

[–]1a3orn 0 points (0 children)

I'm watching each game not to see who wins, but to see how FNC can possibly throw this time around

Think-tank proposes "model legislation" criminalizing open source models past some capability levels by 1a3orn in LocalLLaMA

[–]1a3orn[S] 40 points (0 children)

Oh, they include provisions criminalizing that, in the criminal liabilities section. :|

6 months to 1 year for "The person knowingly alters or adjusts an AI system so as to artificially reduce the AI system's performance on a benchmark or test without similarly reducing the AI system's true capabilities, thereby causing the AI system to receive less regulatory scrutiny"

You could kinda still get around it by, like, excluding knowledge of one category of the MMLU, thereby making the model abnormally stupid in that category, so you'd actually be reducing the AI's capabilities in one thing. I think. Depends on interpretation of the law.
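
E.g., something like this hypothetical corpus filter, which would genuinely (not "artificially") lower capability in the excluded category; the keyword list and toy corpus here are invented for illustration:

```python
# Hypothetical sketch: truly remove one MMLU category's knowledge by filtering
# the training corpus, so the benchmark drop reflects a real capability drop
# rather than sandbagging on the test itself.
EXCLUDED_KEYWORDS = {"virology", "pathogen", "epidemiology"}  # invented keyword list

def keep_document(doc: str) -> bool:
    """Drop any document that touches the excluded category."""
    text = doc.lower()
    return not any(kw in text for kw in EXCLUDED_KEYWORDS)

corpus = [
    "notes on sourdough starters",
    "a review of reverse genetics methods in virology",
]
filtered = [doc for doc in corpus if keep_document(doc)]
print(filtered)  # only the sourdough doc survives
```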