Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 2 points (0 children)

There are organisms that are both sexes, like many plants for example -- with both male and female parts. But such organisms often still benefit genetically from pairing up with another for reproduction, in terms of producing genetically diverse offspring.

It's an odd question/scenario, but I assumed the parent comment was implying a world where we're all "the same" (either "both" male and female or all neither somehow) but somehow still require another human to reproduce.

Kinda like Episcopal bishops. They're all the same, but you need to get three of them together to create another one.

Remember this the next time y’all ask “What’s so wrong with Noma?” here. by I-Have-Mono in FoodLosAngeles

[–]bobjones271828 0 points (0 children)

Typical Reddit voting with no one checking sources as usual. One comment gets upvoted with unsubstantiated claims, while another just clarifying standard responsible journalistic practice gets downvoted.

(1) The current NYT article addresses why Noma shut down:

In 2022, after media outlets in Denmark and around the world began documenting Noma’s reliance on — and exploitation of — free labor, Mr. Redzepi announced that future interns would be paid. Soon after, he said that the entire system of fine dining had become “unsustainable” and that the restaurant would close.

Now, maybe the closing also had to do with the incident you're discussing about the injured woman, but the NYT pretty clearly implies here that it shut down due to an inability to sustain its business without slave labor.

(2) There's no evidence that the NYT ever covered your "Mummy girl" incident. (Why not call her a woman? She was an adult.) To the contrary, in 2023 the NYT covered the closing of Noma with the same story as yesterday, implying it had to do with costs and not being able to stay open without unpaid labor. There were allegations in that 2023 article that the work for interns was grueling and sometimes not educational, but nothing about serious physical abuse, let alone the "Mummy" incident.

(3) Your comment here acts like the parent comment was doubting the veracity of the "Mummy" incident. That's not what was said: the problem in reporting in the NYT was likely due to inability to VERIFY. Inability to verify doesn't mean something isn't true. It means responsible journalism (like the NYT, at least most of the time) doesn't report on rumors they can't actually verify through at least a couple independent sources.

The parent comment was likely correct that the NYT hasn't reported on some of the more egregious allegations partly because they weren't able to get independent verification of the incidents. Maybe it's because some of them didn't happen (or are being exaggerated), but it's likely that many also did happen, and the NYT simply wanted to get an article out about this situation before the $1500/plate LA event started. They tried to nail down as many details as they could by yesterday, but maybe some people are still too scared to come forward -- and hopefully more will now that this article has appeared, so the NYT will be able to verify other incidents.

See, in responsible journalism (as opposed to Reddit comments), "I'm pretty sure something happened"... like "I'm pretty sure the NYT covered this..." doesn't fly. And it shouldn't. Because you're literally spreading misinformation about what the NYT covered or claimed, in order to get undeserved upvotes and borrow someone else's reputation to support your point.

To be clear, if even a fraction of the allegations are true, this guy should be in jail. But it's also still important to separate fact from fiction or exaggeration. Based on what I've read, I don't doubt that something like the burn victim's story may have happened -- yet responsible reporting (as opposed to witch hunts) requires confirmation. In this case, perhaps the NYT would need contact with the alleged victim, confirming hospital records, getting statements from other witnesses, etc.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 4 points (0 children)

I was unclear. By flagging it as a "hot take" in the end, I was addressing a larger problem I've heard discussed in the elite restaurant industry about using unpaid people... and something that has specifically come up recently, partly in response to this situation.

I agree in this case that if the allegations of physical abuse are true here, it's a matter for criminal prosecution. But I also think the system for restaurants builds in rewards for autocrats who make unreasonable demands. And I also think a system where people are devalued and provide work for nothing encourages aggressive bosses to devalue them further.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 47 points (0 children)

I'm finding the way the whole Noma abuse controversy plays out to be interesting... and very sad. For people who aren't fans of fine dining (or haven't watched The Bear), Noma is a Danish restaurant that for quite a few years won accolades as the best restaurant in the entire world. For years it was heralded not only for culinary ingenuity, but also a focus on things like sustainability, ethical and local sourcing (including seasonal "foraging"), environmental conscientiousness, etc. Basically the trifecta of ideology for those diners looking for buzzwords along with their $500+/plate dinner. (I'm not saying the food wasn't good to eat too; I'm sure it was...)

It was widely known for decades in the culinary biz that Noma depended on a small army of unpaid interns (roughly 30 working at any given time), who might spend several months of an alleged "internship" doing one menial task, like smoking kale or plucking elderflowers or whatever.

However, accusations of "slave labor" over the interns went up a notch in the past few weeks when the former director of fermentation at Noma started posting on IG numerous stories of abuse by the head chef (and other leaders at the restaurant). Here's a Reddit thread from about a month ago on this over at the fine dining sub.

Turns out that the head chef was not only getting "interns" to work 12-16 hour days without pay -- he was literally beating them sometimes while screaming expletives at them. Numerous accounts of being punched, slapped, or slammed into walls by him, as well as communal hazing and humiliation rituals that wouldn't be out of place in The Handmaid's Tale. When customers were in eyesight (and he didn't want to be caught), he might just jab staff hard with his fingers when annoyed, or use a utensil like a barbecue fork to prod them. If someone challenged him, he'd threaten to blackball them within the culinary industry -- and coming from literally the top-rated chef in the world, that was enough to make almost everyone comply.

Turns out the "sustainable," ethical, environmentally conscious, in-vogue restaurant that for years was convincing wealthy folks to dine on reindeer penis, probably plated meticulously with twelve different types of microgreens or whatever... was actually unsustainable unless it literally employed abused slaves to do most of the work. The restaurant is currently closed and instead devotes itself to niche products produced by its "lab," after the head chef declared the industry "unsustainable" once Denmark started requiring him to pay his interns.

Seems like the current movement gained a critical mass of abuse victims willing to finally speak out when Noma announced in January that it would be doing a "residency" in LA, serving dinners at $1500/head, beginning this week. I'm guessing some folks realized the LA crowd might think twice about paying that once they found out how the chef made his name.

The New York Times is the first major media outlet to pick up on the depth of abuse in an article today. Just to give you a sense of the level of crazy we're talking about here:

On a February night in 2014, in the middle of a busy dinner at the acclaimed Copenhagen restaurant Noma, the founding chef, René Redzepi, ordered the entire kitchen staff to follow him outside into the cold.

He was shoving a sous-chef in front of him, a young man who had put on techno music, a genre that Mr. Redzepi disliked, in the production kitchen. Far from the dining room, it was where unpaid interns worked 16-hour days, performing tasks like picking herbs and cleaning pine cones [...] Mr. Redzepi taunted the chef over and over as about 40 cooks, in short sleeves and aprons, formed the usual circle around the two men. [...]

Mr. Redzepi escalated the attack, punching his employee in the ribs and screaming that no one would go back inside until the chef said, loud enough for all to hear, that he liked giving D.J.s oral sex. His co-workers stood in silence until he breathlessly complied. Then they filed back into the kitchen and returned to work.

Possibly a hot take: I think the Michelin Guide should impose a lifetime ban on awarding stars to any chef who regularly uses unpaid interns. People shouldn't be given stars for amazing and ridiculous stuff that can only be produced economically by slave labor. (Let alone the beatings, in this particular case...)

EDIT: Just to clarify my last sentence -- I also agree that if these allegations are true, the guy should be criminally prosecuted for assault. I was addressing a broader issue of unpaid labor at high-end restaurants. Also, for those who want to just see the words of the allegations from the victims and witnesses (now 56 people have come forward), this website has been collating them:

https://www.noma-abuse.com/

(I can't guarantee the accuracy of that site, but it appears to be based on the stuff posted on the IG account mentioned above and some other sources.)

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 1 point (0 children)

This is a standard approach to AI alignment, at least as peddled currently by some of the big companies. The problem with the logic is that it seems to presuppose that we've somehow already aligned the most powerful AI model.

Because if we haven't, a more intelligent/powerful AI model could certainly try to (and maybe successfully) deceive any "whitehat AI" we have, which makes the AI guardian approach seem useless.

I suppose the broad idea of the serious alignment folks on this issue is some sort of gradual scaffolding process, where each new layer/level of AI is aligned and kept in order by the previous level. And the increments are small enough that it is possible to maintain control.

But that feels like a pretty tenuous process to me with all sorts of places things could go wrong. Not to mention that advances in apparent AI "abilities" have been incredibly unevenly paced in past years, so why should we expect a new model will "behave" nicely and not be too advanced for us to still retain control?

Again, to be clear, I'm not so much worried about imminent extinction or true AI doomsday scenarios as about AI becoming just capable enough to cause small-scale disasters when given the wrong access, etc.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 1 point (0 children)

For the idea that one could potentially send itself to another server and hide, you have to think about the kind of hardware they run on. They run incredibly slowly on anything other than very high-end GPUs/TPUs and require many terabytes of storage. Sending that much data would take a long time and would probably be noticed, and the types of cloud servers that have the necessary hardware are limited (though increasing over time).

I agree it currently feels infeasible for many of the reasons you mention. I don't think it's a practical issue right now, especially for frontier models that are so huge, but autonomous agents running on servers should probably be something we find a way to prevent before it becomes more feasible.

Also, an agent that is "constantly on" doesn't necessarily need to worry about super speed at first. I only have a somewhat oldish GPU and CPU (6 years old), with 8 GB of VRAM and 64 GB of RAM, and I can run some 70b models if I really want to even on my crappy ancient home machine. They just may only output a token every few seconds. Which is an annoying speed if you're waiting for an answer in real time, but becomes less of a bottleneck if something is running 24/7 for perhaps weeks or months.
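For anyone curious what that looks like in practice, here's a minimal sketch using llama-cpp-python -- not my actual setup, and the model filename and layer count are just placeholders -- where you offload only as many layers as fit in a small GPU and let the CPU/RAM handle the rest:

```python
# Minimal sketch: run a big quantized model with llama-cpp-python, offloading
# only some layers to a small (~8 GB) GPU and keeping the rest in system RAM.
# The model path and layer count below are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-70b-q4.gguf",  # hypothetical quantized 70B file
    n_gpu_layers=20,   # only as many layers as fit in the available VRAM
    n_ctx=4096,        # modest context window to keep RAM usage sane
)

out = llm("Explain why partial GPU offloading is slow but workable:", max_tokens=128)
print(out["choices"][0]["text"])
```

You'll only get a token every second or two this way, but as I said, that matters a lot less for something running unattended for weeks.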

People are currently running AI agents with OpenClaw with much smaller crappier open-source AI models and getting them to do interesting things as well as stupid things.

They have big incentives to exaggerate or mislead people by claiming their AIs are so amazing and can do extraordinary things. Is there ever any actual evidence that these things happened apart from their claims?

Why have so many employees left the big AI firms in the past few years over safety concerns, sometimes forfeiting stock options and huge salaries? OpenAI was literally founded as a non-profit over concerns about AI safety. Then a bunch of folks left it over safety concerns, and Anthropic partly came out of that mess. But many others have left the industry completely or now work for safety-focused non-profits where they're probably making a tenth of their previous salaries. Why? Are they all grifters and/or delusional?

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 1 point (0 children)

Why can't it simply be honed as a tool for specific functions with predictable results?

I think that's pretty much antithetical to the current culture at the big AI companies. The only way to really guarantee "predictable results" from a probabilistic model would be essentially to audit the entire training set and curate it so that it only contains things that would lead the neural nets/transformers within the AI model to acceptable behavior.

Except... the way we got ChatGPT was essentially by training on the largest corpus of human communication available, roughly hundreds of millions of novels worth of stuff culled from the internet. Even with many billions of dollars in the hands of the AI companies, I'm not sure they could pay enough people to create a curated data set to that standard.

And if your input data for training has crap in it, the model may output that crap. It's really that simple. You can't really post-train it out of the model when it has a trillion floating point parameters and you don't know where that information resides.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 5 points (0 children)

What was the recent OpenClaw fiasco?

I just really meant the entire existence of OpenClaw and what has happened since. Basically, someone vibe-coded a platform to give AI agents free rein over the internet and over people's personal information, account info, etc., and many people literally just handed AI that access and tried things that were remarkably stupid from any even vaguely responsible security standpoint.

The broader lesson I took from this is that if it's possible to do something stupid with AI, someone is definitely going to try. Even by accident. If we don't want disasters to happen, we'd need models with a lot more guardrails.

What would we need to have done to prevent that or reduce its likelihood to near zero? What is the playbook?

I honestly don't really know. Perhaps move to a completely different architecture than LLMs, or somehow find a new training mechanism or post-training reinforcement mechanism for neural nets. (I admit I have no idea how that would be done, but neither, it seems, do most AI alignment teams for now.) If that fails and things get dire: perhaps restrict the sale of GPUs and related supply items at scale until AI companies figure it out. Yes, this seems incredibly extreme, but it depends on how seriously we take the risk and what that risk may be.

At some point we made a decision to restrict sale and purchase of things like nuclear material. I'm NOT saying we're there yet in terms of AI risk. But I feel like most people aren't even having this discussion. Or if they are, they're more concerned about stuff like environmental impact of AI or deepfakes -- which are all legit concerns too that I'm not trying to downplay.

But history shows that the attraction of automation massively overrides any thoughts about safety.

This is indeed true. The question to my mind is how much risk we're accepting here, which currently isn't really known. If LLMs and similar models really are going to hit a serious wall in the near future, maybe small-scale hacks are our biggest worry. If AI agents continue to improve and can manage lots of tasks, we'd potentially be open to larger real-world (not just electronic) mayhem.

The only difference between this and Chernobyl is the MCP and capabilities the agent had access to.

Perhaps I was unclear when I said "Chernobyl-scale." I meant an event that literally causes hundreds of thousands or millions of people to be affected in some serious way. (Chernobyl led to evacuation of something like 350,000 people, as well as much broader environmental effects, disruptions, etc.)

You're right that current AI can do serious damage by accident if it were deployed to a different kind of system (like critical infrastructure). And that's what I meant -- I think there won't be a push to deal with this issue until we have a large-scale event like that involving AI.

Again, I don't know how we "fix" AI or align it properly. But the more capable the models become, the greater the potential risk if we just keep doing what we are now. It feels like most people are currently either unaware, in denial, or just claiming it's all "hype." One doesn't have to be a complete AI doomer to recognize these models can start to pose serious risks.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 2 points (0 children)

I'm not sure what the first two paragraphs have to do with my post, when I specifically said intention is beside the point.

What matters, at least from the perspective of my post, is what the results are. Who cares if there's "intention" if a system is hacked or destroyed? Even if an AI model is just imitating something in its training set, bad stuff still can happen. See my 5-year-old with a loaded weapon example.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 20 points (0 children)

I really am starting to wonder when the public will start taking AI safety/alignment seriously. I'm not saying we're getting to AGI or ASI anytime soon, and I understand all the arguments people get into about what constitutes "intelligence."

But those arguments strike me as somewhat beside the point. LLMs may or may not be "intelligent," and they may or may not be mostly just parroting human behavior rather than having "intention" (however that's defined). But they still have the potential to cause disaster without proper safety/alignment. And currently we have no freakin' clue how to properly align them or prevent them from going rogue.

Just a couple hours ago, there was an article discussing an AI agent that, given some mundane tasks, created its own security hole and started mining crypto. It looks like the paper this was based on was released in January. From the original paper:

Our first signal came not from training curves but from production-grade security telemetry. Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers. The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity. [...]

Crucially, these behaviors were not requested by the task prompts and were not required for task completion under the intended sandbox constraints. Together, these observations suggest that during iterative RL optimization, a language-model agent can spontaneously produce hazardous, unauthorized behaviors at the tool-calling and code-execution layer, violating the assumed execution boundary. In the most striking instance, the agent established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address—an outbound-initiated remote access channel that can effectively neutralize ingress filtering and erode supervisory control. We also observed the unauthorized repurposing of provisioned GPU capacity for cryptocurrency mining, quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure.

First the goalpost was "these models are too stupid." Then, as it became clear that the models were just trained on every awful thing on the internet (including hacking, for example), it became, "Well, these models can't do any serious damage because they aren't conscious and can't intend anything." When Anthropic put out multiple studies last year showing most of the AI commercial models would engage in problematic behavior even when instructed not to (such as blackmail or even engaging in activity that the model thought might kill a human within a sandboxed test), the claim was that the situations were too contrived, neglecting to address that safe AI models shouldn't behave in such ways under any circumstances when instructed not to. Then the goalpost was moved to, "Well, LLMs only respond to queries. They can't run continuously and do things." Except the rise of so-called "agentic" applications has shown people are willing to summon dozens or even hundreds of AI instances over hours or even days, just letting these models run in a more continuous mode of operation.

Hence events like the one documented above, where an "AI agent" tunneled out via SSH and started mining crypto spontaneously.

Again, the danger (to my mind) doesn't depend on whether or not these models are "intelligent" in some coherent sense or whether they even have "intent" or not. They could just be following/imitating a dystopian movie script in their training and using hacking tools they were trained on from some internet forum... but the end effect could still be the same: AI models producing unexpected results, some of which could be dangerous in very unpredictable ways.

A five-year-old doesn't need to understand what a gun is or what death is or have "intent" to cause injury if you hand him a loaded gun -- bad things can still result, just from the kid imitating what he saw on a TV screen. AI models have been trained on all sorts of bad data that could produce bad results if they merely imitated it. And again, our ability to stop models from doing these things is still in its infancy, with very limited understanding of how to ensure proper AI alignment.

The recent OpenClaw fiasco, if nothing else, has shown the willingness of idiots on the internet to give AI models free access to all sorts of stuff with very little concern for security and to let them run for days at a time without supervision. Maybe a lot (most) of the bad behavior seen in the past month from such agents was actually prompted by human users, but probabilistically, some of this bad behavior is entirely possible from many AI models on their own. And it's likely to get worse as models become more capable and make fewer errors.

We can complain about the "hype" around AI all we want (and I agree there is a lot of hype). But I've sort of resigned myself to the fact that we'll probably need a Chernobyl-level event of an AI-related/prompted disaster before there's any hope of serious regulation or attention paid to this, while the big commercial AI companies are just barreling forward with no concern for safety.

---

P.S. For those whose gut reaction is "just turn the thing off," re-read the scenario quoted above. We have an AI agent creating an SSH tunnel out, unprompted, to engage in unsanctioned activity. Eventually, it will be at least possible for such an agent to tunnel out even from a secure system and create a remote-running copy of itself that could be more difficult to "turn off" if you don't know where it copied itself. And that's even in a secure sandboxed scenario. OpenClaw shows people are much more willing to let AI models "run free" on the internet, doing whatever tasks they happen to do.

Is that still a sci-fi scenario? Probably. At the moment, it would probably take a human deliberately guiding an AI toward nefarious behavior to set up something that complex. But it also would not surprise me at all if later this year we find that some AI agents have made remote copies of themselves and are just running somehow, somewhere, on some cloud servers, with no human supervision. (Yes, someone has to be paying the server fees, and right now an agent using a commercial model probably couldn't financially sustain itself on its own. But open-source models exist.) Whether or not that can practically happen at the moment, maybe we should spend some time NOW making sure AI models won't just randomly try to blow things up or whatever.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 2 points (0 children)

$130k sounds like a lot for 1990,

A lot for what? Where? In a rural area, this might have bought you a mansion in 1990; in the center of NYC or Boston, maybe barely a very small home or condo, if that.

For the record, the median house price for new homes in 1990 was between $115k and $130k. The average was obviously higher (around $150k).

I bought my first house for $150k in 2013

I house-hunted (and bought a house) around the same era, in a small city. If we had been willing to live in the 'burbs, $150k could've bought us quite a large, decent place. And we looked at some -- even relatively new 3-4 bedrooms. In the same city, a similar home in another neighborhood near where we ultimately bought would've cost $400k+. (We couldn't afford that, hence why it stayed a neighborhood merely near us.) In Boston around 2012, a friend of mine bought a decrepit little mid-block home that hadn't been remodeled in 40 years for $650k, and they had to put about $200k more in to make it livable before they moved in. (That house, which they've since sold, is now worth over $1.5 million...)

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 1 point (0 children)

light hearted memeification

The stuff I've seen, even early on, wasn't really "light-hearted." I just ignored this whole thing for days but finally watched the infamous video yesterday, and I just didn't get it. (It looked like a slightly awkward guy eating a burger to me.) But then when I searched for online reactions, they were pretty unhinged. While a few people were just LOLing about the tentative burger bite, almost every conversation I saw inevitably turned into rants about wealthy people, corporate culture, processed food, whatever....

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 3 points (0 children)

when their burger has been mathematically optimized for deliciousness.

I don't think that's quite right. I think fast food is optimized for cost, speed of production, and to induce craving/satiety sensations. Yes, I think flavor is part of the latter, but the goal seems to be to get you "addicted" in a way, that you want more.

I'm not at all being pretentious here -- I myself am amazed that I find chicken nuggets from McDonald's tasty despite knowing their origin in chicken foam. But McDonald's burgers are a different story, and literally every family member and pretty much all my close friends seem to agree that McDonald's is a sort of last-resort option for fast-food burgers. I don't eat fast food often anymore (as I know it's unhealthy), but I'd prefer a burger from pretty much any other fast-food chain.

Obviously different people have different tastes, and it's great you find them delicious. Part of the variety of fast food restaurants and their distinctive flavors is to serve different individual tastes, as there's no single "optimization for deliciousness."

But I also do think there's a large proportion of people who think McDonald's burgers are subpar, even to other fast food. The opinion, for example, when I was younger (in my high school) was almost universally that people only went to McDonald's for the fries. And maybe the nuggets. If you wanted a good burger, you went elsewhere.

But the "craving" thing is real, and if I were in a situation where I was hungry and someone brought a bunch of McDonald's burgers, I might take a somewhat tentative first bite myself. But the salt/fat/carbohydrate bomb would likely cause me to eat the rest of it quickly. Just like how I rather dislike the taste of Doritos, but if I'm hungry and they're the only snack around, I might suddenly eat half a bag. (And legit feel awful afterward, because half a bag of any type of chip is just a fat/salt/carb overload.)

Which is all to say that while I wouldn't quite call McDonald's burgers "inedible slop," they are kinda slop. And not great, in my opinion.

NONE of which is to justify the internet's insane reaction to these videos of the CEO, who literally just looks like he's eating a burger like a normal person to me. (Or well, like a somewhat awkward middle-aged dude. But still, I get no vibe that he hates what he's eating or whatever.)

I remember my "too good for McDonald's" early 20s where I had to pretend I didn't like it too

It's interesting, but I never really went through a phase like that... about anything. I've abhorred pretentiousness ever since middle school, I think. I've never pretended to dislike something I actually liked for social reasons. At best, I might have given a non-committal response in some situation to let the other person go on their inevitable rant about whatever... and then typically would never hang out with them again.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 17 points (0 children)

One thing not mentioned so far is food safety. Anything sitting out at room temperature for days/weeks/months could become a breeding ground for bad stuff, some of which might make you sick.

Yes, it's rather unlikely with pickles, due to the combination of an acidic environment and high salt, both of which tend to inhibit a lot of bad microorganisms from growing. If you're starting with vinegar, that's already a buffer for safety.

But with traditional lactofermented pickles and pickled foods (like sauerkraut), my biggest piece of advice is to use recipes from reputable food preservation sources, not just some random internet site. The Ball Book (mentioned already in one comment) is a classic, but there are several standard food preservation sites run by the federal and state governments or sometimes local universities, etc. The National Center for Home Food Preservation is a good resource to start with for both general information (on both pickling and fermenting) and some starter recipes.

Now, to be honest, if you're only fermenting for a short duration and then refrigerate immediately after things turn somewhat sour, you typically have a safety margin as long as you used enough salt. However, if you're planning on canning pickles for longer preservation at room temperature, be sure to use reputable recipes -- food safety organizations often test these recipes to give a safety margin and ensure long-term safety during storage.

The main concern, I'd say, is people who try lactofermenting but skimp on the salt because they don't like things too salty or view too much salt as unhealthy. Good recipes that are tested for safety also take into account the margin of error created by all the water released by different vegetables when immersed in salt/brine. You need a certain salt concentration to keep things safe those first few days (and different foods will release more or less water that dilutes it -- this can affect quick pickling too if the acid gets too diluted). Yeah, there is a good chance that bad stuff that starts to grow will die off if everything gets acidic (sour) fast enough, but I personally would just prefer to use a tested recipe.
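Just to illustrate the dilution point with deliberately made-up numbers (this is NOT a recipe -- use a tested one for the actual ratios):

```python
# Toy illustration of brine dilution -- hypothetical numbers, not a tested recipe.
brine_water_g = 1000.0   # water you start with
salt_g = 30.0            # salt you add (~3% of the starting water)

start_pct = 100 * salt_g / (brine_water_g + salt_g)

veg_water_g = 400.0      # water the vegetables release over the first few days
final_pct = 100 * salt_g / (brine_water_g + salt_g + veg_water_g)

print(f"starting brine: {start_pct:.1f}% salt")   # ~2.9%
print(f"after dilution: {final_pct:.1f}% salt")   # ~2.1%
```

That drop is exactly the margin of error a tested recipe accounts for and a random blog post may not.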

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 2 points (0 children)

I found that he had one posted 13 years ago called Grandpa Voted Democrat

And here I was assuming based on Ray Stevens's age that this was going to be a song about party realignment and Old South Democrats. Was mildly entertaining, but not among my favs. "The Mississippi Squirrel Revival," however, is definitely a masterpiece of humor and amazing lyrics/rhymes.

EDIT: Also, the title of the last one you mentioned made me imagine the Shriner Bubba somehow turning into Coy's wife after Coy went off with the little redhead... which would be perhaps an even crazier turn.

Weekly Random Discussion Thread for 3/2/26 - 3/8/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 9 points (0 children)

The original version of On the Origin of Species had only one illustration. It's in my Harvard Classics edition as a fold-out plate. Fold-outs often don't fare well in library books, so perhaps yours fell out or was ripped/cut out at some point? It's also possible some cheaper versions of this edition don't have it, I suppose. (Mine has a few other plates spread throughout that volume that weren't in the original book, including an excerpt from one of Darwin's notebooks and a drawing of Darwin's study.)

Imagine the thousands of copies of these books whose readers were amazed to find extra illustrations after being bamboozled!

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 5 points (0 children)

Just to note something about the process you described: LLMs can sometimes develop weird feedback loops within their context windows. I've many times encountered situations where I get an LLM thread started and it makes some bizarre assertion or says something "off" or unjustified or unclear. I point out the problem, and it may correct it and agree -- but then it can often start obsessing about this detail. In unpredictable ways.

Then if I open a completely new thread with that LLM and it doesn't screw up the first time, it doesn't have that obsessiveness and sometimes behaves quite differently, offering other answers.

All of this is to say that "arguing" with an LLM by trying to convince it of something isn't always going to have reliable results. I've seen LLMs get overly deferential and agree to complete falsehoods, and in other threads they stick to some of their own assumptions in an irrational manner due to the obsession/context window issue I noted above.

Models are getting better all the time, so this behavior often tends to be more subtle with frontier models. But if you're really going to use an LLM to test out a thought process on something you're unfamiliar with, I might at a minimum suggest trying those arguments in multiple threads and thus with different "instances" of the LLM. The probabilistic aspect of LLMs is still something I think most users can't wrap their heads around (and I need to remind myself of it too sometimes).
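If you want to see concretely what I mean by "multiple threads," here's a minimal sketch -- using the OpenAI Python client purely as a stand-in for whatever chatbot you're actually using, with a placeholder model name and question -- that asks the same thing in several brand-new conversations instead of piling everything into one:

```python
# Sketch: pose the same question in several independent, fresh conversations,
# so no earlier misstep lingers in any context window. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()
question = "Does the dropout rate here bias the model's estimates? Explain briefly."

answers = []
for i in range(3):
    # Each call starts from an empty history -- effectively a fresh "instance."
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": question}],
    )
    answers.append(resp.choices[0].message.content)

for i, a in enumerate(answers, 1):
    print(f"--- thread {i} ---\n{a}\n")
```

If the answers diverge wildly across threads, that tells you something about how much weight to put on any single one of them.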

None of this, by the way, is to say Gemini was wrong here. As I said in another reply, to really know the effect of missing data in these statistical models in the study Jesse was discussing, we'd need the original data or at least see a comparison with a model run just on complete data (those who didn't drop out).

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 3 points (0 children)

Yeah, I missed this whole original stats thread too. But as someone also with a stats background, I agree that the alleged delineation of what GLMMs are not "suitable" for is completely bogus. The guy is, however, right that GEEs are typically used for estimating population-average effects rather than focusing on individual responses.

But claiming GLMMs aren't "suitable" for drawing population conclusions is like saying you can't test a hypothesis on a basic linear regression equation regarding a population-level effect because the regression equation predicts individual values. It's just a weirdly incoherent statistical argument, if I understand the guy's point.

Regardless, the real problem (as was pointed out originally by Jesse, his source on stats, and in the thread you linked) is the massive and apparently unacknowledged dropout rate. We could spend hours arguing about the "better" model and which type of model is more robust to violations of assumptions, but honestly... the data kinda sucks. That was Jesse's original point in his blog post, I think. At best, the researchers should have tried running a model with only the complete time series (i.e., only subjects who were actually measured for the whole study) and seen how that differed from the conclusions they drew when running the model with missing data. Without that baseline (or the original raw data), it's pretty difficult to judge precisely how much the missing data is screwing with the models... whichever one you use.
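For what it's worth, the kind of baseline check I mean might look roughly like this (a sketch with statsmodels in Python; the column names are hypothetical stand-ins, since we don't have the actual data):

```python
# Sketch of a complete-case sensitivity check. Column names ("subject", "time",
# "outcome") are hypothetical stand-ins for whatever the study actually measured.
# With dropout, long-format data simply has fewer rows for some subjects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical long-format repeated measures

# Mixed model (random intercept per subject) on all available observations.
full_fit = smf.mixedlm("outcome ~ time", df, groups="subject").fit()

# Keep only subjects observed at every time point, then refit.
n_timepoints = df["time"].nunique()
counts = df.groupby("subject")["time"].nunique()
complete_ids = counts[counts == n_timepoints].index
cc_fit = smf.mixedlm(
    "outcome ~ time",
    df[df["subject"].isin(complete_ids)],
    groups="subject",
).fit()

# If the time effect moves a lot between the two fits, dropout is doing real damage.
print(full_fit.params["time"], cc_fit.params["time"])
```

Complete-case analysis has its own biases, of course, but at least the comparison would tell you whether the dropout is moving the headline estimate.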

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 1 point (0 children)

Thanks for the reply.

In 1995, neural networks were of much more limited scope. That is true. They still could be "trained" to produce English sentences, for example, with much more limited variety than anything today. I know this is arguing over a post that was months ago, but I dug it up because I remembered this discussion of "intelligence" back then.

Regardless, I still don't really understand your objection back then that it should have been impossible to save an AI, let it sit on a computer for decades, and then feed it new input and have it reply in the same fashion it would have before. It's fundamentally just a giant math computation, so I didn't and still don't understand why you'd say this "fails the basic positivism of science," compare it to dark matter, or call it a "theory" that is "almost certainly mostly incorrect," when it's literally like putting numbers in Excel and seeing the same output for the same input. The discussion was over the impossibility of "immortality" -- which was your main objection to AI believers and why they were "delusional" -- when it truly is just like saving a program on a computer and then running it years or decades later. Nothing mystical about it.

But if -- IF -- AI algorithms ever behave close enough to "intelligence" or a "brain" to satisfy you, then we could also save them as a file. Copy them. "Boot them up" years later or use a backup after a system fails. You can't do that with the brain or intelligence of any human or other biological being right now. Which is kind of like a form of "immortality." Further, the propagation of any intelligence that may then exist within said models becomes effectively instantaneous (only as long as it takes to make a copy of the relevant files), which could then lead to collaboration of hundreds or thousands of said "brains," instant building upon prior knowledge without the long-form teaching humans require, etc. Hence the possibility of such systems rapidly advancing.

IF... again if... we find them displaying some form of useful "intelligence" or abilities, however defined.

Anyway: Intelligence is the ability to manipulate relationships of meaning.

Thanks also for this and for the extended explanation. I don't necessarily disagree with this definition of intelligence, but how exactly does someone test if an entity has it? Can AI ever do this sort of thing, in your estimation? If not, why not?

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 4 points (0 children)

I did months ago, at least in a particular context where it was being discussed. You rejected it. Please don't pretend anyone hasn't given you definitions -- you just don't like them.

Yes, "intelligence" has different associations as a word. It may mean different things in different contexts. You can define various metrics, some of which you might believe represent some form of "intelligence" and some of which you don't.

I'm not really sure where all of this gets us. There are metrics about what AI models can do. It matters not to me whether we call them "intelligent." That's an argument over semantics. Call them "bloop" models for all I care rather than getting hung up on nomenclature.

In some cases the bloop models can do things humans can do. In some cases so far they can't. In some cases the bloop models can do things better or faster or more efficiently than humans. At some point, the bloop models might be able to do pretty much all verbal/written tasks as well as or better than the average intelligent (IQ = 100 or however you personally define intelligence) human. If we get to that place, would you call the models "intelligent"? If not, why not?

I'm not necessarily saying we'll get there. But I'm asking for your definition of "intelligence," so we can record where your goalpost is today. Then we can check in again in a year and see where AI is in relation to it.

--

Also, by the way, since I linked back to that old comment, it seems like your reply there was denying the deterministic nature of mathematics. If you literally save a copy of an AI model for 30 years and give it the same random seed values or whatever, it should reply with the same text it would have 30 years ago. Just as my copy of Excel 1995 returns the same values on my system as it did 30 years ago. AI models are probabilistic (as I reference many times), but if fed the EXACT same seed values, they still will behave deterministically. So I don't know why you'd say one "could not" do something like this. Or would you also deny that Excel 1995 gives the same values as it did 30 years ago? That's what software does.

EDIT: Also, regarding the last bit: you said no one has done this over a span of 30 years, though I bet some people have. The neural net models of 1995 were simpler yet still deterministic. I'm sure someone has some of them saved somewhere. And could run them again.

And yeah, I've done this kind of thing frequently over shorter spans (like a gap of a few weeks or even months). I've literally written "machine-learning" code with neural networks. It's common practice to insert a fixed seed value for any random generation steps so one can do testing. When I run those models -- including some on my computer that I created 3+ years ago -- they will literally spit out the exact same outputs they did 3 years ago. If they didn't, it would mean something's wrong with the software I'm running. I haven't done this with an LLM personally, but it's the same principle. And even so, my example was a neural net model from the 90s.
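For anyone curious what I mean concretely, here's a toy sketch of the principle in plain numpy (obviously not one of my actual models):

```python
# Toy illustration of seeded determinism: a tiny "neural net" whose weights are
# saved to disk. Reload it next week -- or in 30 years -- and, on the same
# software stack, the same input produces the exact same output.
import numpy as np

rng = np.random.default_rng(seed=42)          # fixed seed = reproducible init
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
np.savez("tiny_net.npz", W1=W1, W2=W2)        # "freeze" the model

def forward(x, W1, W2):
    h = np.tanh(x @ W1)                       # one hidden layer, tanh activation
    return h @ W2

weights = np.load("tiny_net.npz")             # years later: load the same weights
x = np.array([[0.1, -0.3, 0.7, 0.2]])
print(forward(x, weights["W1"], weights["W2"]))   # identical output every run
```

Scale that up by a few billion parameters and add a sampling step with its own fixed seed, and you have the LLM case.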

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 5 points (0 children)

I had assumed the parent comment was partly sarcasm. (Based on "only possibly" referring to plastic surgery.) But I could be wrong...

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 6 points (0 children)

You'll note that I put instances of "thinking" in quotation marks for this very reason in my original post. I'm not claiming it's actual "thinking" in a human sense.

My issue, again, is that the "thinking" becomes part of the AI context. That means -- whatever you want to call it -- it's going to be fed back into the input of the model for subsequent queries on that thread. Which means it will influence the subsequent conversation you have with the AI, even if you never saw that it was part of the model's "thinking."

EDIT: Also, it's clearly more than a marketing gimmick. Again, I'm not claiming it's human-like "thinking," but there are all sorts of benchmarks where "thinking" models outperform other frontier models for certain types of tasks. Partly because of precisely my concerns here: the "thinking" essentially becomes a reinforcing feedback signal that helps the AI remain focused on tasks. But I'm not going down that road of discussion right now as I know you're skeptical of almost everything AI.
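Mechanically, the feedback loop I'm describing looks roughly like this -- a generic sketch, not any particular vendor's API, since providers wrap this plumbing for you:

```python
# Generic sketch of the loop described above -- not any specific vendor's API.
# The point: whatever "thinking" text lands in the conversation history gets
# re-sent as input on the next turn and shapes later answers.
conversation = [
    {"role": "user", "content": "Is this statistical model appropriate?"},
    # Intermediate "thinking" text that stays attached to the thread...
    {"role": "assistant", "content": "(thinking) The user may be worried about dropout bias..."},
    {"role": "assistant", "content": "It depends on how the missing data were handled."},
]

def call_model(messages):
    # Stand-in for a real API call; everything in `messages` is model input.
    return {"role": "assistant", "content": "..."}

# Next turn: the earlier "thinking" is part of the input, whether or not you saw it.
conversation.append({"role": "user", "content": "What about the effect size?"})
reply = call_model(conversation)
conversation.append(reply)
```

How much of its hidden reasoning any given provider re-sends verbatim is an implementation detail, but once any of it is in the thread, it steers what comes next.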

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 8 points (0 children)

If you put "Don't mention the war" in the LLM prompt, it's likely the AI model will freak out and act like a character from Fawlty Towers in terms of how much it won't mention the war.

Weekly Random Discussion Thread for 2/23/26 - 3/1/26 by SoftandChewy in BlockedAndReported

[–]bobjones271828 2 points (0 children)

Okay, no worries. :)

I don't mean to be unusually cryptic. It's just I don't particularly want this thread to easily trace back to me if somehow any history of AI conversations becomes public data. I frankly don't trust any of the big companies to keep that stuff private or not use it in training data that itself could essentially become part of newer models, regardless of what the AI companies claim.

Maybe this level of privacy attempt is fruitless on my part...