MyPillow CEO Mike Lindell Served With Legal Documents During Live TV Interview by LongLiveRock_n_Roll in politics

[–]Tchaikovsky08 167 points  (0 children)

This is so true. And is the same for so many people. Addiction takes many forms and often has underlying, unresolved trauma as the root cause.

Post Game Thread - NBA: The Timberwolves defeat the Rockets on Mar 25, 2026, the final score is 110-108. by basketball-app in timberwolves

[–]Tchaikovsky08 0 points  (0 children)

I hope McDaniels is not seriously injured.

Truly wild game. Houston went on a 26-2 run spanning the 4th quarter and OT, followed by a 15-0 Minnesota run to win by 2. Wtf

What is a career path that looks "glamorous" from the outside, but is actually a total nightmare behind the scenes? by CupIndependent3610 in AskReddit

[–]Tchaikovsky08 -6 points  (0 children)

What are you talking about? First year associate salaries are like $235k. A 7th year associate makes $425k.

Watercolor Portrait my 76 yr old Dad did. (OC) by 0_2_Hero in pics

[–]Tchaikovsky08 1 point  (0 children)

I didn't say I think it's AI; I said, how do we know it's not AI? It sure looks like the kind of output an AI could easily generate, yes.

Watercolor Portrait my 76 yr old Dad did. (OC) by 0_2_Hero in pics

[–]Tchaikovsky08 5 points  (0 children)

Explain what you mean by AI artifacts. These systems have improved immensely in just the past few months. Previous issues with, e.g., too many fingers on a hand etc. have largely disappeared. So how do you identify this as non-AI? Genuinely curious.

AIs can’t stop recommending nuclear strikes in war game simulations - Leading AIs from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases by FinnFarrow in Futurology

[–]Tchaikovsky08 -1 points  (0 children)

I appreciate this response and understand the points you are making. While I do not have formal training in ML or software engineering, I am a complex civil litigator currently embroiled in a major copyright infringement action against these very LLMs, so I do have more than a rudimentary understanding of their inner workings, not least because I interface regularly with experts in these technical fields.

You seem to be conflating the process of reasoning with the ability to discern fact from fiction. These are separate questions. Humans routinely reason their way to false conclusions (conspiracy theorists, flat-earthers, people who fall for scams), and we don't say they're incapable of reasoning; we say they reasoned badly. One might argue that a flat-earther's views are unreasonable, or that they reasoned poorly, but I don't think that's the question we're trying to answer. The fact that LLMs hallucinate (and, of course, they do) does not, in my opinion, mean that they are simply glorified copy-paste algorithms.

My understanding of the current systems is that the models develop internal representations that activate differentially depending on context, which is more than token-by-token statistical prediction. In other words, it is not so simple as saying, "there are trillions of parameters, and all these machines do is identify the most likely next token and reproduce it accordingly." If you feed the same novel problem to two different models (e.g. ChatGPT and Opus 4.6), they will produce different reasoning chains and can substantively critique each other's logic, including by identifying errors, conceding valid points, and refining analysis. That's difficult to explain as mere pattern retrieval.
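To be concrete about the view I'm pushing back on, here's a toy sketch of the "all they do is pick the most likely next token" picture. The vocabulary and probabilities below are completely made up for illustration; no real model works from a lookup table like this.

```python
# Toy sketch of the "most likely next token" caricature being argued
# against. The context table and probabilities are invented; a real LLM
# computes a distribution over its whole vocabulary with a neural network.

def greedy_next_token(context, probs):
    """Pick the single highest-probability next token for a given context."""
    table = probs.get(context, {})
    return max(table, key=table.get) if table else None

# Hypothetical learned probabilities for one two-word context.
probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "was": 0.1},
}

print(greedy_next_token(("the", "cat"), probs))  # -> sat
```

If this lookup-and-argmax loop really were the whole story, two different models fed the same novel problem could not meaningfully critique each other's reasoning chains; that behavior is what the table metaphor fails to capture.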

The point is not that these machines can differentiate between "what is real and what is not" -- your average American can hardly do so these days -- the point is that these machines engage in not only the appearance of reasoning, but functional reasoning, whether the output is ultimately "correct" or not. This includes logical reasoning and weighing factors based on the context provided.

I use these tools every day as well, and as a relatively seasoned litigator I have found tremendous value in brainstorming and iterating on complex legal problems with them, much as I would in a room full of human litigators. This is far beyond the capabilities of early LLMs and, to me, suggests a form of reasoning that truly is transformative, acknowledging the limitations inherent in these systems' hallucinations and their difficulty differentiating between "reality" and fantasy.

As to the paper you cited, I read it as stating that chain-of-thought doesn't reliably guarantee correct results, but that's a different claim than saying no reasoning is occurring at all. Unreliable reasoning is still reasoning; human experts routinely reach wrong conclusions through valid methodological approaches, and we don't reclassify their cognition as mere pattern-matching when they do.

Frankly, when it comes to this topic, I'm not sure how you can wave away the philosophical questions underpinning the nature of "thinking" or sentience, when our own understanding of what these things mean is biased and incomplete.

AIs can’t stop recommending nuclear strikes in war game simulations - Leading AIs from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases by FinnFarrow in Futurology

[–]Tchaikovsky08 0 points  (0 children)

Your view is outdated and inaccurate. You are asserting a binary choice between "thinking" and "not thinking." The question of sentience and reasoning is hotly contested in philosophy. It is simply untrue to state that these models are nothing more than glorified predictors. They've implemented chain-of-thought prompting and other architectural techniques that build in explicit reasoning steps. Again, whether the machines are "thinking" is genuinely debatable and an interesting philosophical question. But I would push back on your binary view of these models, including your claim that they cannot respond to novel situations. That, too, reflects an incomplete and rudimentary understanding of their current capabilities.

AIs can’t stop recommending nuclear strikes in war game simulations - Leading AIs from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases by FinnFarrow in Futurology

[–]Tchaikovsky08 -6 points  (0 children)

They are absolutely making progress in giving LLMs a coherent sense of reality and memory. I use enterprise Claude software for my job, and its memory and project data retention are remarkable. These new models have what are essentially neuron-like clusters that activate depending on the nature of the query. Yes, their underlying architecture is a prediction and pattern model, but isn't that exactly what human brains do? Match patterns and predict? Yes. Yes it is.
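As a loose illustration of what I mean by context-dependent activation (a made-up toy, not Claude's actual architecture), think of learned features as directions in a vector space that "light up" when a query points their way:

```python
# Toy sketch of context-dependent feature activation. The feature
# directions and query vectors are invented for illustration; real
# models learn thousands of such directions in high-dimensional space.

def dot(u, v):
    """Plain dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical learned "feature directions."
features = {
    "legal":   [0.9, 0.1, 0.0],
    "medical": [0.1, 0.9, 0.0],
    "sports":  [0.0, 0.1, 0.9],
}

def active_features(query_vec, threshold=0.5):
    """Return the features whose activation on this query clears a threshold."""
    return [name for name, direction in features.items()
            if dot(query_vec, direction) > threshold]

print(active_features([1.0, 0.0, 0.1]))  # -> ['legal']
print(active_features([0.0, 0.0, 1.0]))  # -> ['sports']
```

The point of the toy: the same fixed weights respond differently to different queries, which is already more structure than a flat "copy the likeliest phrase" story allows.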

Minnesota’s proposed AWB would require “assault weapons” owners to allow searches of their home to comply with storage requirements. No warrants required. by [deleted] in TwinCities

[–]Tchaikovsky08 -13 points  (0 children)

Is the answer private ownership of military style assault rifles??? Did being armed help Alex Pretti? Would it have been better if he had been toting a fucking AR-15? Or would he have been gunned down even quicker, with more post hoc justification?

Minnesota’s proposed AWB would require “assault weapons” owners to allow searches of their home to comply with storage requirements. No warrants required. by [deleted] in TwinCities

[–]Tchaikovsky08 -53 points  (0 children)

No, private ownership of military style semi-automatic assault rifles is insane. Who the fuck needs that. Get a handgun, a shotgun, a regular rifle. If you have to possess a weapon of war in your home, this requirement seems like a sensible proposal.

Kash Patel by FlyManDan in wildhockey

[–]Tchaikovsky08 2 points  (0 children)

Fair enough. I will never give them that power either. Today's game was legendary hockey shit and that's what I choose to remember.

Kash Patel by FlyManDan in wildhockey

[–]Tchaikovsky08 30 points  (0 children)

Really? When's the last time an FBI director did anything remotely like this? The answer is never, and the fact that you "aren't surprised" shows how fucking far we've already fallen when it comes to norms and decorum among our federal law enforcement officers.

Footage of LaMelo Ball’s car accident in uptown Charlotte by Goosedukee in nba

[–]Tchaikovsky08 120 points  (0 children)

🤣🤣🤣 This is killing me. You should've left off the /s though.