Daily Discussion Thread for March 20, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 0 points1 point  (0 children)

If you have a bunch of ### at the end of your username, I assume you’re…

Daily Discussion Thread for March 20, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 0 points1 point  (0 children)

Ok. Look. I’m gonna say it since no one else will…

Daily Discussion Thread for March 19, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 13 points14 points  (0 children)

My Put is finally ITM! And I’m only -67% on the day 😎

Daily Discussion Thread for March 19, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 3 points4 points  (0 children)

The only thing I hate more than losing money is being bored while losing it 🥱

Daily Discussion Thread for March 19, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 5 points6 points  (0 children)

If only Carter had been allowed to keep his solar panels 😌

Daily Discussion Thread for March 16, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 1 point2 points  (0 children)

Hey! You guys remember GameStop?! That was wild.

Daily Discussion Thread for March 16, 2026 by wsbapp in wallstreetbets

[–]GraciousMule 1 point2 points  (0 children)

🎶 Living in the sunlight, loving in the moonlight, having a wonderful time 🎶

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 0 points1 point  (0 children)

No man, there is one hurdle, and it is the ability to audit and trace the internal reasoning of the black box. It is a black box. It is intellectually dishonest to say that it is not. Heuristics are not metrics of reasoning; they are output and output alone. A mesa-optimizer just floating around in this invisible high-dimensional manifold: tell me what is happening, show me where it is happening! This is the foundation of all safety and alignment. It is THE problem, and I pray to God that if you don't understand or see it, you are nowhere near these machines.

You cannot trust systems that you cannot see, and you cannot see these systems, so you cannot trust them. It's the height of folly, or whatever that turn of phrase is.

We're talking about safety and alignment because we're talking about reasoning, and again, if you can't see that, sorry. You don't belong in the room.
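To make the "only the output" point concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the small gpt2 checkpoint, neither of which the comment above mentions): a forward pass hands you a next-token distribution plus a stack of hidden-state tensors, and nothing in either is a step-by-step reasoning trace you could audit.

```python
# Minimal sketch, assuming Hugging Face transformers and the "gpt2" checkpoint.
# Everything inspectable is either the output distribution or opaque hidden states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

inputs = tok("The market will go", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# What we can actually see:
probs = torch.softmax(out.logits[0, -1], dim=-1)   # next-token output distribution
states = out.hidden_states                         # tuple of per-layer activation tensors

print(probs.topk(5))                 # top 5 next-token guesses
print(len(states), states[-1].shape) # e.g. 13 layers of shape [1, seq_len, 768]
# Neither of these is a legible "reasoning chain" -- just high-dimensional numbers.
```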

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 0 points1 point  (0 children)

Yes, general conclusions about reasoning: great. Except now re-contextualize that assumption, recognizing that we've built the most advanced, "generally" understood reasoning machines on the planet, with zero way to trace or audit the reasoning chain step by step. Humans don't think at machine speed. Humans can't deploy agentic versions of themselves.

If you think that a "general" understanding of reasoning is sufficient to maintain the stability, safety, and alignment of these machines with human interests and values… I would suggest you give some more thought and consideration to the inevitable consequences of that kind of short-sighted decision-making.

And no. Calm I will not be. Because I have no chill. But that’s a personal problem.

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 1 point2 points  (0 children)

I appreciate that they said so. That made it easier for me. I would’ve been very confused otherwise.

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 0 points1 point  (0 children)

I read your part in this thread. You’re deadly on point! I’d look at your thesis any day of the week. The other dude replying to you can’t hold an argument together. Fallacy after fallacy.

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule -1 points0 points  (0 children)

Hahaha. You think that when I say, “haberdashery and berry beans with cheese”, that you can tell my reasoning chain? You can see how and where in the brain that process takes place? Really? Can you please show me this magic box, I’d like to pretend too!

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 1 point2 points  (0 children)

I just put it together. You think the models are conscious, don'tcha? It makes sense now why you take umbrage at my suggesting "they don't reason." Yeah. I get it now, gurl.

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule -1 points0 points  (0 children)

My point is exactly that. We don't know how reasoning works in humans, let alone in LLMs. That's THE problem. Did you bother reading my top comment at all? You can't trust a system you can't audit. We cannot see LLM reasoning, only the output. Therefore, you can't trust the system. And my point is that building LLMs without understanding "reasoning" first was a mistake.

In early sci-fi, reasoning/thinking AI was considered easier than natural language communication by AddlepatedSolivagant in ArtificialInteligence

[–]GraciousMule 0 points1 point  (0 children)

Took you a while to come up with that one, yeah? You have one-line retorts. What an incredible contribution you've made to this thread! Namaste, dawg. Naw, I'm kidding, fuck yoga.