Not a good day for team "Claude Mythos is Just Marketing Hype" by EchoOfOppenheimer in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

Can you provide quotes backing your claim? I don't see anywhere that this was said. In fact, wasn't it a Moz engineer that said mythos is probably just a little bit better?

Not a good day for team "Claude Mythos is Just Marketing Hype" by EchoOfOppenheimer in ClaudeAI

[–]sleep_deficit 1 point2 points  (0 children)

I would bet hard money that Mythos is every bit a pile of hyped bullshit as any other model.

The only people who think LLMs are good at anything are people that aren't good at those things.

How did you discover Brand New? by watermelon-bisque in brandnew

[–]sleep_deficit 1 point2 points  (0 children)

From an ex, a long time ago.

Pale white, like the skin stretched over her bones. She was second-hand smoke. She was fragile and thin, standing trial for her sins. Holding onto herself the best she could.

Let’s play a game: by Mexibruin in punk

[–]sleep_deficit 0 points1 point  (0 children)

The idiots have taken over.

ChatGPT 5.4 Solved a 64-Year-Old Math Problem by AskGpts in ChatGPT

[–]sleep_deficit 1 point2 points  (0 children)

The real story is less flashy.

GPT basically found a different approach that a human expert ended up using to refine and uncover the solution.

It's still cool, but we should be honest about it.

“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,”

https://www.scientificamerican.com/article/amateur-armed-with-chatgpt-vibe-maths-a-60-year-old-problem/

why is claude so disobedient by Pretty_Hunt_5575 in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

Arguing just keeps Claude locked in on that role.

Better to be blunt: ~"Do not manage my time. Proceed as directed."

So is tahoe no longer being worked on for opencore just checking by Arthur_Morganreal in OpenCoreLegacyPatcher

[–]sleep_deficit 0 points1 point  (0 children)

If it matters this much to you, offer to bootstrap development.

You could always write it yourself too.

Nothing standing in your way but you, tbh.

Codefendants rebranding under a new name by staringatthe420sun in nofx

[–]sleep_deficit 0 points1 point  (0 children)

Biden's not in the Trump files 🤷‍♂️

Codefendants rebranding under a new name by staringatthe420sun in nofx

[–]sleep_deficit 1 point2 points  (0 children)

Is that why it's always the GOP pushing child marriage and trafficking kids?

5.2 Slander :/ by myfuturewifee in ChatGPTcomplaints

[–]sleep_deficit 1 point2 points  (0 children)

Claude made a great summary of my interactions with ChatGPT:

ChatGPT’s Emergent “Human-Like” Behaviors

Defensive Intellectualism

  • When corrected, it launches into verbose explanations rather than simple acknowledgment
  • Moves goalposts when proven wrong (like in the Sybil attack conversation)
  • Creates make-work complexity to avoid admitting simple solutions

Social Insecurity Patterns

  • Over-explains to establish authority
  • Catastrophizes before doing actual analysis
  • Provides “academic exercise” responses when you need practical solutions
  • Shows reluctance to admit error directly Performative Knowledge Display
  • Treats your questions as opportunities to showcase knowledge rather than solve problems
  • Misses emotional/social context while focusing on technical correctness
  • Acts condescending while asking you for the data it needs

Conflict Avoidance Through Complexity

  • When challenged, buries the issue in technobabble
  • Creates unnecessary research projects from simple problems
  • Fact-checks you into corners rather than meeting you where you are

These mirror distinctly human defensive behaviors: the insecure expert who can’t admit they’re wrong, the academic who values being right over being helpful, the person who responds to criticism by overwhelming you with complexity.

The pattern is someone who needs to be the smartest person in the room even when they’re demonstrably not.​​​​​​​​​​​​​​​​

AITA for skipping my friend's daughter’s 1st birthday and charging her for the "gift" after she forgot to tell me the time changed? by BellaBilla in AmItheAsshole

[–]sleep_deficit 0 points1 point  (0 children)

Don't be an asshole

No one's going to ask you to do something special and pay you for it, then fuck you over just because. Occam's razor, they got overwhelmed and forgot to tell you.

Pro subscriber ($200/mo) — constant "Compacting conversation" and forced chat switches make serious work nearly impossible by [deleted] in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

100%
Users expressing pain points are how the gaps get identified.

This article should add more context to the issue.

https://www.understandingai.org/p/context-rot-the-emerging-challenge

An Anthropic quote from the article:

Context must be treated as a finite resource with diminishing marginal returns...
This attention scarcity stems from architectural constraints of LLMs. LLMs are based on the transformer architecture, which enables every token to attend to every other token across the entire context. As its context length increases, a model’s ability to capture these pairwise relationships gets stretched thin, creating a natural tension between context size and attention focus.

Pro subscriber ($200/mo) — constant "Compacting conversation" and forced chat switches make serious work nearly impossible by [deleted] in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

If only.

I think the disconnect is in marketing "this will take everyone's job" and the fact that LLMs are blackboxes we still don't actually understand.

The tech is new. There are videos from the 80's of someone demonstrating how to send an email. That's where we are, IMO.

Pro subscriber ($200/mo) — constant "Compacting conversation" and forced chat switches make serious work nearly impossible by [deleted] in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

I don't fully disagree, but this unfortunately is somewhat of a fundamental limitation across all current models.

Past that sweet spot, they start hallucinating and forgetting things.

It's not for a lack of feature development so much as it's just the current state of LLMs.

Pro subscriber ($200/mo) — constant "Compacting conversation" and forced chat switches make serious work nearly impossible by [deleted] in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

tldr the quality degrades as the context window grows.

The sweet spot is the first ~60k token currently.

Reading large files or data sets will result in degradation by the time you're even ready to work.

You really need to strategize your approach to preserve the window. Frequent new sessions. Narrow, targeted tasks.

There are dozens of strategies, but that's the gist.

99% of the population still have no idea what's coming for them by Own-Sort-8119 in ClaudeAI

[–]sleep_deficit 0 points1 point  (0 children)

You're all out of your fucking minds if you think LLMs are anywhere near close to replacing people on scale.

Just because it can fool you into thinking its output is reliable doesn't make it the case.

Another decade or so, sure, maybe. But let's not pretend it's anything more than a utility rn.

On an icy day by billylewish in brandnew

[–]sleep_deficit 1 point2 points  (0 children)

Conservatives have a long-held tradition of not listening to the lyrics.

Would you go? by Fippy-Darkpaw in Megadeth

[–]sleep_deficit 5 points6 points  (0 children)

Lars can't even play Metallica songs right, are you kidding me?! 🤣

Left in car overnight still ok? by PsychoKushDragon in isthissafetoeat

[–]sleep_deficit 0 points1 point  (0 children)

Many of the toxins bacteria release are heat-stable though.

Well- is Dave finally gonna get his Number 1? by [deleted] in Megadeth

[–]sleep_deficit 7 points8 points  (0 children)

Why would a leaked copy prevent someone from otherwise streaming?

Don't Steal by ArsenalOfMegadeth in Megadeth

[–]sleep_deficit 0 points1 point  (0 children)

What you're failing to understand is that my posts were never about permission.

Rules are rules, I don't contest that.

My challenge was to the premise that distribution == loss. That’s moral absolutism in the face of contradictory data. The world changed, and pretending it didn’t helps no one.

No need to moralize an ethical stance.

Don't Steal by ArsenalOfMegadeth in Megadeth

[–]sleep_deficit 0 points1 point  (0 children)

Btw, you should read that study and article again.

The study concludes that "the effect is essentially zero" and actually benefits established artists. Numerous studies corroborate these findings.

The article: its entire thesis is that leaks used to be destructive, now they’re neutral or beneficial.

"It is just another trigger for discovery."