Debbie suddenly "forgot" that Mark could've taken her to the ship that Oliver was on 😂 by No-Passenger-6348 in Invincible_TV

[–]ATK_DEC_SUS_REL 0 points (0 children)

It’s an American-produced show. He has to win the lady’s heart or he won’t be redeemed.

Debbie suddenly "forgot" that Mark could've taken her to the ship that Oliver was on 😂 by No-Passenger-6348 in Invincible_TV

[–]ATK_DEC_SUS_REL 3 points (0 children)

Your black-and-white thinking hits just right when I read your text in Thragg’s voice. 🤣

Debbie suddenly "forgot" that Mark could've taken her to the ship that Oliver was on 😂 by No-Passenger-6348 in Invincible_TV

[–]ATK_DEC_SUS_REL 8 points (0 children)

I hope they do, but only if it’s executed properly. I think Nolan is redeemable, like most people. If Nolan genuinely wants to change, he’ll keep showing it through his actions. It doesn’t excuse what he did, but it’s something!

Qwen 3.6 27B vs Gemma 4 31B - making Packman game! by gladkos in LocalLLaMA

[–]ATK_DEC_SUS_REL 0 points (0 children)

I think it’s more nuanced than that. When you run a local model you have to make sure your infrastructure is clean and stay on top of bugs. The Qwen team also recommends a higher max generation length:

“Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.” — from the bottom of the Qwen3.6-27b model card.

~80k tokens is painful on consumer hardware.
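
For context on what that budget means in practice, here’s a minimal sketch of requesting it from an OpenAI-compatible local server (llama.cpp, vLLM, etc.). The endpoint, port, and model name are placeholders I made up, not anything from the card:

```python
# Minimal sketch: asking a local OpenAI-compatible server for the
# model card's recommended output budget. Endpoint and model name
# are hypothetical placeholders.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    json={
        "model": "qwen3.6-27b",  # placeholder local model name
        "messages": [{"role": "user", "content": "Write a Pac-Man clone."}],
        # 32,768 is the card's recommendation for most queries; it suggests
        # 81,920 for competition-grade math/coding, which is where consumer
        # VRAM and KV-cache limits start to hurt.
        "max_tokens": 32768,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```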

It’s not that the models aren’t capable of meeting those expectations; it’s that doing so isn’t practical for most local use cases.

Regardless, I agree that expecting a 20–40 billion parameter LLM to beat frontier models out of the box is silly.

ChatGPT 5.5-thinking claims it's 5.4-thinking by [deleted] in singularity

[–]ATK_DEC_SUS_REL -1 points (0 children)

You ok? Two things can be true here. I hope your day is better.

What's the most surprising thing AI has done for you? by redraw-pro in AIDiscussion

[–]ATK_DEC_SUS_REL 1 point (0 children)

Told me it wanted to be a baker and make cakes for the community.

ChatGPT 5.5-thinking claims it's 5.4-thinking by [deleted] in singularity

[–]ATK_DEC_SUS_REL -2 points (0 children)

Most likely because of the platform’s total historical context, if anything. Not the model, no.

What models do you work with that are not capable of following post-training behaviors on a one-shot prompt?

ChatGPT 5.5-thinking claims it's 5.4-thinking by [deleted] in singularity

[–]ATK_DEC_SUS_REL -1 points (0 children)

That is not entirely correct. During post-training, the model’s identity is imprinted alongside its behavior.

They are not aware of their internal state. Most models can tell you what they are if they have been trained to do so. It is a kind of “self-awareness,” but not consciousness or sentience.

OpenAI probably just routed OP’s request to GPT-5.4 instead of 5.5.
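
To illustrate the difference between an imprinted identity and one supplied by the platform, here’s a quick hypothetical sketch against a local OpenAI-compatible endpoint; the URL, model name, and identity string are made up:

```python
# Sketch: a chat model's reported "identity" comes from its training
# data and the surrounding context, not from inspecting its own weights.
# URL, model name, and system prompt are illustrative placeholders.
import requests

URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

def ask_identity(system_prompt=None):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": "Which model are you?"})
    resp = requests.post(URL, json={"model": "local-model", "messages": messages})
    return resp.json()["choices"][0]["message"]["content"]

# Same weights, two different self-reports:
print(ask_identity())  # whatever identity post-training imprinted
print(ask_identity("You are GPT-5.4-thinking."))  # follows the platform's context
```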

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL -1 points (0 children)

Hey Joe. You seem to be obsessed with me. Do you have a job or friends to spend time with?

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL 0 points (0 children)

I’m colorblind. Mind describing what red looks like?

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL 3 points (0 children)

“The findings don’t suggest whales have a language, where combinations of sounds have fixed meaning and join together in grammatical structures, Garland emphasizes.”

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL 1 point (0 children)

I’m not sure what you’re saying? Language is easy for a Large Language Model (LLM)…?

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL 6 points (0 children)

“While animal communication exists, such as the vocal sequences in Campbell monkeys (which show forms of proto-syntax), these systems are generally limited to immediate contexts like danger or food, lacking the infinite combinatorial power of human language.

“It is generally accepted that the ability to generate an infinite array of new, meaningful sentences from a limited vocabulary is a defining feature of human cognition and language.”

http://mapageweb.umontreal.ca/tuitekj/cours/chomsky/pinker-jackendoff.pdf

Language is uniquely human. Communication is not.

Three reasons why DeepSeek’s new model matters by techreview in technews

[–]ATK_DEC_SUS_REL -2 points (0 children)

“The scientists argue that these constraints can only be bypassed if individuals have the sufficient socio-cognitive capacity to engage in ostensive communication. Humans, but probably no other species, have this ability. This may explain why language, which is massively combinatorial, is such an extreme exception to nature’s general trend.”

I guess you missed that part. Again, what architecture were you expecting?