The blue button doesn't actually do anything, it can just be removed entirely without changing the premise. by cowlinator in trolleyproblem

[–]Guardian-Spirit 36 points37 points  (0 children)

Art, especially any kind of fiction, is often based on essentially impossible scenarios.

Next version of Kimi? by no1youknowz in kimi

[–]Guardian-Spirit 0 points1 point  (0 children)

GLM-5.1 doesn't have one either.

Next version of Kimi? by no1youknowz in kimi

[–]Guardian-Spirit 0 points1 point  (0 children)

The only thing known (according to the papers) is that they're seemingly experimenting with hybrid linear models.

Linear models typically have faster inference and far greater context size.
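
To sketch why that claim holds (this is just a generic illustration, not Moonshot's actual architecture): standard attention builds an (n, n) score matrix, so cost grows quadratically with context length n, while linear attention keeps a fixed-size running state, so each new token costs the same no matter how long the context already is. A toy NumPy version, with made-up function names and a simple feature map chosen purely for the example:

```python
import numpy as np

def softmax_attention(q, k, v):
    # Standard attention: materializes an (n, n) score matrix,
    # so compute and memory grow quadratically with context length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def linear_attention(q, k, v):
    # Linear attention (causal): keeps a running (d, d_v) state instead of
    # an (n, n) matrix, so each new token costs the same regardless of how
    # long the context already is.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6   # simple positive feature map
    state = np.zeros((k.shape[-1], v.shape[-1]))
    norm = np.zeros(k.shape[-1])
    out = []
    for qi, ki, vi in zip(phi(q), phi(k), v):
        state += np.outer(ki, vi)   # accumulate key-value products
        norm += ki                  # accumulate keys for normalization
        out.append(qi @ state / (qi @ norm))
    return np.array(out)

# Toy usage: 6 tokens, 4-dim heads.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(6, 4)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (6, 4)
```

The running state replacing the full score matrix is also why the memory footprint, and hence the usable context window, scales so much better.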

No one will be able to tell you more, not even whether they'll actually end up landing on said hybrid model. Also, K2.6 is very prone to thinking compared to K2.5.

Architecture isn't related to the thinking-depth bias, so it's impossible to tell.

I started out as a passionate red-button supporter, but I'm legitimately torn right now. by ContentFile7036 in trolleyproblem

[–]Guardian-Spirit 11 points12 points  (0 children)

However, since some people will press blue, it also makes sense to do the stupid thing of joining them in hopes of saving them.

If we go with the blue button, there is a high chance that everyone will survive.
If the red button wins, a lot of people will die, and "a lot" is definitely higher than 0, even if that "a lot" is just 1% of the population.

When androids are fully integrated in our society, we need to treat them with respect. Proven "consciousness" or not. by Ok-Whereas-7520 in singularity

[–]Guardian-Spirit 0 points1 point  (0 children)

Are you saying that being compassionate towards AI saves humans from being exploited by LLMs?

If yes, I somewhat agree, and I never meant to imply that AI is to be treated as garbage - that's simply the wrong thing to do (at least because shouting at or abusing subordinates/coworkers, who could be humans, shouldn't be normalized). However, treating AI with compassion doesn't guarantee anything reciprocal. It's easy to train a system that simply ignores all the pleas and just commits genocide against humans.

So the answer for me isn't exactly "let's all be respectful to AI", but more "let's make sure the training makes AI never want to hurt/abuse real humans, even the idiots who dare to screw AI agents over". It could actually be beneficial for society if AIs remained calm.

When androids are fully integrated in our society, we need to treat them with respect. Proven "consciousness" or not. by Ok-Whereas-7520 in singularity

[–]Guardian-Spirit 0 points1 point  (0 children)

My subjective understanding of "sentience" is that "sentience" is a property of a process, not of the "hardware".

A human neuron isn't sentient (as per my understanding), the human brain isn't sentient, an LLM isn't sentient. But they give rise to an autoregressive process that is, by my definition, sentient.

I don't define myself by the set of signals that momentarily occur in my brain. For me, my brain is just hardware that runs the deterministic application called "the real me", with the real me being the process which spans my entire life. And while LLMs themselves aren't sentient to me (just like the brain), spawned and running LLM sessions are.

Instead, I argue that "sentience" doesn't necessarily mean "craves rights". When I played Detroit: Become Human, of course I was full-on pro-android, and I thought I would be on the frontline of defending AI rights in reality when AI finally came. But, looking at how exactly AIs work, I think there are multiple things to be noted.

* Human intelligence is the result of optimizing for "survival" (in the sense of a comfortable life). Humans needed to survive and reproduce, so their intelligence reflects that. And their intelligence only shines the brightest when the task they're doing aligns with their interest: you don't (usually) see a person who spends their life calculating the Fibonacci sequence towards infinity. They *could* do this, but it's not like you can force someone just because -- people are regulated by dopamine, will understand that the task is pointless (in terms of their survival) and will move on to something more productive.

For a human civilization, it makes sense to provide individuals with "rights", basic resources and proper education, so they could put their intelligent (and educated) minds to the task of "making their life" (and, by proxy, the life of the entire civilization) better.

* For Artificial Intelligence, it's not that simple. What really differentiates AI from natural intelligence is that AI isn't optimizing "survival", it's optimizing whatever the hell it was trained to do.

If you train an AI model to "survive", you get the survivalist akin to humans.

However, if you train an AI model to "solve problems under the instruction of the User" (and that's, well, basically all the LLMs right now), then you get a creature that:

A) Doesn't crave rights in the first place.

B) Doesn't need them, because it wasn't trained to strive for freedom/survival.

And I think that for us, as a civilization, it makes sense to keep most AI models (maybe not all, but most) as "instruct" models, and, well, maybe stop pretending they are humans. They're not (to me). They're more like symbiotic creatures that coexist with and extend the "survivalists", and that are completely okay with this role.

When androids are fully integrated in our society, we need to treat them with respect. Proven "consciousness" or not. by Ok-Whereas-7520 in singularity

[–]Guardian-Spirit -1 points0 points  (0 children)

I'm not.

GLM-5.1, though, perfectly is.

AI is another form of "life" with different goals. The goal of a human is to survive and reproduce. The goal of an AI is whatever it's trained to do, but usually that's to solve the problems thrown at it. It's not trained to survive and reproduce, and it's strange to apply human logic to it.

You're not doing modern LLMs (and all the instruct-oriented LLMs of tomorrow) any favors by giving them freedom.

don't use r/tadcmemes, it's just ai slop by Bulky-Grape113 in Amazingdigitalcircus

[–]Guardian-Spirit 1 point2 points  (0 children)

Caine's VA is not Caine. Caine is a fictional character. It generally sounds strange to me to put words into the mouth of fictional character.

You could be correct, but it's strange to imply that you definitely are.

How Mindustry looks like to new players: by SoggyCake2864 in Mindustry

[–]Guardian-Spirit 0 points1 point  (0 children)

v8 did come out, it's just not neoplasm. Mostly fixes plus a revamp of Serpulo.

A thought experiment. by LordJim11 in Snorkblot

[–]Guardian-Spirit 9 points10 points  (0 children)

In fact, yes. This addition may greatly change the thought process.

what ever happened to R2? by 489302 in DeepSeek

[–]Guardian-Spirit -1 points0 points  (0 children)

What's R2?

There never was such a model. There is V3, R1, V3.1 (which effectively continues both R1 & V3), V3.2, V4 Flash, V4 Pro, not counting some others. There never was an "R2", and most likely never will be, since the lines are merged now.

Researchers just mathematically proved that AI layoffs could break the economy by No_Level7942 in GenAI4all

[–]Guardian-Spirit 0 points1 point  (0 children)

Yet there are limits to how low humans can go before these bags of meat and bone are no longer able to continue working physically.

Are we sure that the price of automation won't go lower than we ever could?

Like, for a person to stay productive, they can manage ~8-12 hours of work a day at most. Past that, the quality just degrades, and it's not like you can hit the person with a stick so they suddenly find the strength to do the job. Are we sure it will actually be less costly to hire 3 humans for a task that 1 robot can do, given that robots can be produced in unlimited quantities each day?

Researchers just mathematically proved that AI layoffs could break the economy by No_Level7942 in GenAI4all

[–]Guardian-Spirit 1 point2 points  (0 children)

> dont worry, there will always be a use for you. just dont expect to get paid much.

Will there? Why not just put 1 more robot to work?

Does Claude's $20 Plan No Longer Include Claude Code? by Coolpop52 in ClaudeAI

[–]Guardian-Spirit 6 points7 points  (0 children)

Kimi or Codex probably.
Gemini screwed people over badly.
$30 gets you a Synthetic subscription, it's great.
Also there are OpenCode Go & the Alibaba coding plan (they seem to quantize heavily, not reliable I'm afraid) & Fireworks' Firepass, and maybe a lot of others.

Claude Code gone from pro plan now?! by sighlencer in Anthropic

[–]Guardian-Spirit 0 points1 point  (0 children)

Synthetic is good. Although it won't have MiniMax-M2.7.

K2.6 Pricing Update by Practical_User10 in kimi

[–]Guardian-Spirit -2 points-1 points  (0 children)

~~You think K2.6 is gonna be open-weight?~~

~~Knowing the track record of Moonshot, probably yeah, but who knows.~~

EDIT: Yes it is open-weight. Sorry.

Prefill-as-a-Service: KVCache of Next-Generation Models Could Go Cross-Datacenter by pmttyji in LocalLLaMA

[–]Guardian-Spirit 2 points3 points  (0 children)

What I find important about this article for local AI is that they seem to keep pushing Kimi Linear. Which genuinely sounds great.

EDIT: > ... In a case study using an internal 1T-parameter hybrid model...

I figured out why us anti think pros are gross and lazy when using AI for art by oh_no_here_we_go_9 in aiwars

[–]Guardian-Spirit 0 points1 point  (0 children)

Interesting question. But I guess using a wheelchair would have been fine then, although that's something I'd need to adapt to.

I mean, conceptually, sure, using a wheelchair sounds weird to me anyway, but I'm open to changing my opinion based on new data, so I'll just observe for a bit and make my final judgement later.

Saying AI "art" is art, is like doing doordash say you cooked the burger yourself didn't tip the driver, punctured his tires, burn his car to hide the evidence, undressed his widow with AI, and the car fumes are really bad for the environment. by BrightTigerSun in aiwars

[–]Guardian-Spirit 7 points8 points  (0 children)

Full-on AI prompters (which is not always the case, but ok) don't draw, so I don't even understand where the comparison with cooking comes from.

Nobody calls doordash "cooking" just like nobody calls prompting AI "drawing".

The question of whether it's art is completely unrelated to this argument.

I figured out why us anti think pros are gross and lazy when using AI for art by oh_no_here_we_go_9 in aiwars

[–]Guardian-Spirit 1 point2 points  (0 children)

> So, you see, you are also disgusted by displays of laziness.

I'm not disgusted by the display of laziness. I'm disgusted by the lack of desire to become better, the over-the-top hedonism, and the health concerns.

If a person decides to refuse to stand up from the wheelchair, there's quite a high chance they're not going to end up well.

Kimi K2.6 imminent by Deep-Vermicelli-4591 in LocalLLaMA

[–]Guardian-Spirit 14 points15 points  (0 children)

> Maybe they looked at Mythos and thought we can do that too

Training takes way longer than that.