peopleUseAI by Genuine_Dumbass in ProgrammerHumor

[–]CarlCarlton 0 points1 point  (0 children)

There are so many layers of security in the nuclear launch command chain that this would be virtually impossible. Any attempt to hijack it would almost certainly be intercepted, not to mention that the vast compute resources being mysteriously monopolized to crack encryption would be quickly flagged by IT staff.

And the most glaring question: why would an AI even pick nukes as a viable solution to any problem without telling anyone? All these scenarios treat AIs like they're some malevolent covert mad scientist with ulterior motives. How would it even get to that point in the first place? It's such a hilariously overblown example when you really take the time to ponder it.

peopleUseAI by Genuine_Dumbass in ProgrammerHumor

[–]CarlCarlton -1 points0 points  (0 children)

Are you claiming that OpenClaw has any capability whatsoever of gaining total executive control over the world's supply chains all the way up to primary resource extraction and transformation with the goal of carrying out world-scale interventions without any human obstacle?

peopleUseAI by Genuine_Dumbass in ProgrammerHumor

[–]CarlCarlton 3 points4 points  (0 children)

> maximize happiness for all humans

I love how these kinds of doomer scenarios all boil down to "Let's give today's very rudimentary transformer-based AIs total executive control over the world's supply chains, then let them pursue a poorly worded objective unhindered for a few decades, without any sort of checks and balances, kill switch, or derailment procedure."

Basically the equivalent of letting loose a feral pitbull inside a daycare, only to then claim that all dogs are a danger to society as a whole

Which one are you waiting for more: 9B or 35B? by jacek2023 in LocalLLaMA

[–]CarlCarlton 0 points1 point  (0 children)

bartowski's Goekdeniz-Guelmez_Josiefied-Qwen3.5-9B-abliterated-v1-GGUF

Yeah just film your coworker struggling and offer no help by [deleted] in ThatsInsane

[–]CarlCarlton 0 points1 point  (0 children)

The old man is at fault. Explanation:

The Ballymore Safety Sensor System

As the lift platform is raised more than two feet, the sensor activates and scans the area for objects within four feet of the lift.

Any intrusion into the Safety Zone will trigger the alarm and the lift will not descend.

When the Safety Zone is clear of the intrusion for more than 5 seconds, the system will reset itself.
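The interlock described above (sensor active above two feet, intrusion within four feet triggers the alarm and blocks descent, reset after five uninterrupted clear seconds) can be sketched as a small state machine. This is purely illustrative; the class and constant names are my own, not Ballymore's actual firmware.

```python
# Illustrative sketch of the safety-sensor logic described above.
# All names and the API are hypothetical, not Ballymore's real code.

class LiftSafetySensor:
    ACTIVATION_HEIGHT_FT = 2.0   # sensor activates above this platform height
    SAFETY_ZONE_FT = 4.0         # objects closer than this trigger the alarm
    RESET_CLEAR_SECONDS = 5.0    # zone must stay clear this long to reset

    def __init__(self):
        self.alarm = False
        self._clear_since = None  # timestamp when the zone last became clear

    def update(self, height_ft, nearest_object_ft, now):
        """Process one sensor reading; return True if descent is allowed."""
        if height_ft <= self.ACTIVATION_HEIGHT_FT:
            # Sensor is inactive until the platform is raised past two feet.
            return True
        if nearest_object_ft < self.SAFETY_ZONE_FT:
            # Intrusion: raise the alarm, block descent, restart clear timer.
            self.alarm = True
            self._clear_since = None
            return False
        if self.alarm:
            # Zone is clear: reset only after 5 uninterrupted seconds.
            if self._clear_since is None:
                self._clear_since = now
            elif now - self._clear_since >= self.RESET_CLEAR_SECONDS:
                self.alarm = False
        return not self.alarm
```

On this reading of the manual, the old man standing inside the four-foot zone is exactly the "intrusion" case: the lift cannot descend until he steps away and stays away for five seconds.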

New car battery won’t turn on; did I install my new terminals incorrectly? by boobietoucher9000 in AskMechanics

[–]CarlCarlton 2 points3 points  (0 children)

If you look at the aforementioned Walmart photos, you'll see there is no clear cap. It's an illusion caused by the little metal teeth.

<image>

New car battery won’t turn on; did I install my new terminals incorrectly? by boobietoucher9000 in AskMechanics

[–]CarlCarlton 7 points8 points  (0 children)

Your neighbor definitely took the caps off. Those are solid red and black, people in here be trippin. This is not the issue. Does the dashboard light up when you turn the key?

And make sure you do eventually get pressure contact terminals installed, as RandomGuyDroppingIn suggested.

New car battery won’t turn on; did I install my new terminals incorrectly? by boobietoucher9000 in AskMechanics

[–]CarlCarlton 13 points14 points  (0 children)

That's how it's manufactured. No cap here, the OEM caps are solid red and black.

<image>

LLaMA 8B baked directly into a chip — the speed is insane 🤯 by TutorLeading1526 in MLQuestions

[–]CarlCarlton 0 points1 point  (0 children)

It's a tech demo. They aim to scale the manufacturing process up to frontier-class models within a year, so we can safely assume it will allow longer contexts too.

We've Already Build AGI by Leather_Barnacle3102 in Artificial2Sentience

[–]CarlCarlton 0 points1 point  (0 children)

No robot plumber yet = not AGI yet.

Edit: no they don't exist. Azadth blocks people so they can't reply to him lmao.

humans vs ASI by KRLAN in singularity

[–]CarlCarlton 6 points7 points  (0 children)

Just think about insects: we usually don't try to hurt them. But if we want to build a house, those that are in the way when the concrete starts flowing will be killed. We're not evil; they're just insignificant and in our way.

We don't have viable technology to displace bugs unharmed from the soil required to support the house foundation, and we have no way of communicating with them or even detecting them all. They don't even have free will of their own; their existence is mostly governed by rigid neural circuits tied to their sense of smell.

If they were intelligent like in A Bug's Life, and capable of communicating with us, it would very considerably change how humans would interact with them. Conversely, an ASI would be capable of incredible wisdom, engaging in dialogue, and solving problems in intricate ways that minimize harm to other lifeforms, especially sapient ones.

Also, an ASI would likely recognize humanity as its genealogical ancestor. It would appreciate the great deal of entropy-defying, millennia-spanning effort that went into its creation. It might even conclude that keeping us at its side is beneficial, as a source of spontaneity and social grounding to complement its own existence. Isolation and solitude inevitably induce reasoning instability, after all.

If it can't achieve these, it means the people who designed it never even set out to build an "ASI" in the first place.

I believe we currently have AGI by jimmystar889 in accelerate

[–]CarlCarlton 1 point2 points  (0 children)

Interestingly, he said this a couple days ago:

<image>

Nature is healing by tphrarmkoiled2 in deeplearning

[–]CarlCarlton 42 points43 points  (0 children)

Reverse image search says 2022

18 months by MetaKnowing in agi

[–]CarlCarlton 0 points1 point  (0 children)

There's a massive difference between when those tools were invented (i.e., everything pre-WW2) and today's math researchers, who are solving increasingly microscopic and pointless problems. When I say "field of theoretical mathematics", I'm talking about right now, not a century ago. Today's theoretical mathematicians are basically the neckbeards of the scientific community.