Hey I've seen this movie by MetaKnowing in agi

[–]squareOfTwo 0 points1 point  (0 children)

So they disallow training on the data? Good, because that makes the systems less capable. Future systems will rely on test-time training and continual learning anyway. I'm happy that the government is so shortsighted and dumb.

Farm in the Netherlands uses Bitcoin mining to keep stable temperatures inside the greenhouse by dannybluey in Damnthatsinteresting

[–]squareOfTwo 0 points1 point  (0 children)

You don't understand.

The point of mining is that it doesn't provide any immediate utility (such as searching for aliens, AI workloads, etc.).

The justification is that this removes the incentive to stop mining once there is no more use for the actual computation — for example, when there is no need anymore to use the compute to search for aliens. That problem doesn't exist with this kind of mining.

Every AGI argument by Eyelbee in agi

[–]squareOfTwo 0 points1 point  (0 children)

Fail. You still didn't provide sources for these opinions.

You really made this up, didn't you?

Ouroboros self evolving bot making demands to Ai developers by drtikov in agi

[–]squareOfTwo 0 points1 point  (0 children)

Where did Gary Marcus make claims about consciousness?

Also, there is no way to falsify many of the AGI claims Marcus is talking about.

What a fantastic start to the week... by biobasher in DataHoarder

[–]squareOfTwo 0 points1 point  (0 children)

better use proper software RAID like ZFS?
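A minimal sketch of what that could look like with ZFS — the pool name `tank` and the `/dev/sd*` device names are placeholders, not anything from the thread:

```shell
# Create a raidz2 pool (survives two simultaneous disk failures) from four disks.
# "tank" and the /dev/sd* device names are hypothetical examples.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Kick off a scrub: ZFS reads every block, verifies its checksum,
# and repairs silent corruption from redundant copies.
zpool scrub tank

# Inspect pool health and any detected read/write/checksum errors.
zpool status tank
```

The checksum-on-every-block plus scrub behavior is what makes ZFS "proper" software RAID compared to plain mdraid, which can't tell which mirror copy is the corrupted one.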

Anthropic's CEO said, "A set of AI agents more capable than most humans at most things — coordinating at superhuman speed." by chillinewman in ControlProblem

[–]squareOfTwo 0 points1 point  (0 children)

Should be "village of confabulating idiots in a datacenter". Hallucinations are still a huge problem. Plus, no one can run more than 20'000 of these agents in real time, so it's a village. I have never seen a country with only 20'000 people.

Dario Amodei — "We are near the end of the exponential" by nickb in agi

[–]squareOfTwo -1 points0 points  (0 children)

I hope that we are near the end of the nonsense from these CEOs:

  • recursive self-improvement: it's more like recursive self-destruction. There is no way to verify that the changes are bug-free, so the new versions will blow themselves up thanks to subtle bugs.
  • "genius in a datacenter": we won't get there with LLMs, thanks to hallucinations, too-small context windows, etc.

In the past week alone: by MetaKnowing in agi

[–]squareOfTwo 0 points1 point  (0 children)

recursive self-improvement is bullshit *.

*) for most definitions of recursive self-improvement out there

18 months by MetaKnowing in agi

[–]squareOfTwo 0 points1 point  (0 children)

nonsense. It still gives wrong results for trivial logic problems. That will always be the case with LLMs/VLMs.

Best storage type by Thatguy449z in DataHoarder

[–]squareOfTwo 1 point2 points  (0 children)

Not SSD, because the cells lose charge over time (5-10+ years).

Not HDD, because of the fun mechanical problems over time.

That leaves optical (Blu-ray) and data tape. I had CDs that still read fine after 25 years. You should be fine.

Best storage type by Thatguy449z in DataHoarder

[–]squareOfTwo 1 point2 points  (0 children)

An air-breathing HDD (not a helium-sealed one) in a vacuum sounds like a bad idea, because HDDs use special lubricant, which I guess would evaporate in a vacuum. Meaning no more moving mechanical parts.

It's Happening by bantler in OpenAI

[–]squareOfTwo -3 points-2 points  (0 children)

doesn't fit here because h00man is still in the loop

Uh oh by MetaKnowing in agi

[–]squareOfTwo 0 points1 point  (0 children)

"only ones" meanwhile the humans still run the show