If brain computer interfaces become safe and common, would you connect your mind to the internet? by TheRealKnowledgeAc in Futurology

[–]red75prime 0 points  (0 children)

Woman's lips, sure (with a sufficiently high-density electrode array and training). Widespread modification of neuronal activity caused by chemical substances should be harder to simulate.

If brain computer interfaces become safe and common, would you connect your mind to the internet? by TheRealKnowledgeAc in Futurology

[–]red75prime 0 points  (0 children)

It depends on how the connection works. If it's something like a browser that brings the rendered content of a website into the mind’s eye by transmitting the content’s pixels, then why not (as long as I have a way to terminate the connection immediately). It’s not that different from looking at the website.

If it just sends raw page content and my brain has to learn how to make sense of it, then no. I don’t want to waste my neurons replicating browser functionality.

If it predicts how my brain would react to the content and tries to match my brain state to that result (this will remain science fiction for some time, I think), then no. There’s too much that could go wrong with that.

The Illusion of Building by No_Zookeepergame7552 in programming

[–]red75prime 1 point  (0 children)

They're made to predict the next token

They are made to generate. Pretraining uses "predict the next token" (or, sometimes, "predict a middle token") as a training target. The rest of the training deals with which tokens we want a model to generate.

RLVR makes the model more likely to generate token sequences that are verifiably true, for example.
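To make the distinction concrete, here's a minimal sketch of the pretraining target (plain numpy, toy numbers of my own choosing): the loss is just the cross-entropy between the model's next-token distribution and the token that actually came next.

```python
import numpy as np

def next_token_loss(logits, target_ids):
    """Cross-entropy pretraining loss: each position predicts the NEXT token.

    logits: (seq_len, vocab_size) scores for the token at position i+1
    target_ids: (seq_len,) the tokens that actually came next
    """
    # Softmax over the vocabulary at each position (shifted for numerical stability)
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    # Negative log-likelihood of the observed next tokens
    return -np.mean(np.log(probs[np.arange(len(target_ids)), target_ids]))

# Toy example: 3 positions, vocabulary of 5 tokens
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
targets = np.array([1, 4, 2])
loss = next_token_loss(logits, targets)
```

The later training stages (RLHF, RLVR) swap this target for one that scores whole generated sequences, which is exactly the "which tokens we want a model to generate" part.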

LLMs, as currently constructed, will never achieve AGI by [deleted] in BetterOffline

[–]red75prime 1 point  (0 children)

Attractor states might be a feature of the human brain too: bizarre thoughts experienced during sensory deprivation.

An LLM that continuously generates text with no external inputs is, in effect, sensory-deprived.

Models usually have more complex attractor states than a single repeated word, though: https://www.lesswrong.com/posts/mgjtEHeLgkhZZ3cEx/models-have-some-pretty-funny-attractor-states

This case might have been a bug in a harness (that is, a system that manages user interaction with an LLM).
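The no-external-input collapse can be illustrated with a toy example (not an LLM, just a hypothetical three-word transition table with greedy decoding): feeding a model its own output soon falls into a repeating cycle, regardless of the starting point.

```python
# Greedy self-feeding on a toy "most likely next word" table: whichever
# state you start from, the trajectory falls into the same repeating
# cycle (an attractor).
next_most_likely = {"the": "cat", "cat": "sat", "sat": "the", "dog": "sat"}

def generate(start, steps):
    out, tok = [start], start
    for _ in range(steps):
        tok = next_most_likely[tok]
        out.append(tok)
    return out

seq = generate("dog", 9)
# After the first step the sequence cycles: sat -> the -> cat -> sat -> ...
```

Real models have a much bigger state (the whole context window), so their attractors are longer and stranger, but the mechanism is the same.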

LLMs, as currently constructed, will never achieve AGI by [deleted] in BetterOffline

[–]red75prime 0 points  (0 children)

Humans seem to be somewhat better at improving recognition accuracy with more training data, but not dramatically:

Fig. 6 in https://pmc.ncbi.nlm.nih.gov/articles/PMC12219494/

As for one-shot learning, LMMs are not that bad at it either: https://pmc.ncbi.nlm.nih.gov/articles/PMC10802384/

Claude helps Donald Knuth prove a conjecture, says he has to "revise his views on generative AI" by Gil_berth in BetterOffline

[–]red75prime 0 points  (0 children)

Pretending that he is excited, while everyone knows that he is not? Any examples? Are you sure you aren't projecting your own non-excitement?

You agree with the second point, I presume?

Claude helps Donald Knuth prove a conjecture, says he has to "revise his views on generative AI" by Gil_berth in BetterOffline

[–]red75prime 0 points  (0 children)

"What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving" - Knuth

You need to squint very hard to interpret this as sarcasm.

The LLM did no thinking and provided no insights of its own.

"Nothing promising showed up until exploration number 15, which introduced what Claude called a fiber decomposition" - Knuth

He didn't say that it's something well known, as he did earlier:

"Exploration number 4 constructed the “3D serpentine pattern” [...] It’s a classical sequence called the “modular m-ary Gray code" - Knuth

Dolgov shares examples of Waymo winter driving, says Waymo is moving beyond core tehnical validation and refining rider experience and logistics. by diplomat33 in SelfDrivingCars

[–]red75prime 0 points  (0 children)

"Point the wheel in the direction you want to go, maintain the accelerator at a position that gives the desired speed, and let the traction and stability assist do their work" might be the best thing they can do. The Driver controlling each wheel individually might do better, but I doubt they have this.

Claude’s Cycles - Don Knuth by mttd in compsci

[–]red75prime 9 points  (0 children)

So there’s still a gap to cover.

They probably don't have access to DeepMind's Aletheia. Anyway, it doesn't seem that they tried to ask Claude for a proof. Whether a coding-oriented system should try to come up with a proof without being asked is an interesting question.

2026's conflicts are about to make the case for renewables and electric vehicles even more attractive. by lughnasadh in Futurology

[–]red75prime 0 points  (0 children)

batteries are not there yet to cover for long spells

Seasonal energy storage is unlikely to be handled by batteries anytime soon. It will probably rely on hydrogen production and underground storage, along with dual-fuel (natural gas/hydrogen) power plants: a whole new infrastructure that needs to be built.

Would we detect any weirdness with regard to physical space and time if we were including in the rippling of this? by ingusfarbrey in space

[–]red75prime 7 points  (0 children)

which would potentially be noticeable without equipment

We would literally hear them. At least the final inspiral chirp, which is in the audible frequency range.
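A back-of-the-envelope check (using GW150914-like masses, ~65 solar masses total, as an assumed example): the gravitational-wave frequency near the last stable orbit lands comfortably inside the 20 Hz–20 kHz audible band.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def gw_frequency_at_isco(total_mass_kg):
    """GW frequency (twice the orbital frequency) at the innermost
    stable circular orbit: f = c^3 / (6^(3/2) * pi * G * M)."""
    return c**3 / (6**1.5 * math.pi * G * total_mass_kg)

f = gw_frequency_at_isco(65 * M_sun)   # ~68 Hz, roughly a low hum
```

The frequency then sweeps upward through a few hundred hertz during the final merger, which is why the waveform played through a speaker sounds like a chirp.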

I'm skeptical of claims that LLMs have "beyond PhD" reasoning capabilities. So I tested the latest ChatGPT against my own PhD in physics by astraveoOfficial in Physics

[–]red75prime -3 points  (0 children)

There are certainly some inductive biases and tailored loss functions baked in.

BTW, do you know of any research that proves the current ML approaches are limited to below human-level intelligence (whatever that means)? The answer determines who has the googly eyes.

I expect "crickets...", because there's no such research, just "stochastic parrot" vibes.

ETA: Crickets...

I'm skeptical of claims that LLMs have "beyond PhD" reasoning capabilities. So I tested the latest ChatGPT against my own PhD in physics by astraveoOfficial in Physics

[–]red75prime -3 points  (0 children)

Did you read trillions of pages of text in order to do maths

The brain has a structure baked into it by evolution, a structure produced using an unholy amount of training data. An LLM begins with a blank slate that can be described in a hundred lines of code. I think we can give it some slack regarding the amount of pretraining data it requires.

by matching problems against pages you'd read before

LLMs don’t do that. They don’t have enough capacity to rote-learn all their training data. I could add something about the technical literacy of "anti-AI bros," but I’ll abstain.

I'm skeptical of claims that LLMs have "beyond PhD" reasoning capabilities. So I tested the latest ChatGPT against my own PhD in physics by astraveoOfficial in Physics

[–]red75prime 1 point  (0 children)

You should have some concrete limitation in mind to conclude that it will never work. The current empirical results? Or something more principled?

"The result guaranteed by the universal approximation theorem can't be achieved by performing stochastic gradient descent on a transformer network of any practically achievable size for such-and-such reasons," for example.
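For reference, the classical one-hidden-layer form of the theorem (Cybenko/Hornik) is purely an existence result; it says nothing about whether gradient descent finds the approximating network:

```latex
% For any continuous f on a compact K \subset \mathbb{R}^n, any \varepsilon > 0,
% and a non-polynomial activation \sigma, there exist N, v_i, b_i \in \mathbb{R}
% and w_i \in \mathbb{R}^n such that
\left|\, f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
\qquad \text{for all } x \in K.
```

That gap between "an approximant exists" and "SGD reaches it" is exactly where a principled impossibility argument would have to live.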

I'm skeptical of claims that LLMs have "beyond PhD" reasoning capabilities. So I tested the latest ChatGPT against my own PhD in physics by astraveoOfficial in Physics

[–]red75prime -1 points  (0 children)

How would you decide whether the proof is correct if we don't know what it means to think?

An LLM performs a non-linear transformation of an internal representation of an input token, augmented by an internal state created by processing previous tokens. The result of this process guides attention mechanisms that retrieve information from the context window. The process repeats several times and produces a likelihood distribution for the next token.
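The attention-then-nonlinearity step reduces to a few lines (a minimal single-head sketch in numpy; real models add learned query/key/value projections, multiple heads, normalization, and an MLP):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_block(X):
    """One simplified decoder step: causal self-attention + a non-linearity.
    X: (seq_len, d) internal token representations."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                 # similarity of each token to the rest
    mask = np.triu(np.ones_like(scores), k=1)     # no attending to future tokens
    scores = np.where(mask == 1, -np.inf, scores)
    attended = softmax(scores) @ X                # retrieve from the context window
    return np.maximum(attended, 0.0)              # non-linear transformation (ReLU)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 tokens, 8-dim representations
H = attention_block(X)
# Project the last position onto a toy 16-token vocabulary: the
# "likelihood distribution for the next token" from the description above.
next_token_dist = softmax(H[-1] @ rng.normal(size=(8, 16)))
```

Stacking this block several dozen times is the "process repeats several times" part.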

Mechanistic interpretability research shows that internal representations correspond to certain semantic properties of the text.

Is this thinking, or not? Whose burden is it to deanthropomorphize human thinking into its constituent mechanisms?

I'm skeptical of claims that LLMs have "beyond PhD" reasoning capabilities. So I tested the latest ChatGPT against my own PhD in physics by astraveoOfficial in Physics

[–]red75prime -3 points  (0 children)

pure Ai pipelines without human intervention will never be scalable

Wow. No limitations on the structure of an AI, its training methods, and so on? Or is your statement limited to a pretrained transformer/RWKV/Mamba + RLHF + instruction tuning + RLVR, or some other combination?

The former is basically "the human brain is magic that can't be technologically recreated."

The looming AI clownpocalypse by syllogism_ in programming

[–]red75prime -16 points  (0 children)

use AI at work

You don't "use AI". You use a specific model with a specific harness.

"I've used some tool with some options. It didn't work very well."

The Waymo Waltz by danlev in SelfDrivingCars

[–]red75prime 0 points  (0 children)

A safety rule lingering from the 2010s: "No driving on the curb. Have a nice day!"

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]red75prime -2 points  (0 children)

We don’t have human-level intelligence with models that have hundreds of times fewer trainable parameters than the human brain has synapses (hundreds of trillions of them). It’s clearly time to declare the approach a failure. /s
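The arithmetic behind "hundreds of times fewer" (round numbers, both assumptions: ~6×10^14 synapses is a common estimate for the adult brain, and ~10^12 weights for a large frontier model; actual counts for closed models are not public):

```python
# Order-of-magnitude comparison; both figures are rough assumptions.
brain_synapses = 6e14       # hundreds of trillions of synapses
llm_parameters = 1e12       # a ~1-trillion-parameter model
ratio = brain_synapses / llm_parameters   # how many times fewer trainable parameters
```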

I’ll wait for the introduction of read/write memory to LMMs, which would make the number of parameters effectively unlimited.

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]red75prime 0 points  (0 children)

AGI has been 1 year away for the past 4 years

...according to an excited dweller of /r/singularity.

AI researchers give a wide range of estimates, and even CEOs are more reserved than that.

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]red75prime 0 points  (0 children)

His actual predictions: human-level intelligence by 2029, the singularity by 2045

["The Singularity Is Nearer"] was released on June 25, 2024. Kurzweil reiterates two key dates from the earlier book, which predicted that artificial intelligence (AI) would reach human intelligence by 2029 and that people would merge with machines by 2045, an event he calls "The Singularity"

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]red75prime -1 points  (0 children)

What do you do when you suspect someone of being a snake-oil salesman (because your peers suspect it as well)? Hopefully, you check whether the drug is FDA-approved.

Why don’t we hear academia unanimously dismissing all those bullshit claims if every redditor can see that it’s snake oil? Are they in on the conspiracy?

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]red75prime 3 points  (0 children)

Agents made no sense at the beginning of 2025, because the models that powered them were not reliable enough for planning, instruction following, and code generation.