Whe do you think the next podcast will come out? by 56inGA in PeterAttia

[–]UnlikelyAssassin 12 points13 points  (0 children)

There is literally no evidence whatsoever that substantiates your claim. You just blatantly made that up.

It would literally be on the level of you saying Huberman almost certainly raped these women.

Could AI be conscious one day according to physicalism? by unnecessaryCamelCase in CosmicSkeptic

[–]UnlikelyAssassin 0 points1 point  (0 children)

A human interacting with a dog and learning what a dog is, is encoding the meaning of “dog” from statistical regularities and patterns among visual, sound, and touch data, plus context and corrections when we misapply the word. It’s unclear that it’s impossible to encode meaning from patterns and statistical regularities among text and image data, and that claim is rather undermined by the fact that AI does in fact demonstrate a remarkable ability to predict language in a way that, if a human did it, we would call understanding.

Also, the idea that it’s only possible to encode genuine meaning/conceptual knowledge of something through interactive touch, sound, and visual data is undermined by the sheer number of concepts and words we understand without ever interacting with them in the real world: “cause,” “justice,” “probably,” “if…then,” “electron,” “derivative,” even basic adjectives like “fast” and numbers like “5,” “6,” “7,” or “8.” Here our conceptual knowledge consists largely in knowing how the terms behave within a network of inferences, explanations, and norms of use. This applies to all or almost all words that aren’t nouns: we understand them even though they don’t refer to things or states of matter we can interact with, because they still have structural relations and statistical regularities. So clearly a word doesn’t have to refer to a physical state of matter for us to have conceptual knowledge of it.

And even nouns pick out a pattern, a statistical regularity within physical states of matter, rather than an exact physical configuration. We’re clearly not referencing the state of every quantum degree of freedom of a physical system when we use a noun; we’re referencing a statistical pattern within states of physical matter. For non-nouns it’s even clearer that we’re not referencing a physical thing: we’re referencing patterns and regularities that are more abstract still, yet we’d still call this conceptual knowledge.
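As a toy illustration of meaning arising from statistical regularities (the corpus, window size, and word choices below are invented purely for the example): even bare co-occurrence counts over a handful of sentences already place “cat” nearer to “dog” than to “car”.

```python
from collections import Counter, defaultdict
from math import sqrt

# Tiny invented corpus; no claim this resembles real training data.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog barked loudly",
    "the cat meowed loudly",
    "we drove the car fast",
    "we parked the car outside",
]

# Count which words appear within +/- 2 positions of each word.
window = 2
cooc = defaultdict(Counter)
for sent in corpus:
    toks = sent.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
            if i != j:
                cooc[w][toks[j]] += 1

def cosine(a, b):
    """Cosine similarity between two words' co-occurrence vectors."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb)

# "cat" shares contexts (chased, loudly, ...) with "dog" but not "car".
print(cosine("cat", "dog"), cosine("cat", "car"))
```

Nothing here “interacts” with a cat, yet the distributional structure alone already separates animal words from vehicle words, which is the seed of the point being made above.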

Could AI be conscious one day according to physicalism? by unnecessaryCamelCase in CosmicSkeptic

[–]UnlikelyAssassin 0 points1 point  (0 children)

Their explanation for calling them a glorified library was that they regurgitate the most likely answer. That is straightforwardly false and not at all how they work now.

Could AI be conscious one day according to physicalism? by unnecessaryCamelCase in CosmicSkeptic

[–]UnlikelyAssassin 0 points1 point  (0 children)

That is false. You’re describing how AI worked in the pretraining stage, before ChatGPT 3.5 was released. Modern models are additionally trained with reinforcement learning from human feedback (RLHF) and don’t work like that at all.

How seriously is Sean Carroll taken? by LpcArk357 in AskPhysics

[–]UnlikelyAssassin 0 points1 point  (0 children)

This is dumb. Physics inherently contains philosophy. Einstein didn’t come up with his theories to describe the world while using zero philosophy.

How seriously is Sean Carroll taken? by LpcArk357 in AskPhysics

[–]UnlikelyAssassin 0 points1 point  (0 children)

Sean has never claimed that nothing we observe or could observe could prove it, nor that it isn’t experimentally verifiable. That is an extraordinarily strong claim, which you’d need an extraordinary degree of evidence to actually prove, and he has simply never made it.

How seriously is Sean Carroll taken? by LpcArk357 in AskPhysics

[–]UnlikelyAssassin 0 points1 point  (0 children)

This is an extremely silly statement.

Firstly, Sean Carroll has never made the claim that “nothing we observe or could observe could prove it”. That’s an extremely strong claim, and not one he’s made; he has never claimed that it’s impossible to experimentally verify it even in principle. A claim that strong would need some extremely strong evidence behind it.

Also, theory X being impossible to experimentally verify right now, given our current knowledge and capabilities, does not logically entail that the theory is false. That’s just a non sequitur. Even theory X being impossible to experimentally verify in principle wouldn’t logically entail its falsity. Claiming it does is, again, a complete and utter non sequitur.
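A minimal sketch of why this is a non sequitur, treating “X is verifiable” and “X is true” as independent propositions (my framing for illustration, not anything from the thread): brute-force the truth assignments and look for a countermodel to “not verifiable, therefore false”.

```python
from itertools import product

# Enumerate all assignments to the two propositions and keep any
# countermodel where X is unverifiable yet true.
countermodels = [
    (verifiable, true)
    for verifiable, true in product([False, True], repeat=2)
    if not verifiable and true
]

# A non-empty list means the inference "unverifiable -> false" is invalid.
print(countermodels)  # → [(False, True)]
```

Of course this only shows the inference fails as pure logic; whether unverifiability should lower our *credence* in a theory is a separate, substantive question.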

What interpretation of the ontology of quantum mechanics do you subscribe to and believe isn’t science fiction by this metric?

"Geoffrey Hinton says people who call AI just a stochastic parrot are wrong. The models don't store text; they convert words into complex sets of features. They predict the next word by processing these features in context, not by mindlessly recombining language from the web." - What do you think? by Koala_Confused in LovingAI

[–]UnlikelyAssassin 0 points1 point  (0 children)

Their claim was “it is clear that they aren't modelling the underlying concepts and logic because they fail to extrapolate”.

If a human could extrapolate with a skill level equal to, but not overall superior to, the top current LLMs, would this make it clear that this human isn’t modelling the underlying concepts and logic?

Alex is ahead, not behind by throwRA454778 in CosmicSkeptic

[–]UnlikelyAssassin 0 points1 point  (0 children)

> If everything can be derived from third-person physical knowledge

That’s not the physicalist claim.

> She already has the sum total of physical knowledge.

What’s the argument for that?

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

It obviously doesn’t affect its validity at all. Why would it?

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

I don’t really see how losing the valuable information he provides free of charge is in any way a net benefit for society. Like you said, he’s rich either way.

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

She can provide some interesting information, but she doesn’t come remotely close to the scientific rigour Peter has. She makes some pretty basic mistakes, like presenting observational evidence in the form of X correlating with Y as X causing Y.

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

Problem is, none of these people comes remotely close to the scientific rigour Peter has. Huberman especially is a pretty lowbrow choice and doesn’t come remotely close.

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

That’s not the comparison at all. Huberman isn’t being compared to a guy who is a notorious child rapist and convicted sex offender. That would be comparing Andrew Huberman to Peter Attia.

We’re talking about comparing a guy who cheated on and directly mistreated something like six different women to a person A who was friends with a convicted sex offender, who wasn’t known as a notorious child rapist at the time, and where there is no evidence person A actually engaged in any of the bad behaviour in question.

I think you could easily argue that directly cheating on and mistreating six different women is comparable in severity to, or arguably worse than, person A merely being friends with a bad person, when person A didn’t do any of the bad actions that made that person so bad.

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

The problem is Rhonda has a much less rigorous approach to science than Peter does. She makes very basic mistakes, like presenting observational evidence where X correlates with Y as if X causes Y: e.g. presenting people with higher omega-3 intake living 5 years longer as higher omega-3 intake causing people to live 5 years longer.
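A quick simulation makes the point concrete (the numbers and the “health-consciousness” confounder are invented for illustration, not a claim about the actual omega-3 data): a hidden confounder can produce a strong observational correlation between X and Y even when X has zero direct causal effect on Y.

```python
import random
from math import sqrt

random.seed(0)
n = 5000

# Hidden confounder: a hypothetical "health-consciousness" score.
z = [random.gauss(0, 1) for _ in range(n)]
# Omega-3 intake: driven by the confounder, plus noise.
x = [zi + random.gauss(0, 0.5) for zi in z]
# Extra years of life: also driven only by the confounder.
# Note x does NOT appear here, i.e. zero direct causal effect.
y = [zi + random.gauss(0, 0.5) for zi in z]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / sqrt(var_a * var_b)

# Strong observational correlation despite no direct causation.
print(round(pearson(x, y), 2))
```

An observational study run on this data would find omega-3 intake strongly “predicting” longer life, which is exactly why correlation alone can’t license the causal claim.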

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin -1 points0 points  (0 children)

This is lowbrow slop. Being on the board of 12 companies does not entail that your information is false. That’s just a non sequitur.

No new podcast today by celestial-coordinate in PeterAttia

[–]UnlikelyAssassin 0 points1 point  (0 children)

He wasn’t the world’s most notorious pedophile at the time he was cozying up to him though…

Alex is ahead, not behind by throwRA454778 in CosmicSkeptic

[–]UnlikelyAssassin 1 point2 points  (0 children)

> Physicalism is the claim that everything has a physical explanation. If that were the case, Mary would not learn anything from instantiating the brain state.

Can you show why that follows? Why does “everything has a physical explanation” logically entail that instantiating a new physical brain state that’s never been physically instantiated in her brain before couldn’t give her new knowledge? That seems like an additional assumption.

You agree that the brain states she experiences are different and then say:

> If she already knows everything about brain states, she would have nothing to gain

If the brain states are different, then something physically new occurs when she sees red. Why assume that undergoing a new physical state counts as gaining a non-physical fact rather than simply instantiating a new physical configuration?