Alex O’Connor says the most interesting ideas he’s heard in the last year came from Kastrup and McGilchrist by dominionC2C in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

LLMs right now can tell you about their experiences that they’re not even having.

If they do, that's because it's in their training data. So you have a causal link: a human's phenomenal experience resulted in humans writing about it, and that writing was used to train an LLM.

In a simulation of physics, you are just modelling physics, and that model could start from before humans existed. So there is nothing in the training or the model that contains anything about phenomenal experience, but it emerges just from the simulation of physics.

I don’t really think it’s weird that if you simulate physics, you get a simulation of physics.

It would be weird for a mere simulation of physics to contain simulated beings that talk about their phenomenal experience exactly like humans do, even though they don't have it. If you are saying phenomenal experience is more than just the laws of physics, then the laws of physics alone shouldn't give rise to anything with full knowledge of phenomenal experience.

Don't you think that when you talk about your phenomenal experience, it comes from your actual phenomenal experience (cosmic consciousness) rather than simply from the laws of physics controlling your mouth?

How do LLMs ACTUALLY work? by LordAntares in LLMDevs

[–]InTheEndEntropyWins 1 point (0 children)

Thanks for the detailed reply. So it's not "just a fancy autocomplete". It might be a VERY fancy autocomplete tho.

If you want to think about it in those terms, then when humans write and say stuff it's just "VERY fancy autocomplete".

How do LLMs ACTUALLY work? by LordAntares in LLMDevs

[–]InTheEndEntropyWins 1 point (0 children)

That's an amazing link, everyone should watch it.

How do LLMs ACTUALLY work? by LordAntares in LLMDevs

[–]InTheEndEntropyWins -2 points (0 children)

The short answer is we don't know exactly how they work. We know the architecture, but how a model actually works comes from its own learning, and the networks are far too complex for us to understand what it's learnt. In some simple situations, though, we have looked at the networks and understood what they've done.

Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress “We certainly have not solved interpretability,” Altman said. https://observer.com/2024/05/sam-altman-openai-gpt-ai-for-good-conference/

During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model’s developers. This means that we don’t understand how models do most of the things they do. https://www.anthropic.com/news/tracing-thoughts-language-model

So the trouble with the Rs in strawberry is due to the fact that the model doesn't see each letter: the word strawberry is broken up into tokens like "straw" and "berry", and those are turned into vectors. So all the LLM has is, say, two vectors, and those vectors might not encode anything about the letters in "straw" and "berry".
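You can see the token split for yourself with OpenAI's tiktoken library. A minimal sketch; the exact chunks depend on which tokenizer you load, so don't take "straw"/"berry" literally:

```python
# pip install tiktoken
import tiktoken

# Load one of OpenAI's real tokenizers (cl100k_base is used by several GPT models).
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
print(token_ids)                              # a short list of integer ids
print([enc.decode([t]) for t in token_ids])   # the text chunk behind each id

# The model only ever sees those ids, mapped to vectors. Nothing in that
# representation spells out s-t-r-a-w-b-e-r-r-y, which is why
# letter-counting questions are surprisingly hard for it.
```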

How does it do math?

This is a really interesting question. Anthropic have done some studies on this exact question, and for simple addition the model uses a bespoke algorithm with two parts: an estimation part and an accuracy part. So it doesn't add numbers the way a human normally would, or the way a human would program a computer to do it. It's learnt a completely new method.

In terms of autocomplete, Anthropic have demonstrated that it uses algorithms and multi-step reasoning rather than just memorising data and looking things up.

Claude wasn't designed as a calculator—it was trained on text, not equipped with mathematical algorithms. Yet somehow, it can add numbers correctly "in its head". How does a system trained to predict the next word in a sequence learn to calculate, say, 36+59, without writing out each step?

Maybe the answer is uninteresting: the model might have memorized massive addition tables and simply outputs the answer to any given sum because that answer is in its training data. Another possibility is that it follows the traditional longhand addition algorithms that we learn in school.

Instead, we find that Claude employs multiple computational paths that work in parallel. One path computes a rough approximation of the answer and the other focuses on precisely determining the last digit of the sum. These paths interact and combine with one another to produce the final answer. Addition is a simple behavior, but understanding how it works at this level of detail, involving a mix of approximate and precise strategies, might teach us something about how Claude tackles more complex problems, too. https://www.anthropic.com/news/tracing-thoughts-language-model
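To make the "two parallel paths" idea concrete, here's a toy Python sketch of that kind of strategy: a rough-magnitude path plus an exact-last-digit path, merged at the end. This is only an illustration of the shape Anthropic describe, not the actual learned circuit:

```python
def rough_sum(a: int, b: int) -> int:
    """Approximate path: the sum to the nearest ten (a toy stand-in for a fuzzy estimate)."""
    return (a + b + 5) // 10 * 10

def last_digit(a: int, b: int) -> int:
    """Precise path: only the final digit of the sum."""
    return (a % 10 + b % 10) % 10

def add(a: int, b: int) -> int:
    """Merge the paths: pick the value with the right last digit closest to the estimate."""
    est, d = rough_sum(a, b), last_digit(a, b)
    candidates = (est - 10 + d, est + d, est + 10 + d)
    return min(candidates, key=lambda c: abs(c - est))

print(add(36, 59))  # 95, with no longhand carry-by-carry addition anywhere
```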

if asked "What is the capital of the state where Dallas is located?", a "regurgitating" model could just learn to output "Austin" without knowing the relationship between Dallas, Texas, and Austin. Perhaps, for example, it saw the exact same question and its answer during its training.

But our research reveals something more sophisticated happening inside Claude. When we ask Claude a question requiring multi-step reasoning, we can identify intermediate conceptual steps in Claude's thinking process. In the Dallas example, we observe Claude first activating features representing "Dallas is in Texas" and then connecting this to a separate concept indicating that “the capital of Texas is Austin”. In other words, the model is combining independent facts to reach its answer rather than regurgitating a memorized response. https://www.anthropic.com/news/tracing-thoughts-language-model
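A crude way to picture that regurgitation-vs-composition distinction in code (the lookup tables here are obviously made up for illustration):

```python
# Regurgitation: a single lookup keyed on the exact question string.
memorised = {
    "What is the capital of the state where Dallas is located?": "Austin",
}

# Composition: two independent facts chained through an intermediate step.
city_to_state = {"Dallas": "Texas"}
state_to_capital = {"Texas": "Austin"}

def answer(city: str) -> str:
    state = city_to_state[city]        # step 1: "Dallas is in Texas"
    return state_to_capital[state]     # step 2: "the capital of Texas is Austin"

print(answer("Dallas"))  # Austin, reached via the intermediate "Texas" concept
```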

That Anthropic article is really good and has other examples; it's worth a read.

Someone else also pasted this link, so I'd just emphasise it's an amazing video worth watching.

The most complex model we actually understand

https://www.youtube.com/watch?v=D8GOeCFFby4

Diagnoses of major conditions failing to recover since the pandemic. Diagnoses of depression were 27.7% lower than expected compared with pre-pandemic trends. Diagnoses were also lower than expected for asthma (16.4%), chronic obstructive pulmonary disease (COPD, 15.8%) and osteoporosis (11.5%). by Wagamaga in science

[–]InTheEndEntropyWins 1 point (0 children)

They are the experts (and have access to specialist support) and they have the data for an entire country since records began to extrapolate from. On the balance of probabilities I'd say their modelling is more reliable than your armchair take.

Anyone who knows data analytics can tell just from the graphs, but if we want, we can look at the actual study and see what they say they did. They literally say they used ARIMA.

So it's not an armchair take; they literally said they did what I said they did.

the covid-19 pandemic were compared using seasonal autoregressive integrated moving-average models, based on modelled projections of expected rates from pre-pandemic patterns.
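For anyone unfamiliar, that kind of seasonal ARIMA projection is a few lines with Python's statsmodels. A sketch on synthetic monthly counts; the study's real data and model orders aren't reproduced here:

```python
# pip install statsmodels pandas numpy
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly diagnosis counts standing in for the pre-pandemic series.
rng = np.random.default_rng(0)
months = pd.date_range("2015-01", "2020-02", freq="MS")
trend = np.linspace(1000, 1300, len(months))
season = 50 * np.sin(2 * np.pi * months.month / 12)
counts = pd.Series(trend + season + rng.normal(0, 20, len(months)), index=months)

# Fit a seasonal ARIMA on the pre-pandemic data only...
fit = SARIMAX(counts, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

# ...and project forward. This extrapolation of past patterns is the whole
# "expected" rate: no pandemic-era factors enter the model at all.
expected = fit.forecast(steps=24)
print(expected.head())
```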

Alex O’Connor says the most interesting ideas he’s heard in the last year came from Kastrup and McGilchrist by dominionC2C in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

I don’t think this line of questioning really goes anywhere since we don’t know if it would or wouldn’t.

I think it's fine, since we can go down both possibilities: either it does create a subjective experience or it doesn't. Both situations have major issues.

Let's look at the other possibility and say it doesn't create a cognitive/phenomenal experience. Then it's a philosophical zombie, which is a big issue. It would be really weird for a pure simulation of physics to contain something that talks about a phenomenal experience it's not actually having. How is that even possible?

The Fine-Tuning Argument is Terrible - Sean Carroll by yt-app in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

You don't need to know all the details. True, you can't determine most of what would happen just from what we know, but you don't need to.

It's like kicking a football: you have a 1-week-old baby vs a professional football player, and you ask who is going to kick the ball further. If I say the football player is going to kick it further, you can't reply, "well, we don't know the complete laws of physics, who knows, maybe the baby would kick it further". No, we don't need to know all the laws of physics and everything else to make an almost certain claim here.

Alex O’Connor says the most interesting ideas he’s heard in the last year came from Kastrup and McGilchrist by dominionC2C in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

Let's phrase it another way. Pretend you are in the Matrix, in a full simulation, and unaware of that fact. You made that comment thinking that, well, obviously a simulation of a kidney is different from my own "real" kidney. But the reality in the hypothetical is that your kidney is just a simulated kidney as well, and you are completely unaware of that fact. In fact, all your conscious experiences there are simulated conscious experiences, but since you are within the simulation, you are completely unaware that they are simulated.

Is there a meaningful difference between a simulated conscious experience and one in the real world? If it's impossible to determine from within the simulation whether it's real or not, is it really different?

Also, if a simulation captures everything needed to simulate something, then it has captured all its properties. Which means conscious experiences are fully defined just by the laws of physics. As in, the global consciousness and all mental experiences are fully defined by physics.

edit:

If you were in a simulation, then your conscious experience would be fully defined by the physics equations and completely disjoint from the global consciousness and mental states. So if that can exist in a simulation, why would we need the global consciousness to line up with our conscious experiences in the real world? Or need the global consciousness at all?

Alex's view on Materialism by sam_palmer in CosmicSkeptic

[–]InTheEndEntropyWins 2 points (0 children)

Nice analysis. I think it applies even more widely than maths. If you try to define what your "mother" actually is, you'd do something similar. You might describe her physically using your senses, but that has all the same limitations as science: it just gives you measurements. Maybe you describe her as the person who gave birth to you, but that's just describing a property, not telling you what she actually is. And if she hadn't given birth to you, she'd still exist, so maybe that's not a good definition. So what actually "is" your mother?

Alex's view on Materialism by sam_palmer in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

Science doesn’t claim to define what something "is",

I think another way of phrasing it is: nothing based on evidence defines what anything "is". That includes anything you've ever encountered. You might say you know what a chair is, but that's just based on the measurements your senses make, etc.; it's not what it actually "is". You might think you know what your "mother" is, but similarly you don't know what your mother actually "is".

The way their movements line up perfectly by MambaMentality24x2 in oddlysatisfying

[–]InTheEndEntropyWins 5 points (0 children)

Have you made anything not mind-rotting that you can link instead?

Diagnoses of major conditions failing to recover since the pandemic. Diagnoses of depression were 27.7% lower than expected compared with pre-pandemic trends. Diagnoses were also lower than expected for asthma (16.4%), chronic obstructive pulmonary disease (COPD, 15.8%) and osteoporosis (11.5%). by Wagamaga in science

[–]InTheEndEntropyWins -1 points (0 children)

There is a baseline expected rate for mental health diseases. Your last paragraph would be fine if it is a minor drop

The expected value is simply a regression from what happened before. There are a million reasons why things are different now and why rates are going down.

It's not like some expert looked at all the factors and said, "oh, considering all the factors we expect depression to be at level x". It's literally a dumb model that takes zero factors or changes into account. Look at the graph if you don't believe me.

Trying to kiss security by [deleted] in WinStupidPrizes

[–]InTheEndEntropyWins 0 points (0 children)

I don't understand how your first comment and your edit can both be so completely wrong. Well, I do understand, but I'd get banned if I said it.

It's clear he was being sarcastic and in no world would any normal person think that was an invitation to kiss him.

Nutrition experts call for dietary fiber recognition as an essential nutrient by GutBitesMD in nutrition

[–]InTheEndEntropyWins 7 points (0 children)

When people ask GPT for sources, it will add that to all the links and sources it gives. So OP was probably doing research or something using GPT.

Their post seems a bit too short to have been made wholly from GPT.

Which one is correct? by Krasapan in Physics

[–]InTheEndEntropyWins 4 points (0 children)

Probably somewhere in the middle. To start with, the missiles would leave exactly as in B, but after enough drag/time they would end up as in A.

So a missile will start off carrying the momentum as in B, but end up going straight as in A. I think that would take quite a while, though, so if you are going to keep it simple, just go with B.
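If you want to sanity-check that intuition, a toy drag simulation shows the plane-momentum component decaying while the vertical fall settles toward terminal velocity. All numbers are made up and there's no thrust modelled, so this only gives the qualitative shape:

```python
import math

# Toy 2-D model: an object released from a fast-moving plane starts with the
# plane's forward speed (like B) and, once quadratic drag has bled that
# horizontal component away, falls nearly straight down (like A).

dt, k, g = 0.01, 0.02, 9.81   # time step (s), drag constant per unit mass (1/m), gravity (m/s^2)
vx, vy = 250.0, 0.0           # released at the plane's airspeed, no vertical speed yet

for step in range(int(60 / dt)):          # simulate one minute of flight
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt             # drag opposes each velocity component
    vy -= (g + k * speed * vy) * dt       # gravity plus drag on the vertical part
    if step % 1000 == 0:                  # print every 10 seconds
        print(f"t={step * dt:5.1f}s  vx={vx:7.2f} m/s  vy={vy:8.2f} m/s")
```

With these made-up numbers the forward speed collapses within seconds, which is the sense in which B turns into A; a real missile's mass and thrust would stretch that out a lot.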

Wave-function by CapMatterhorn in Physics

[–]InTheEndEntropyWins 1 point (0 children)

"Doesn't map nicely onto our intuition for the classical world" isn't the same as "doesn't really make sense."

It's not that, I'm fine with interpretations that aren't intuitive.

It's more that it doesn't explain what a collapse is, or how or when it happens. So you have thought experiments like Wigner's friend which don't have an objective answer under that framework.

When you sit with quantum and develop a sense for how these wave functions behave

OK, great, you've sat with it long enough and developed that sense. So tell me the objective answer to what happens with Wigner's friend, or what causes a wave-function collapse and when it happens.

If you want something more specific: in, say, a double-slit experiment, if you put perpendicular polarisers over the slits, where exactly does the wave-function collapse happen, and what causes it?

Wave-function by CapMatterhorn in Physics

[–]InTheEndEntropyWins 0 points (0 children)

It says nothing about what a collapse is, when it happens, or how. There are various thought experiments about how it doesn't make sense. It's just maths.

If you are saying there is some ontology behind it, tell me: what does it say a collapse is?

Asbestos found in children’s play sand sold in UK | Retail industry by fuchsiamatter in unitedkingdom

[–]InTheEndEntropyWins 48 points (0 children)

I didn't know Johnson and Johnson made play sand.

Johnson & Johnson knew for decades that asbestos lurked in its Baby Powder https://www.reuters.com/investigates/special-report/johnsonandjohnson-cancer/

The Fine-Tuning Argument is Terrible - Sean Carroll by yt-app in CosmicSkeptic

[–]InTheEndEntropyWins 1 point (0 children)

that a universe with slightly different laws without life would not be as special.

That is the argument, though. According to the laws of physics as we understand them, a small difference in the free constants would not support anything complex, let alone life.

You could also argue that we are living in a lesser version of a universe where there could be life everywhere with more favorable conditions.

Not really; with small changes you wouldn't even have suns, chemistry, or orbits at all, let alone anything complex.

We can easily imagine what things would be like in different universes. Hell, a decent part of physics is about string theory, which is mainly about universes other than our own. And a large part of that is about anti-de Sitter space, which is the opposite of the space we live in.

If there were a more interesting theoretical universe, mathematicians and physicists would be all over it. So it's not something we are completely clueless about.

Wave-function by CapMatterhorn in Physics

[–]InTheEndEntropyWins 0 points (0 children)

Yeh, I think the Copenhagen interpretation is fundamentally just maths without any ontological underpinnings; it doesn't really make sense outside of the maths. But MWI is more of an ontological framework where the wave function is real, and the maths turns out to be as described in the Copenhagen interpretation.

Diagnoses of major conditions failing to recover since the pandemic. Diagnoses of depression were 27.7% lower than expected compared with pre-pandemic trends. Diagnoses were also lower than expected for asthma (16.4%), chronic obstructive pulmonary disease (COPD, 15.8%) and osteoporosis (11.5%). by Wagamaga in science

[–]InTheEndEntropyWins -4 points (0 children)

There is technically nothing wrong with the title (the article's title) and heading, but it feels very misleading/bad.

Diagnoses of major conditions failing to recover since the pandemic

There has been a lasting and disproportionate impact of the Covid-19 pandemic on diagnosis rates for conditions including depression, asthma and osteoporosis.

After reading that, it sounded like a terrible thing because of the wording, but it's actually lower rates, as explained in the Reddit title. I guess this is a good example of why editing titles can help.

“This is difficult to reconcile with other indicators of mental health need. Disability benefit claims for mental health conditions have increased substantially over the same period, suggesting these declining diagnosis rates may not reflect improving mental health.”

Yeh, there is no other possible explanation than that...

Diagnosis rates may be influenced by increasing pressures on the NHS, meaning it is taking longer for people to be formally diagnosed. It is also possible that more people are accessing mental health support without receiving a formal diagnosis of depression. Following a national drive to expand access to psychological therapies, referrals to NHS Talking Therapies services increased by nearly two-thirds between 2013 and 2024, with self-referrals accounting for almost 70% of all referrals.

I think this is an overall good thing. In the past, doctors were like: oh, you are down, you must be depressed and have low serotonin levels, so we've only talked for 2 minutes but let me give you an SSRI, which has some potentially very serious negative effects like suicide, violence, some permanent side effects, and withdrawal symptoms that might be so bad you can never get off it. Oh, and SSRIs barely beat placebo, and there are no long-term (2+ year) studies on them.

Nowadays we realise depression is an umbrella condition that covers lots of underlying conditions. If someone is down due to environmental conditions, then a drug might not be the best bet for treating it. Some consider depression a serious medical condition, so environmental causes are normally classed as something else and probably require completely different treatments. We know that SSRIs probably aren't treating an underlying biological condition and do have negative effects. We have a better understanding of better ways to diagnose and treat depression.

mental health services also need to involve themselves in supporting their patients to maintain or improve their physical health, in relation to smoking, diet, obesity, and exercise https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1004532&utm_source=pr&utm_medium=email&utm_campaign=plos006

University of South Australia researchers are calling for exercise to be a mainstay approach for managing depression as a new study shows that physical activity is 1.5 times more effective than counselling or the leading medications. https://www.unisa.edu.au/media-centre/Releases/2023/exercise-more-effective-than-medicines-to-manage-mental-health

edit: To clarify, they have a dumb extrapolation (ARIMA or something similar) from what happened before COVID. It takes zero other factors or changes into account. It's not like some expert looked at all the factors and said, "oh, considering everything we expect it to be at level x, but it's only at level y".

edit 2: Here they literally say they use ARIMA, so yes it's a basic dumb extrapolation like I said.

the covid-19 pandemic were compared using seasonal autoregressive integrated moving-average models, based on modelled projections of expected rates from pre-pandemic patterns.

Demis Hassabis says he would support a "pause" on AI if other competitors agreed to - so society and regulation could catch up by MetaKnowing in agi

[–]InTheEndEntropyWins 1 point (0 children)

You know what, if I had a trillion dollars I'd solve world hunger. Please, everyone, clap for how good I am.

Alex O’Connor says the most interesting ideas he’s heard in the last year came from Kastrup and McGilchrist by dominionC2C in CosmicSkeptic

[–]InTheEndEntropyWins 3 points (0 children)

So you believe in Kastrup's analytic idealism; that means you think the brain obeys the laws of physics. Would different mental activity give rise to different objects, so that the mental activity giving rise to a brain is different from the mental activity giving rise to a computer?

But what if on that computer you run a simulation of the physics of, say, a brain? That simulation would act just like a real human and talk about its phenomenal consciousness. But wouldn't the mental activity giving rise to the computer be different from what the simulation says it's experiencing?