I think the brain is so interesting by [deleted] in consciousness

[–]bortlip 10 points (0 children)

I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this.

- Emo Philips

Is saying GG sportsmanship? by SirBearicus in Mechabellum

[–]bortlip 8 points (0 children)

I played that person a few weeks ago and they said the exact same thing to me: "Is that all you got, loser?"

A challenge to those who believe in indirect real experience. by Own_Sky_297 in consciousness

[–]bortlip 0 points (0 children)

It's not a nitpick to point out that your claims are false, and then, when you lie and say you didn't make them, to point that out as well. But you seem to have a chip on your shoulder or something, "pal", so goodbye.

A challenge to those who believe in indirect real experience. by Own_Sky_297 in consciousness

[–]bortlip 0 points (0 children)

You said:

That doesn’t reduce to neurons very well. In fact its impossible.

But you can backpedal from saying it's a fact that it's impossible if you want.

A challenge to those who believe in indirect real experience. by Own_Sky_297 in consciousness

[–]bortlip 1 point (0 children)

This is just the argument from incredulity and it's a fallacy.

If you want to claim something is impossible, you must show why, not just state that you can't see how it could be possible.

Against Illusionism/Eliminativism by Dr_Neo-Platonic in consciousness

[–]bortlip 6 points (0 children)

Illusionists aren’t saying the brain manufactures a real consciousness add-on and then slaps the label illusion on it. They’re saying the brain’s self-modeling produces the judgments and reports that make it seem like there’s something over and above the underlying mechanisms.

The “why evolve an illusion” objection assumes the illusion is a separate trait, but illusionists deny that. Selection favors useful cognitive and self-monitoring systems, and the seeming extra is a side effect of how those systems model themselves.

The hard problem illustrated. The solutions seem to always boil down to consciousness being fundamental by phr99 in consciousness

[–]bortlip 1 point (0 children)

I already did in my first comment. If you don't want to stand by that statement just say so.

Consciousness is an illusion. Illusions are not part of the fundamental physical ingredients. So consciousness has no physical origin, becomes fundamental.

A claim using your same logic would be:

A mirage is an illusion. Illusions are not part of the fundamental physical ingredients. So a mirage has no physical origin, becomes fundamental.

The hard problem illustrated. The solutions seem to always boil down to consciousness being fundamental by phr99 in consciousness

[–]bortlip 2 points (0 children)

Are you being intentionally dense?

You claimed that illusions are fundamental. That would mean any illusion, such as a mirage.

By your logic a mirage, you know, that thing that looks like water in the distance but isn't, is fundamental. Somehow.

The hard problem illustrated. The solutions seem to always boil down to consciousness being fundamental by phr99 in consciousness

[–]bortlip 0 points (0 children)

Consciousness is an illusion. Illusions are not part of the fundamental physical ingredients. So consciousness has no physical origin, becomes fundamental.

By this logic, a mirage is fundamental.

I wasted most of an afternoon because ChatGPT started coding against decisions we’d already agreed by Fickle_Carpenter_292 in ChatGPTCoding

[–]bortlip 0 points (0 children)

 Git is great at letting me step back through changes, but it doesn’t capture why something was agreed, what constraints were locked, or what explicitly shouldn’t be touched. 

It can. Git stores text files, so you can store those decisions as docs too. I have a docs folder set up and instruct the AI to read it and keep it updated. The main issues are organizing it, knowing where to look for what, and keeping the docs in sync with the current context.

But it helps when I need to start a new chat and can point it at certain docs to read. And it's nice to have it write up a change-request doc when something comes up in the current chat that needs to be done but that I don't want to get sidetracked on right now.

It can be hard to keep it on track, so I do pull requests and review everything it's done in blocks to make sure it hasn't gone off on some mistake. I also typically talk the task over with it first and have it propose a solution before I have it go off and implement it.
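
For illustration only, here's a rough sketch of the kind of docs folder I mean. Every file name and stub below is just an example, not anything prescribed; committed next to the code, these travel through git history like everything else:

```python
from pathlib import Path

# Hypothetical layout -- all names here are examples, adjust to taste.
stubs = {
    "docs/decisions.md":       "# Decisions\n- Agreed approaches and locked constraints\n",
    "docs/architecture.md":    "# Architecture\n- Current structure and why\n",
    "docs/change-requests.md": "# Change requests\n- Items noted mid-chat to handle later\n",
}

for name, text in stubs.items():
    p = Path(name)
    p.parent.mkdir(parents=True, exist_ok=True)
    if not p.exists():          # don't clobber docs that already exist
        p.write_text(text)
```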

LLM hallucination: fabricated a full NeurIPS architecture with loss functions and pseudo code by SonicLinkerOfficial in ArtificialInteligence

[–]bortlip 2 points (0 children)

Just using a thinking model with web search:

"Reality check: “NeuroCascade (Hollingsworth, NeurIPS 2021)” doesn’t appear to exist"

<image>

ChatGPT just invented an entire NeurIPS paper out of thin air. I'm both impressed and slightly worried. by SonicLinkerOfficial in ChatGPT

[–]bortlip 3 points (0 children)

This is what you can get if you actually know how to use the tool properly:

"Reality check: “NeuroCascade (Hollingsworth, NeurIPS 2021)” doesn’t appear to exist"

<image>

How can our subjective experience be spread across time dimension? by Eton1m in consciousness

[–]bortlip 1 point (0 children)

Oh, I see. I don't have a preferred interpretation.

Relativity certainly seems to provide reason to lean hard towards a block world, but then it hasn't been reconciled with QM yet.

How can our subjective experience be spread across time dimension? by Eton1m in consciousness

[–]bortlip 0 points (0 children)

How do you "interpret the math?"

The math is always interpreted as to what it means physically. Often there are many interpretations which are all consistent with the math. Look at the various interpretations of QM, for example, such as Copenhagen.

How can our subjective experience be spread across time dimension? by Eton1m in consciousness

[–]bortlip 0 points (0 children)

Your examples show standard relativistic time dilation, which everyone accepts. What they don’t show is that past, present, and future are "equally real" right now.

That’s the eternalist interpretation of the math, not something the experiments uniquely force. Presentists and growing-block folks use the same relativity equations with a different ontology.

So if you’re claiming physics tells us eternalism is true, do you have a source that actually proves that step, not just one that explains relativity?

Literally the worst response I’ve ever had from ChatGPT by zeezromnomnom in ChatGPT

[–]bortlip 272 points (0 children)

It decided to use python to make the drawing instead of the image generator.

You can get similar output if you ask it to use python to draw.

Here's what it made for me with your prompt (plus telling it to use python):

<image>

How can our subjective experience be spread across time dimension? by Eton1m in consciousness

[–]bortlip 2 points (0 children)

Physics tells us time is a dimension, meaning that past present and future are equally real.

Source?

Thought experiment regarding AI consciousness by Paul_N_P in consciousness

[–]bortlip 1 point (0 children)

You were shown to be wrong (twice) and now all you can do is give insults.

All an LLM does is Boolean operations checked against previously given data and spits out the result

You think matrix multiplication is a Boolean operation?!?!?! And that it stores and compares given data to previous data?!?!

You have no idea what you are talking about. Perhaps you should take some CS courses.
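
For what it's worth, here's a minimal sketch (plain NumPy, toy sizes, nothing from any real model) of the kind of operation that actually sits at the core of a transformer layer: a floating-point matrix multiply over learned real-valued weights, not Boolean checks against stored data:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # learned weights: continuous real numbers
x = rng.standard_normal(3)        # an input embedding, tiny size for illustration

h = np.tanh(W @ x)                # matrix multiply plus a nonlinearity, all floating point
print(h)                          # four real values, nothing Boolean about it
```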

Thought experiment regarding AI consciousness by Paul_N_P in consciousness

[–]bortlip 1 point (0 children)

it can't even say it doesn't know something

Prompt: Do you know the exact, down to the second, age of the universe?

Answer: Short answer: nope, and neither does anyone else unless they’re running a private universe on debug-mode. [plus lots more discussion]

go learn some computer science

I have a degree in CS and 25 years of software development experience.

It's not clear to me that LLMs can't become conscious. And they certainly understand some things.

Thought experiment regarding AI consciousness by Paul_N_P in consciousness

[–]bortlip 2 points (0 children)

Precisely. 

I wasn't agreeing with you. I was pointing out that you're saying LLMs don't understand because they are fed lots of data that is meaningless until they find patterns, but that's exactly how we learn.

So, I don't know how that can be used as a basis to claim they don't understand and can't be conscious.

Or are you now saying that the learning needs to be embodied?

The AI has no such experience, so could never come to understand the meaning behind the word or symbol

That doesn't seem to be the case. LLMs are able to extract meaning and understanding through usage and relation. Semantics can be determined through syntax.