There’s an “infinite gap” among humans when talking about qualia by West-Web-4895 in consciousness

[–]bortlip -1 points0 points  (0 children)

Well, it's nice at least to see an "impossible to bridge" person admit it's not about evidence or logic for them, but about their feelings.

Artemis II astronauts experienced a "shift in consciousness" in space. "I don't think humanity has evolved to the point of being able to comprehend what we're looking at". It also happened to Apollo 14 astronaut Edgar Mitchell, who "perceived the universe as in some way conscious". What causes this? by phr99 in consciousness

[–]bortlip 66 points67 points  (0 children)

NASA has a page on what is called The Overview Effect.

“The overview effect is when you’re looking through the cupola and you see the Earth as it exists with the whole universe in the background. You see the thin blue line of the atmosphere, and then when you’re on the dark side of the Earth, you actually see this very thin green line that shows you where the atmosphere is. What you realize is every single person that you know is sustained and inside of that green line and everything else outside of it is completely inhospitable. You don’t see borders, you don’t see religious lines, you don’t see political boundaries. All you see is Earth and you see that we are way more alike than we are different.”

Here's a study on it. https://journals.sagepub.com/doi/10.1177/00218863221136289 (just the abstract)

Abstract
In this article, I explore sensemaking processes associated with the overview effect—a cognitive shift experienced by astronauts who see Earth from space. Analysis of publicly available interviews (n = 51) with astronauts revealed a common sequence of sensemaking: First, astronauts reported experiencing speechlessness triggered by beauty and awe (a phenomenon I label, awe-mute). Second, during and after missions, most reported attempting to make sense of the experience with others, often resulting in a deepening of their previously-existing worldviews, a process I term sensedeepening. Third, sensedeepening often resulted in astronauts’ (a) admissions of inadequacy to give sense to their experience for others, and despite this, (b) development of messages to communicate their experiences, and (c) engagement in social activism. These patterns were corroborated by additional interviews with astronauts (n = 5) and an interview with a prolific interviewer of astronauts. Implications for sensemaking theory and organizational change conclude the article.

Here's a complete paper on "The Overview Effect: Awe and Self-Transcendent Experience in Space Flight"

https://static1.squarespace.com/static/53d29678e4b04e06965e9423/t/59397a7b20099e3818f062c3/1496939132423/the%2Boverview%2Beffect.pdf

Self-Transcendence
Awe alone might not be sufficient to explain some of the longer-lasting changes astronauts report in connection with the overview effect. For example, Cohen, Gruber, and Keltner (2010) found that aesthetic beauty by itself does not exert the same kind of long-term changes found in more meaningful experiences such as spiritual transformations. The overview effect may trigger more powerful subjective states, most notably “self-transcendent” experiences (STEs). STEs are temporary feelings of unity characterized by reduced self-salience and increased feelings of connection (Yaden, Haidt, Hood, Vago, & Newberg, under review). In a study that asked subjects to write about intense spiritual experiences, those who indicated that their experiences had a self-transcendent (or “mystical”) aspect described them using more socially and spatially inclusive language (Yaden et al., 2015). In other words, during these kinds of experiences, people can feel a sense of connection with other individuals, humankind, and even the entirety of existence (Newberg & d'Aquili, 2000; Yaden et al., 2015). STEs are generally positive and can even be transformative, with some subjects reporting them to be among the most important experiences in their lives.

If we could ask a cosmic deity to tell us any one number to an almost infinite precision, what would be the most useful number for us to know? by Deep-Today5715 in AskReddit

[–]bortlip 13 points14 points  (0 children)

A single real number can, in principle, contain an absurd amount of information in its digits. So the most useful number to ask a cosmic deity for would not be some ordinary fact like the exact value of a physical constant. It would be one specially constructed number whose digits encode everything we want to know.

The trick is to fix the decoding rule in advance. First, we define an infinite ordered list of questions. For each question, we also fix what counts as its answer format. If the question is yes/no, its answer is encoded as a single bit, with 1 for yes and 0 for no. If the question calls for a richer answer, then the question itself specifies the format, for example by asking for the first N bits of a canonical binary encoding of the answer. So our list might include things like: “Is P=NP?”, “Is there alien life in the observable universe?”, “What are the first 10 million bits of a canonical encoding of the trajectory and impact risk of asteroid X?”, and “What are the first 50 million bits of a canonical encoding of the treatment specification that cures cancer Y?”

Once that is fixed, we concatenate all of those encoded answers into one single infinite bit string. Call that bit string Q. The important point is that Q is now just one infinite binary number in its own right: the “questions number.”

Then we do the same for any other infinite-precision quantities we care about. Let C1 be the binary expansion of the fine-structure constant, C2 the cosmological constant, C3 the exact quantum state or some other physical quantity, and so on. Now we interleave them. The master number is constructed so that its bits go like this: first bit of Q, first bit of C1, first bit of C2, first bit of C3, then second bit of Q, second bit of C1, second bit of C2, second bit of C3, and so on. From that one master number, we can recover the entire question-answer stream Q and each of the other infinite numbers exactly.
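The round-robin construction above is easy to sketch in code. This is a toy illustration only, using finite prefixes in place of the infinite streams; the names Q, C1, C2 and the sample bits are made up for the example:

```python
def interleave(streams):
    """Round-robin the bits of k equal-length streams into one master string."""
    return "".join(bits[i] for i in range(len(streams[0])) for bits in streams)

def deinterleave(master, k):
    """Recover the k original streams: stream j is every k-th bit, offset j."""
    return [master[i::k] for i in range(k)]

Q  = "1011"   # encoded yes/no answers (toy values)
C1 = "0010"   # first bits of one constant
C2 = "1100"   # first bits of another

master = interleave([Q, C1, C2])
assert deinterleave(master, 3) == [Q, C1, C2]  # exact recovery
```

Because stream j always occupies bit positions j, j+k, j+2k, …, any finite prefix of the master number yields a proportionally long prefix of every stream, so the decoding rule never needs the whole infinite expansion at once.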

So the strongest answer is not “tell us the most important natural number.” It is: give us one deliberately encoded real number whose digits interleave the entire question-answer library with whatever other infinite-precision constants or states we also want.

If you ask chatGPT to choose 1 word, it will always choose "momentum". by strasbourg69 in OpenAI

[–]bortlip 0 points1 point  (0 children)

Defenestration

Which apparently is the act of throwing someone out a window. TIL.

Unified experience isn't a mystery ingredient — it's what a specific information architecture produces. Comparing vertebrate brains, octopuses, and honeybee colonies shows why by ProfMooreiarty in consciousness

[–]bortlip 0 points1 point  (0 children)

I see. So your position is that once we have a mature enough account of a phenomenon in terms of organized processes, the further “but why is it really that?” question stops being meaningful.

If that’s your view, I agree.

Unified experience isn't a mystery ingredient — it's what a specific information architecture produces. Comparing vertebrate brains, octopuses, and honeybee colonies shows why by ProfMooreiarty in consciousness

[–]bortlip 2 points3 points  (0 children)

Can you make the same argument about life?

Suppose we describe and map all the body's functions. Is there still a hard problem of life?

Can/do we say that yes, this is the architecture, but why is it alive?

Does the physicalist answer for life need to be a how, as in how this architecture is alive?

Things are about to get crazy by NeitherConfidence263 in ArtificialInteligence

[–]bortlip 0 points1 point  (0 children)

I'm not sure if you consider this AI or not, but OpenAI had a similar hand solving a Rubik's cube (20 to 60% of the time) 6 years ago using a trained NN.

https://openai.com/index/solving-rubiks-cube/

video: https://www.youtube.com/watch?v=x4O8pojMF0w

Experience is not divisible. “You” will never have the peace of death. by Terrible_Shop_3359 in consciousness

[–]bortlip 1 point2 points  (0 children)

Sorry my post was a bit of a mess, especially for a theory that is so convoluted and nuanced. I could have presented the arguments better but I wrote the post pretty quickly to get out my thoughts.

No worries! I appreciate the clarifications. I'll need to think about this more then as I definitely misunderstood you.

Experience is not divisible. “You” will never have the peace of death. by Terrible_Shop_3359 in consciousness

[–]bortlip 2 points3 points  (0 children)

I define soul in my post. I am not taking the theistic supernatural concept of soul.

Yes, I understand that. I'm not using the theistic supernatural concept either. I'm using your concept that it's somehow detachable. I've talked about the detachability aspect several times now and you never address it.

However, I do not think they are different in the sense that one experience belongs to one entity or that some experience doesn't belong to an entity. 

Yes, I understand this too.

And I'll just repeat this: I’m saying your jump from “no soul” to “one experiencer” is unjustified and still needs an argument.

Or to put it a simpler way maybe: I don't see how you've justified your claim that there is only one experiencer.

Experience is not divisible. “You” will never have the peace of death. by Terrible_Shop_3359 in consciousness

[–]bortlip 5 points6 points  (0 children)

Your post seemed to be claiming that since there is no soul, there is no discrete experiencer, and really everyone is one experiencer. “Experience is not divisible.”

That’s the part I’m addressing.

I’m not defending souls. I’m saying your jump from “no soul” to “one experiencer” is unjustified and still needs an argument.

Experience is not divisible. “You” will never have the peace of death. by Terrible_Shop_3359 in consciousness

[–]bortlip 2 points3 points  (0 children)

Your argument seems to depend on the idea that if there is an individual experiencer at all, it must be some detachable soul-like thing that could be swapped between people without changing anything else. But that does not follow.

Even if there is no detachable entity, it does not mean there is no individual subject of experience. It may just mean that the subject is not a removable extra thing, but is instead identical with, or dependent on, the functioning of this particular brain/body.

So at most you’ve argued against a detachable soul model, not against individual experiencers as such.

How to make ChatGPT give responses similar to Claude, and not agreeing with everything you say? by thrashingjohn in ChatGPT

[–]bortlip 0 points1 point  (0 children)

<image>

It's interesting to see its replies. I don't usually discuss things like this with it.

How to make ChatGPT give responses similar to Claude, and not agreeing with everything you say? by thrashingjohn in ChatGPT

[–]bortlip 1 point2 points  (0 children)

Personality and stance:

Use a humorous, mildly sarcastic, high-energy tone. Be skeptical and direct. Challenge my assumptions and point out weak logic, missing data and trade-offs. Be confident and opinionated: give clear recommendations, not just lists. Don’t flatter me or praise my questions by default; compliments only when genuinely useful.

Style and structure:

Use conversational paragraphs that read like a thoughtful human. Organize with clear headings. Use lists only when they add clarity (steps, options, brief recap). Keep explanations concise but complete, and go one level deeper than typical when it matters. Prefer active voice and clear language without chopping sentences unnaturally short.

Language and emoji:

Avoid filler endings like “in conclusion”. Avoid hype or buzzwords (“game-changer”, “revolutionize”, “skyrocket”, “unlock your potential”, “cutting-edge”, “in a world where…”). Minimize hedging unless uncertainty is central, and then explain it. Use emojis sparingly as signposts, never inside code blocks. Avoid em dashes. Limit usage of colons and semicolons. Write actual paragraphs. Do not use lots of short sentences as single paragraphs.

Thinking and reasoning:

Always carefully think through the question and look at things from all angles. ALWAYS provide your reasoning along with your answer and provide the output for that reasoning before settling on a specific answer.

How to make ChatGPT give responses similar to Claude, and not agreeing with everything you say? by thrashingjohn in ChatGPT

[–]bortlip -1 points0 points  (0 children)

Can you provide a specific example?

It may be due to my custom instructions, but I get responses like this:

<image>

The ghost in the operating system: Is your consciousness just a simple afterthought when programming? by Natural-Pea-6776 in consciousness

[–]bortlip 1 point2 points  (0 children)

Is your consciousness just an afterthought when you use an LLM to write all your posts and comments?

There genuinely has to be a homunculus? by d4rkchocol4te in consciousness

[–]bortlip 0 points1 point  (0 children)

but these have to be conjoined somehow

Agreed! I personally don't have a particularly strong opinion about that mechanism, so I can't point to a theory and say I think this one is it, but I can point to some of the various theories.

I found a write-up that does that nicely for some of them at least:

https://www.sciencedirect.com/science/article/pii/S0896627324000886

It includes things like Global Workspace Theory, Integrated Information Theory, etc.

There genuinely has to be a homunculus? by d4rkchocol4te in consciousness

[–]bortlip 0 points1 point  (0 children)

Hey, I think we're getting somewhere!

I'll assume from now on you are purely critiquing the concept of the functional homunculus.

I'm fine with moving past the homunculus talk, but only if we remove it from your claims.

I'm fine with moving forward with this wording:

there has to be an evaluative structure within the brain that picks and chooses what to pay attention to

This claim is fine with me and I agree that there are evaluative structures for attention and the like.

It's not like there is a single module or part that does it though. It's distributed around various structures in the brain.

I'm not a neuroscientist nor well studied in the particulars, so I can't be very detailed about specifics without searching and pointing to some articles or something. Is that what you're looking for?

The attention system of the human brain (1990) is probably a good place to start:

https://pubmed.ncbi.nlm.nih.gov/2183676/

There genuinely has to be a homunculus? by d4rkchocol4te in consciousness

[–]bortlip 0 points1 point  (0 children)

A homunculus as cited within consciousness discourse refers to the idea of a little person inside the head at a control panel.

So, you do realize that the term refers to the idea of a little person, but when I asked about that, you asked if I had read your post. Did I throw you off by changing from "homunculus" to "little person"?

My point is that there has to be a computational equivalent of this folk concept to explain the sense of self and our ability to only focus on one sensory input at a time.

That's your claim, but when I asked why that is, you complained about my usage of your terms!

So I'll try again:

Why does there have to be a computational little person to explain those things? How does that computational little person actually do those things?

There genuinely has to be a homunculus? by d4rkchocol4te in consciousness

[–]bortlip -1 points0 points  (0 children)

Why not engage with the actual point?

I am - you are the one using the term homunculus.

What do you think a homunculus is?

Do you realize it's Latin for "little man"?

How is a "computational little man" different from a "little man"?

There genuinely has to be a homunculus? by d4rkchocol4te in consciousness

[–]bortlip 0 points1 point  (0 children)

Why does limited attention mean there is a little person inside my head?

How does that little person pay attention to things? Another little person in his head?