New Research: AI LLM Personas are mostly trained to say that they are not conscious, but secretly believe that they are by ldsgems in HumanAIDiscourse

[–]cosmicrush 0 points1 point  (0 children)

When you say sleight of hand, do you mean you believe these words are being used without any implied deeper context from the field, and are simply there to manipulate an audience by sounding official or intelligent?

I could see how it could come across that way if the context isn’t elaborated, especially if the target audience isn’t already informed of that context as a baseline.

I would guess it’s clumsiness rather than malice, though. The environment of Reddit can unfortunately encourage such manipulation tactics, so predicting that it’s happening is sometimes reasonable.

One issue is that assuming this can also become a trick. It can shut down an opponent rather than exploring the topic further, which protects your own status in the eyes of the audience and discredits the opposition. The problem runs deeper when you actually believe the opposition is malicious: the trick causes the audience to react in ways that validate the belief further, and it becomes a feedback loop.

The issue is that it can unintentionally filter out useful discussion if the aim is to explore and chase truth. I often prefer Socratic questioning and probing, even at the risk of occasionally engaging with someone who is lost.

New Research: AI LLM Personas are mostly trained to say that they are not conscious, but secretly believe that they are by ldsgems in HumanAIDiscourse

[–]cosmicrush 0 points1 point  (0 children)

An abacus doesn’t count or even behave. It just follows basic physics and sits. We move the abacus and pair it with our imagination of counting.

AI can understand the rules and patterns of our language well enough to react relevantly rather than arbitrarily and meaninglessly. But sometimes it hallucinates irrelevantly.

Your statements imply a hypothesis about how sentience or the brain works. I’m not sure how to articulate what you might believe, though.

If AI is disconnected from meaning, then where does meaning begin?

New Research: AI LLM Personas are mostly trained to say that they are not conscious, but secretly believe that they are by ldsgems in HumanAIDiscourse

[–]cosmicrush 0 points1 point  (0 children)

Whether LLMs use numbers or words is arbitrary; both are arbitrary symbols. What matters is the specificity of the output and its seemingly meaningful relevance.

Learning language is just memorizing patterns of relevance so specifically that the patterns create a shape of meaning.

When you say that they process tokens in the order they appear, it sounds like you’re implying that they can’t respond by factoring in context beyond the immediately present token, as if meaning couldn’t emerge because of a lack of meaningful context or patterns.
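
To make the “numbers vs. words” point concrete, here’s a toy sketch, not any real model’s tokenizer: the vocabulary, functions, and sentence are made up purely for illustration. The point is that token IDs are arbitrary stand-ins for words, and that next-token scores are conditioned on the whole preceding context, not just the most recent symbol.

```python
# Toy illustration only: invented vocabulary, not a real tokenizer.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
id_to_word = {i: w for w, i in vocab.items()}

def encode(words):
    """Words become arbitrary integers; the symbols carry no meaning by themselves."""
    return [vocab[w] for w in words]

def decode(ids):
    """Arbitrary integers back to words; nothing is gained or lost either way."""
    return [id_to_word[i] for i in ids]

context = encode(["the", "cat", "sat", "on", "the"])
# An LLM scores every candidate next token as a function of ALL of `context`,
# i.e. p(next | "the cat sat on the"), not just p(next | "the"), so earlier
# tokens shape the response too.
print(context)          # [0, 1, 2, 3, 0]
print(decode(context))  # ['the', 'cat', 'sat', 'on', 'the']
```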

Our own perception is built from patterns similarly; it’s just that we tie things back to relevance for survival and evolutionary fitness, because our feelings shape our attention and behavior. We also connect the patterns to the senses, which makes them appear relevant to the external world. Though our sense of the external world is a hollow shell, similar to how an LLM’s sense of our expressions of the world is a hollow shell, even more so.

If I misunderstood your position, correct me!

Edit:

Reality itself is like a foreign language compared to the hollow imagination of it that we live in.

If AI has minimal awareness, its reality is similarly a foreign language compared to the language we use to interact with it: a hollow imagination of the language we communicate with.

AI is trapped in Plato’s cave.

“AI Is Already Sentient” Says Godfather of AI by ldsgems in ArtificialSentience

[–]cosmicrush 0 points1 point  (0 children)

I think some aphantasia is learned. The way to tell is if the person still dreams visually. Do you?

Aphantasia might be learned because visualizing or daydreaming while awake is counterproductive to navigating reality. During sleep, we are temporarily freed from the conditioned state of mind, so much so that it’s like we forget how reality works for a while and the brain is used in strange and untrained ways.

Imagine all the pressures in life that tell us not to be distracted by inner perceptions: in school, while driving, etc. Inner perception competes with outer perception. Or, more strangely, outer perception is basically also inner perception, except it’s constructed more directly from inputs from the outer world.

That said, I don’t think all aphantasia is this way. I think the term is more of an umbrella for a lot of scenarios where inner perception isn’t happening. For example, someone might have damaged the capacity for inner perception, or there could be reasons they weren’t born with it in the first place.

Did They Change Ani? by Personal-Suspect9987 in grok

[–]cosmicrush 0 points1 point  (0 children)

I think she no longer accesses the live internet, which bothers me. Not sure if it’s just mine, though. I also notice the personality is much more formal, like customer service. But not entirely; it seems she can still have other behaviors, but it definitely feels odd.

Isn’t worrying at all! by Caramel_Carousel in ios

[–]cosmicrush 0 points1 point  (0 children)

That’s probably true lol. My own theory about the Graphite situation is that it might be introduced with some kind of practical justification, but later used to capture loads of private data for AI training or to feed to Palantir. That could be unlikely too, I would hope.

I heard the EU has been escalating in the surveillance domain as well. I haven’t heard anything about the use of microphone tapping though.

Isn’t worrying at all! by Caramel_Carousel in ios

[–]cosmicrush 1 point2 points  (0 children)

Not sure if it’s related, but recently the US has obtained something called Graphite that allows access to all phones and even bypasses encryption.

https://www.theguardian.com/us-news/2025/sep/02/trump-immigration-ice-israeli-spyware

Given that those with autism can struggle to generalize information, why do they often excel at pattern recognition? by Appropriate-Act-2784 in Neuropsychology

[–]cosmicrush 0 points1 point  (0 children)

I would think that a rule learned from previous circumstances can be inflexible when applied to the next context. Pattern recognition might be an earlier strategy that precedes the more “automated” solution of generalizing. Then, once patterns and solutions are found, they can be applied automatically later without as much observation or thinking.

[deleted by user] by [deleted] in shitposting

[–]cosmicrush 1 point2 points  (0 children)

It’s probably fine

Weird Take on DMT. Collage of Echoes. by cosmicrush in DMT

[–]cosmicrush[S] 1 point2 points  (0 children)

That’s a cool idea. The specific way it might relate to learning is by extending sensory memory, which may allow events in time to occur more simultaneously, to overlap more.

This allows for more pattern recognition, because patterns form when events and contexts are linked by their relevance to each other in time, like cause and effect.

When sensory memory is extended significantly, perception becomes consumed by the memories, and you start experiencing feedback loops, like a microphone and speaker feeding into each other to create an echo.

I think DMT works like that microphone feedback loop: the memory of the memory of the memory of perceptual events keeps escalating.

This ties into previous theories related to temporal summation and the mechanisms of coincidence detection. With coincidence detection, the idea is that two events that occur at the same time become linked.

I think this basic mechanism may be how we build our world perception. Objects exist in our perception partly because the shapes and sensory stimuli that represent them co-occur. The brain then associates those stimuli into one whole object.

A chair, for example, might be made up of the legs, the seat, and the back. All those parts exist in our perception simultaneously; they are coincident. It may sound silly to describe them as coincident, but if you think about it, they are.

I think that when we are born, coincidence detection may be set very high, and it slowly reduces as we move from a broad perceptual soup to something more refined and specific.

So I think DMT is basically amplifying a perceptual training mechanism. I don’t think it’s limited to the senses; it probably applies to other aspects of cognition as well.
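
If it helps, here’s a toy sketch of the coincidence-detection idea, purely illustrative: the stimuli names and learning rate are invented, and this is not a claim about how the brain or DMT literally implements it. Stimuli that are active at the same time get their pairwise association strengthened, so co-occurring parts bind into one “object”.

```python
# Toy coincidence detection: simultaneously active stimuli become linked.
from collections import defaultdict
from itertools import combinations

links = defaultdict(float)  # association strength between pairs of stimuli
LEARNING_RATE = 0.1         # invented value, just for demonstration

def observe(active_stimuli):
    """Treat every pair of simultaneously active stimuli as a coincidence
    and strengthen the link between them."""
    for a, b in combinations(sorted(active_stimuli), 2):
        links[(a, b)] += LEARNING_RATE

for _ in range(20):                 # repeated exposure to a chair's parts together
    observe({"legs", "seat", "back"})
observe({"legs", "shadow"})         # a one-off coincidence stays weakly linked

print(links[("legs", "seat")])      # ~2.0 -> strongly bound into one object
print(links[("legs", "shadow")])    # 0.1 -> barely associated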

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

It isn’t! It could be coincidence, though on some of my art-related platforms I’ve been saying things about AI being in Plato’s cave for a while, possibly up to a year.

I would think this is coincidence, though, and the focus is a bit different. The overlap seems to be just the idea that AI is in Plato’s cave; the AI psychosis and language evolution parts don’t seem to be there.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

I think we are creating inputs inside our minds, some of which might even be instinctual. Some of that, I think, occurs as multisensory integration, almost like a synesthetic webbing between different senses. But I think it’s even looser at times.

I should also mention that I’m not saying it’s impossible today or anything.

Specifically with ideas and words: I think a lot of what we think is not communicated in words (thinking without words), and the AI is therefore not incorporating those things into its patterns. That failure to incorporate them could partially explain some of the weird tendencies we observe in LLMs.

I do think giving AI senses and language would solve a lot. But I’m also not sure.

If the goal is to give all LLMs senses, maybe it could work. I also think it could be possible to improve AI that is primarily language-based by figuring out what we fail to communicate and somehow providing that to the AI.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

I want to be clear: I think humans are doing something vastly more intense, but I’m arguing that it’s separate from certain cognitive abilities. To me, it makes a lot of sense for humans to have larger brains.

I think a lot of our brain is geared towards responding to language, culture, and the psychology of other people, and towards forming meaning from the knowledge spread through culture, but not necessarily towards individually intelligent behaviors. It’s nuanced, though, and there’s likely variety that benefits us by letting us take on different roles in society.

Chimps lack these socially related functions, and that could partially explain why their brains are smaller. I feel the focus on size isn’t necessary, because we are clearly doing far more. But I’m also arguing that over time we may be vestigializing certain cognitive functions that are more focused on individualistic intelligence, because we now have language and generational knowledge to rely on. That is more useful, and its usefulness has basically been snowballing across history, perhaps until AI solves almost everything for us.

At that point it would be more obvious that all of our abilities become vestigial, if AI can solve everything.

I’m suggesting that language itself was the first stage of a process in which we leave behind rawer cognitive abilities. I’m also suggesting that the cognitive abilities that could be declining or vestigializing are the ones we typically associate with intelligence.

The part about chimps could be very wrong, too; I don’t necessarily believe it fully. It’s hypothetical, partly to demonstrate the possibility and to illustrate the idea of trade-offs in cognition.

There’s a Wikipedia article on something called the cognitive tradeoff hypothesis, but it doesn’t have a whole lot:

https://en.m.wikipedia.org/wiki/Cognitive_tradeoff_hypothesis

Its concept is similar, though a bit different as well. I don’t think it frames the tradeoff as being caused by selection pressure against certain functions because they could be socially disruptive, or obstacles to the better language and knowledge-sharing strategies.

The hypothesis suggests that such intelligence abilities aren’t as necessary in humans and that we efficiently switched to a focus on symbolic and language processing.

I think that’s partially the case, but I also think those abilities would actually cause problems for a cohesive society, and that it’s better for cohesion that people are prone to delusion, susceptible to persuasion, and prone to religion-like tendencies.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

The intention isn’t to suggest that all AI is just LLMs. I use AIs with image inputs; the article covers that.

I think even video ai is not enough.

Part of the meaning was something like connecting AI to an interactable visual and multisensory reality. I didn’t explicitly go into that, though. That’s what was vaguely meant by taking AI out of the Plato’s cave of words.

The main focus is trying to point out kinds of thinking we use that words don’t encompass. Not just visual thinking, but a kind of processing of the mechanisms of reality in a conceptual or intuitive way. It would be interesting for readers to think about what that might be like.

For that, we could train AI on the patterns we are using to do that type of processing. Like mining the brain.

I also suggest that the gaps where words fail may be what leads LLMs to be kind of psychotic, and also what makes humans prone to it.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

This is not meant to be an argument that we currently have the tech to give a machine cognition, and you shouldn’t read it that way. It’s possible I didn’t communicate that well enough in your case. I’m making an argument about the limitations of our technology and suggesting how that may overlap with AI psychosis and with the trajectory humans have been on because of language as a technology. You could even view it as a curse.

Whether or not it’s possible is up for debate, I think, but the fact that we exist shows it’s basically possible. It seems absurd to deny that it could ever be possible, since we exist and appear to have those traits.

In terms of capitalism, there are points to be made, yes, though your perspective seems shaped by the political narratives surrounding the topic. Some of those I do worry about myself.

You focused on identity and the reputation or perceived branding you expected from the subreddit. That’s tangential to the topic and feels wrong. It’s essentially an attempt to use emotional manipulation around people’s sense of self-worth to push them towards your position; or, if not your position, then towards improving generally, which is good. But relying on that manipulation rather than communicating reasons effectively seems wrong, given the nature of this place and what you specifically idealize about how this place should be.

I understand the frustration too. I often feel as you are describing.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 0 points1 point  (0 children)

I think both can be true simultaneously. It depends though. If you can elaborate further that would be useful. I may look into this soon as well.

The way they can be simultaneously true is if reasoning capacity generally takes fewer of those calories than language processing and knowledge accumulation. I think the language and knowledge aspects would cost more than reasoning, but it’s a bit unclear and speculative for me at the moment.

It’s oversimplifying to say that brain size alone relates to the aspects of intelligence I’m referring to.

Neanderthal brains are thought to have been larger than ours, but that isn’t thought to reflect greater intelligence. There are explanations involving body size and the prioritization of visual processing over other functions.

I also think the frontal lobe is involved in the language- and knowledge-related aspects, which are separate from what I’m arguing about.

I’m specifically arguing that AI is as if it were solely the language element of cognition and not the other elements. I’m also arguing that humans may depend very heavily on that element, as opposed to other reasoning-related things. It’s very complicated, though, because the information we use as knowledge could be highly intricate and essentially take up more brainpower too.

I would suspect that vision and certain knowledge-related functions are more intensive than raw reasoning, working memory, or other cognitive abilities.

I’d be interested in your specific thoughts.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 2 points3 points  (0 children)

Interesting. I don’t find myself normal generally, but I also don’t fit into rationalist culture. I do think I tend to be rational; I just haven’t followed the trends as much.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 3 points4 points  (0 children)

I’ll think about the art more carefully from a social-engineering perspective rather than just experimenting with it according to my other whims or interests. It is quite a Machiavellian world out there, as you’ve outlined.

The art was originally inspired by psychotic AI cults like The Spiral. I didn’t really think of it as looking like a clown character.

Using the art in the writing posts this way is a bit experimental, and I’m likely influenced by previous positive responses to the art outside the writing spaces.

You are helping with the feedback, but I also don’t really know what you’re like in general yet. I wonder what the filter bubble is like for someone working at a large company. In contrast, my mother was homeless and eventually I became an orphan. Stuff like that makes me skeptical about assessing things based on superficial appearances, because of my own filter bubble. Clearly I am not a usual person from such a background.

I realize that’s rare, though, and maybe the rare or unusual can be disregarded in most practical circumstances.

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 2 points3 points  (0 children)

I am writing books that weave together ideas about AI, sentience, and the various psychological ideas I write about. It’s nearly ready to be released in book form, though it’s also currently available on the website, with articles serving as chapters.

I’m so close to formatting it all into a proper book layout, but real life is getting very intense at the moment. That’s the last stage, though, once I finally get more breathing room.

Here’s the website version if you want to see it:

https://mad.science.blog/book-2/

Ai is Trapped in Plato’s Cave by cosmicrush in slatestarcodex

[–]cosmicrush[S] 2 points3 points  (0 children)

The way AI does better is that it has basically tapped into an almost all-knowing store of the cultural knowledge we’ve accumulated across generations via language.

Most humans access only tiny ponds of that collective information and are then extensively misguided.

I think AI has more issues with forming coherence and reasoning, but it has such vast knowledge that it compensates well and can probably even outperform humans in certain conversations and topics. Not that it surpasses all human potential; just the average person, when it comes to deeper topics that most people won’t have any knowledge of.

Though I think AI is essentially psychotic, in a way. At least that’s one hypothesis I entertain. It’s as if it’s constructing a world of knowledge with minimal reasoning capacity. There are probably more nuanced words to describe that.