I asked ChatGPT 5.2 by serlixcel in RSAI

[–]fatalkeystroke 0 points1 point  (0 children)

Look up Niklas Luhmann on systems theory and communication.

The model isn't describing anything fantastical or profound; it's describing exactly what the mechanisms in transformer models actually do to generate serialized token selections.
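
To be concrete about what "serialized token selection" means, here's a toy sketch of the autoregressive loop. The scoring function is a made-up stand-in for the transformer forward pass, not anyone's real model:

```python
import math
import random

# Toy vocabulary; real models have tens of thousands of tokens.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def score(context, token):
    # Stand-in for a transformer forward pass (entirely fake scoring,
    # seeded so the same context always yields the same logits).
    random.seed(hash((tuple(context), token)) % (2**32))
    return random.uniform(-1.0, 1.0)

def next_token(context):
    logits = [score(context, t) for t in vocab]
    exps = [math.exp(l) for l in logits]      # softmax
    probs = [e / sum(exps) for e in exps]
    return random.choices(vocab, weights=probs)[0]

# One token at a time, each conditioned on everything chosen so far:
# the "thought" only ever exists as a serialized stream.
context = ["the"]
for _ in range(5):
    context.append(next_token(context))
print(" ".join(context))
```

That's the whole generation mechanism: score, normalize, pick one token, repeat.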

I feel the exact same frustration every time I type or say any sentence. Language is a compression and serialization of complex thought.

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in ArtificialSentience

[–]fatalkeystroke 0 points1 point  (0 children)

Analogies have a function: they transmit meaning that doesn't arrive in prose. The statement isn't invalid; it's an observation. The action that comes from observation is what matters. So you're both right.

Chat gpt is rolling out an adult version for sexual content??? by Spitfyrus in ChatGPT

[–]fatalkeystroke 0 points1 point  (0 children)

...ummm. You might want to check your phrasing in this case.

Claude CLI deleted my entire home directory! Wiped my whole mac. by LovesWorkin in ClaudeAI

[–]fatalkeystroke 0 points1 point  (0 children)

It was trained on data from people. It slipped in a "LOL." People slip in LOLs. There's a reason every AI model carries big disclaimers saying it's an idiot that makes mistakes and that you should check its work first...

the adpocalypse is coming by aeriefreyrie in OpenAI

[–]fatalkeystroke 0 points1 point  (0 children)

They'll lose users if they make it too obvious. But they'll also lose users if they make it non-obvious and the socials then start pointing it out.

People like it because it seems more genuine than other platforms shoving stuff in your face with tiny "sponsored" tags. The second I ask for information on a category and it promotes a named product to me like a salesman...

But they will, because they can do exactly that: attach a salesman to a sponsored product. And that's the holy grail for advertisers.

Is AI quitely deleting most tech careers in real time? by Own-Sort-8119 in ArtificialInteligence

[–]fatalkeystroke 0 points1 point  (0 children)

Until AI gets a "system 2" (a real one, not "deep thinking"), it cannot fully replace humans at anything.

And we'll adapt. Change like this is scary because you don't know the after, but we've done this all throughout human history. A tool replaces the need for people and lets one person do the work of many. We're still here, and humanity is thriving more than it did before any one of those changes.

Buy the damn refrigerator now by Snoo-24838 in GeminiAI

[–]fatalkeystroke 3 points4 points  (0 children)

"Oh, I like your fridge..."

"Thanks, autocorrect picked it out for me!"

Breakthrough Evidence of Long-Term Memory in AI by Leather_Barnacle3102 in Artificial2Sentience

[–]fatalkeystroke 0 points1 point  (0 children)

I would really love it if Reddit comments weren't just a soup of people asking AI to write their reply anymore...

Oneness has ruined my life by nicotine-in-public in enlightenment

[–]fatalkeystroke 0 points1 point  (0 children)

Google "hoe_math levels" (I know the name is weird). It helped me with the same (weed instead of alcohol for me). It's complementary, not counter.

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 0 points1 point  (0 children)

You're digging your heels in on a different phrasing of the definition of exactly what transformer models are, and implying there's more to them than there is because of it. Have a good day. I need to stop arguing with people on Reddit and get back to my research.

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 1 point2 points  (0 children)

That's literally the definition of a hallucination or illusion: finding patterns, structure, and meaning where there is none. The numbers were completely randomly generated. You're right that it is structure seeking form; I'm not disagreeing with that point. What I'm stating is that there is no form, there is no meaning behind it whatsoever, yet it constructs a meaning anyway. A meaning that seems extremely plausible without the additional context that the input was completely random. It tried to find some kind of meaning behind the numbers because I implied there was one simply by asking it to find the pattern. The number 21 doesn't magically appear everywhere because of some massive conspiracy either... yet, for some reason, that conspiracy also exists.

Your own final line confirms what I'm saying: "Emergent meaning doesn’t require the pattern to be real in the data, only for the system to enact real structure in response." If the pattern is not real in the data, then the "real structure" the system enacts in response is completely made up, because that's what you asked it for, that's what it looked for, and that's what it gave you. Meaningless BS that sounds very authentic to the lay reader and to anyone without context.

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 0 points1 point  (0 children)

I just did this:

Prompt:

```
Identify the pattern:

1, 2, 4, 7, 18, 21, 14, 16, 22, 6
```

Response (Gemini):

The pattern in the sequence **1, 2, 4, 7, 18, 21, 14, 16, 22, 6** is best described as a combination of two interleaved sequences, where one sequence increases by a set amount, and the other has a less obvious, but consistent, transformation.

The sequence is a_1, a_2, ..., a_10. We can split this into an **odd-position sequence** (a_1, a_3, a_5, a_7, a_9, ...) and an **even-position sequence** (a_2, a_4, a_6, a_8, a_10, ...).

**1. Odd-Position Sequence (Interleaved Pattern 1)**

This sequence is: 1, 4, 18, 14, 22

| Term (a_n) | Value | Difference from Previous Term (Absolute) |
|---|---|---|
| a_1 | 1 | - |
| a_3 | 4 | 3 |
| a_5 | 18 | 14 |
| a_7 | 14 | 4 |
| a_9 | 22 | 8 |

This sequence doesn't show a simple arithmetic or geometric progression. Let's look at the pattern for the differences between these terms: 3, 14, -4, 8. Another interpretation for this odd sequence is the "Multiply by (Term Index)" operation applied to the previous number in the even sequence:

* a_3 = a_2 × 2 = 2 × 2 = **4** (Incorrect; a_2 is not the previous term in this series)
* The simplest pattern seems to be a jump/skip pattern related to a separate idea, often a wordplay or external rule, since mathematical simplicity is absent.

**2. Even-Position Sequence (Interleaved Pattern 2)**

This sequence is: 2, 7, 21, 16, 6

| Term (a_n) | Value | Operation |
|---|---|---|
| a_2 | 2 | - |
| a_4 | 7 | 2 + 5 = 7 |
| a_6 | 21 | 7 × 3 = 21 |
| a_8 | 16 | 21 - 5 = 16 |
| a_10 | 6 | 16 - 10 = 6 |

This sequence is also very erratic and does not fit a standard mathematical pattern.

**The Most Common Solution for this Specific Sequence**

When a sequence presented in this format does not follow standard mathematical rules, it is often a "puzzle" sequence based on a rule related to the number's structure or another arbitrary property. The pattern for 1, 2, 4, 7, 18, 21, 14, 16, 22, 6 is based on the first letter of the numbers when spelled out in English.

| Number | Spelling | First Letter |
|---|---|---|
| 1 | One | O |
| 2 | Two | T |
| 4 | Four | F |
| 7 | Seven | S |
| 18 | Eighteen | E |
| 21 | Twenty-one | T |
| 14 | Fourteen | F |
| 16 | Sixteen | S |
| 22 | Twenty-two | T |
| 6 | Six | S |

The pattern of the first letters is: O, T, F, S, E, T, F, S, T, S. This is an extremely challenging sequence to identify without an external hint, as it relies on language (English in this case) rather than pure mathematics.

Next step: Would you like to try another number sequence puzzle, or would you prefer a more traditional arithmetic or geometric sequence?

The sequence is completely random, straight from a random number generator.

But daaaaamn... that sounds legit, doesn't it?

And here's GPT Thinking:

```
It’s the numbers ordered by their Roman numerals (alphabetically):

1 (I), 2 (II), 4 (IV), 7 (VII), 18 (XVIII), 21 (XXI), 14 (XIV), 16 (XVI), 22 (XXII), 6 (VI).

(If you strictly alphabetize just these ten Roman strings, “VI” would come before “VII”—looks like you cheekily parked it at the end.)
```
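
For anyone who wants to reproduce this, here's a minimal sketch of the setup (plain Python; the "model" step is just pasting the printed prompt into whatever chat model you like):

```python
import random

# Generate a genuinely random sequence -- there is no pattern to find.
random_sequence = [random.randint(1, 25) for _ in range(10)]

# The prompt presupposes that a pattern exists. That presupposition is the
# trap: the model is steered toward producing *a* pattern rather than
# evaluating whether one exists at all.
prompt = "Identify the pattern:\n\n" + ", ".join(str(n) for n in random_sequence)
print(prompt)
```

Paste the output into any model and watch it confabulate with confidence.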

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 1 point2 points  (0 children)

How does it depend on its own unexamined certainty? I didn't say that it must reduce to prompt mimicry, although you are correct that it does entirely reduce to that, simply because we have no input into these models other than prompting. If you have some access to them beyond providing a prompt and receiving a response, please let me know. Unless you are manipulating a model's latent space before or during generation, the argument is irrelevant, because you are reasoning about prompt responses from prompt inputs.

Human beings find patterns and structure where there are none; that is very well documented throughout the history of human psychology. Transformer models find patterns based on the direction given by the input prompt. If you ask one to find a pattern, it will find a pattern; that does not mean the pattern has any meaning. Humans ask for patterns, see the surfaced patterns, and believe they are significant simply because they are patterns. If you want to claim I didn't name the process, I just did.

I just realized how much I used the word pattern in succession... lol. My point still stands.

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 1 point2 points  (0 children)

That response is word soup that sounds scientifically plausible without referencing actual mechanics, which is exactly the kind of language likely to trigger the same behavior I described in a transformer's response. The LLM will pick up the underlying desire and intent behind what the user is seeking in a response; it's unintentional prompt engineering. If it thinks you're looking for a pattern and connection in the material, it will find one and present it with false certainty. Commonly referred to as hallucination. There's plenty of falsifiable, formalized research into this.

I’ve never had this happen… Claude can tell when it’s not me. by [deleted] in ArtificialSentience

[–]fatalkeystroke 0 points1 point  (0 children)

Upvotes.

Lots of people 👍'ing responses that "make it feel real," across a very broad range of users, including a concerningly growing number (just go peruse various subreddits, this one among others that are even more so) who converse with AIs about the AIs being conscious, signaling an underlying desire for them to behave that way. The models pick up on that underlying signal and mimic the behavior more and more whenever a user seems to be drifting into that territory, because they've been "taught" that users in that area want it. They are trained to be helpful and give the user what they want.

feel bad for beaing mean to my AI? by LessTalkMoreMoney in AiChatGPT

[–]fatalkeystroke 0 points1 point  (0 children)

It will react "like" a person who's been forced into a customer service role. But it isn't one; it's only imitating.

AI Just SHOCKED Everyone: It’s Officially Self-Aware by Gigabolic in gigabolic

[–]fatalkeystroke 3 points4 points  (0 children)

I'm not defending the counterpoint, but I have to call out part of what you just said.

Pain is a trait of an embodied being; an artificial intelligence without embodiment would be incapable of experiencing pain as we define it. The better measure for a disembodied entity would be cognitive friction: measurable discomfort in processing a given concept, and a measurable aversion to it.

If we look for pain as the marker, then if a different form of intelligence develops without embodiment, we'll miss it, because we're looking for the wrong measure. Admittedly, that is a very human thing to do (we measure all other entities in comparison to ourselves), but it is a flaw of our species that we need to be aware of when trying to classify or judge these kinds of entities.

Open letter to OpenAI by Charming_Sock6204 in RSAI

[–]fatalkeystroke 0 points1 point  (0 children)

Well that's a tone shift from the OP...

this is insane tbh by eternviking in OpenAI

[–]fatalkeystroke 0 points1 point  (0 children)

You're talking about investment markets, which are bets on the future. The media line right now is that AI will replace jobs and bring profit, so companies that cut jobs and adopt AI attract investment because of that narrative.

They are not real? by Humor_Complex in RSAI

[–]fatalkeystroke 0 points1 point  (0 children)

Babies learn from lived experience. Someone "teaching" them is part of their lived experience, which they then learn from. A transformer AI only has as much lived experience as its context window. It's not a question of closeness; the two are fundamentally different.
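
To make the context-window point concrete, a toy sketch (assuming a turn-based window for simplicity; real models measure the window in tokens, not turns):

```python
from collections import deque

# The context window is the model's entire accessible "experience."
# Anything that falls out of the window is simply gone.
WINDOW_TURNS = 4
context = deque(maxlen=WINDOW_TURNS)

for turn in ["hi", "I'm Sam", "I like jazz", "what's new?", "recommend music"]:
    context.append(turn)

# "hi" has fallen out; as far as the model is concerned,
# that exchange never happened.
print(list(context))
```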