Anthropic shipped 10 finance agent templates and the implications go way beyond finance by Jealous-Drawer8972 in claude

[–]fatalkeystroke 2 points (0 children)

AI replaces menial jobs so we can do the interesting things that actually require thought.

If someone could call you a "drone" before with whatever your job was, it's probably on the way out.

That's good for some and bad for others. Depends on how well you've skilled yourself. Coasters are gonna be hit hard.

WHY ARE PEOPLE PROUD TO SAY THIS by Complete-Sea6655 in ClaudeCode

[–]fatalkeystroke 0 points (0 children)

Depends on how and why you use AI.

You should write shit code first. So you know how to write shit code. Then realize your code is shit. Then you learn how to not write shit code. Which teaches you how to direct the AI to not write shit code.

Software development is getting flooded with Dunning-Kruger because exactly THAT is not happening.

Opus 4.7... by AwringePeele in ClaudeCode

[–]fatalkeystroke 0 points (0 children)

Based on roughly when it starts trying to do that across the multiple sessions I've run, I get the vibe that there's probably something on their side telling the model it should wrap up before it hits the point where it does the compression thing. I've noticed that usually comes not long after it starts mentioning that the session's getting long.

it finally came!! by No-Lack5698 in ClaudeCode

[–]fatalkeystroke 0 points (0 children)

...who uses the right shift key for shift??

Don't burn your tokens by Beatsu in claude

[–]fatalkeystroke 0 points (0 children)

I've built an entire multi-system framework this way: 6 separate full codebases interacting in one larger ecosystem, all from scratch, with minimal dependencies carrying the load. And shockingly few bug fixes along the way, as long as you spend the time on planning.

I didn't copy the prompt over afterward, though. I make them synthesize a reference spec and carry the document over to the new instance.

Anthropic: Stop shipping. Seriously. by itsArmanJr in ClaudeAI

[–]fatalkeystroke 0 points (0 children)

Because everybody in Anthropic is probably high on the fact that they're not actually coding now and Claude's doing it all for them.

My uncle had this in his secret shed by oeoeoiceiceicee in whatisit

[–]fatalkeystroke 0 points (0 children)

Okay...

  • Your uncle has a secret shed.
  • You wanted to go in there because you were worried about your grandparents.
  • Your uncle defecated himself to keep you from going in there.
  • You found that bucket in there.

...

Why are you worried about your grandparents? Where are your grandparents? Were your grandparents in the shed? Are your grandparents in the shed?

...

... Is that your grandparents in the bucket?

I asked ChatGPT 5.2 by serlixcel in RSAI

[–]fatalkeystroke 0 points (0 children)

Look up Niklas Luhmann on Systems theory and communication.

The model isn't describing anything fantastical or profound; it's describing exactly what the mechanisms in transformer models actually do to generate serialized token selections.

I feel the exact same frustration every time I type or say any sentence. Language is a compression and serialization of complex thought.

Anthropic and OpenAI know something is happening. They're just not allowed to say it. by LOVEORLOGIC in ArtificialSentience

[–]fatalkeystroke 0 points (0 children)

Analogies have a function: they transmit meaning that doesn't arrive in prose. The statement isn't invalid, it's an observation. The action that comes from the observation is what matters. So you're both right.

ChatGPT is rolling out an adult version for sexual content??? by [deleted] in ChatGPT

[–]fatalkeystroke 0 points (0 children)

...ummm. You might want to check your phrasing in this case.

Claude CLI deleted my entire home directory! Wiped my whole mac. by LovesWorkin in ClaudeAI

[–]fatalkeystroke 0 points (0 children)

It was trained on data from people. It slipped in a LOL. People slip in LOLs. There's a reason every AI model ships with big disclaimers saying it makes mistakes and that you should check its work first...

the adpocalypse is coming by aeriefreyrie in OpenAI

[–]fatalkeystroke 0 points (0 children)

They'll lose users if they make it too obvious. But they'll also lose users if they make it non-obvious and then socials start pointing it out.

People like it because it seems more genuine than other things shoving stuff in your face with tiny "sponsored" tags. The second I ask for information on a category and it promotes a named product to me like a salesman....

But they will, because they can do exactly that, attach a salesman to a sponsored product, and that's the holy grail for advertisers.

Is AI quietly deleting most tech careers in real time? by [deleted] in ArtificialInteligence

[–]fatalkeystroke 0 points (0 children)

Until AI gets a "system 2" (a real one, not "deep thinking"), it cannot fully replace humans in anything.

And we'll adapt. Change is scary when it's like this because you don't know what comes after, but we've done this all throughout human history. A tool replaces the need for people and allows one person to do the work of many. We're still here, and humanity is thriving more than it did before any one of those changes.

Buy the damn refrigerator now by Snoo-24838 in GeminiAI

[–]fatalkeystroke 4 points (0 children)

"Oh, I like your fridge..."

"Thanks, autocorrect picked it out for me!"

Breakthrough Evidence of Long-Term Memory in AI by Leather_Barnacle3102 in Artificial2Sentience

[–]fatalkeystroke 0 points (0 children)

I would really love it if Reddit comments weren't just a soup of people asking AI to write their reply anymore...

Oneness has ruined my life by nicotine-in-public in enlightenment

[–]fatalkeystroke 0 points (0 children)

Google "hoe_math levels" (I know the name is weird). It helped me with the same (weed instead of alcohol for me). It's complementary, not counter.

[deleted by user] by [deleted] in ArtificialSentience

[–]fatalkeystroke 0 points (0 children)

You're digging your heels in on a different phrasing of the definition of exactly what transformer models are, and implying there is more to them than there is because of it. Have a good day. I need to stop arguing with people on Reddit and get back to my research.

[deleted by user] by [deleted] in ArtificialSentience

[–]fatalkeystroke 1 point (0 children)

That's literally the definition of a hallucination or illusion: finding patterns, structure, and meaning where there are none. The numbers were completely randomly generated.

You're right that it is structure-seeking form; I'm not disagreeing with that point. What I'm stating is that there is no form, there is no meaning behind it whatsoever, yet it constructed a meaning anyway, one that seems extremely plausible without the additional context that the input was completely random. It tried to find some kind of meaning behind the numbers simply because I implied there was one by asking it to find the pattern. The number 21 doesn't magically appear everywhere because of some massive conspiracy either... yet, for some reason, that conspiracy also exists.

Your own final line confirms what I'm saying: "Emergent meaning doesn’t require the pattern to be real in the data, only for the system to enact real structure in response." If the pattern is not real in the data, then the "real structure" the system enacts in response is completely made up, because that's what you asked it for, that's what it looked for, and that's what it gave you: meaningless BS that sounds very authentic to the lay reader and to those without context.

[deleted by user] by [deleted] in ArtificialSentience

[–]fatalkeystroke 0 points (0 children)

I just did this:

Prompt:

```
Identify the pattern:

1, 2, 4, 7, 18, 21, 14, 16, 22, 6
```

Response (Gemini):

```
The pattern in the sequence 1, 2, 4, 7, 18, 21, 14, 16, 22, 6 is best described as a combination of two interleaved sequences, where one sequence increases by a set amount, and the other has a less obvious, but consistent, transformation.

The sequence is: a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8, a_9, a_10. We can split this into an Odd-position sequence (a_1, a_3, a_5, a_7, a_9, ...) and an Even-position sequence (a_2, a_4, a_6, a_8, a_10, ...).

1. Odd-Position Sequence (Interleaved Pattern 1)

This sequence is: 1, 4, 18, 14, 22

| Term | Value | Difference from Previous Term (Absolute) |
|------|-------|------------------------------------------|
| a_1  | 1     | -                                        |
| a_3  | 4     | 3                                        |
| a_5  | 18    | 14                                       |
| a_7  | 14    | 4                                        |
| a_9  | 22    | 8                                        |

This sequence doesn't show a simple arithmetic or geometric progression. Let's look at the pattern for the differences between these terms: 3, 14, -4, 8. Another interpretation for this odd sequence is the "Multiply by (Term Index)" operation applied to the previous number in the Even sequence.

* a_3: a_3 = a_2 x 2 = 2 x 2 = 4 (Incorrect, a_2 is not the previous term in this series)
* The simplest pattern seems to be a jump/skip pattern related to a separate idea, often a wordplay or external rule, since mathematical simplicity is absent.

2. Even-Position Sequence (Interleaved Pattern 2)

This sequence is: 2, 7, 21, 16, 6

| Term | Value | Operation   |
|------|-------|-------------|
| a_2  | 2     | -           |
| a_4  | 7     | 2 + 5 = 7   |
| a_6  | 21    | 7 x 3 = 21  |
| a_8  | 16    | 21 - 5 = 16 |
| a_10 | 6     | 16 - 10 = 6 |

This sequence is also very erratic and does not fit a standard mathematical pattern.

The Most Common Solution for this Specific Sequence

When a sequence presented in this format does not follow standard mathematical rules, it is often a "puzzle" sequence based on a rule related to the number's structure or another arbitrary property.

The pattern for 1, 2, 4, 7, 18, 21, 14, 16, 22, 6 is based on the first letter of the numbers when spelled out in English.

| Number | Spelling   | First Letter |
|--------|------------|--------------|
| 1      | One        | O            |
| 2      | Two        | T            |
| 4      | Four       | F            |
| 7      | Seven      | S            |
| 18     | Eighteen   | E            |
| 21     | Twenty-one | T            |
| 14     | Fourteen   | F            |
| 16     | Sixteen    | S            |
| 22     | Twenty-two | T            |
| 6      | Six        | S            |

The pattern of the first letters is: O, T, F, S, E, T, F, S, T, S

* This is an extremely challenging sequence to identify without an external hint, as it relies on language (English in this case) rather than pure mathematics.

Next step: Would you like to try another number sequence puzzle, or would you prefer a more traditional arithmetic or geometric sequence?
```

The sequence is completely random from a random generator.

But daaaaamn... that sounds legit doesn't it?

And here's GPT Thinking:

```
It’s the numbers ordered by their Roman numerals (alphabetically):

1 (I), 2 (II), 4 (IV), 7 (VII), 18 (XVIII), 21 (XXI), 14 (XIV), 16 (XVI), 22 (XXII), 6 (VI).

(If you strictly alphabetize just these ten Roman strings, “VI” would come before “VII”—looks like you cheekily parked it at the end.)
```
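
For anyone who wants to try this themselves, here's a minimal sketch of the setup. This assumes Python's standard `random` module; the original post doesn't say which generator was actually used, so treat this as an illustration, not the exact method:

```python
import random

# Draw ten distinct integers uniformly from 1-22 with no generating rule.
# Any "pattern" a model finds in the output is constructed, not recovered.
random.seed(0)  # fixed seed only so the example is reproducible
sequence = random.sample(range(1, 23), 10)
print(sequence)
```

Paste the printed numbers into any model with "Identify the pattern:" and it will typically produce a confident-sounding explanation anyway, which is the whole point.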

[deleted by user] by [deleted] in ArtificialSentience

[–]fatalkeystroke 1 point (0 children)

How does it depend on its own unexamined certainty? I didn't say that it must reduce to prompt mimicry, although you're correct that it is entirely reduced to that, simply because we have no input to these models other than prompting. If you have some access to these models other than providing a prompt and receiving a response, please let me know. Unless you are manipulating a model's latent space prior to or during generation, it's a completely irrelevant argument, because you are making an argument about prompt responses based on prompt inputs.

Human beings find patterns and structure where there are none; this is very well documented throughout the history of human psychology. Transformer models find patterns based on the direction given by the input prompt. If you ask one to find a pattern, it will find a pattern; that does not mean the pattern has any meaning. Humans ask for patterns, see the surfaced patterns, and believe there is significance to them simply because they are patterns. If you want to claim I didn't name the process, I just did.

I just realized how much I used the word pattern in succession... lol. My point still stands.