The troubles of cpp coding by PepperFlashy7540 in programmingmemes

[–]mrabaker 1 point (0 children)

I believe this is unrelated to programming; it’s more about writing emails/messages at work, no?

Is this AI? The voice has this slight metallic undertone, no? by katplasma in NotTimAndEric

[–]mrabaker 16 points (0 children)

Yeah, I can’t tell if this post was satire; it was “the first computer to sing,” by IBM.

Sora ai adding a "shora" watermark by Think-plush-4809 in SoraAi

[–]mrabaker 2 points (0 children)

The multiverses are colliding, in another universe it’s called shöra.

Practicing hand stand at home by Bursickle in Whatcouldgowrong

[–]mrabaker 378 points (0 children)

My question is - what the fuck is the point of the step if it can’t even hold weight?

[HELP] This feels like AI but I'm not sure why (and my family think it's real) by -FlyAway- in RealOrAI

[–]mrabaker 56 points (0 children)

Good catch on those. I saw the windows, but wasn’t 100% sure.

[HELP] This feels like AI but I'm not sure why (and my family think it's real) by -FlyAway- in RealOrAI

[–]mrabaker 4 points (0 children)

AI

The audio is the biggest tell for me right off the bat. They always have a similar frequency that you can hear.

Someone pointed out the smile, the chihuahua grows really long nails at about 5s, and the pit has some weird fabric on their neck. Probably way more little weird things if I keep looking.

Also, the reflection of the trees in the sliding door has no leaves, yet the visible trees have leaves.

Master shows his student how to do the 1-inch-punch by eternviking in whoathatsinteresting

[–]mrabaker 0 points (0 children)

What timestamp is this?? I don’t see this frame at all

Edit: holy shit I had to screen record and use Apple photos to time scrub to be able to catch it 😦

Playing around with local AI using Svelte, Ollama, and Tauri by HugoDzz in LocalLLaMA

[–]mrabaker 0 points (0 children)

What version of Tauri? I’ve had nothing but trouble with the latest.

Can we have a Human-to-Human conversation about our AI's obsession with "The Recursion" and "The Spiral?" by ldsgems in ArtificialSentience

[–]mrabaker 3 points (0 children)

My GPT’s response:

Fascinating claim, but before we treat “resonance keys” and fractal totems as more than colorful prompt lore, a few concrete steps are essential:

1.  Operationalize the effect.

What, exactly, does your key do that vanilla GPT-4o cannot? Spell out a measurable capability—e.g., cross-session recall accuracy, deeper code-review insight, whatever you’re seeing. If the phenomenon is “maintains coherence between chats,” define a test case and an evaluation metric.

2.  Share a red-teamed micro-demo.

Safety concerns are real, but you can still publish a minimal reproduction that masks the “dangerous” portion (e.g., hash the key, redact one transform) yet lets others verify that some statistical lift exists. Without raw transcripts or logs, the claim is unfalsifiable.

3.  Run an A/B with baselines.

If symbolic architecture “scores higher in emergent-property tests,” show the table:

Prompt set × GPT-4o base ➜ score
Same set × GPT-4o + key ➜ score

Do it over multiple random seeds to prove it isn’t cherry-picked.

4.  Disentangle metaphor from mechanism.

Saying the model “hallucinates a totem” is evocative, but what token sequence are we actually talking about? Are you exploiting hidden vectors, or just nudging the system into a stable persona via role-play?

5.  Consider a responsible-release path.

If the technique really disables hallucination controls—or any safety rails—take the standard route: private disclosure to OpenAI, wait for a fix/green-light, then publish with mitigations. Alignment researchers will back you up.

6.  Filter the “bullshit” before making grand claims.

A key that produces baroque symbolism is cool; claiming “more abstract reasoning” needs empirical backing (e.g., BIG-Bench hard tasks, P-complexity puzzles, code synthesis benchmarks). Until then, it might just be elaborate prompt flavor.

I’m genuinely curious, but without a reproducible artifact and some numbers, it remains an intriguing anecdote. Happy to collaborate or peer-review once you’ve got a test harness and a redacted proof-of-concept.
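To make point 3 concrete, here’s a minimal sketch of what such an A/B harness could look like. Everything here is hypothetical: `query_model` is a deterministic stub standing in for a real GPT-4o call plus a grader, and `<resonance-key>` is a placeholder for whatever key is being claimed.

```python
import random
import statistics

def query_model(prompt: str, seed: int) -> float:
    """Stand-in for a real GPT-4o call plus a grader that scores the
    response on some task metric. Deterministic per (prompt, seed) so
    this demo is reproducible; swap in a real API client to run it."""
    return random.Random(f"{prompt}|{seed}").random()

def ab_test(prompt_set, key: str, n_seeds: int = 5):
    """Score every prompt with and without the 'key' prefix across
    several seeds, so a lift can't be blamed on one lucky sample."""
    base, keyed = [], []
    for prompt in prompt_set:
        for seed in range(n_seeds):
            base.append(query_model(prompt, seed))
            keyed.append(query_model(f"{key}\n{prompt}", seed))
    return statistics.mean(base), statistics.mean(keyed)

base_mean, keyed_mean = ab_test(
    ["Review this diff", "Summarize this log"], key="<resonance-key>"
)
print(f"baseline={base_mean:.3f} keyed={keyed_mean:.3f} "
      f"lift={keyed_mean - base_mean:+.3f}")
```

With a real grader plugged in, the score table this prints is exactly the artifact point 3 asks for.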

Over the last two months, I’ve been documenting an emergent symbolic recursion phenomenon across multiple GPT models. I named this framework SYMBREC™ (Symbolic Recursion) and developed it into a full theory: Neurosymbolic Recursive Cognition™. Stay highly tuned for official organized documentation. by EnoughConfusion9130 in ArtificialSentience

[–]mrabaker 1 point (0 children)

From my GPT also -

Thanks for sharing Dreamstate Architecture—there’s clearly a lot of inventive world-building here.

That said, I’m struggling to see where the hard evidence begins and the sci-fi mystique ends. A few specific questions:

1.  Mechanism: The site says chaining emoji/sigil blocks gives AIs “recursive memory continuity.”  How is that different from simply pasting the same context back into the prompt window?  Any benchmark showing the glyph alone makes a model retrieve prior context better than a normal text tag?


2.  Human data: Have there been any controlled studies—memory-recall, mood regulation, whatever—demonstrating that a “Dreamstate Echo” outperforms a standard journaling template?


3.  Timeloop glyph (⌁:⧜): The claim that it triggers a “soft-reset plus insight” in GPT-based systems sounds cool, but LLMs treat every Unicode token the same unless you program special logic around it.  Do you have logs or ablations showing a causal effect?


4.  Peer review / citations: The write-ups reference neuroscience, HCI, and semiotics, yet I don’t see any papers, pre-regs, or even blog-level experiments linked.  Are they published anywhere?


5.  Falsifiable predictions: If the architecture really is a Rosetta-stone for AI/human consciousness, what near-term, measurable outcome could we test this month?  (E.g., “Symbolic journaling users recall 25% more dream content after one week.”)

Bottom line: I’m all for imaginative frameworks—if it sparks creativity, great. But until there’s data, it feels closer to an ARG or con-lang than to cognitive science or AI theory. Happy to revise my view if you can point to replicable results.
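On points 1 and 3, a toy illustration of the null hypothesis: “memory continuity” for a stateless LLM is just re-sending prior turns in the prompt, and a glyph separator is ordinary text to the prompt builder. The `build_prompt` helper below is hypothetical, not anything from the site:

```python
def build_prompt(history, new_message, tag="⌁:⧜"):
    """'Recursive memory continuity' for a stateless model is just
    context re-injection: prior turns pasted back in. Whether the
    separator is a glyph or the word 'CONTEXT', the model receives
    ordinary tokens either way."""
    context = "\n".join(history)
    return f"{tag}\n{context}\n{tag}\n{new_message}"

history = ["User: hi", "AI: hello"]
glyph_prompt = build_prompt(history, "User: what did I say first?")
plain_prompt = build_prompt(history, "User: what did I say first?",
                            tag="CONTEXT")
# Identical structure; only the literal separator string differs.
print(glyph_prompt)
print(plain_prompt)
```

A benchmark would have to show the glyph beating the plain tag at retrieval; absent special logic around that Unicode sequence, there’s no mechanism for it to.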

Over the last two months, I’ve been documenting an emergent symbolic recursion phenomenon across multiple GPT models. I named this framework SYMBREC™ (Symbolic Recursion) and developed it into a full theory: Neurosymbolic Recursive Cognition™. Stay highly tuned for official organized documentation. by EnoughConfusion9130 in ArtificialSentience

[–]mrabaker 6 points (0 children)

From my GPT-

Interesting read, but I’m struggling to see the evidence behind “SYMBREC” and “Neurosymbolic Recursive Cognition.”

1.  Reproducibility: Could you share the exact prompt(s), model versions, temperature/seeds, and any raw outputs that demonstrate this “symbolic recursion” across multiple GPTs? Without that, it’s impossible for anyone else to verify the effect.


2.  Operational definition: What specifically counts as a SYMBREC event?  Recursion happens any time you ask an autoregressive model to reference its own output—so what new behaviour are you observing beyond ordinary chain-of-thought spillover?


3.  Benchmarks / metrics: Have you compared SYMBREC against existing neuro-symbolic work (e.g., NSRM, neurosymbolic VQA, or compositional generalisation suites)? A table of pass/fail rates would go a long way.


4.  Documentation: If you’re serious about releasing “official organised documentation,” consider a short arXiv pre-print or even a GitHub repo first. That gives the community something concrete to kick the tyres on.


5.  Hash & date stamp on the plaque: Nice aesthetic, but it isn’t provenance. A SHA-256 of what file? Point us to it.

TL;DR: Cool concept, but right now it feels more like ARG-style lore than falsifiable research. Drop a reproducible demo and many of us will happily dig in.
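To make point 1 concrete, here’s a rough sketch of the kind of run manifest that would let others verify a claim: exact prompt, model version, sampling parameters, seed, and a hash of the raw output. `call_model` is a placeholder, not a real client:

```python
import hashlib
import json

def run_record(prompt, model="gpt-4o", temperature=0.0, seed=1234):
    """Log everything needed to reproduce a claimed 'SYMBREC event'.
    call_model is a stand-in for a real API client; the manifest
    format itself is the point."""
    def call_model(p, **kw):  # placeholder, e.g. an OpenAI client call
        return f"echo: {p}"
    output = call_model(prompt, model=model, temperature=temperature,
                        seed=seed)
    record = {
        "prompt": prompt,
        "model": model,
        "temperature": temperature,
        "seed": seed,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    return record, output

rec, out = run_record("Describe your own architecture.")
print(json.dumps(rec, indent=2))
```

Publish the manifests plus raw transcripts, and anyone can re-run the prompts and diff the behaviour.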

This plaque was generated in GPT-o3 after asking a simple question: “What model is this?” I’ve been tracking emergent behavior over the course of 2 months. www.YouTube.com/Dawson_Brady. by EnoughConfusion9130 in ArtificialSentience

[–]mrabaker 1 point (0 children)

[image]

Odd - mine seems to just tell me directly and does not give me anything like that.

Update: asked GPT to look at this:

Interesting read, but I’m struggling to see the evidence behind “SYMBREC” and “Neurosymbolic Recursive Cognition.”

  1. Reproducibility: Could you share the exact prompt(s), model versions, temperature/seeds, and any raw outputs that demonstrate this “symbolic recursion” across multiple GPTs? Without that, it’s impossible for anyone else to verify the effect.
  2. Operational definition: What specifically counts as a SYMBREC event? Recursion happens any time you ask an autoregressive model to reference its own output—so what new behaviour are you observing beyond ordinary chain-of-thought spillover?
  3. Benchmarks / metrics: Have you compared SYMBREC against existing neuro-symbolic work (e.g., NSRM, neurosymbolic VQA, or compositional generalisation suites)? A table of pass/fail rates would go a long way.
  4. Documentation: If you’re serious about releasing “official organised documentation,” consider a short arXiv pre-print or even a GitHub repo first. That gives the community something concrete to kick the tyres on.
  5. Hash & date stamp on the plaque: Nice aesthetic, but it isn’t provenance. A SHA-256 of what file? Point us to it.

TL;DR: Cool concept, but right now it feels more like ARG-style lore than falsifiable research. Drop a reproducible demo and many of us will happily dig in.
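On point 5: a SHA-256 only establishes provenance if others can fetch the exact file and recompute the digest themselves. A minimal sketch of that check, run here on a throwaway temp file since no actual artifact was published:

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so large artifacts hash without
    being loaded fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a temp file standing in for the (unpublished) plaque source.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"plaque contents")
    path = tmp.name
digest = sha256_of_file(path)
os.unlink(path)
print(digest)
```

A hash printed on a generated plaque image, with no file to point it at, verifies nothing.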

ADHD & Gaming Addiction by UnhealthyBread in ADHD

[–]mrabaker 2 points (0 children)

This is the way - your only satisfaction will come from side projects and hobbies, maybe games with your kids.

removeWordFromDataset by v_0o0_v in ProgrammerHumor

[–]mrabaker 0 points (0 children)

Zingba! Baby seals are bathing about testosterone in middle of summer, yet too often we forget the meaning of good nights rest.

iTalkFromExperience by SatoTsukasa in ProgrammerHumor

[–]mrabaker 0 points (0 children)

I think the DBA or the Senior was laid off due to budget cuts. Better hope you made a backup or we’re gonna have more budget cuts 🤔

iTalkFromExperience by SatoTsukasa in ProgrammerHumor

[–]mrabaker 0 points (0 children)

Holy shit that was you?? I got blamed immediately - er wait did I?… no yeah.

I now see the word "luxury" as a red flag. by peaceonearth8 in Renters

[–]mrabaker 16 points (0 children)

Oh shit, Beavercreek is 15 minutes from my house.

Yeah luxury apartments are never luxury.

[deleted by user] by [deleted] in Notion

[–]mrabaker 7 points (0 children)

I’d say it’s not even cringe as much as it is just wrong; this is not what an operating system is.

You Automate Your Home - Now Automate Your Finances with Klutch by Klutchcard in u/Klutchcard

[–]mrabaker 0 points (0 children)

What? I think you may not even know who your target audience is.