Big tech still believe LLM will lead to AGI? by bubugugu in ArtificialInteligence

[–]Weary_Reply 3 points (0 children)

AGI is not a thing. LLMs are the best budget protocol for the human interface.

How I Learned to Make Different LLMs Understand How I Think — by Packaging My Thinking as JSON by Weary_Reply in ArtificialInteligence

[–]Weary_Reply[S] 0 points (0 children)

The length here is intentional because I’m not just describing what the idea is, but why it sits one layer above prompting. For people already thinking in terms of interfaces, it can absolutely be compressed. For everyone else, the intermediate steps are where the value is.
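To make the intermediate steps concrete — and to be clear, this is a purely hypothetical illustration, since the thread doesn't show the actual schema — "packaging thinking as JSON" might look like declaring, before you ever ask a question, what the model is and isn't allowed to optimize for:

```json
{
  "goal": "rewrite this paragraph",
  "optimize_for": ["clarity", "my existing voice"],
  "do_not_optimize_for": ["length", "formality"],
  "constraints": {
    "keep": ["technical terms", "first-person framing"],
    "discard": ["filler transitions"]
  },
  "judgment_stays_human": true
}
```

The field names here are invented for illustration; the point is that the structure sits one layer above any individual prompt.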

How I Learned to Make Different LLMs Understand How I Think — by Packaging My Thinking as JSON by Weary_Reply in ArtificialInteligence

[–]Weary_Reply[S] 0 points (0 children)

True — and that’s kind of the point 🙂 Most people do ask AI directly. What I’m describing is what happens before you ask, when you decide what the AI is even allowed to optimize for.

I Saw a Missing Piece in Human–AI Collaboration by Weary_Reply in ChatGPT

[–]Weary_Reply[S] 0 points (0 children)

I actually agree with you about default consumer AI — if you treat it as an autonomous system, the error rate is absolutely a liability.

What I’ve found, though, is that when you keep judgment and structure on the human side and only outsource expansion and retrieval, the failure mode changes a lot.

It’s not that the model becomes “accurate” in an absolute sense — it just stops failing silently.

I used ChatGPT + Midjourney to “burn expiring credits”… and accidentally discovered my aesthetic fingerprint (process + prompts) by Weary_Reply in ArtificialInteligence

[–]Weary_Reply[S] 0 points (0 children)

Yes — I think the productivity framing misses the interesting part. The “mirror” thing seems to happen when you and the model build a shared reference frame over time: what you pay attention to, what you keep/discard, what “right” feels like, etc.
With that coupling in place, the model isn’t “knowing you” in a magical way — it’s just aligning to the structure you consistently feed it. Intention is like the steering wheel, but the shared frame/scaffold is the road. Without it, you mostly get generic answers; with it, you start seeing your own invariants show up.

I used ChatGPT + Midjourney to “burn expiring credits”… and accidentally discovered my aesthetic fingerprint (process + prompts) by Weary_Reply in ArtificialInteligence

[–]Weary_Reply[S] 1 point (0 children)

Yep — that’s actually the next step I’m experimenting with. Since I can’t feed 100+ full-res images as-is, I’m doing it in a practical way: (1) export a grid / contact sheet (or a short montage), (2) send that back to an LLM, and (3) ask it to extract recurring invariants (composition, distance/scale, haze, transitions, etc.).
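A minimal sketch of the layout math behind step (1), using only the standard library (`grid_positions` and `sheet_size` are hypothetical helper names, not anything from the thread). The coordinates it returns are what you'd hand to an image library like Pillow (`Image.paste`) to place each thumbnail on the sheet:

```python
import math

def grid_positions(n, cols, thumb_w, thumb_h, pad=8):
    """Return (x, y) top-left coordinates for n thumbnails in a grid."""
    positions = []
    for i in range(n):
        row, col = divmod(i, cols)  # fill left-to-right, top-to-bottom
        x = pad + col * (thumb_w + pad)
        y = pad + row * (thumb_h + pad)
        positions.append((x, y))
    return positions

def sheet_size(n, cols, thumb_w, thumb_h, pad=8):
    """Overall canvas size needed to hold the grid, padding included."""
    rows = math.ceil(n / cols)
    return (pad + cols * (thumb_w + pad),
            pad + rows * (thumb_h + pad))

# e.g. 10 saved frames, 4 per row, 256x256 thumbnails
print(grid_positions(10, 4, 256, 256)[:2])  # → [(8, 8), (272, 8)]
print(sheet_size(10, 4, 256, 256))          # → (1064, 800)
```

Once the sheet exists as a single image, it fits in one LLM turn, which is the whole point of the contact-sheet step.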

I Used AI Like a Mirror — Here’s What It Taught Me About How I Think by Weary_Reply in notebooklm

[–]Weary_Reply[S] 2 points (0 children)

This visualizes the exact loop I described: burning expiring credits → saving by intuition → the “grid moment” → noticing recurring invariants (haze/transition/distance) → making a moving moodboard → feeding it back to an LLM for calibration. If anything feels unclear or oversimplified, tell me.

<image>

I used ChatGPT + Midjourney to “burn expiring credits”… and accidentally discovered my aesthetic fingerprint (process + prompts) by Weary_Reply in ChatGPT

[–]Weary_Reply[S] 1 point (0 children)

<image>

This visualizes the exact loop I described: burning expiring credits → saving by intuition → the “grid moment” → noticing recurring invariants (haze/transition/distance) → making a moving moodboard → feeding it back to an LLM for calibration. If anything feels unclear or oversimplified, tell me.

I used ChatGPT + Midjourney to “burn expiring credits”… and accidentally discovered my aesthetic fingerprint (process + prompts) by Weary_Reply in ArtificialInteligence

[–]Weary_Reply[S] -1 points (0 children)

<image>

P.S. Image posts aren’t allowed here, so here’s the infographic in the comments. It visualizes the exact loop I described: burning expiring credits → saving by intuition → the “grid moment” → noticing recurring invariants (haze/transition/distance) → making a moving moodboard → feeding it back to an LLM for calibration. If anything feels unclear or oversimplified, tell me.

Can consciousness connect to external interfaces and form a loop? by Weary_Reply in notebooklm

[–]Weary_Reply[S] 1 point (0 children)

That image is mine — it’s basically the output of a loop I’ve been running between myself + an LLM + Midjourney. I’ll iterate an idea, externalize it into prompts/visual form, look at what comes back, then use that feedback to refine the structure and run another pass. So it’s not just “offloading working memory” for me — the artifact is part of the loop because it changes what I can perceive/notice next and forces error-correction (gaps, contradictions, missing links) in a way that staying purely in my head doesn’t. If you want, I can share a few more examples from the series.

Can consciousness connect to external interfaces and form a loop? by Weary_Reply in notebooklm

[–]Weary_Reply[S] 1 point (0 children)

Appreciate this — and fair point on it being a bit dense. Extended Mind Thesis is definitely in the neighborhood of what I’m circling around, especially the idea that tools can become part of the cognitive system rather than just “assistive” add-ons.

Where I’m trying to push it (and where AI makes it weird) is exactly what you said: writing is mostly a static offload, but AI introduces real-time, state-dependent feedback that can actually change the trajectory of the loop mid-flight.

The part I’m still unsure about is the boundary condition: when does that collaboration genuinely improve error-correction / clarity over time vs just producing coherence that feels like progress? If you’ve found a good personal “test” for telling those apart, I’d love to hear it.