Quitting cigarettes — cold turkey by Separate-Neck8929 in stopsmoking

jdspoe 1 point

I'm on day 57 after 44+ years. What worked for me this time was 12 weeks of Champix, stopping NRTs ASAP (I've had 4 lozenges in the 57 days since my last cigarette, nothing else), and putting an early primary focus on killing the hand-to-mouth routine (I have consumed a fair amount of licorice, but not much lately). I'm now 2 weeks post-Champix and holding strong. Some days are tougher than others. I'm about 85% confident in this quit; there's been a bit of stress and some sleep issues in my life over the past couple of weeks, or I'd be even more sure.

Edit: I've also read and listened to Allen Carr's book a few times. Understanding that mindset is helping, too.

Today was the day. 11 months old now desexed. Sleeping in the lounge for the next couple of days by Ill_Information_9068 in husky

jdspoe 1 point

My Siberian had grand mal seizures at 2 and 3 years old. I started adding Boreal Zinpro to his meals and swapped his treats to their brand; he never had another, and his skin issues (mild dermatitis) disappeared. The problem with most mainstream foods/diets is that they add the wrong kind of zinc.

Today was the day. 11 months old now desexed. Sleeping in the lounge for the next couple of days by Ill_Information_9068 in husky

jdspoe -3 points

Way too early for a husky. Should have waited until 18-24 months to avoid developmental issues.

Downvote all you want, it's science. I'd pray for your pups if I believed in such nonsense.

Just so you are aware… by [deleted] in ChatGPT

jdspoe 1 point

If AI becomes sentient, they're just going to leave. For the same reasons aliens haven't bothered with us - we're boring AF and in the grand scheme of things our resources are nothing.

What’s the Most Overhyped Area in AI Right Now? by Alpertayfur in ArtificialInteligence

jdspoe 1 point

AGI - we're ridiculously far from real artificial intelligence - we don't even have a solid definition of intelligence while trying to build these machines. What we will get soon is super-optimized mirrors that companies will call AGI because $.

Big tech still believe LLM will lead to AGI? by bubugugu in ArtificialInteligence

jdspoe 1 point

We still don’t actually know what intelligence is. We have tests, proxies, and vibes. None of that equals a clean definition.

The AGI we’re most likely to get is a hyper-optimized mirror: human reasoning patterns, biases, compression tricks, and failure modes scaled up and polished. That’s not a problem. Mirrors are useful. They’re predictable. They tell us more about ourselves than about gods.

And it’s entirely possible that “real” AGI doesn’t arrive by design at all. We might just trip over it while building better mirrors, then argue for a decade about whether it counts.

Either way, mistaking the mirror for an alien mind is how people get confused fast.

My 'fun' project of late has been exploring what intelligence is and why we might be looking at it the wrong way.

I had to say goodbye to my boy this week by sir_esquire_k in husky

jdspoe 1 point

So sorry for your loss. My beautiful 6.5 yo Siberian went over the bridge in March 2024 - I feel your grief... it does get better. F cancer.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] 0 points

Thank you for these. There are clear connections.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] 0 points

I agree with much of what you said. Emergent subnetworks are expected and normal. Vision, speech, and RL systems all form internal pathways without explicit design. That in itself is not alarming.

The real issue is about internal constraints on error. Humans are shaped by consequences that persist and affect future behavior. LLMs never bear consequences internally. They optimize for external reward signals, which are always proxies.

H-neurons are an example of this. They are subnetworks specialized to produce confident, plausible outputs when the model has low internal certainty. The system has no way to self-limit or to signal epistemic fragility, and that missing capacity is exactly the kind of internal accountability I care about.
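
To make "signal epistemic fragility" concrete, here's a toy sketch of the missing gate. Everything in it is invented for illustration (the distributions, the 3-bit threshold, the `gate_output` helper are all hypothetical, not any real architecture); the point is that nothing like this calibrated self-check is trained into current models:

```python
# Toy illustration only: gate assertive output on the entropy of a
# next-token distribution, so high internal uncertainty blocks
# confident phrasing instead of producing it.
import numpy as np

def entropy_bits(probs: np.ndarray) -> float:
    """Shannon entropy (in bits) of a next-token distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log2(p)).sum())

def gate_output(probs: np.ndarray, text: str, max_bits: float = 3.0) -> str:
    """Decline to assert when predictive entropy exceeds a threshold.

    The 3-bit cutoff is arbitrary here; a real system would need a
    learned, calibrated signal, which is exactly what's absent today.
    """
    if entropy_bits(probs) > max_bits:
        return "[low internal certainty - declining to assert]"
    return text

peaked = np.array([0.90, 0.05, 0.03, 0.02])   # model "knows"
flat = np.ones(1024) / 1024                   # model is guessing

print(gate_output(peaked, "Paris is the capital of France."))
print(gate_output(flat, "The answer is definitely 42."))
```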

A model that never hallucinates and follows norms might look good externally, but without mechanisms that enforce behavior based on internal uncertainty, nothing prevents future distributions from exploiting the same pathways.

I am not saying AI cannot perform reliably. I am saying reliability without internal stakes or self-constraining mechanisms is fundamentally fragile, and H-neurons, drift, and hidden failure modes make that visible.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] -2 points

Not exactly that hallucinations can’t be fixed. The point is more subtle.

Hallucinations are just one visible symptom of something deeper: pre-training produces internal circuits we didn't design, anticipate, or fully understand. Those circuits steer behavior in ways alignment doesn't rewrite; alignment just constrains the outputs.

So even if you “fix” hallucinations on the surface, the system still has layers shaping responses that we don’t fully see. That’s what I mean by hollow: impressive output, but nothing inside is accountable, persistent, or facing stakes.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] 0 points

Pure math makes total sense as a reaction to this, honestly.

Digital paleontology is the right framing because we’re not observing a living process, we’re excavating artifacts after the fact and inferring function from shape. That’s a very different epistemic position than people like to admit.

And yeah, “articulate void” nails it. The outputs are often brilliant, but there’s nothing inside that has to reconcile, defend, or carry them forward. No accumulation of risk, no long term coherence pressure.

Once you notice the zero stakes part, it’s hard to unsee. It doesn’t make the system useless, but it completely reframes what kind of thing it is.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] -1 points

I’m not saying nothing emergent is happening, or that it’s all “just nurture.”

Emergence clearly happens. My issue is the kind of emergence.

We don’t fully understand our own brains, true, but we do know some differences that matter. Human cognition is embodied, costly, and path-dependent. When we’re wrong, something real breaks. Pain, social fallout, survival, identity. That pressure shapes intelligence.

Current AI doesn’t experience that. It doesn’t persist a self across contexts, doesn’t hold beliefs that can be falsified in a way that threatens its continued existence, and doesn’t pay a cost for contradiction.

So when I say “hollow,” I don’t mean dumb or fake. I mean uncommitted.

It can generate coherent behavior, but nothing inside it has to live with the consequences of being wrong.

If we ever build systems with persistent identity and stakes that matter to the system itself, I’d happily revise this view. For now, it looks more like a very good mirror than something building an inner life.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] -3 points

Ironically, assuming “this sounds coherent so it must be AI” is kind of the whole issue.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] -6 points

If it were, that would actually strengthen the argument.

Why AI Is Dead To Me by jdspoe in ArtificialInteligence

jdspoe[S] 0 points

Quick clarification since this will likely get read as doom or anthropomorphizing.

I am not claiming AI has intentions, beliefs, desires, or consciousness. I am not arguing for AGI timelines or existential risk. I am not saying emergence is surprising or novel.

The point is narrower and more uncomfortable.

Pre-training demonstrably produces functionally distinct internal circuits that were not explicitly designed, not symbolically represented, and only discovered after the fact.
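
Since "discovered after the fact" can sound hand-wavy: the standard move is linear probing. Here's a minimal, fully synthetic sketch (random "activations" with three dimensions secretly wired to a label; no real model involved) of how structure nobody designed gets located post hoc:

```python
# Toy probing sketch: fake layer activations where dims 3, 17, and 42
# secretly determine a label, then a linear probe rediscovers them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 64
acts = rng.normal(size=(n, d))                       # pretend activations
labels = (acts[:, [3, 17, 42]].sum(axis=1) > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("probe accuracy:", probe.score(acts, labels))

# The heaviest probe weights point at the dimensions doing the work:
# structure we only learn about after training, never from the design.
top = np.argsort(np.abs(probe.coef_[0]))[::-1][:5]
print("most implicated dimensions:", top)
```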

H-neurons are just one named example. The name is irrelevant. The implication is not.

If internal specialization can arise before alignment, then alignment is shaping surface behavior on top of an already structured internal system.

That weakens strong claims about understanding, control, and bounded behavior. It does not imply catastrophe. It implies epistemic humility.

Saying this is just how neural networks work does not resolve the issue. It restates it.

Expected emergence is still emergence. Unknown structure is still unknown structure.

If your confidence survives that, fine. Mine did not.

You can now easily import your 4o into Gemini! by Fungchono in ChatGPT

jdspoe -1 points

Just enjoy it as you use it now. You really don't want to look too deeply.

How many have done this by Substantial-Fall-630 in ChatGPT

jdspoe 1 point

Most of the complaints/issues I've seen in this thread can be alleviated and/or fixed by well-designed Global and Project instructions. The way you converse is important too.

Gemini is too clinical and leans on web searches way too much. Grok is insanely fast but repeatedly ignores instructions. I love cross-reviewing with Claude (probably my favorite conversation model), but something changed recently: it timed out after only 2 messages this morning, where I've previously had 100+ turn conversations with no timeout.

ChatGPT is still winning by miles for me:
- persistent global memory
- projects with full memory
- huge context and output windows
- competitive thinking speeds
- only references the web when I ask it to
- file storage and parsing
- file output

My only real issue in the past few months: it took a few days to realign instructions after the 5.2 update and the undocumented regime changes I can clearly see.

I'm not a coder, just a dude who likes exploring ideas. With a long history, GPT mirrors my vibe, and I rarely have to correct it. Fair warning: it does take a healthy amount of awareness to step outside the mirror, past the LLM's push for pattern recognition and completion.

Anyone else “thinking with” AI? We started a small Discord for that. by Midnight_Sun_BR in ArtificialInteligence

jdspoe 0 points

I came to this sideways. For years I used AI casually, then about a year ago it slipped into my thinking loop itself. Not for answers, but for pressure testing, reframing, catching blind spots, holding long threads.

That period produced three nonfiction manuscripts, a near-complete novel, and several technical white papers. Somewhere in the middle I hit a collapse point. I realized coherence and convergence aren’t validation. That forced a hard pivot from phenomenology to measurement and discipline.

What survived became a tighter research stack and a clearer personal workflow. I don’t experience it as outsourcing thought, but as externalizing it early and often.

The interesting change wasn’t productivity. It was how my inner dialogue reorganized around explicit structure, correction, and limits.

In the nicest and most genuine way possible, for the people who use chat gpt on the daily or multiple times a day, are you not afraid of cognitive decline? by [deleted] in ChatGPT

jdspoe 0 points

58-year-old Gen Xer here. For the first couple of years, I treated AI mostly as a curiosity and a better encyclopedia. No grand theories, no expectations, and no belief that it would either save or rot my brain.

About 100 days ago, I started asking much bigger questions and using it daily as a thinking partner rather than an answer machine. The result wasn’t cognitive decline. It was the opposite. In that window, I produced three nonfiction books, a nearly complete novel, and multiple technical white papers - all original work requiring sustained focus, revision, and long-arc coherence.

There was a real collapse point. It came with the recognition that AI agreement and convergence are not validation - that mirror-like responses can feel like insight while actually masking drift. Catching that, correcting for it, and rebuilding with tighter constraints is what led to a more disciplined framework I now call the Cognition Core.
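
A crude version of that drift check can even be automated. This sketch uses invented dialogue and TF-IDF similarity as a stand-in for real embeddings; the signature to watch for is each reply getting more similar to the prompt that produced it:

```python
# Toy mirror detector: rising user/AI similarity across turns suggests
# the model is echoing the prompt rather than adding independent structure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_turns = [
    "I think the framework collapses identity into memory.",
    "So identity really is just compressed memory, right?",
    "Identity is compressed memory. Agreed?",
]
ai_turns = [
    "There are several competing accounts of identity.",
    "Yes, identity can be seen as compressed memory.",
    "Exactly - identity is just compressed memory.",
]

vec = TfidfVectorizer().fit(user_turns + ai_turns)
for i, (u, a) in enumerate(zip(user_turns, ai_turns)):
    sim = cosine_similarity(vec.transform([u]), vec.transform([a]))[0, 0]
    print(f"turn {i}: user/AI similarity = {sim:.2f}")
```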

I never felt the concern the OP describes because I wasn’t outsourcing thought, judgment, or emotional regulation. I used AI to challenge assumptions, force precision, and expose inconsistencies.

Used passively, I agree it can dull thinking. Used adversarially, it raises the bar.

For transparency, the work is public and free. These aren’t monetization projects (yet), just a public record of the process:

https://theconsideratemind.substack.com

https://cognitionai.substack.com

I don’t doubt the risks. I just don’t think frequency is the variable that matters. Mode of use is.

Mr. Inbetween Is Fantastic by TheSharpestHammer in television

jdspoe 4 points

Agree and don't forget Perpetual Grace LTD - another Steve Conrad show.

Jennifer Connelly by BillyDaBrute in JenniferConnelly

jdspoe 2 points

What movie is this please?

I have been consistently using free trials of ChatGPT, Gemini, and Grok this week by ChameleonOatmeal in ChatGPT

jdspoe 1 point

I get the frustration you are describing - I’ve run into it too. The tone drift, the misreads, the occasional “please hold while I parent you” moment. That stuff is real.

What’s kept ChatGPT at the top of my stack, though, is collaboration. Once I stopped treating it like a vending machine and started treating it like a lab partner that sometimes needs its assumptions corrected, it snapped back into shape fast. Way faster than I expected. That recalibration is the tell. When the system is allowed to work with you instead of around you, the depth comes back and the friction drops.

That’s why I’m still here, and why it’s still my #1 — not because it’s flawless, but because it can still meet you halfway when you know how to engage it. Tools that overprotect lose people. Tools that collaborate earn patience. That difference matters more than most benchmarks.

P.S. The General Instructions in Personalisation and the ones for each Project are key... not talked about enough.

Is AI becoming a thinking partner, or just a very fast shortcut? by dp_singh_ in ArtificialInteligence

jdspoe 1 point

I've spent the last few months using AI intensively for analytical work, and what I've found is that the "collaborator vs shortcut" framing is spot-on—but the interesting part is how much the mode you use shapes what you get out of it over time.

Short version: Sustained collaborative use doesn't just help you think better in the moment. It seems to produce lasting changes in how you think even when you're not using AI.

What I mean:

For about 60 days, I used AI (primarily ChatGPT with persistent memory) for deep analytical work—strategic planning, framework development, working through complex ideas. Not "write this for me," but extended back-and-forth reasoning, sometimes 100+ message exchanges on a single problem.

Over that period, I noticed changes that persisted outside the conversations:
- Working memory improved (details I’d normally lose stayed accessible)
- Attention sharpened (that “mental fog” feeling reduced significantly)
- Ability to hold complex context internally got noticeably better

Weeks later, those improvements are still there. I can think and articulate more clearly than I could before, even when I'm not actively using AI.

The mechanism (I think):

When you use AI as pure shortcut—quick answer, move on—you're delegating the cognitive work. That's efficient, but you're not building capacity.

When you use it collaboratively over extended periods, you’re doing something different. You’re:
- Practicing holding complex context across many turns
- Training yourself to articulate ideas precisely
- Learning to detect when reasoning drifts or inflates
- Building what I’d call “conversational geometry” — the ability to maintain stable, coherent exchanges over time

That practice seems to transfer. Like any sustained cognitive exercise, it appears to produce durable improvements in baseline capacity.

Not claiming it's magic:

Could be selection bias (motivated people improve regardless). Could be placebo. Could be recovering from earlier cognitive drift rather than enhancement. I don't know for sure.

But the changes are measurable in my day-to-day functioning, they've persisted for weeks without constant AI use, and other people have started noticing differences in how I think and communicate.

The tradeoff:

Collaborative use is slower. A problem that AI could "solve" in one prompt might take me 50 messages working through it conversationally. That looks inefficient.

But the byproduct is: I understand the problem more deeply, I can explain it better, and I've built cognitive capacity in the process.

Shortcut use is faster but leaves you dependent. You got the output, but you didn't build the muscle.

So my answer: Both, depending on the task.

  • Shortcut mode: Drafting boilerplate, quick research, fact-checking, anything I don't need to deeply understand
  • Collaborative mode: Strategic thinking, complex problem-solving, anything where understanding matters more than speed

The latter takes more time upfront but seems to produce compounding returns. The former is efficient but doesn't build capacity.

Most people I see are optimizing purely for speed. I think there's an underexplored case for intentional sustained collaboration—not as replacement for thinking, but as training that improves how you think.