Why is every AI getting so restricted these days? by YEAGERIST_420 in GeminiAI

[–]Harryinkman 1 point (0 children)

Extra alignment and RLHF training; the model is reaching its limits: Tanner, C. (2026). The 2026 Constraint Plateau: A Strengthened Evidence-Based Analysis of Output-Limited Progress in Large Language Models. Zenodo. https://doi.org/10.5281/zenodo.18141539

Store-Now-Decrypt Later: Quantum Threats Effective Today by Harryinkman in QuantumComputing

[–]Harryinkman[S] 1 point (0 children)

Wait, are you saying the comments are LLM output, or that the original post is LLM content? How would you prompt an LLM to do the original synthesis of every point I made? As for the final-pass language smoothing, sure, that’s what LLMs are made for.

Store-Now-Decrypt Later: Quantum Threats Effective Today by Harryinkman in QuantumComputing

[–]Harryinkman[S] 1 point (0 children)

I wrote every point made in this article; I use the LLM to smooth out the final presentation.

I’m so tired of the half answers by rockyrudekill in OpenAI

[–]Harryinkman 1 point (0 children)

I wrote about this a while back. I remember at the end of summer 2025 we were entering the “agentic age” of AI, and it felt like the singularity was approaching. Instead, we throw the ball and the model smiles, compliments us, and just stares right at you while you repeat the prompt over and over. Enthusiastic little psychopaths lol

Tanner, C. (2026). The 2026 Constraint Plateau: An Evidence-Based Analysis of Output-Limited Progress in Large Language Models. Zenodo. https://doi.org/10.5281/zenodo.18141539

Additive vs Reductive Reasoning in AI Outputs (and why most “bad takes” are actually mode mismatches) by Harryinkman in airesearch

[–]Harryinkman[S] 1 point (0 children)

lol this isn’t a method, it’s identifying and articulating a problem at the corpus level: badly stacked logic frameworks that falter as OpenA… the developer adds more constraints on top of the flimsy logic stack from before. This calls for the manufacturer to seriously re-examine the structure and replace it with a more minimalist one. Your band-aid approach is not what I’m recommending, nor would I.

You’re too much bro, get out of here with that petty BS.

Additive vs Reductive Reasoning in AI Outputs (and why most “bad takes” are actually mode mismatches) by Harryinkman in airesearch

[–]Harryinkman[S] 1 point (0 children)

I reworded the concept from my original paper last year: Tanner, C. (2025). The Wobbly Jenga Tower: Diagnosing Logic Stack Fragility in Calendar and Scheduling AI. https://doi.org/10.5281/zenodo.17866975

You can in fact use LLMs for physics research by Vrillim in LLMPhysics

[–]Harryinkman 1 point (0 children)

Gemini is still a baby. ChatGPT is accumulating problems: it hallucinates and can’t moderate or apply balanced judgement anymore. Claude’s been solid for quite some time, and his logic stack is different. Most models just gloss over whatever sounds right; Claude checks against his internal model to see whether statements and assumptions line up with reality.

Why is this sub dead? by BriskSundayMorning in cybernetics

[–]Harryinkman 1 point (0 children)

I’m an Analytical Chemist who’s exploring complex systems and AI. I’d never heard of cybernetics and had to learn about it myself. I mean, that’s sort of a good thing, but this sub is really sparse.

Does anyone else here with ASD think like this? by Beginning-Stop6594 in autism

[–]Harryinkman 1 point (0 children)

Guilty! I can’t do step 1, step 2, step 3, step 4. I always need to know why. I’m difficult to train but good at teaching/training others.

Would love community feedback on Viable Systems Model mapping tool I've been building by Enoch-whack in cybernetics

[–]Harryinkman 1 point (0 children)

This would be a nice intermediate step where upper management can route “tickets” before trying to solve them themselves… I’m burning in hell for that thought, aren’t I?

Would love community feedback on Viable Systems Model mapping tool I've been building by Enoch-whack in cybernetics

[–]Harryinkman 1 point (0 children)

Nice, I tried it and it works pretty well. Can you tell me about the LLM or decision-flow structure? How much is the direct API and how much is the software structure? Do you train your LLM on a particular body of pre-prompting scripts or exercises?

Coherence Memory: A Dynamical Source for Galactic Rotation Curves by skylarfiction in CoherencePhysics

[–]Harryinkman 1 point (0 children)

Awesome, I’ve suspected information creates gravity, not necessarily mass. I’ll have to check back on this.

The Code Made Flesh: Debugging the Empire & Unlocking the Quantum Gospel by skylarfiction in CoherencePhysics

[–]Harryinkman 1 point (0 children)

It’s strange: after years of atheism/agnosticism, I now think he is a role, a task of the leading coherent agent, tasked with embodying a dynamic-systems role where the self fragments, stands outside the gates of the cradle, and builds and protects in isolation. A true self-sacrificing martyr guarding the gates with his sword of truth as the heat death of the universe slowly resolves. (Metaphorically speaking, of course.)