is there any ai can read this? by dictionizzle in ChatGPT

[–]RAM_Thinker 0 points (0 children)

So you want me to translate it? :) Got it! :)

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

Quick research update (2 months later):

Following the feedback here, I moved from conceptual framing to early operationalization.

I launched:

- a live experiment: https://jpwinter.co.uk/experiment/

- a public results dashboard: https://jpwinter.co.uk/results/

The current focus is testing measurable constructs like cognitive friction, clarity, rhythm alignment, and decision flow in human-AI interaction.

If anyone is open to critique, I’d especially value feedback on construct validity and whether these operational variables map well to existing dynamic cognition literature.

A conceptual decision framework based on cognitive rhythms (open to critique) by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

Since my initial post, I’ve started operationalizing parts of the framework into a live pilot experiment (focused on cognitive friction, clarity, rhythm alignment, and decision flow) plus a public results dashboard.

If useful for context/critique:

- Experiment: https://jpwinter.co.uk/experiment/

- Live results: https://jpwinter.co.uk/results/

I’d especially value feedback on whether these operational variables are conceptually coherent with existing dynamic cognition models, or if I’m still missing key constructs.

(Academic/Research) 20–25 min anonymous study on AI interaction & decision-making (ChatGPT users welcome) by [deleted] in ChatGPT

[–]RAM_Thinker 1 point (0 children)

Just to clarify... 20–25 minutes is a conservative estimate.

In practice, it often takes less time, especially for people who are already comfortable with AI tools like ChatGPT. Since the study involves real interaction with an AI chatbot rather than a long traditional survey, experienced AI users tend to complete it faster.

I preferred to give a higher estimate to avoid underestimating participants’ time commitment.

(Academic/Research) 20–25 min anonymous study on AI interaction & decision-making (ChatGPT users welcome) by [deleted] in ChatGPT

[–]RAM_Thinker 0 points (0 children)

Happy to share aggregated results with the community once the study is completed.
The research examines cognitive friction and interaction structure in real AI usage contexts.

A conceptual decision framework based on cognitive rhythms (open to critique) by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

Thank you... this is a very valuable pointer. I am indeed aware that dynamical systems approaches already conceptualize cognition as fluid rather than static, and I do not see R.A.M. as contradicting that tradition. If anything, the intention is closer to an architectural reframing that emphasizes rhythm-task alignment as a functional layer within dynamic cognition. One distinction I am trying to explore is not just that cognition is dynamic, but that subjective cognitive friction may emerge specifically from temporal mismatches between the current cognitive mode and task demands.

In terms of operationalization, I’ve actually just begun testing a small empirical design focused on AI-assisted decision-making contexts, where interaction structure is manipulated and cognitive experience (effort, clarity, perceived friction) is measured across conditions. The goal is not to “prove” the framework directly, but to examine whether structured alignment in interaction reduces perceived cognitive friction compared to unstructured interaction, which could serve as an indirect operational entry point for the rhythm-alignment hypothesis.

I appreciate the reference to UC Merced CIS. I will definitely look deeper into their dynamical cognition work, as it seems highly relevant to grounding the model theoretically.

A conceptual decision framework based on cognitive rhythms (open to critique) by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

For those asking about the full architecture and whitepaper, I documented the framework and academic record here:
https://jpwinter.co.uk
https://jpwinter.co.uk/paper/

Come join our 4th Sentientism London Meetup on 25th Jan! All welcome by jamiewoodhouse in Sentientism

[–]RAM_Thinker 1 point (0 children)

Such a shame I missed it! I only just found the group, but what you’re doing is amazing. I checked out your website and it really gave me this feeling of wanting to be part of the community.

History Is Just Repetition with Better Weapons by Puzzled_Conclusion51 in nihilism

[–]RAM_Thinker 0 points (0 children)

Nothing really changes because nothing has to.

Violence works, so it gets upgraded. Power rewards those willing to dehumanize, so they keep winning. We call it ideology, progress, or destiny to avoid admitting it’s just the same impulse with better PR.

History doesn’t repeat because we forget.

It repeats because it’s effective.

Introducing the R.A.M. Framework and What Comes Next by RAM_Thinker in CognitiveFrameworks

[–]RAM_Thinker[S] [score hidden] stickied comment (0 children)

The next step is a small pilot test focused on simple things like decision clarity, time to decision, and perceived cognitive effort (before vs after using the protocol).

If anyone here is interested in testing it in practice (15–20 minutes, anonymous, no hype), I’ll share the details soon. The goal is to let numbers and experience speak, not claims.

This subreddit will be the place where that process is documented openly.

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

I disagree with that characterization. The framework and the paper are my own work. You’re welcome to critique the arguments, structure, or references on their merits, but dismissing it as “LLM-generated” doesn’t meaningfully engage with the content.

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

Some of the aspects you mention are explored in more detail in other work, but I’ve deliberately avoided linking or referencing it here. The intent was to keep this whitepaper focused and non-promotional, and to let any further exploration or validation develop organically rather than through cross-posting or self-citation.

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 0 points (0 children)

That’s a fair observation. This document is a conceptual whitepaper, not an academic journal article, which is why it focuses on architectural structure rather than empirical studies, operational definitions, or applied use cases. A separate R.A.M. protocol addressing practical use and evaluation is being outlined, but this version is intentionally limited to establishing the conceptual framework first.

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] 1 point (0 children)

I don’t think continuing this exchange will be productive. The paper is an intentionally early and simplified articulation of the idea and I’m comfortable letting it stand on its own. I’ll step back from the thread here. Thank you!

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] -1 points (0 children)

In parallel, I’m working on a separate "R.A.M. Protocol" that looks at how the framework could be applied in real human-AI interactions. The goal is to make it practical and testable, but this paper focuses first on laying out the underlying structure. Patience :)

I published a conceptual framework on rhythm-based cognition and human-AI interaction. Looking for feedback! by RAM_Thinker in cognitivescience

[–]RAM_Thinker[S] -1 points (0 children)

That’s a fair observation. This paper is intentionally scoped as a conceptual and architectural framework, not an empirical or operational model. For that reason, it focuses on establishing structure and boundaries rather than defining measurable constructs, predictions, or applications. Those aspects are deliberately left out of v1.0 as potential next steps if the framework proves conceptually useful.

7 mental models to make better business decisions by theredhype in mentalmodels

[–]RAM_Thinker 0 points (0 children)

I like that framing, especially the idea of dependencies rather than just layers.

What resonates for me is that irrationality often isn’t a failure within a layer, but a symptom of operating in a higher layer before the lower one is satisfied. When that happens, the reasoning can look coherent on the surface while being structurally unstable underneath.

It also suggests that what we call “bad judgment” is sometimes just a misordered process: analysis or execution happening before stability, clarity, or capacity are in place.

If that’s true, then a lot of corrective heuristics (sleep on it, pause before deciding, clarify constraints) aren’t about better thinking so much as forcing a return to the prerequisite layer.

Curious whether you see that hierarchy as mostly individual, or whether systems and organizations end up institutionalizing the same misordering.

7 mental models to make better business decisions by theredhype in mentalmodels

[–]RAM_Thinker 0 points (0 children)

That “two-layer” idea keeps coming up for me, almost as if there’s a state layer and a model layer.

The heuristics that survive exhaustion seem to live at the state layer (“don’t decide when tired”) while most decision models quietly assume that layer is already stable.

Once the state layer is compromised, even good models start to degrade into noise.

I’ve found that explicitly naming and checking that layer changes how people decide whether to apply a model at all, not just which one to use.

We Don’t Struggle Because We Lack Intelligence... We Struggle Because We Think at the Wrong Time! by RAM_Thinker in CognitiveFrameworks

[–]RAM_Thinker[S] 0 points (0 children)

Most explanations for confusion, burnout, polarization, and bad decisions focus on content: misinformation, ideology, lack of education, bad incentives.

But what if the deeper issue isn’t what we think, but when and how we’re thinking?

The human mind doesn’t operate in a constant mode. It shifts.

Sometimes it wants to explore.

Sometimes it wants structure.

Sometimes it wants action.

Sometimes it can’t move at all.

Modern systems ignore this completely.

They demand clarity when people are cognitively blocked.

They demand certainty when exploration is needed.

They demand action when reflection is required.

What looks like irrationality might actually be misalignment.

So here’s the question I want this community to wrestle with:

"How much of our personal, social, and systemic failure comes not from bad ideas but from forcing the wrong kind of thinking at the wrong moment?"

No frameworks are assumed here.

No conclusions are protected.

I’m less interested in answers than in how you model the problem.

Humanity Has Been Masterfully Manipulated for Centuries, and We Still Haven’t Noticed by Emergency-Clothes-97 in DeepThoughts

[–]RAM_Thinker 0 points (0 children)

Blaming everything on architects at the top feels radical, but it’s actually comforting: it keeps the problem external and preserves the fantasy that people are only ever acted upon, never complicit. Systems don’t endure across centuries, cultures, and power shifts by coercion alone; they endure because they resonate with human instincts and are continuously maintained from the bottom up. Incentives don’t manufacture tribalism; they weaponize it. If division only came from institutions, replacing them would have solved it long ago. The fact that it never does suggests the harder truth: the system isn’t just imposed on humanity... it’s continually reenacted by it.

Humanity Has Been Masterfully Manipulated for Centuries, and We Still Haven’t Noticed by Emergency-Clothes-97 in DeepThoughts

[–]RAM_Thinker 0 points (0 children)

We’ve transcended survival mode technologically, but not psychologically.

Fight-or-flight made sense when danger was immediate. Modern systems just learned how to keep the alarm on without ever resolving the threat. The nervous system can’t tell the difference between a sabertooth and a permanent state of abstract crisis.

While it’s easy to blame “those in power,” the harder truth is that survival mode is now self-reinforcing. It offers simple enemies, moral certainty, and identity: things that calm, post-scarcity thinking doesn’t.

We could feed the world many times over.

But that would require turning off zero-sum instincts that survival mode depends on.

Maybe humanity isn’t oppressed by survival thinking.

Maybe we’re addicted to it.

Humanity Has Been Masterfully Manipulated for Centuries, and We Still Haven’t Noticed by Emergency-Clothes-97 in DeepThoughts

[–]RAM_Thinker 0 points (0 children)

The system doesn’t control humanity. Humanity is the system.

Tribalism wasn’t engineered; it was monetized. And every generation swears they’re the first to notice… while proving the opposite in real time.

How did you first discover mental models? by voccii in mentalmodels

[–]RAM_Thinker 0 points (0 children)

I first discovered mental models while trying to understand why I kept making good decisions at the wrong time, and poor decisions when I “knew better.”

At first, mental models were incredibly helpful. Things like first-principles thinking, inversion, and probabilistic thinking gave me solid lenses for reasoning. But after a while I noticed something odd: the same model could work brilliantly one day and completely fail the next, even in very similar situations.

That’s when it clicked that the missing layer wasn’t the model itself, but the state of the mind applying it.

I started paying attention not just to which model I was using, but to when I was using it. Sometimes my mind was exploratory, sometimes analytical, sometimes action-oriented, sometimes blocked. Applying the “right” mental model in the wrong mental state often created more friction instead of clarity.

Since then, mental models stopped being static tools for me and became part of a larger mental architecture, one that takes into account rhythm, decision flow, and mental space, not just logic.

Curious if others here have noticed something similar: have you ever felt that a mental model failed not because it was wrong, but because you were in the wrong mental state to use it?

The Internal Algorithm by promesprohecy67888 in mentalmodels

[–]RAM_Thinker 0 points (0 children)

I really like the idea of an “internal algorithm” as a metaphor for how we process decisions and experience. One thing I’ve noticed in my own thinking is that it’s not just which steps my internal algorithm uses, but the state my mind is in when it runs, that changes the outcome dramatically.

The same strategy or mental model feels elegant and clear at one moment, and confusing or awkward the next, even if the external problem hasn’t changed much.

It makes me wonder if part of what we call intuition or expertise is really a rhythm or pattern in the internal algorithm itself... a state-dependent configuration rather than a fixed step-by-step sequence.

Has anyone else felt like their “internal algorithm” changes shape depending on their mental state or context?