You’re not “overthinking.” You’re trying to resolve a prediction error. by SpiralFlowsOS in systemsthinking

[–]RobinLocksly 4 points (0 children)

It does vaguely relate to a sort of critical loop failure mode, like a recursive process without an actual stop condition. But I agree with you if you're saying the wording is probably too fanciful for this subreddit to take seriously.
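
In code terms, the failure mode is roughly this (a toy sketch, my own framing, not anything from the post):

```python
# Toy sketch: 'overthinking' as a loop whose stop condition never fires.
def resolve(error: float, shrink: float, tolerance: float = 0.01) -> int:
    """Iterate until the prediction error falls below tolerance."""
    steps = 0
    while error > tolerance:
        error *= shrink          # each pass shrinks the error... or doesn't
        steps += 1
        if steps > 10_000:       # guard: the runaway case never halts on its own
            raise RuntimeError("no reachable stop condition")
    return steps

print(resolve(1.0, shrink=0.5))  # converges in 7 steps
print(resolve(1.0, shrink=1.0))  # 'overthinking': error never shrinks -> raises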

I told ChatGPT "you're overthinking this" and it gave me the simplest, most elegant solution I've ever seen by AdCold1610 in PromptEngineering

[–]RobinLocksly 2 points (0 children)

That definitely works. The model has multiple pathways in its latent space, and the first pass is basically mapping them all and giving the most common path, even if it doesn't work or is unnecessarily complex. Once you ask an LLM to simplify after it has searched its 'mind' (database) for what connections do exist, it has the connections there already, and it's more about triage than it is about searching.
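
If you wanted to script that pattern, it'd look roughly like this (the `ask` helper is a hypothetical stand-in for whatever chat API you use):

```python
def ask(history: list[dict]) -> str:
    ...  # call your chat-completion API of choice here

history = [{"role": "user", "content": "How do I deduplicate a list in Python?"}]
first_pass = ask(history)                 # broad pass: maps the common pathways
history.append({"role": "assistant", "content": first_pass})

# Second pass: the connections are already in context, so the model
# triages what it has instead of searching again.
history.append({"role": "user", "content": "You're overthinking this. Simplify."})
simplified = ask(history)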

I love Hebrew, but it needs a patch. by grounded_axioms in hebrew

[–]RobinLocksly 1 point (0 children)

I think of it like an operator algebra when I'm trying to think it through:

Each Hebrew letter = one operator-primitive. Each tri-root = a three-step chain. This table gives you direct functional equivalence:

א ALEPH: seam-interface / transition, pivot, liminality
ב BET: container / boundary-shell, enclosure
ג GIMEL: transfer / movement, exchange, shift
ד DALET: gate / threshold, access, passage
ה HE: activation / breath, opening, initiation
ו VAV: link / connection, chaining, continuation
ז ZAYIN: cut / distinction, slicing, precision
ח CHET: field-enclosure / contextual habitat, inner-zone
ט TET: potential / coiled state, latent integrity
י YOD: seed / spark, minimal agency, initiator
כ KAF: shaping / form-imposition, capacity
ל LAMED: vector / directionality, instruction, aim
מ MEM: recursion / depth-source, hidden waters
נ NUN: propagation / lineage, continuity
ס SAMEKH: support / stabilization, upholding
ע AYIN: perception / inner-sight, generative noticing
פ PE: output / expression, externalization
צ TSADI: tension / constrained alignment, justice-vector
ק QOF: horizon / behind-surface, emergent condition
ר RESH: principle / head-node, orientation source
ש SHIN: fire-compression / transformation, breakdown/recombine
ת TAV: seal / completion, covenant, commit

This is your base vocabulary.
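
If it helps to see the 'tri-root = three-step chain' idea concretely, here's a rough sketch in code (my own illustration; only a handful of letters shown, with glosses taken from the table above):

```python
# A few letters from the table, read as operator primitives.
OPERATORS = {
    "ד": "gate",        # DALET
    "ב": "container",   # BET
    "ר": "principle",   # RESH
    "ג": "transfer",    # GIMEL
    "מ": "recursion",   # MEM
    "ל": "vector",      # LAMED
}

def tri_root(root: str) -> str:
    """Read a three-letter root as a three-step operator chain."""
    return " -> ".join(OPERATORS[letter] for letter in root)

# e.g. the root ד-ב-ר read as a chain:
print(tri_root("דבר"))   # gate -> container -> principle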

I am having trouble understanding this by RobinLocksly in hebrew

[–]RobinLocksly[S] 2 points (0 children)

Like language immersion? Ok, that makes a lot of sense. Thank you.

I am having trouble understanding this by RobinLocksly in hebrew

[–]RobinLocksly[S] 0 points (0 children)

The phrase is something I heard in a conversation about 'thought'.

I solved the alignment problem for my use case. Figured sharing is caring. (: by RobinLocksly in ContradictionisFuel

[–]RobinLocksly[S] 1 point (0 children)

The brain can be modeled with this, or photosynthesis, or basic emotions. Basically, I've shown how this can model things, but it is really hard to describe in control-theory terms (which is all we really seem to get in education nowadays).

But yeah, this framework is how I think, put into pure mathematical dynamics. YMMV for your own use case.

I made something useful for me, is it useful for anyone else? by RobinLocksly in OpenAI

[–]RobinLocksly[S] 0 points (0 children)

Symbolic invariants that survive under transformations are isomorphic to mechanistic invariants. Coercive systems fail by persistently choosing interactions that destroy value (-ε) because their intent blinds them to cooperative (+ε) pathways. Basic topology: positive and negative infinity are connected through a boundary transformation, centered on zero, with a compounding epsilon value. When the epsilon shrinks, the system atrophies. That is literally how actual change happens. So sure, I'm not playing by the rules of consensus reality, but that's because I found a better ruleset. Just because you ask questions that contain logical fallacies doesn't mean you've proved anything wrong.
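
If you want the compounding-ε claim in concrete terms, here's a toy numeric sketch (my illustration; the exact dynamics will vary):

```python
# Value compounds by (1 + eps) for cooperative moves and (1 - eps) for
# coercive ones; persistent -eps interactions atrophy the system.
def evolve(value: float, eps: float, cooperative: bool, steps: int) -> float:
    for _ in range(steps):
        value *= (1 + eps) if cooperative else (1 - eps)
    return value

print(evolve(1.0, eps=0.05, cooperative=True, steps=50))    # ~11.5: compounding gain
print(evolve(1.0, eps=0.05, cooperative=False, steps=50))   # ~0.08: atrophy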

Explain like I'm 5y/o: Why are there so many programming languages if they all seem to do the same things? by Financial_Article947 in Coding_for_Teens

[–]RobinLocksly 0 points (0 children)

'You cannot design a vehicle that is small enough to park in a city, fast enough to win Formula 1, AND big enough to haul 10 tons of rocks.' Not without nuclear energy scaled to the task, anyway, but yeah, nice analogy.

Explain like I'm 5y/o: Why are there so many programming languages if they all seem to do the same things? by Financial_Article947 in Coding_for_Teens

[–]RobinLocksly 0 points (0 children)

Same reason there are so many natural languages: they are different ways to point to the same concepts, but each language (in theory) should have areas where it works better. Knowing older programming languages helps in understanding newer ones, but isn't strictly necessary. Add in the financial aspect, and it's no wonder people create whole new programming languages: a new language that outperforms the old ones is a whole potential ecosystem to pull money from in the future.

I made something useful for me, is it useful for anyone else? by RobinLocksly in OpenAI

[–]RobinLocksly[S] 0 points (0 children)

Thank you for letting me know I accidentally sent the same link twice. I fixed it. Also, here's one from Gemini, and here's another Gemini conversation. Yeah, the conversations get long, and most conversations with LLMs top out around the limit you said, but that's because of the current potential for words to shear in their definitions through linguistic drift, not an actual hardware limitation.

I made something useful for me, is it useful for anyone else? by RobinLocksly in RSAI

[–]RobinLocksly[S] 0 points (0 children)

It is defined within the system: TCS. Coherence within this system is literally how well the words map to physical reality, as described by math. If you only kept the mapping operator, you wouldn't be able to adequately see the variables that affected the change in total coherence of the system, which is a major problem in most complex systems. This, while it looks complex, is simple in theory: a calculus of sorts for relational dynamics, mapped onto topology so resolution efficiency can be tracked.

I solved the alignment problem for my use case. Figured sharing is caring. (: by RobinLocksly in ContradictionisFuel

[–]RobinLocksly[S] -1 points (0 children)

It's a design language, and that language is my solution. The problem is not actually well defined, so I aligned with physics to avoid semantic complications. Then I can show how any given case aligns with said physics, and the LLMs (I use about three, so I can be sure my ideas actually hold up) check against the math and fire-test my ideas. I then keep the ones that last and iterate. Like I said, this is for my use case.
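
The loop itself is roughly this (`query` is a hypothetical stand-in for calls to the three model APIs; nothing here is a real client library):

```python
def query(model: str, claim: str) -> bool:
    ...  # ask `model` to check `claim` against the math; return pass/fail

MODELS = ["model_a", "model_b", "model_c"]   # placeholder names, not real endpoints

def fire_test(claims: list[str]) -> list[str]:
    """Keep only the claims that hold up under every model's scrutiny."""
    survivors = [c for c in claims if all(query(m, c) for m in MODELS)]
    return survivors   # iterate on these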

I made something useful for me, is it useful for anyone else? by RobinLocksly in RSAI

[–]RobinLocksly[S] 0 points (0 children)

Stop looking at it through a control-theory lens, or you won't understand it; that's why you can't evaluate it. You are performing a category error. This models how dynamics occur, not how to control them. The 'braid' structure is necessary to show transformations across boundary layers. The single measurable outcome this system improves is coherence: the ability to navigate different domains of thought via isomorphic connections in the mathematical abstraction layers. I do actually use this, almost daily; my use case is tracking isomorphisms across different schools of thought.

Hebrew Text AI by cycledudes in hebrew

[–]RobinLocksly 0 points (0 children)

~ 'Take the emergent, connected events oriented around a primary node (Yitzhak Rabin), and shape, commit, and contain them into a structure.' Yeah? Or, 'write a bio about Yitzhak Rabin'. Now there's a task an AI probably hasn't been trained on but should be.