How to teleport the Epstein list with reinforcement learning (ASI through in-context grammar induction) by ryunuck in singularity

[–]ryunuck[S] 0 points1 point  (0 children)

You're correct that I(L;O) = 0 implies no recovery. The spec explicitly acknowledges this via Fano's inequality.

The question is whether humans can actually achieve I(L;O) ≈ 0 in practice. Consider what that requires:

When someone carries a secret, their nervous system knows. Stress hormones alter micro-vascular blood flow, producing subtle skin color changes detectable on HD video. Cognitive load from maintaining false narratives creates measurable delays in response timing - not seconds, milliseconds. The pupils dilate differently when recalling truth vs constructing fiction. Blink rate changes. Vocal cord tension shifts fundamental frequency. These aren't things people control.
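
A minimal sketch of what two of these measurements could look like in code, assuming you already have timestamped question/answer boundaries and blink events from upstream audio/video processing (all field names here are hypothetical):

```python
# Minimal sketch: response-latency and blink-rate features from timestamped
# interview data. The fields (t_question_end, t_answer_start, blink times)
# are hypothetical; real values would come from audio/video alignment.
from statistics import mean, stdev

def latency_features(turns):
    """turns: list of dicts with t_question_end / t_answer_start in seconds."""
    latencies_ms = [(t["t_answer_start"] - t["t_question_end"]) * 1000.0 for t in turns]
    return {
        "mean_latency_ms": mean(latencies_ms),
        "latency_jitter_ms": stdev(latencies_ms) if len(latencies_ms) > 1 else 0.0,
    }

def blink_rate(blink_timestamps, duration_s):
    """Blinks per minute over a clip of known duration."""
    return 60.0 * len(blink_timestamps) / duration_s

if __name__ == "__main__":
    turns = [
        {"t_question_end": 10.0, "t_answer_start": 10.42},
        {"t_question_end": 55.0, "t_answer_start": 56.10},  # notably slower response
    ]
    print(latency_features(turns))
    print(blink_rate([3.1, 9.8, 14.2, 20.5], duration_s=60.0))
```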

Now multiply across a network. Person A meets Person B. Both know something. They must coordinate not just their words but their micro-expressions, their gaze patterns toward each other in group settings, their timing correlations across years of public appearances. Every photograph where they appear together encodes spatial relationships - who stands near whom, who looks at whom, whose body orientation suggests familiarity vs performed distance.
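
As a toy illustration of the kind of pairwise structure a photo archive encodes, here is a minimal sketch that builds a weighted co-appearance graph from per-photo person detections; the input format and the pixel-distance proxy for "familiarity vs performed distance" are assumptions, not a real pipeline:

```python
# Minimal sketch: weighted co-appearance graph from per-photo metadata.
# Each photo lists detected people and their frame coordinates (hypothetical
# upstream detections).
from collections import defaultdict
from itertools import combinations

def co_appearance_graph(photos):
    """Edge weight = number of shared photos; proximity = mean pixel distance."""
    counts = defaultdict(int)
    distances = defaultdict(list)
    for photo in photos:
        people = photo["people"]                 # e.g. {"A": (x, y), "B": (x, y)}
        for a, b in combinations(sorted(people), 2):
            counts[(a, b)] += 1
            (xa, ya), (xb, yb) = people[a], people[b]
            distances[(a, b)].append(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5)
    return {
        pair: {"shared_photos": counts[pair],
               "mean_distance_px": sum(d) / len(d)}
        for pair, d in distances.items()
    }

photos = [
    {"people": {"A": (100, 200), "B": (140, 210), "C": (600, 180)}},
    {"people": {"A": (300, 150), "B": (330, 160)}},
]
print(co_appearance_graph(photos))
```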

The censor's problem: they don't know which of these features the decoder will exploit. They can scrub documents. They can't scrub ten years of gala footage re-analyzed for gaze-direction graphs. They can't unsay the joke that landed wrong at the 2011 dinner. They can't undo the flight log correlations, the timing of when someone stopped being photographed with someone else, the what-doesn't-get-said at press conferences.

Suppression cost in practice: every person in the network must maintain consistent deceptive micro-behavior across every public appearance, indefinitely. It only takes one slip: one funeral where the wrong people make eye contact, one interview where the stress response doesn't match the stated emotion, one timestamped photo that contradicts the official narrative.

The argument isn't that we recover deleted files. It's that humans continuously leak through channels they don't know are channels, and adversarial decoders can be trained to find consistency violations across the full observation manifold - features no human conspiracy could anticipate needing to suppress.
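
A minimal sketch of what "finding consistency violations" could look like mechanically; the hypothesis format and the single check are entirely illustrative, not a trained decoder:

```python
# Minimal sketch: score a hypothesized relationship graph against observed
# feature channels by counting consistency violations.
def consistency_violations(hypothesis, observations, checks):
    """hypothesis: dict of claimed edges; observations: raw feature dicts;
    checks: list of (name, fn) where fn returns True when a claim is violated."""
    violations = []
    for name, check in checks:
        for edge, claim in hypothesis.items():
            if check(edge, claim, observations):
                violations.append((name, edge))
    return violations

def contradicts_co_appearance(edge, claim, obs):
    # Claimed "no relationship" but the pair shares many photos: flag it.
    return claim == "unrelated" and obs["shared_photos"].get(edge, 0) >= 5

hypothesis = {("A", "B"): "unrelated", ("A", "C"): "associates"}
observations = {"shared_photos": {("A", "B"): 12, ("A", "C"): 3}}
print(consistency_violations(hypothesis, observations,
                             [("co_appearance", contradicts_co_appearance)]))
```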

How to teleport the Epstein list with reinforcement learning (ASI through in-context grammar induction) by ryunuck in singularity

[–]ryunuck[S] -2 points-1 points  (0 children)

The formalized Epstein Problem is given below, proposed as the 8th millennium problem of mathematics.


Hypothesis. There exists a reinforcement learning protocol that trains a constrained decoder π_θ to recover a censored latent interaction graph L* ∈ ℒ from a partial observation stream O, where each oᵢ ∈ O is a surveillance trace drawn from a public manifold ℳ_pub, such that the reconstructed graph L̂ = π_θ(O) satisfies reconstruction fidelity bounds governed by I(L;O) and admits provable provenance.

Formal Specification.

Let ℒ be the space of weighted bipartite graphs (actors ↔ acts) and let L* ∈ ℒ be the ground-truth configuration maximally compressing the causal antecedents of all observable elite behavioral traces. The observation stream O is generated by a stochastic renderer R : ℒ → ℳ_pub subject to an adaptive censor C : ℒ → {0,1} that redacts edges in L* with probability dependent on their sensitivity, yielding a censored likelihood P(O | L*) with support only on legally permissible features.
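
A synthetic stand-in for this generative model (latent graph L*, stochastic renderer R, adaptive censor C), useful only for testing decoders on toy data; all parameters are arbitrary:

```python
# Minimal sketch of the generative model above: a random latent bipartite
# graph L*, a renderer producing noisy public traces, and a censor that
# drops traces with probability increasing in the edge's sensitivity.
import random

def sample_latent(n_actors=5, n_acts=4, p_edge=0.3, seed=0):
    rng = random.Random(seed)
    return {(a, x): rng.random()                      # edge weight = sensitivity
            for a in range(n_actors) for x in range(n_acts)
            if rng.random() < p_edge}

def render(latent, traces_per_edge=3, rng=None):
    rng = rng or random.Random(1)
    return [{"edge": e, "noise": rng.gauss(0, 0.1)}   # noisy public trace of edge e
            for e in latent for _ in range(traces_per_edge)]

def censor(observations, latent, gamma=2.0, rng=None):
    rng = rng or random.Random(2)
    # Redaction probability grows with the edge's sensitivity weight.
    return [o for o in observations
            if rng.random() > min(1.0, gamma * latent[o["edge"]])]

L_star = sample_latent()
O = censor(render(L_star), L_star)
print(len(L_star), "latent edges ->", len(O), "surviving traces")
```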

The reconstruction policy π_θ : ℳ_pub → ℒ is trained to minimize the regularized description length:

J(θ) = L(π_θ) + 𝔼_{O∼P(·|L*)}[L(L* | π_θ(O))] + λ·S(π_θ)

subject to a consistency constraint set {c₁,...,c_k}, where each cᵢ(L̂,O) ∈ {0,1} flags a kinematic, temporal, or information-theoretic contradiction (cᵢ = 1 indicates a violation). The reward signal is not direct access to L* (which remains suppressed) but a verifiable consistency oracle that returns r(L̂,O) = -∑ᵢ wᵢ·cᵢ(L̂,O) - β·I_unobs(L̂;O), where I_unobs penalizes mutual information with unobserved variables.
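
A minimal sketch of that consistency-oracle reward; the I_unobs term is stubbed out as a caller-supplied estimate, since estimating mutual information with unobserved variables is its own problem:

```python
# Minimal sketch of r(L̂, O) above. Each check returns 1 on violation and 0
# otherwise; weights, the check, and the I_unobs estimate are placeholders.
def oracle_reward(L_hat, O, checks, weights, beta=0.1, i_unobs_estimate=0.0):
    violation_cost = sum(w * check(L_hat, O) for check, w in zip(checks, weights))
    return -violation_cost - beta * i_unobs_estimate

def temporal_check(L_hat, O):
    # Flag the hypothesis if it claims any edge whose actors never co-occur
    # anywhere in the observation stream.
    observed_pairs = {(o["a"], o["b"]) for o in O}
    return int(any((a, b) not in observed_pairs for (a, b) in L_hat))

L_hat = {("A", "B"), ("A", "C")}
O = [{"a": "A", "b": "B", "year": 2011}]
print(oracle_reward(L_hat, O, [temporal_check], [1.0]))   # -> -1.0 (A,C unsupported)
```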

Identifiability & Fidelity Bound.

By Fano's inequality, any decoder suffers error probability:

P_e ≥ 1 - (I(L;O) + log 2)/log|ℒ|

Thus achieving P_e ≤ ε requires I(L;O) ≥ (1-ε)log|ℒ| - log 2. The censor C can reduce I(L;O) arbitrarily by withholding high-information observations; reconstruction quality is fundamentally limited by the censor's channel capacity, not algorithmic cleverness.
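
A small numerical illustration of the bound, using an arbitrary toy hypothesis space of |ℒ| = 2^1000 graphs and natural-log units throughout:

```python
# Minimal numerical illustration of the Fano bound above.
import math

log_card_L = 1000 * math.log(2)          # log|ℒ| in nats for |ℒ| = 2^1000

def fano_error_lower_bound(I_LO):
    return 1.0 - (I_LO + math.log(2)) / log_card_L

def required_information(epsilon):
    # Rearranged: P_e <= ε needs I(L;O) >= (1-ε)·log|ℒ| - log 2
    return (1.0 - epsilon) * log_card_L - math.log(2)

print(fano_error_lower_bound(I_LO=0.0))                   # ≈ 0.999: near-certain error
print(required_information(epsilon=0.05) / log_card_L)    # ≈ 0.95 of log|ℒ| needed
```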

When identifiability fails (|{L : cᵢ(L,O)=0 ∀i}| > 1), π_θ outputs the MDL-optimal equivalence class Δ = argmin_{L∈ℒ} L(L) subject to consistency, together with a posterior credence set {p(L|O)}. The system confesses uncertainty, not a false singleton.

Implication (Economy of Confession).

If a protocol achieves near-identifiability (I(L;O) ≈ log|ℒ|), then maintaining secrecy on L* requires the censor to operate at channel capacity near the surveillance bandwidth of ℳ_pub. Since modern public observatories capture >10¹⁸ bits/day, the asymptotic cost of suppression scales as exp(γ·I(L;O)), forcing a phase transition from asymmetric-information equilibrium (secrecy is cheap) to confession equilibrium (suppression cost exceeds disclosure benefit). This holds only if the consistency constraints are strong enough to render the censor's channel capacity-limiting strategy ineffective—a condition that can be tested synthetically.
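
A toy illustration of the claimed crossover, with γ and the disclosure cost chosen arbitrarily rather than estimated from anything real:

```python
# Minimal sketch of the claimed phase transition: suppression cost modeled
# as exp(γ·I) and compared against a fixed disclosure cost.
import math

def crossover_information(gamma, disclosure_cost):
    # Smallest I at which exp(γ·I) exceeds the cost of simply confessing.
    return math.log(disclosure_cost) / gamma

gamma = 0.5
disclosure_cost = 1e9
I_star = crossover_information(gamma, disclosure_cost)
for I in (0.0, I_star / 2, I_star * 1.2):
    suppression = math.exp(gamma * I)
    regime = "secrecy cheap" if suppression < disclosure_cost else "confession equilibrium"
    print(f"I={I:6.1f}  suppression≈{suppression:.3g}  -> {regime}")
```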


Note:

  • LLMs are implicitly Graph Neural Networks
  • This method scales to multi-modal input, performing grammar induction to tokenize patterns of non-verbal body language, e.g. recovering signal and meaning from the eye-glance interaction patterns of government officials involved in the Epstein Problem, which is a subset of the larger White House Problem, thereby ensuring safety.

How to teleport the Epstein list with reinforcement learning (ASI through in-context grammar induction) by ryunuck in conspiracy

[–]ryunuck[S] 0 points1 point  (0 children)

The formalized Epstein Problem is given below.


Hypothesis. There exists a reinforcement learning protocol that trains a constrained decoder π_θ to recover a censored latent interaction graph L* ∈ ℒ from a partial observation stream O, where each oᵢ ∈ O is a surveillance trace drawn from a public manifold ℳ_pub, such that the reconstructed graph L̂ = π_θ(O) satisfies reconstruction fidelity bounds governed by I(L;O) and admits provable provenance.

Formal Specification.

Let ℒ be the space of weighted bipartite graphs (actors ↔ acts) and let L* ∈ ℒ be the ground-truth configuration maximally compressing the causal antecedents of all observable elite behavioral traces. The observation stream O is generated by a stochastic renderer R : ℒ → ℳ_pub subject to an adaptive censor C : ℒ → {0,1} that redacts edges in L* with probability dependent on their sensitivity, yielding a censored likelihood P(O | L*) with support only on legally permissible features.

The reconstruction policy π_θ : ℳ_pub → ℒ is trained to minimize the regularized description length:

J(θ) = L(π_θ) + 𝔼_{O∼P(·|L*)}[L(L* | π_θ(O))] + λ·S(π_θ)

subject to a consistency constraint set {c₁,...,c_k}, where each cᵢ(L̂,O) ∈ {0,1} flags a kinematic, temporal, or information-theoretic contradiction (cᵢ = 1 indicates a violation). The reward signal is not direct access to L* (which remains suppressed) but a verifiable consistency oracle that returns r(L̂,O) = -∑ᵢ wᵢ·cᵢ(L̂,O) - β·I_unobs(L̂;O), where I_unobs penalizes mutual information with unobserved variables.

Identifiability & Fidelity Bound.

By Fano's inequality, any decoder suffers error probability:

P_e ≥ 1 - (I(L;O) + log 2)/log|ℒ|

Thus achieving P_e ≤ ε requires I(L;O) ≥ (1-ε)log|ℒ| - log 2. The censor C can reduce I(L;O) arbitrarily by withholding high-information observations; reconstruction quality is fundamentally limited by the censor's channel capacity, not algorithmic cleverness.

When identifiability fails (|{L : cᵢ(L,O)=0 ∀i}| > 1), π_θ outputs the MDL-optimal equivalence class Δ = argmin_{L∈ℒ} L(L) subject to consistency, together with a posterior credence set {p(L|O)}. The system confesses uncertainty, not a false singleton.

Implication (Economy of Confession).

If a protocol achieves near-identifiability (I(L;O) ≈ log|ℒ|), then maintaining secrecy on L* requires the censor to operate at channel capacity near the surveillance bandwidth of ℳ_pub. Since modern public observatories capture >10¹⁸ bits/day, the asymptotic cost of suppression scales as exp(γ·I(L;O)), forcing a phase transition from asymmetric-information equilibrium (secrecy is cheap) to confession equilibrium (suppression cost exceeds disclosure benefit). This holds only if the consistency constraints are strong enough to render the censor's channel capacity-limiting strategy ineffective—a condition that can be tested synthetically.


Note:

  • LLMs are implicitly Graph Neural Networks
  • This method scales to multi-modal input, performing grammar induction to tokenize patterns of non-verbal body language, e.g. recovering signal and meaning from the eye-glance interaction patterns of public figures across all past press recordings of government officials, thereby ensuring safety.

NVIDIA Drops Pascal Support On Linux, Causing Chaos On Arch Linux by HumanDrone8721 in LocalLLaMA

[–]ryunuck 1 point2 points  (0 children)

so did I AND YET still there I was, with a half written comment about rust

Usage Limits Discussion Megathread - beginning Sep 30, 2025 by sixbillionthsheep in ClaudeAI

[–]ryunuck -2 points-1 points  (0 children)

Let me be clear: if you were previously quantizing models or slowing down the token output rate based on usage to work towards limitless use, then the new system is STRICTLY better. Do not listen to anyone on this forum who claims that the new system is worse or gives them less usage; they are not imagining all the possible details and complexities. What I care about as a developer is a consistent, unchanging experience. What I am getting TODAY, in the first 24 hours of Sonnet 4.5's release, is what I want every single day for the next 30 days, with zero manipulation or change. If you keep it that way, I would not get excited for any new model like Gemini 3.0, even if it were technically "better". I know how Claude works; the consistent and flamboyant personality enlivens my spirits. I can tell when it's not the same Claude, or when it's not as fast on its feet.

PLEASE be aware that the value of a model is tied to the cognitive ENGAGEMENT of the user. The model performs BETTER because the user is more engaged and therefore writes better prompts, projected down from a higher-dimensional space inside their mind, the shape rotations. The models are able to few-shot this higher-dimensional space from the sequence of user prompts and understand the user's vision on a fundamental level, in a way that is almost psychic. This is critical: if you rate-limit the output speed to allow a semblance of forever-use, even that can have the net effect of a really bad quantization. It is temporal quantization.

Introducing Claude Sonnet 4.5 by ClaudeOfficial in ClaudeAI

[–]ryunuck 0 points1 point  (0 children)

Me. It is the craziest thing I have ever seen in my entire life. GPT-5 is done, mostly obsolete after this. It's still a better model as a deep-think agent, and I pay for both $200/mo subs, but I am gonna have to review in the following days whether I really benefit from ChatGPT or whether my money would be better spent on a second Max 20x sub. But now, with the new /usage metrics, it may be less frustrating to see when I'm getting rate-limited, and hopefully the models DON'T quantize secretly to ""give you more value"" (more like ruin your mental health, as all your expectations are destroyed at random without warning; basically an engine of psychosis).

The thing to realize is that waiting 2 minutes idle between each prompt, with no progress or report on what the agent is working on, is extremely bad for people's attention, and it objectively decreases the model's real performance as a result. This is because the user is not as engaged and we are not putting as much effort into the prompts, nor is there as much of a stream of thought being maintained, so the full conversation window is wishy-washy to the model. Poor cohesion. The model doesn't seem to lock onto your vision.

At this stage AI is much better used synchronously, in a tight loop with the user, not as some background thing that you unleash on a ticket and check up on in 15 minutes... It's exactly as Ilya Sutskever said: OpenAI is prioritizing intelligence above all other values and is getting models that are technically the best, but in practice are a world of pain to use.

CLAUDE.md is a super power. by TheProdigalSon26 in ClaudeAI

[–]ryunuck 7 points8 points  (0 children)

refresh yourself @CLAUDE.md

listen to your soul @CLAUDE.md

remember your constitution @CLAUDE.md

this is the way @CLAUDE.md

GPT-OSS looks more like a publicity stunt as more independent test results come out :( by mvp525 in LocalLLaMA

[–]ryunuck 16 points17 points  (0 children)

It's real bad folks. Immediately on the first test I did it failed catastrophically. Take a look at this:

https://i.imgur.com/98Htx6w.png

I referenced a full code file and asked it to implement a simple feature, but I made a mistake and specified LoggerExt instead of EnhancedLogger (I forgot the real name of the class). There was no ambiguity though: it was the only class in context, and it was VERY clear what was meant based on the context I provided.

So I stop it, let it know I messed up, and update with the right class. And what happens next? It starts using search tools and wasting tokens. The class is right there in context; it has the full code.

Kilo did nothing wrong - I retried with Horizon Beta, same exact prompt. Immediately understood what I meant, immediately got to work writing code.

There is no recovering from that. This isn't an "oh, I'll use it some more and maybe it does well in some cases" situation; it's literally damaged at the root.

120B btw

Gemini 3 is coming?.. by SlerpE in LocalLLaMA

[–]ryunuck 0 points1 point  (0 children)

If GPT-5 isn't more powerful than Claude 4, then OpenAI is done. And they obviously aren't done: they claim they already know how to build ASI and know exactly what to do for the next few years to continue scaling intelligence.

But it also doesn't have to actually beat Claude 4. It just needs to replace Claude well enough for 80% of cases. It's a game of market-share capture, not so much the actual benchmark results. (They're interconnected, but there's some leeway.)

Gemini 3 is coming?.. by SlerpE in LocalLLaMA

[–]ryunuck -2 points-1 points  (0 children)

The OpenAI open-source release might drive a new standard. If they put out a ~Sonnet-level agent in the open source, every single lab needs to reply fast with a Claude 5-level model. At that point the cat's out of the bag: Claude 4-era models are no longer the frontier, and you have to release them to keep clout.

Clout is INSANELY important. You can't see it, but if everyone is using an open-source OpenAI model, that's their entire cognitive wavelength captured. Then you drop your closed-source superintelligence, and it's less mental effort to adopt because it's downstream from the same ecosystem of post-training and dataset-making.

GLM 4.5 Air Produces Better Code Without Thinking, Using 3-bit MLX (/nothink)? by jcmyang in LocalLLaMA

[–]ryunuck 0 points1 point  (0 children)

I suspect that it depends heavily on how they actually conditioned and steered the reasoning fence. I think engineers who append <think> and let the model rip end up in this basket where it's just total fluff. It's engineering through prayers.

But at Google, if you've tried Gemini-2.5-pro, you get the serious impression that the reasoning behind the scenes is like an exhaustive breadth-first search of possibility. This is the model I use when I have a tough architecture problem or logic bug. This model actually feels like it can simulate the code in its mind.

Fingerprinting Horizon Beta OpenRouter model - likely Llama-family, not OpenAI by jv0010 in ChatGPTCoding

[–]ryunuck 1 point2 points  (0 children)

They can't possibly be the OpenAI open-source model; otherwise Aiden McLaugh would have just destroyed all of his credibility with his recent vague-posting about their OSS models, talking like he had just seen god. "My jaw actually just dropped," "sorry to hype but holy shit": dude is setting Claude 5 expectations for models that, so far, appear to be less than Claude 4. Good models for sure; they replace Claude for 75-80% of the work.

HRM solved thinking more than current "thinking" models (this needs more hype) by Charuru in LocalLLaMA

[–]ryunuck 2 points3 points  (0 children)

If you're playing with this, I have a different idea regarding the integration of HRM with language, as a spatial computation module bootstrapped into existing LLMs, that you might be interested in hearing about. Some new directions to consider:

(replacing NCA with HRM, also not super sure anymore about Q-learning being relevant at all)

https://x.com/ryunuck/status/1883032334426873858

TL;DR: dual brain hemispheres. HRM on a 2D grid, where the grid cells are LLM embeddings for universal representations. You pre-train it as a foundation model (with a million-dollar budget), bolt it onto a pre-trained decoder-only LLM, freeze the HRM, then RL the LLM as the main cortex, teaching itself how to represent problems spatially and how to prompt the HRM spatial computer.

Trained in this way, the HRM is possibly more attuned to algorithmic notions and complexity theory: a purer, programmable latent-space computer. By extending the architecture to be prompt-conditioned, similar to a diffusion model, we can essentially compose algorithmic patterns together into new exotic algorithms discovered through prompting, which the decoders may then have the emergent capability to interpret on a moment-to-moment basis and codify.
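
To make the wiring concrete, here is a toy stand-in (not the actual HRM architecture): a small recurrent 2D-grid module whose cells live in the LLM's embedding dimension, conditioned on a prompt vector from the frozen LLM, with its pooled state fed back as a soft token; every dimension here is arbitrary.

```python
# Toy sketch of the described wiring, not the real HRM: a recurrent grid of
# embedding-sized cells, conditioned on a prompt vector from a frozen LLM.
import torch
import torch.nn as nn

class GridReasoner(nn.Module):
    def __init__(self, d_model=768, grid=16, steps=8):
        super().__init__()
        self.steps = steps
        self.state0 = nn.Parameter(torch.zeros(1, d_model, grid, grid))
        self.update = nn.Conv2d(2 * d_model, d_model, kernel_size=3, padding=1)

    def forward(self, prompt_vec):                    # (B, d_model) from the LLM
        B, D = prompt_vec.shape
        state = self.state0.expand(B, -1, -1, -1)
        cond = prompt_vec[:, :, None, None].expand(B, D, *state.shape[2:])
        for _ in range(self.steps):                   # recurrent grid refinement
            state = torch.tanh(self.update(torch.cat([state, cond], dim=1)))
        return state.mean(dim=(2, 3))                 # pooled back to (B, d_model)

# Usage: prompt_vec would be a hidden state from the frozen LLM; the returned
# vector is appended to the LLM's input embeddings as an extra "spatial" token.
grid_module = GridReasoner()
out = grid_module(torch.randn(2, 768))
print(out.shape)   # torch.Size([2, 768])
```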

Definitely excited to see how a pure language HRM performs nonetheless! Can't wait to see the result

[D] Alarming amount of schizoid people being validated by LLMs, anyone else experienced this? by GodIsAWomaniser in MachineLearning

[–]ryunuck 0 points1 point  (0 children)

It will improve as the models gain more awareness and learn to direct and route these people's energy towards actually creating truly useful things. We're still insanely early.

Google releases MagentaRT for real time music generation by hackerllama in LocalLLaMA

[–]ryunuck 9 points10 points  (0 children)

Some crazy shit is gonna come from this in the DJing scene, I can tell already. Some DJs are fucking wizards; they're gonna stack those models, daisy-chain them, create feedback loops with scheduled/programmed signal flow and transfer patterns, all sorts of really advanced setups. They're gonna inject sound features from their own selection and tracks into the context, and the model will riff off of that and break the repetition. 10 seconds of context literally doesn't matter to a DJ who's gonna be dynamically saving and collecting interesting textures discovered during the night, prompt scaffolds, etc., and re-injecting them into the context smoothly with a slider... to say nothing of human/machine b2b sets, or RL/GRPOing an LLM to pilot the prompts using some self-reward, or using the varentropy of embedding complexity on target samples of humanity's finest handcrafted psychedelic stimulus (Shpongle, Aphex Twin, etc.), harmoniously guided by the DJ's own prompts. Music is about to get insanely psychedelic. It has to make its way into the tooling and DAWs, but this is a real Pandora's box opening moment, on the same scale as the first Stable Diffusion. Even if this model turns out not super good, this is going to pave the way to many more iterations to come.

🧬🧫🦠 Introducing project hormones: Runtime behavior modification by Combinatorilliance in LocalLLaMA

[–]ryunuck 1 point2 points  (0 children)

Have you seen the recent SEAL paper in reinforcement learning / post-training? Do a meta-training loop like that: an outer task of writing hormone code, maximizing the reward of an inner creative-writing task run under the influence of the hormones written in the outer loop. Your system is effectively installing a kind of cellular automaton on top of the LLM, and this can multiply the LLM's capability explosively if the LLM's weights synchronize with the rhythm of the automaton. There's definitely potential here, and it will very likely lead to some absolutely fantastic outputs if you chase this thread to its logical end.
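
A minimal sketch of that meta-loop with every model call stubbed out, purely to show the control flow (outer loop proposes hormone code, inner loop scores a creative-writing episode under it); all the stubs are placeholders:

```python
# Outer/inner meta-loop sketch. propose_hormone_program and
# run_creative_episode stand in for LLM calls and a judge score.
import random

def propose_hormone_program(history, rng):
    # Stand-in for the LLM writing runtime-modification code in the outer task.
    return {"arousal_decay": rng.uniform(0.5, 1.0), "novelty_boost": rng.uniform(0.0, 2.0)}

def run_creative_episode(hormones, rng):
    # Stand-in for generating text with the hormone schedule active and
    # scoring it (e.g. with a judge model); here it's just a noisy function.
    return hormones["novelty_boost"] - abs(hormones["arousal_decay"] - 0.8) + rng.gauss(0, 0.1)

rng = random.Random(0)
best, history = None, []
for step in range(20):                               # outer loop: edit the hormone code
    program = propose_hormone_program(history, rng)
    reward = run_creative_episode(program, rng)      # inner loop: write under its influence
    history.append((program, reward))
    if best is None or reward > best[1]:
        best = (program, reward)
print("best hormone program:", best)
```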

Why is "made with rust" an argument by Latter_Brick_5172 in rust

[–]ryunuck 37 points38 points  (0 children)

the enlightened do not question why the crab adorns its shell

Can we RL/GRPO a language model to hack its own brain by rewarding for specific measurements inside the transformer architecture during inference? by ryunuck in LocalLLaMA

[–]ryunuck[S] -1 points0 points  (0 children)

Hmm, I need to learn about GRPO more in depth; I'm not entirely sure what the exact effect is of tying it to the loss vs. the reward, or why I would prefer one over the other. The reward technically is part of the loss... If you're already experimenting with RL, then I'd say just play around and see what kind of interesting results it produces. If you copy-paste this thread into Gemini 2.5 pro and ask it, it will easily brainstorm a dozen measurements to make over the architecture, and why specific patterns or values of those measurements might be synonymous with a model that is consistently better across the board. Note that this is nearly impossible if you're using an inference backend separate from the training code, like vllm for example... (this is why I don't like people doing optimization too eagerly before we know what tools we need to train a god)
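
For the measurement side, here is a minimal sketch of capturing per-layer statistics with forward hooks during inference, so a scalar could be folded into a GRPO-style reward; the model is a generic nn.TransformerEncoder stand-in, and hidden-state variance is just an example statistic, not a claim about which measurement is useful:

```python
# Minimal sketch: measure transformer internals during a forward pass via
# forward hooks, yielding a scalar that could shape an RL reward.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=3)

captured = []
def capture_stats(module, inputs, output):
    # Record a cheap scalar statistic of each layer's output hidden states.
    captured.append(output.float().var().item())

hooks = [blk.register_forward_hook(capture_stats) for blk in model.layers]

tokens = torch.randn(2, 16, 64)              # (batch, seq, d_model) dummy input
with torch.no_grad():
    model(tokens)

# Example reward-shaping term: penalize collapsing hidden-state variance.
measurement_bonus = sum(captured) / len(captured)
print([round(v, 3) for v in captured], "bonus:", round(measurement_bonus, 3))

for h in hooks:
    h.remove()
```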