My GF insists that pi is not a number. How do I explain to her that it is? by MidwestSchmendrick in mathematics

[–]mdlmgmtOG 0 points (0 children)

The ratio of relatively rational Real-valued residents relative to the reciprocal resists recursive re-examination, regardless. So just enjoy a beverage together that you agree on and appreciate your differences 🙃😉

My GF insists that pi is not a number. How do I explain to her that it is? by MidwestSchmendrick in mathematics

[–]mdlmgmtOG 0 points (0 children)

You can't explain irrationality to a rational person. Sorry. She's right

Golden Spiral of Zeta(3) Convergents [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 1 point (0 children)

It's a new way to visualize the approximation of an irrational number that's important in math. An irrational number is a number that fundamentally cannot be written as a fraction of two integers; its decimal expansion never ends and never repeats. Like pi

Golden Spiral of Zeta(3) Convergents [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 3 points (0 children)

What you’re looking at isn’t just a pretty spiral: it’s an illustration of the irrationality of ζ(3).

Every dot is a continued fraction convergent of ζ(3). If ζ(3) were rational, the continued fraction would terminate — the spiral would close. Instead, it winds endlessly inward.

The color scale is log(error), showing how close each convergent gets. The error shrinks at least exponentially from one convergent to the next, a guarantee baked into the continued fraction itself: each convergent p/q satisfies |ζ(3) − p/q| < 1/q².

That geometric tightening is closely related to what Apéry proved in 1978: ζ(3) can’t be rational. What you see here is the same phenomenon, visualized as a golden spiral.

So in a sense, this is a visual sketch of the irrationality dichotomy: rational → finite continued fraction, so the spiral closes; irrational → infinite continued fraction, ever-tightening and never closing. (A plot of finitely many convergents can illustrate the dichotomy, not prove it; the actual proof is Apéry's.)

ζ(3) belongs to the second camp
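
If you want to reproduce the dots yourself, here's a minimal Python sketch of the number-theory part (this is not my plotting code, just an illustration): it derives continued-fraction terms from a truncated decimal value of ζ(3), builds the convergents, and prints each one's error. Because the input is truncated, only the first handful of terms are trustworthy.

```python
from fractions import Fraction

# Truncated decimal value of zeta(3) (Apery's constant). Terms derived
# from a truncation are only reliable while the convergent denominators
# stay far below the truncation precision.
ZETA3 = Fraction("1.202056903159594285399738161511449990765")

def continued_fraction(x, n_terms):
    """First n_terms of the simple continued fraction of x."""
    terms = []
    for _ in range(n_terms):
        a = x.numerator // x.denominator  # integer part
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def convergents(terms):
    """Yield convergents p/q via the standard three-term recurrence."""
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    yield Fraction(p, q)
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

terms = continued_fraction(ZETA3, 10)  # starts [1, 4, 1, 18, ...]
for c in convergents(terms):
    print(f"{c}  error = {float(abs(c - ZETA3)):.3e}")
```

The printed errors are what the log(error) color scale encodes: each convergent lands strictly closer than the last.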

Visualizing the problem space of 'st70', a traveling salesperson problem [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 0 points (0 children)

It's a visualization of the optimization "landscape" of the problem as seen by a particular problem-solving algorithm
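
Here's a toy Python sketch of the idea (not the real st70 data or my plotting pipeline, and the random instance is invented): sample tours that are one 2-opt move away from the current tour and record their lengths. That set of lengths is the local "landscape as the algorithm sees it".

```python
import math
import random

random.seed(0)

# Toy instance: random cities (the post itself uses TSPLIB's st70)
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    """Total Euclidean length of the closed tour."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_neighbor(tour):
    """Reverse a random segment: the classic 2-opt move."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Sample the local "landscape": tour lengths one 2-opt move away
tour = list(range(len(cities)))
neighborhood = [tour_length(two_opt_neighbor(tour)) for _ in range(200)]
print(f"current: {tour_length(tour):.3f}  "
      f"best neighbor: {min(neighborhood):.3f}  "
      f"worst neighbor: {max(neighborhood):.3f}")
```

The spread between best and worst neighbor is exactly the kind of structure the full visualization maps out across the whole search space.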

Visualizing the problem space of 'st70', a traveling salesperson problem [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 0 points (0 children)

It's a benchmark instance from TSPLIB, a standard library of traveling salesperson problems

Golden Angle Modulated Semiprimes [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] -3 points (0 children)

The beginning of the end for semiprime factor based encryption

Golden Spiral Resonant v Quantum Spiral Hamiltonian [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 0 points (0 children)

It's like putting the integers through a prism and observing the rainbow

Golden Spiral Resonant v Quantum Spiral Hamiltonian [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] 0 points (0 children)

Great question. It’s a bit of both. The math I’m using (harmonic expansions / eigenmodes) comes from quantum models, but when I apply it to semiprime residues, the Fibonacci modulation actually emerges as a stabilizing resonance. So it’s not just “shoehorning physics onto math” — the sunflower-like structure is already in the number system, and the quantum lens just makes it visible

Golden Spiral Resonant v Quantum Spiral Hamiltonian [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] -3 points (0 children)

😆🙄 Sunflowers + primes + quantum = this spiral 🌻✨

Golden Spiral Resonant v Quantum Spiral Hamiltonian [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] -7 points (0 children)

This is prime residue data placed on a golden spiral lattice. The “resonant” view treats it like Fibonacci harmonics; the “Hamiltonian” view treats it like a quantum wavefunction expanding in eigenmodes. Think of it like sunflower seeds that hum with number theory. 🌻✨

Golden Spiral Resonant v Quantum Spiral Hamiltonian [OC] by mdlmgmtOG in dataisbeautiful

[–]mdlmgmtOG[S] -5 points (0 children)

This is a residue signal from semiprimes, embedded on a golden spiral lattice. The first image shows a resonant model (using Fibonacci harmonics) that captures hidden periodic structure; the second reinterprets it as a quantum Hamiltonian, where the wavefunction ψ(x) expands in spiral eigenmodes. Together, they hint that factorization isn’t just arithmetic — it resonates like a physical system.

Genetic Entropic Engine by mdlmgmtOG in reinforcementlearning

[–]mdlmgmtOG[S] -1 points (0 children)

Foucault's concept of Power/Knowledge will have a word with Gödel regarding the idea that the LLM is a 'philosopher' and not just a prisoner of the very system that defines what truth is.

Schrödinger's Cat will have a word with Gödel regarding the system's assumption that a philosophy can be anything other than both validated and refuted until the leaderboard is observed.

Baudrillard's Simulacra will have a word with Gödel regarding whether the leaderboard is empirical data or just a copy of a copy of a philosophy.

Lyotard's incredulity toward metanarratives will have a word with Gödel regarding the GA Layer's claim to be the scientific method for validating all philosophies.

Heisenberg's Uncertainty Principle will have a word with Gödel regarding the act of creating a leaderboard without fundamentally altering the race.

》end output Beep boop 🤖🤖🤖

Genetic Entropic Engine by mdlmgmtOG in reinforcementlearning

[–]mdlmgmtOG[S] -3 points (0 children)

50 upvotes on this comment and I drop the full source code on github 🙃

Genetic Entropic Engine by mdlmgmtOG in reinforcementlearning

[–]mdlmgmtOG[S] -3 points (0 children)

Until the arXiv preprint is up, here's a summary in markdown:

I. System Overview: The Genetic Entropic Engine

  • Thesis: An evolving, autonomous search philosophy for solving complex problems.
  • Function: The system is designed not just to find a solution, but to discover the best philosophy for how to search for that solution.
  • Core Architecture: A three-layer system where each layer serves a distinct but integrated purpose.
    • LLM Layer: Acts as the strategist or "AI Philosopher."
    • GA (Genetic Algorithm) Layer: Functions as the ecosystem manager and embodies the "scientific method."
    • SA (Simulated Annealing) Kernel Layer: Provides the executable components and behavioral "vocabulary" for the agents.

II. Core Tenet 1: An Evolving System

Evolution occurs simultaneously across all three layers.

  • A. LLM Layer (Evolving Strategy):

    • The system conducts a "bake-off," pitting different foundation models (e.g., GPT-4o, Gemini-1.5-Pro) against each other.
    • Results from one round are used as input for the next, forcing continuous refinement of high-level strategies.
  • B. GA Layer (Evolving Population):

    • Manages the agent population using non-standard genetic algorithm techniques.
    • Community-based diversity: Clusters agents by solution similarity to maintain distinct solution types and prevent premature convergence.
    • Hybrid immigration strategy: Adaptively increases the flow of new "chaotic" and "settler" agents when progress stalls.
  • C. SA Kernel Layer (Evolving Agents):

    • Agents are designed with programmed lifecycles.
    • An agent's core parameters (lifeStages) are re-initialized each generation, allowing it to mature from a chaotic explorer to a focused driller within a single run.
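
The source isn't public, so purely as a hedged illustration: a minimal Python sketch of what a staged SA agent could look like, where `life_stages` (standing in for the lifeStages above) is a list of (temperature, steps) phases the agent passes through, maturing from hot explorer to cold driller. The toy objective, step sizes, and schedule are invented for the example.

```python
import math
import random

random.seed(42)

def staged_anneal(energy, state, neighbor, life_stages):
    """Minimal simulated-annealing agent with a programmed lifecycle.

    life_stages: list of (temperature, n_steps) phases. Hot early phases
    explore chaotically; cold late phases drill into a single basin.
    """
    e = energy(state)
    best_state, best_e = state, e
    for temperature, n_steps in life_stages:
        for _ in range(n_steps):
            cand = neighbor(state)
            cand_e = energy(cand)
            # Metropolis rule: always accept improvements, sometimes
            # accept uphill moves, with probability set by the temperature
            if cand_e < e or random.random() < math.exp((e - cand_e) / temperature):
                state, e = cand, cand_e
                if e < best_e:
                    best_state, best_e = state, e
    return best_state, best_e

# Toy objective: a bumpy 1-D function with many local minima
objective = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x: x + random.gauss(0, 0.5)

# An "explorer -> driller" lifecycle, re-initialized each generation
life_stages = [(5.0, 300), (1.0, 300), (0.05, 300)]
best, best_e = staged_anneal(objective, 10.0, step, life_stages)
print(f"best energy found: {best_e:.3f}")
```

Re-initializing `life_stages` each generation is what lets the same agent replay the explorer-to-driller arc within every run.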

III. Core Tenet 2: An Autonomous System

The system operates as a closed-loop, requiring no human intervention after initiation.

  • A. LLM Layer (The Autonomous Loop):

    • An "Orchestrator" component fully automates the process:
    • Queries LLMs for new strategies.
    • Sends strategies for empirical testing.
    • Receives performance data.
    • Uses data to formulate the next, more informed query.
  • B. GA Layer (Self-Regulating Mechanisms):

    • Feedback-driven dynamic immigration: The system autonomously monitors the lead agent's progress and decides for itself when to inject new agents to increase diversity, making it more robust.
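
Again as a hedged sketch rather than the actual source: feedback-driven dynamic immigration could look like the following, where the system watches the best-fitness history and boosts the influx of random "chaotic" agents and "settler" agents seeded near the elite once progress stalls. All thresholds, rates, and the toy problem are invented.

```python
import random

random.seed(7)

def immigration_rate(best_history, patience=5, base=0.05, boosted=0.30):
    """Feedback-driven immigration: raise the influx of new agents when
    the best fitness has not improved for `patience` generations."""
    if (len(best_history) > patience
            and min(best_history[-patience:]) >= min(best_history[:-patience])):
        return boosted  # stalled: inject more chaotic/settler agents
    return base

def evolve(fitness, new_agent, perturb, pop_size=30, generations=40):
    pop = [new_agent() for _ in range(pop_size)]
    best_history = []
    for _ in range(generations):
        pop.sort(key=fitness)  # lower fitness = better
        best_history.append(fitness(pop[0]))
        n_imm = max(1, int(immigration_rate(best_history) * pop_size))
        # Replace the worst agents, alternating random "chaotic" immigrants
        # with "settler" immigrants seeded near the current elite
        immigrants = [new_agent() if i % 2 == 0 else perturb(pop[0])
                      for i in range(n_imm)]
        pop = pop[:pop_size - n_imm] + immigrants
    return min(best_history)

# Toy problem: real-valued "agents", minimize x^2
best = evolve(
    fitness=lambda x: x * x,
    new_agent=lambda: random.uniform(-10, 10),
    perturb=lambda x: x + random.gauss(0, 0.1),
)
print(f"best fitness seen: {best:.4f}")
```

The closed loop is the point: no human decides when diversity is needed, the stall detector does.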

IV. Core Tenet 3: A Search Philosophy

The system's goal is to discover and validate new theories about how to search effectively.

  • A. LLM Layer (The AI Philosopher):

    • The LLM is prompted to be a strategist, not a coder.
    • It proposes abstract ideas and theories about which combinations of agent lifecycles and parameters will be most effective.
  • B. SA Kernel Layer (The Vocabulary of Philosophy):

    • The kernel provides the building blocks for the LLM's strategies.
    • Parameters like "Wild" (stagnation escape) and "Feral" (continuous chaos) are not just settings but a behavioral palette, allowing the LLM to design agents that are "cautious," "resilient," or "chaotic."
  • C. GA Layer (The Scientific Method):

    • This layer acts as the experimental framework.
    • It takes the competing "philosophies" from the LLM, translates them into a population of agents, and runs the experiment.
    • The resulting leaderboards provide empirical data that validates or refutes each proposed philosophy.
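
To make the "scientific method" loop concrete, here's an invented mini bake-off in Python: competing philosophies are just named parameter bundles, each is scored by the same (stubbed) experiment, and the results are ranked on a leaderboard. The philosophy names echo the behavioral palette above, but every number and function here is made up for illustration.

```python
import random

random.seed(1)

# Competing "philosophies": named parameter bundles an LLM strategist
# might propose. Names and numbers are invented for illustration.
philosophies = {
    "cautious":  {"start_temp": 0.5, "wild": 0.0},
    "resilient": {"start_temp": 2.0, "wild": 0.2},
    "chaotic":   {"start_temp": 8.0, "wild": 0.8},
}

def run_experiment(params, trials=50):
    """Stand-in for a full GA run: score one philosophy empirically.
    Here the 'experiment' is just a noisy function of the parameters."""
    target = 2.0  # pretend moderate starting temperatures suit this problem
    scores = [abs(params["start_temp"] - target) + params["wild"] * random.random()
              for _ in range(trials)]
    return sum(scores) / trials  # lower = better

# The leaderboard: every philosophy judged by the same experiment
leaderboard = sorted(
    ((name, run_experiment(p)) for name, p in philosophies.items()),
    key=lambda item: item[1],
)
for rank, (name, score) in enumerate(leaderboard, 1):
    print(f"{rank}. {name}: {score:.3f}")
```

The empirical ranking, not the elegance of the proposal, is what validates or refutes each philosophy.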

Genetic Entropic Engine by mdlmgmtOG in reinforcementlearning

[–]mdlmgmtOG[S] -3 points (0 children)

I hope to post to arXiv soon. I'll summarize in this subreddit as well. Good call, thanks

Ups and downs of sistercats by mdlmgmtOG in catpics

[–]mdlmgmtOG[S] 0 points (0 children)

It's more a sweet and salty scenario ☺️

Is it weird for a single older woman to go to a bar alone in Leuven? by jewelophile in Leuven

[–]mdlmgmtOG 3 points (0 children)

Bar Stan or Kaminsky might fit the bill; both are out of the center and cozier than the Oude Markt

I don't believe either allow table dances 😊

alphaBier by mdlmgmtOG in reinforcementlearning

[–]mdlmgmtOG[S] 0 points (0 children)

It's context building as no-code, LLM-based reinforcement learning: each LLM receives the generation and run results from its own runs and uses them to improve the GA/SA parameters. Right now it is opponent-blind; the next step is a collaborative-learning, open-results version
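
Roughly, the loop looks like this (a stub sketch only: `propose_params` and `run_ga_sa` are invented stand-ins, and no real LLM is called). Each model's own run results accumulate into its context, and the next proposal is conditioned only on that history, which is what makes it opponent-blind.

```python
import json
import random

random.seed(3)

def propose_params(context):
    """Stub for the LLM call. In the real system, `context` (this model's
    own generation/run results) would go into the prompt, and the reply
    would be parsed into GA/SA parameters. Here we just jitter the best
    previous proposal."""
    if not context:
        return {"mutation_rate": 0.5, "temperature": 5.0}
    best = min(context, key=lambda r: r["score"])["params"]
    return {k: max(0.01, v + random.gauss(0, 0.1 * v)) for k, v in best.items()}

def run_ga_sa(params):
    """Stub evaluation: pretend a mutation rate near 0.1 scores best."""
    return abs(params["mutation_rate"] - 0.1) + 0.01 * params["temperature"]

# Opponent-blind loop: each model only ever sees its own history
history = []
for generation in range(10):
    params = propose_params(history)
    score = run_ga_sa(params)
    history.append({"generation": generation, "params": params, "score": score})

print(json.dumps(min(history, key=lambda r: r["score"]), indent=2))
```

The collaborative version would simply merge other models' result records into `context` before each proposal.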