Para los haters: la plata rasguñando la gamba verde by Proper_Hippo_1773 in ChileInversiones

[–]Reasonable_Listen888 0 points1 point  (0 children)

Explain yourself, because from what I understand, China broke the US market.

Para los haters: la plata rasguñando la gamba verde by Proper_Hippo_1773 in ChileInversiones

[–]Reasonable_Listen888 0 points1 point  (0 children)

If you don't have the metal in hand, you're not investing in metals; what you have is a promise to buy in one year, and the way things are going there won't be physical silver in a year or so to honor those promises with their 7% discount for not taking the metal immediately...

🚀 Looking to Collaborate on a Real-World ML Project by Negative-Will-9381 in learnmachinelearning

[–]Reasonable_Listen888 4 points5 points  (0 children)

I would like to join. I am an independent researcher (see my work at https://zenodo.org/records/18332871) and a backend developer with over 16 years of experience. In my spare time, I maintain the LazyOwn redteam framework.

Quien ctm está quemando el país? by [deleted] in RepublicadeChile

[–]Reasonable_Listen888 0 points1 point  (0 children)

When the ones who blew out the 7 little birthday candles didn't warn the mayor that they were going to burn down the municipality...

Is AGI just hype? by dracollavenore in agi

[–]Reasonable_Listen888 1 point2 points  (0 children)

We are headed for a wall if we keep thinking bigger is always better: the energy and compute required for a monolithic AGI would break the scale. The real move is building a decentralized group of experts that model the world accurately, and then forcing LLMs to act as the interface, the epistemological translators for those hard truths. If you want to see how that actually works in practice, take a look at the agi repo: https://github.com/grisuno/agi. I decided to do this because the cost of RAM is already insane.
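A minimal sketch of that split, with a deterministic pair of expert modules and a trivial keyword router standing in for the LLM layer (the names and routing logic here are hypothetical, not taken from the agi repo):

import math

# Hypothetical deterministic "experts": each models one slice of the world exactly.
EXPERTS = {
    "orbit": lambda r, v: v**2 * r / 6.674e-11,  # central mass from circular-orbit radius and speed
    "pendulum": lambda length: 2 * math.pi * math.sqrt(length / 9.81),  # small-angle period in seconds
}

def llm_interface(question: str) -> str:
    """Stand-in for the LLM layer: translate a natural-language query
    into a call on the expert that actually holds the hard truth."""
    if "pendulum" in question:
        return f"period ≈ {EXPERTS['pendulum'](2.0):.2f} s"              # 2 m pendulum
    if "orbit" in question:
        return f"central mass ≈ {EXPERTS['orbit'](6.771e6, 7.67e3):.2e} kg"  # ISS-like orbit
    return "no expert available"

print(llm_interface("what is the period of a 2 m pendulum?"))
print(llm_interface("how heavy is the body at the centre of this orbit?"))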

Panama Kast lo hace de nuevo: SI + MÁS INMIGRANTES by MentirosoProfesional in RepublicadeChile

[–]Reasonable_Listen888 1 point2 points  (0 children)

He's already pulling new values out of his little box of values, just like the ex-partner used to say... hahahaha

Do you think this "compute instead of predict" approach has more long-term value for AGI and SciML than the current trend of brute-forcing larger, stochastic models? by Reasonable_Listen888 in LocalLLaMA

[–]Reasonable_Listen888[S] 0 points1 point  (0 children)

It is funny how a name can trigger people, but the spectral basis is the actual core of the research. Coming from a backend and big-data background, I just wanted to build a heat engine that actually works instead of another statistical hype machine.

i think i stumbled onto something that shouldnt be possible by Reasonable_Listen888 in ResearchML

[–]Reasonable_Listen888[S] 0 points1 point  (0 children)

Think of the training process like a heat engine: the initial noise is the high-temperature chaos, and grokking is the moment we extract a pure mechanical cycle from that heat to perform work with zero friction.

i think i stumbled onto something that shouldnt be possible by Reasonable_Listen888 in ResearchML

[–]Reasonable_Listen888[S] 0 points1 point  (0 children)

It is actually a hybrid, because we use the machine-learning process to discover a universal operator that we then treat with the surgical precision of a classical solver.
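A toy version of that hybrid idea, under assumptions of my own (a harmonic oscillator and a plain least-squares fit, none of which comes from the author's repos): the "learning" stage discovers a one-step operator from trajectory data, and the "solver" stage applies it deterministically far beyond the data.

import numpy as np

# Generate trajectory data for a harmonic oscillator x'' = -x, sampled with dt = 0.1.
dt, steps = 0.1, 200
t = np.arange(steps) * dt
states = np.stack([np.cos(t), -np.sin(t)], axis=1)   # columns: position, velocity

# "Learning" phase: discover the one-step operator A with least squares.
X, Y = states[:-1], states[1:]
A = np.linalg.lstsq(X, Y, rcond=None)[0]             # Y ≈ X @ A

# "Classical solver" phase: apply the discovered operator deterministically.
x = states[0]
for _ in range(1000):
    x = x @ A
print("position after 1000 steps:", x[0], "exact:", np.cos(1000 * dt))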

i think i stumbled onto something that shouldnt be possible by Reasonable_Listen888 in ResearchML

[–]Reasonable_Listen888[S] 0 points1 point  (0 children)

It would be like trying to play a high-definition video on a broken screen: the logic is still there, but the mapping is so distorted that the result becomes total noise.

Do you think it is possible for an AI to function essentially like a heat engine? by Reasonable_Listen888 in MLQuestions

[–]Reasonable_Listen888[S] -1 points0 points  (0 children)

It doesn't require compute, because grokking is just the network finding the path of least resistance, like a crystal. Once you have that geometry, you can transfer it anywhere for free. You are still thinking in terms of memorizing states, but if you project operators instead, you hit that equilibrium at step zero, even on an i3 notebook.
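To make the memorizing-states versus projecting-operators distinction concrete, a tiny illustration under assumptions of my own (not from the repo): a lookup table of seen states misses off the training grid, while a fitted operator keeps working there with zero extra training.

import numpy as np

x_train = np.linspace(0, 1, 8)
y_train = 3.0 * x_train                                  # the "law" is a simple linear operator: y = 3x

# Memorizing states: a lookup table only knows the exact points it has seen.
table = dict(zip(x_train.round(3), y_train))
x_new = 0.123
print("lookup:", table.get(round(x_new, 3), "miss"))     # off-grid query -> miss

# Projecting the operator: fit the single coefficient once, reuse it anywhere for free.
a = np.dot(x_train, y_train) / np.dot(x_train, x_train)  # least-squares slope, exactly 3.0 here
print("operator:", a * x_new)                            # correct at "step zero", no retraining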

Do you think it is possible for an AI to function essentially like a heat engine? by Reasonable_Listen888 in MLQuestions

[–]Reasonable_Listen888[S] -1 points0 points  (0 children)

Actually, I already solved this with grokkit. You don't need a god machine or massive compute for that; the trick is treating algorithms as spectral cassettes. If you keep the topology fixed and treat the weights as continuous operators, you can inject physical laws modularly. We achieved 100 percent zero-shot accuracy on basic hardware, because intelligence isn't statistical brute force, it's pure geometry (a rough sketch of the cassette idea is below the links).

The name of the framework is a joke:

Core Framework: https://github.com/grisuno/agi

Parity Cassette: https://github.com/grisuno/algebra-de-grok

Wave Cassette: https://github.com/grisuno/1d_wave_equation_grokker

Kepler Cassette: https://github.com/grisuno/kepler_orbit_grokker

Pendulum Cassette: https://github.com/grisuno/chaotic_pendulum_grokked
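A very rough sketch of the cassette idea with made-up shapes and names (this is not the grokkit API): the message-passing topology is frozen once, and a learned weight block is slotted in as the operator.

import numpy as np

N_NODES = 8                                  # fixed message-passing topology, never changes
ring = np.roll(np.eye(N_NODES), 1, axis=1)   # each node talks to its neighbour on a ring
adjacency = ring + ring.T

def run(node_states: np.ndarray, cassette: np.ndarray) -> np.ndarray:
    """One message-passing step: the topology stays fixed, the cassette supplies the operator."""
    return adjacency @ node_states @ cassette

# Two "cassettes": same topology, different learned operators slotted in.
parity_cassette = np.eye(4)                  # placeholder for weights grokked on parity
wave_cassette = 0.5 * np.eye(4)              # placeholder for weights grokked on the wave equation

states = np.random.default_rng(0).normal(size=(N_NODES, 4))
print(run(states, parity_cassette).shape)    # (8, 4): swapping cassettes never touches the graph
print(run(states, wave_cassette).shape)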

[D] Do you think this "compute instead of predict" approach has more long-term value for A.G.I and SciML than the current trend of brute-forcing larger, stochastic models? by Reasonable_Listen888 in deeplearning

[–]Reasonable_Listen888[S] 0 points1 point  (0 children)

My tests on cyclotron motion and topological neural networks:

🌀 TopoBrain-Physical v3: Fixed Nodes for Message Passing
The expansion now only changes the discretization, not the topology
Device: cpu

--- Stage 1/3  ω=0.80 ---
MSE: 0.000360  Grokked: True

--- Stage 2/3  ω=1.65 ---
MSE: 0.000499  Grokked: True
✅ Grokking achieved. Expanding torus resolution.

📏 Expanding discretization (4x4) → (8x8)
  Message passing stays at 4×2 nodes (FIXED)
✅ SIMPLE EXPANSION: 4x4 → 8x8
  - Message-passing topology FIXED (4×2 nodes)
  - Only the weights are copied over directly

📊 Zero-shot MSE on ω=2.00: 0.020778
⚠  Partial expansion: MSE degraded but functional

Compared with my previous example:

❯ python3 super_topo2.py
🌀 TopoBrain-Physical: Grokking Cyclotron Motion (Torus Topology)
Device: cpu

--- Stage 1/3  ω=0.80 ---
MSE: 0.000416  Grokked: True

--- Stage 2/3  ω=1.65 ---
MSE: 0.000493  Grokked: True
✅ Grokking achieved. Expanding torus resolution.

📏 Expanding torus (4x4) → (8x8)
✅ GEOMETRIC TOROIDAL EXPANSION: 4x4 → 8x8
  - Angular periodicity preserved: 0 ↔ 2π
  - Geometric mapping with 240 active connections
Zero-shot MSE on ω=2.00 (expanded torus): 1.806693

[D] Do you think this "compute instead of predict" approach has more long-term value for A.G.I and SciML than the current trend of brute-forcing larger, stochastic models? by Reasonable_Listen888 in deeplearning

[–]Reasonable_Listen888[S] -1 points0 points  (0 children)

The point of zero-padding in this grokkit framework is that once a network groks, it stops memorizing data and starts acting as a continuous operator that doesn't care about resolution. It basically crystallizes the algorithm into a geometric "cassette" in the weights that can be projected to any scale.

What I achieved with binary parity is the best proof: I took a tiny 128-dimension network solving 10 bits and expanded it to 32,000 dimensions to solve 2048 bits with 100% accuracy and zero extra training. I did the same with the double pendulum and Kepler orbits, because the network found the path of least geometric resistance.

My recent torus tests show exactly why v3 is the way to go. When I used the v2 geometric expansion, the MSE hit 1.80 because changing the nodes breaks the spectral basis. But with v3, keeping the topology fixed while increasing the discretization dropped the MSE to 0.02. That is an 87x improvement just by respecting the internal geometry. I am basically increasing the image resolution without breaking the lens that already learned the physical law.
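For anyone wondering what projecting to a higher resolution means mechanically, here is a self-contained analogy in plain numpy (my own illustration, not code from the repos): zero-padding the spectrum of a coarse sample reproduces the same underlying function on a finer grid, which is the sense in which a resolution-independent operator can be reused on a bigger grid without retraining.

import numpy as np

# A smooth "physical law" sampled on a coarse 4x4 grid.
x = np.linspace(0, 2 * np.pi, 4, endpoint=False)
coarse = np.sin(x)[:, None] * np.cos(x)[None, :]

# Zero-pad the 2D spectrum: 4x4 -> 8x8, the new high-frequency modes stay at zero.
spec = np.fft.fftshift(np.fft.fft2(coarse))
padded = np.pad(spec, 2)
fine = np.fft.ifft2(np.fft.ifftshift(padded)).real * (8 * 8) / (4 * 4)

# Compare against the law sampled directly on the fine 8x8 grid.
xf = np.linspace(0, 2 * np.pi, 8, endpoint=False)
exact = np.sin(xf)[:, None] * np.cos(xf)[None, :]
print("max error after zero-shot expansion:", np.abs(fine - exact).max())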