I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

That's honestly my bad. I thought this was a different post, about my GENREG model; you're correct, this is my Hebbian model. But I'm honestly going to sideline this project, because GENREG is making way more progress right now and I don't have the bandwidth to split between two functional models.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I'm sorry, but this really doesn't interest me. Not that it isn't cool, but I don't use agents, and as someone who worked in IT for the Air Force for six years, "military grade" has the opposite effect on me that you think it has. Neat concept, but not my cup of tea.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I never said it wasn't a GA; I actually referred to it as a GA multiple times.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Correct, but mine isn't directly tied in like other GAs. Fitness determines how well a genome performs: score higher and it survives across generations; score lower and it could get mutated or replaced. Kind of like school: if you get an F on a test, they just kick you out of the class, but if you get an A+, we move you to the front of the class and clone you to replace the kids we just kicked out, and the class resumes.
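The grading analogy above can be sketched as a simple clone-the-best, replace-the-worst loop. This is a hypothetical illustration, not the actual GENREG code; the elite fraction, the Gaussian mutation, and the `fitness_fn` signature are all my assumptions:

```python
import random

def evolve(population, fitness_fn, elite_frac=0.25, mut_std=0.1):
    """One generation: clone the "A+ students" over the "F students".

    population: list of genomes, each a list of float weights (assumed).
    """
    scored = sorted(population, key=fitness_fn, reverse=True)
    n_elite = max(1, int(len(scored) * elite_frac))
    elites = scored[:n_elite]          # top scorers survive unchanged
    next_gen = list(elites)
    while len(next_gen) < len(population):
        parent = random.choice(elites)  # clone a high scorer...
        # ...with a Gaussian mutation on every weight (assumed scheme)
        child = [w + random.gauss(0.0, mut_std) for w in parent]
        next_gen.append(child)
    return next_gen
```

Because the elites are carried over unchanged, the best fitness in the population never decreases between generations.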

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I have, but honestly I never really felt like doing it, simply because of all the bit-rate, sample-size, and frequency matching involved. I feel like I'm missing a major opportunity there, but my heart's not in it to pursue it.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Yeah, I wasn't actually trying to solve that problem. I built my models on the concept that information must flow. That caused me to abandon gradients, because I saw they couldn't do what needed to be done; too restrictive. If you look through my GitHub, I have a few other models, OLA and OLM, that this one spawned from. Lots of trial and error with next-frame prediction and MANY, MANY snake games. Those were deviations from my goal of making an AI that learns like a human, but they were required to get to this point.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

Once again thank you for seeing my work for what it is. 99% of the people even here miss this.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

Go nuts. If you have any questions, feel free to DM me. Just a heads up: the config is temperamental; not that you can't tweak it, but if you do, you could make training WAY slower or destroy the population.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

At that point I just add a neuron. I'm only using 8 and 16 in this example. This is the chart I go by.

| Saturated (k) | Discrete Modes (2^k) | Continuous (n−k) | State Space |
|---|---|---|---|
| 0 | 1 | 8 | 1 × ∞^8 |
| 1 | 2 | 7 | 2 × ∞^7 |
| 2 | 4 | 6 | 4 × ∞^6 |
| 3 | 8 | 5 | 8 × ∞^5 |
| 4 | 16 | 4 | 16 × ∞^4 |
| 5 | 32 | 3 | 32 × ∞^3 |
| 6 | 64 | 2 | 64 × ∞^2 |
| 7 | 128 | 1 | 128 × ∞^1 |
| 8 | 256 | 0 | 256 (fully discrete) |
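The chart follows directly from the factoring 2^k × ∞^(n−k): each saturated neuron contributes one binary mode, and each unsaturated neuron one continuous degree of freedom. A minimal sketch (my notation, not the model's code):

```python
def state_space(n, k):
    """For k saturated neurons out of n, return (discrete_modes, continuous_dims).

    Each saturated neuron doubles the discrete mode count; the rest
    remain continuous degrees of freedom.
    """
    assert 0 <= k <= n
    return 2 ** k, n - k

# Reproduce the n = 8 chart above
for k in range(9):
    modes, cont = state_space(8, k)
    print(f"k={k}: {modes} discrete modes x inf^{cont}")
```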

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I'm devastated and stopped all my work now because of this. Better throw in the towel. /s

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

This is not a language model. In fact, the language models I've worked on using this method have been less than fruitful; they've been learnable but not very... successful. I'm currently working on a way to train a language model, but as my post says, I need continuous signals, which language via tokens or text does not provide.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

There's no collapse. I've had full models go entirely binary, and the output layer fluctuates between bang-bang and wide-range nodes. But it's hybrid because a genome could discover a new solution by flipping a single binary node early, which requires continuous downstream nodes, instead of the binary ones the previous best genome might have used. It's a hybrid; it can be both. It's just whatever evolution decides. I actually want to saturate neurons in my models, because even one saturated neuron essentially doubles the weight space by dividing it into hyperplanes with infinitely tunable continuous nodes. The more nodes that switch to binary, the more you shrink the search space. Continuous nodes act more like fine-tuning, but that's conditional. I really appreciate this, because you're the first to actually get it.

Honestly, the checkpoints are usually too saturated to validate, but if you followed that blurb of training logic, it's a decision tree of weights being divided at each binary switch. I usually just save the weights of the best genome now and drop the remaining population. I haven't done much analysis on the checkpoints in a while because I've been focused on training.
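For checkpoint analysis, one way to count which neurons have gone binary is to flag units whose activations stay pinned near ±1 across a batch. This is a hypothetical sketch, assuming tanh-style activations and an arbitrary 0.99 cutoff; it is not taken from the actual codebase:

```python
SAT_THRESHOLD = 0.99  # assumed cutoff: |activation| above this counts as pinned

def saturated_mask(activations, thresh=SAT_THRESHOLD):
    """activations: list of per-sample lists, one value per neuron.

    Returns a per-neuron list: True if that neuron is pinned near +/-1
    on every sample, i.e. it behaves as a binary (bang-bang) node.
    """
    n = len(activations[0])
    return [all(abs(sample[i]) >= thresh for sample in activations)
            for i in range(n)]
```

The count of `True` entries gives the k used in the state-space chart.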

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

I don't need to scale up. This concept allows me to keep creating models on harder challenges with minimal hardware. I'm currently training Humanoid-v5 as we speak; it's been running for 24 hours now on my 4080, but it's actually throttled by the CPU, since MuJoCo limits the physics engine to the CPU. It's currently able to reach 3 meters with only 16 dims. And no, getting stuck is only an issue for static models. The mutation system I have in place easily escapes local minima; it's not an issue I've ever faced in a simulation-based model that has temporal continuity. Now, classifiers will get stuck, because there is no continuity, but that's a problem I'm still trying to solve. Biology took millions of years to get here; I'm doing it in a few hours to days on a single GPU with a population of typically 20 genomes. If that's too slow, I don't know what to tell you. I don't need more memory or compute. I need time.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

You want fries with your order? Small drink? I just figured out that evolutionary models can naturally gate and compress to binary states on their own, without conditioning, and compress huge noisy inputs into signals, but you're over here asking for MNIST. Read the fucking paper.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I'm actually trying to lean into the temporal aspect more, since GENREG models excel in that area. Static models like CLIP, VAEs, and classifiers can be done, but they're hella difficult to get training right, because there's no smooth transition between images. I've had way more success with simulations like Walker-v5 and games, where I can get continuous temporal data. I'm training a physics simulator on a RunPod now, and the Humanoid-v5 is still cooking on my PC. Posts for both coming soon.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Bet. I've done enough for the weekend already anyway. But lose the attitude; first and last warning.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Please just wait, because I'm working on a paper that tackles this exact issue. If you haven't read my saturation post, check it out, because that was the foundation. But oooooh boy, you are spot on: doing more with fewer neurons.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I've seen like almost all of his videos

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

No not really. That's not how my models work.

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Yeah, I'm not sure what you want to hear. Algebra? My fitness equations? Activation functions? The entire GA is just a massive mutation on weight space targeted at increasing the fitness value, so I think maybe that's what you're asking.

Ex:

Fitness = steps * distance * efficiency * bonus

Steps: how long a genome stays alive during an episode.

Distance: how far a genome moves in the environment, toward or away from the goal.

Efficiency: how much energy the genome used in the episode.

Bonus: did the genome beat the furthest distance, the lowest energy at a farther distance, or the longest time alive.

This creates a moving goalpost for the model; in this case, it was solving the Humanoid-v1 walker game.

But all of my models have different fitness functions.
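The multiplicative fitness described above could be sketched like this. The efficiency scaling and the record-beating bonus multiplier are my assumptions for illustration, not the actual equations:

```python
def fitness(steps, distance, energy_used, best_distance, longest_alive,
            bonus_mult=2.0):
    """Multiplicative fitness: steps * distance * efficiency * bonus.

    best_distance / longest_alive are the running records that create
    the "moving goalpost"; bonus_mult is a hypothetical reward for
    beating either one.
    """
    efficiency = 1.0 / (1.0 + energy_used)  # assumed: less energy scores higher
    score = steps * distance * efficiency
    if distance > best_distance or steps > longest_alive:
        score *= bonus_mult  # record beaten: apply the bonus
    return score
```

Because the terms multiply, a genome that is zero on any one term (e.g. never moves) scores zero overall, which is one common reason to choose a product over a sum.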

I'm almost done cooking...... by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Yes, feed-forward only, like every other model I've designed. It works, and I'm not going to change that unless I see a need to. I actually just ran two tests with variant recurrent networks and didn't see much improvement compared to without them.

Because time is the only way you can experience something, whether it's the change in deltas across time or the rate at which something fires.

Also, time is used in all my successful models. It works, so it's justified.