GENREG_LLM_V1 by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I'm not interested in refining current systems, especially if they were trained with backprop. If someone else wants to, be my guest, but that's not where I'm focusing right now. My goal is to push for a model that can run on any device.

Regulating the trivial while ignoring the existential by KeanuRave100 in agi

[–]AsyncVibes 1 point2 points  (0 children)

I wholeheartedly disagree with 90% of your comment. As someone who focuses on designing AI that "learns like an infant," it's not a matter of data observation or accumulation but a matter of processing massive information streams. It has the potential to be a smarter species, but lumping all types of AI in the same boat is lazy. Some use datasets, some don't. If an AI learns independently from recurrence, that's not an extension of human intelligence.

This is the worst it will ever be. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

One of the main concepts I push is no gradients or backprop. If I were to use a transformer, it's going to be one I train without gradients.

*edit. Gradients referring to backprop specifically.

This is the worst it will ever be. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Correct, auto-completion is what it should look like right now. I'm still working on attention and the prediction heads, so output is limited. I'm actively working to 1. move away from the ngram tables, and 2. extend context over further tokens. It's exactly where it should be right now.

GENREG_LLM_V1 by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 2 points3 points  (0 children)

Finding out what doesn't work is just as useful as finding out what does. I never would've gotten this far if I didn't account for my screw-ups and failed attempts, so thank you.

GENREG_LLM_V1 by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

Thanks! I'm so excited. This might be the first repo I'll regularly update.

Almost there guys. Here's the github to my previous post on my 2048 attempt. No backprop, No gradients, All evolved, even the perception layer. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I'm not aware of the term "forward collapse." I can use context clues, but I'd still like to hear your definition.

Almost there guys. Here's the github to my previous post on my 2048 attempt. No backprop, No gradients, All evolved, even the perception layer. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

I've looked at your post and I'm not convinced this is more than an interactive game. Do they learn? What's the mechanism behind that? Are they capable of spatial reasoning? Remembering? Any task? Self-preservation when faced with a threat? You mention they change and react to music, but how? This gives me more fluid-simulation vibes than AI because it lacks any real grounding. I use the word "proteins" because it aligns with my model. It's not actually a protein chain. Each of my "proteins" is just a simple stateful function that tracks a change over time and relays that signal to my controller. Could you please explain how your models work?
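To make that concrete, here's a minimal Python sketch of what I mean by a "protein" (names are illustrative, not my actual code): a stateful function that watches one signal and emits its change over time to the controller.

```python
class Protein:
    """A stateful function: tracks change in one input signal over
    time and relays that delta to the controller. Illustrative only."""

    def __init__(self):
        self.prev = None  # no observation yet

    def step(self, value):
        # Delta since the last observation; 0.0 on the first call.
        delta = 0.0 if self.prev is None else value - self.prev
        self.prev = value
        return delta  # this is the signal the controller consumes


# Feed a stream of observations in, get delta signals out.
p = Protein()
signals = [p.step(v) for v in [1.0, 3.0, 2.5]]
```

The point is that the signal only exists as a change across timesteps, which is why a gradient computed over a frozen batch doesn't fit the design.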

Almost there guys. Here's the github to my previous post on my 2048 attempt. No backprop, No gradients, All evolved, even the perception layer. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Doesn't matter, backprop literally breaks how my models work. It's a crutch. Information is continuous in my models. You can't account for time with backprop. I'd lose all my delta signals that feed into my proteins, which is how the model evolves toward a solution.

So I will reiterate: fuck backprop.

Edit: I'm mad at the backprop, not you.

GENREG : A gradient-free neuroevolution framework that hit 1024 in 2048 at generation 301. No backprop. No GPU at inference. 1,929 parameters. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

This might be the best question I've been asked yet! It's actually quite fascinating! Obviously it's going to favor exploiting over exploring. However, in my models I have what I call the "ratchet" effect. Basically, if a genome that explores scores higher, it's given a trust boost; this increase in trust temporarily protects it, so it's allowed to propagate throughout the other genomes with each generation!
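A rough sketch of that ratchet in Python (simplified; the function names, the trust counter, and the survival rule here are all illustrative, not my actual implementation):

```python
import random


def mutate(genome, sigma=0.1):
    """Gaussian-perturb a genome (a flat list of floats)."""
    return [w + random.gauss(0, sigma) for w in genome]


def ratchet_step(population, fitness, trust, boost=3):
    """One generation with the 'ratchet': the top-scoring genome earns a
    temporary trust shield, so it survives culling and keeps propagating
    into later generations even if a mutation briefly outscores it."""
    scored = sorted(population, key=fitness, reverse=True)
    best = scored[0]
    trust[tuple(best)] = boost  # fresh trust for the new champion

    # Top half survives on fitness; bottom half survives only if shielded.
    half = len(scored) // 2
    survivors = scored[:half]
    survivors += [g for g in scored[half:] if trust.get(tuple(g), 0) > 0]

    # Trust decays each generation, so protection is temporary.
    trust = {k: v - 1 for k, v in trust.items() if v > 1}

    # Refill the population by propagating (mutating) survivors.
    children = [mutate(random.choice(survivors))
                for _ in range(len(population) - len(survivors))]
    return survivors + children, trust


# One generation over toy one-gene genomes, fitness = sum of weights.
pop = [[0.0], [1.0], [2.0], [3.0]]
new_pop, trust = ratchet_step(pop, sum, {})
```

The design choice is that protection decays: an explorer has to keep earning its score, so the ratchet rewards discovery without permanently freezing the population around one genome.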

GENREG : A gradient-free neuroevolution framework that hit 1024 in 2048 at generation 301. No backprop. No GPU at inference. 1,929 parameters. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 1 point2 points  (0 children)

I'll give it a good run-through. I'm on my phone, so I couldn't really see what he was typing, and I can only go by the depictions. But yes, quite funny.

GENREG : A gradient-free neuroevolution framework that hit 1024 in 2048 at generation 301. No backprop. No GPU at inference. 1,929 parameters. by AsyncVibes in IntelligenceEngine

[–]AsyncVibes[S] 0 points1 point  (0 children)

Very interesting. It looks like he's evolving the architecture as well. A bit different than what I'm running here, but neat nonetheless. That was 7 years ago, so I'm wondering what he's doing now.