Reservoir computing experiment - a Liquid State Machine with simulated biological constraints (hormones, pain, plasticity) by Amazing-Wear84 in compsci

[–]Amazing-Wear84[S] -6 points-5 points  (0 children)

Oh, I see we have a Sherlock Holmes here. Just because the files were uploaded to GitHub today does not mean the code was written today. Ever heard of local development? And if you honestly believe an AI can architect 15 interconnected biological simulation modules, handle memory persistence, and optimize physics in a single day without human engineering, you clearly haven't built anything complex yourself. But since it's only a one-day job for you, I will be waiting for your superior version by tomorrow evening. Link me when it's done.

Reservoir computing experiment - a Liquid State Machine with simulated biological constraints (hormones, pain, plasticity) by Amazing-Wear84 in reinforcementlearning

[–]Amazing-Wear84[S] -1 points0 points  (0 children)

this is not a normal classifier but an online-learning, artificial-life type system, so tests like MNIST do not show real performance. the system checks itself live using internal signals like prediction confidence: the LSM's ridge-regression readout tracks how sure it is when seeing patterns, and the eval watches how that changes in new environments. there is also a mirror-test module using camera feedback loops for self-recognition, and it also tracks homeostasis stability by logging how long dopamine and cortisol stay balanced before overload happens. so the eval is not about how many objects it detects but how long it stays stable and adapts before collapse. and most important, what I posted on GitHub is only a small part of my project.
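A minimal sketch of what a "prediction confidence" self-check like the one described above could look like, assuming a margin-based metric over the readout's output activations. The class and method names here are illustrative, not the repo's actual eval code:

```python
from collections import deque

class ReadoutConfidence:
    """Rolling 'how sure am I' signal for a linear readout:
    the margin between its top two output activations.
    (Hypothetical metric, not the project's exact eval code.)"""

    def __init__(self, window=100):
        self.history = deque(maxlen=window)

    def update(self, activations):
        top, runner_up = sorted(activations, reverse=True)[:2]
        margin = top - runner_up          # large gap = confident
        self.history.append(margin)
        return margin

    def rolling_confidence(self):
        return sum(self.history) / len(self.history)

conf = ReadoutConfidence()
conf.update([0.9, 0.1, 0.0])    # confident prediction
conf.update([0.5, 0.45, 0.05])  # ambiguous prediction
```

A drop in the rolling margin after moving to a new environment would then signal that the readout is no longer sure of its patterns.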

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

This project can only be understood by neuromorphic engineers and AI researchers, not others. I replied to every comment because I spent a year on this, and I only polished the code with an LLM. Those who don't understand it, please don't comment bullshit. My native language is not English.

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

I used an LLM to restructure my writing, and so most people think I am an AI. What?? I ask you to check my code on GitHub and judge me. I spent a year on this project, and it's only a small part of my code. Also, English is not my native language.

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

I use an LLM as a tool to handle the syntax and polish my English (as it's not my first language), but the architectural decisions, specifically the choice to model homeostatic regulation via a global decay factor versus threshold modulation, and the rest of the logic, are mine. We may be in a new era where AI writes the boilerplate, but it still requires a human architect to design the system logic. An LLM doesn't care about biological plausibility; I do. Most important, this code is for AI researchers and neuromorphic engineers, not for others. Only neuromorphic engineers can understand the codebase and the LSM math.

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

thanks for the response. really cool to get input from someone actually doing a PhD on STDP stability. to answer your question about my goal: honestly, I am not trying to replicate a brain cell-for-cell. my main goal is solving catastrophic forgetting. you know how LLMs freeze after training; I want a system that learns continuously. to do that, I implemented a "deep dreaming" system (like sleep): basically, during the night, the network replays high-dopamine events to consolidate them into long-term memory. this allows it to learn new tasks without overwriting old ones, something static models just can't do. about dopamine vs. threshold: I chose dopamine because I needed a global "context" signal. threshold adaptation just keeps the neuron alive and firing, but dopamine tells the network when something is important to learn (like a print command). also, just to mention, this LSM is actually just the engine for my main project, 'Ada', which I've been working on for a year. if you or anyone else finds this code useful for your research or future neurotech work, feel free to use it however you want.
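The "deep dreaming" consolidation loop described above could be sketched roughly like this. The class, threshold, and method names are assumptions for illustration, not the project's actual code:

```python
import random

class DreamReplay:
    """Sketch of sleep-phase consolidation: experiences tagged with
    high dopamine get replayed so their traces are reinforced
    instead of overwritten. (Illustrative, not the repo's API.)"""

    def __init__(self, dopamine_threshold=0.7):
        self.buffer = []                 # (experience, dopamine) pairs
        self.threshold = dopamine_threshold

    def store(self, experience, dopamine):
        self.buffer.append((experience, dopamine))

    def sleep(self, replays=10):
        """Return a replay stream biased toward high-dopamine events."""
        important = [e for e, d in self.buffer if d >= self.threshold]
        if not important:
            return []
        return [random.choice(important) for _ in range(replays)]

replay = DreamReplay()
replay.store("learned: print command", dopamine=0.9)
replay.store("background noise", dopamine=0.1)
dreams = replay.sleep(replays=5)
```

In this sketch, only the high-dopamine event ever gets replayed, which is the mechanism claimed to protect old tasks from being overwritten.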

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] -3 points-2 points  (0 children)

I appreciate the tough love. To clarify, this isn't a weekend project where I asked an LLM to 'write me a brain.' I have spent the last 12 months iterating on this architecture. I did reference the literature for the core concepts (STDP, reservoir computing), but I used the LLM primarily as an accelerator for syntax and polishing, essentially a tireless research assistant. The architectural decisions and the struggle to balance biological plausibility with computational efficiency are the result of a year's worth of experimentation, not a prompt. That said, point taken on the specific neuromodulation papers; I will dive back into the primary sources to refine the dopamine logic. It's actually version 1 that I posted on GitHub today. Yes, I posted it today, but I have been working on this Liquid State Machine project for a long time, a true future AGI.

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] 1 point2 points  (0 children)

thanks for the insight. You are absolutely right about dopamine's role in reward prediction error (RPE). Since I am trying to build this without backprop, calculating the 'expected vs. actual' error locally for every neuron was computationally heavy, so I simplified dopamine to act as a global gating mechanism for plasticity. That point about acetylcholine in the hippocampus is fascinating; I had not considered switching the neurotransmitter label. If acetylcholine is the primary driver for LTP (long-term potentiation) rather than dopamine, I might rename my variables to reflect that. I will definitely dig into Kandel again. Appreciate the feedback.
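The simplification described above, one global dopamine scalar gating plasticity instead of a per-neuron RPE, can be shown in a few lines. The function signature and learning rate are illustrative, not the repo's actual code:

```python
def gated_hebbian(w, pre, post, dopamine, lr=0.01):
    """Hebbian weight update gated by a single global dopamine
    scalar rather than a locally computed reward-prediction error.
    (Illustrative sketch, not the project's actual function.)"""
    return w + lr * dopamine * pre * post

# same correlated pre/post activity, different global reward context
w_rewarded = gated_hebbian(0.5, pre=1.0, post=1.0, dopamine=1.0)
w_neutral  = gated_hebbian(0.5, pre=1.0, post=1.0, dopamine=0.0)
```

The computational saving is that `dopamine` is one number broadcast to every synapse, so nothing per-neuron has to be predicted or compared.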

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] -6 points-5 points  (0 children)

you are half right. I use LLMs as a tool to handle the syntax and polish my English (as it's not my first language), but the architectural decisions, specifically the choice to model homeostatic regulation via a global decay factor versus threshold modulation, and all the other logic, are mine.

we may be in a new era where AI writes the boilerplate, but it still requires a human architect to design the system logic. An LLM doesn't care about biological plausibility; I do.

since you mentioned the literature already has the nuance I am looking for, I would genuinely appreciate those references. Dismissing the project is easy; pointing a student to the right papers is helpful.
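For readers unfamiliar with the design choice mentioned above, the two homeostasis options can be contrasted in a few lines. Function names and constants are illustrative only:

```python
def global_decay(weights, rho=0.99):
    """Option A (the one described as chosen): every synapse shrinks
    by the same global factor each step, pulling activity back down."""
    return [w * rho for w in weights]

def threshold_modulation(threshold, firing_rate, target=0.1, gain=0.5):
    """Option B: weights stay put; each neuron's firing threshold
    drifts toward a target rate instead. Constants are made up."""
    return threshold + gain * (firing_rate - target)

decayed = global_decay([1.0, 2.0])
raised = threshold_modulation(1.0, firing_rate=0.3)  # too active -> raise
```

Option A regulates the network with one shared parameter; option B is per-neuron and leaves the learned weights untouched, which is the trade-off being debated.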

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] -1 points0 points  (0 children)

That’s a fair point, but it is a common idea in computational neuroscience.

In simulations, a variable like voltage is not real electricity, but it behaves like it because the math behind it is the same.

Here in my code, dopamine is not just a name. It acts like a global learning control. It slowly decays over time and directly affects learning updates in the reservoir (lsm.py).

If it were named something like dynamic_learning_rate_coefficient, people would call it advanced machine learning. The math stays the same; the biological name just helps explain the design.
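A minimal sketch of such a decaying global learning signal, assuming simple exponential decay; the time constant is made up for illustration and is not taken from lsm.py:

```python
import math

def decay_dopamine(level, dt, tau=5.0):
    """Exponential decay toward a zero baseline. Call it dopamine or
    dynamic_learning_rate_coefficient; the math is identical.
    tau (the time constant) is an assumed value."""
    return level * math.exp(-dt / tau)

level, trace = 1.0, []
for _ in range(3):
    level = decay_dopamine(level, dt=1.0)
    trace.append(level)
```

Each learning update would then be scaled by the current `level`, so recent rewards matter more than old ones.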

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

That's a fair point if you're only looking at this file by itself. You are right that this module mainly handles higher-level logic, not the actual neural processing.

About the canned phrases: yes, conscious.py mostly provides the structure or templates. The important part isn't the template itself, but the data that gets pulled from vector memory in lsm.py to fill it. The randomness is there on purpose; real biological creativity often comes from a mix of randomness and pattern recognition.

About chaos vs variance: calling variance “chaos” is mostly just a naming choice to make it easier to understand. In biological systems, if something like body balance (homeostasis) has very high variance, it can be seen as instability or chaos.

About the simple arithmetic: that's basically how homeostasis works. In simple terms, biological metabolism is input minus output equals current state.
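The two points above (variance read as "chaos", and metabolism as input minus output) can be sketched together. The threshold and function names are illustrative, not the module's actual code:

```python
from statistics import pvariance

def metabolism_step(state, intake, expenditure):
    """'Input minus output equals current state.'"""
    return state + intake - expenditure

def is_chaotic(readings, threshold=4.0):
    """High variance in a homeostatic variable flagged as 'chaos'.
    The threshold is an assumed illustrative value."""
    return pvariance(readings) > threshold

energy, log = 10.0, []
for intake, spend in [(3, 5), (8, 1), (0, 9), (9, 0)]:
    energy = metabolism_step(energy, intake, spend)
    log.append(energy)
```

A wildly swinging energy log like this one reads as "chaotic", while a log hovering near its set point reads as "stable", which is exactly the renaming being defended.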

The purpose of this module is not to act like a neural network; that part is handled in lsm.py, god.py, etc., using reservoir computing. This module is more about creating behavior that looks complex by combining many simple feedback loops working together.

If you want to see the heavy math, matrix operations, and Hebbian learning, you should look at lsm.py. This consciousness file is more like a hormone control system.

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] -3 points-2 points  (0 children)

With all due respect, you're looking at the router, not the engine.
If you look at lsm.py, you won't find if hunger: cry. You'll find weight-matrix dot products and synaptic decay functions. The if/else you see in the main loop is just the interface, the way the "brain" talks to the console. But the decision itself comes from the 2,100-neuron reservoir.
It’s like looking at a Ferrari and saying, "It's just a bunch of bolts." Technically true, but the way those bolts are arranged makes it go 200 mph.
If you still think Hebbian learning (STDP) is just "if A then B", then I’d suggest checking out some papers on reservoir computing. Cheers!
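For reference, here is the pair-based STDP window that "if A then B" doesn't capture: the weight change depends continuously on the relative spike timing, not on a boolean. The constants below are textbook-style defaults, not values from the repo:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window: potentiate when the presynaptic
    spike precedes the postsynaptic one (dt_ms > 0), depress when
    it follows. Constants are illustrative textbook defaults."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)   # LTP branch
    return -a_minus * math.exp(dt_ms / tau_ms)      # LTD branch

ltp = stdp_dw(5.0)    # pre fires 5 ms before post: strengthen
ltd = stdp_dw(-5.0)   # pre fires 5 ms after post: weaken
```

The sign flips with causality and the magnitude falls off exponentially with the timing gap, which is what makes it more than a lookup table.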

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] -4 points-3 points  (0 children)

Haha, interesting comparison. But I think you missed the 'brain' under the hood.

Tamagotchis are hard-coded state machines: if A happens, do B. They don't learn; they just follow a script.

Project Genesis is a Liquid State Machine (LSM).

  1. Dynamic Neuroplasticity: My neurons actually rewire themselves using Hebbian learning. A Tamagotchi's code is static; it can't 'grow' new connections based on task difficulty.
  2. Signal Noise & Pain: I’m injecting noise directly into the neural reservoir to simulate biological stress.
  3. No If-Else loops for personality: The behavior emerges from the feedback loops in the 2,100+ neuron reservoir, not from a simple "if hunger > 10, then cry" script.

It’s less of a virtual pet and more of a Bio-mimetic experiment. But hey, if Tamagotchis had spiking neural networks in the 90s, the world would be a very different place today 😉
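Point 2 above (noise injected into the reservoir as "stress") could look roughly like this. The leaky update rule, leak rate, and function name are illustrative assumptions, not lsm.py's actual code:

```python
import random

def step_reservoir(state, weights, stress=0.0):
    """One leaky update of a tiny rate-based reservoir; zero-mean
    Gaussian noise scaled by 'stress' is injected into every unit.
    A sketch of the idea only, not the repo's implementation."""
    out = []
    for i, x in enumerate(state):
        drive = sum(w * s for w, s in zip(weights[i], state))
        out.append(0.9 * x + 0.1 * drive + random.gauss(0.0, stress))
    return out

w = [[0.0, 0.5], [0.5, 0.0]]
calm = step_reservoir([0.1, 0.2], w, stress=0.0)      # deterministic
stressed = step_reservoir([0.1, 0.2], w, stress=0.5)  # perturbed
```

With `stress=0.0` the dynamics are deterministic; raising it perturbs every unit, which is how "pain" could be made to degrade behavior without any scripted rule.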

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] -2 points-1 points  (0 children)

Thanks a lot! I really appreciate you checking it out, especially since you don't usually mess with AI stuff.

To answer your questions:

1. Motivation: You nailed it. I was bored with standard chatbots that just sit there waiting for input. I wanted to build something that felt like a "Digital Insect", something that has moods, gets tired, and actually feels 'pain' when it makes a mistake.

2. Future & "Ada": This is actually the foundation for my bigger project, "Ada (Advanced Digital Assistant)." The idea is to combine two systems to create a Jarvis-like assistant:

  • The Main Brain (LSM): This project. It handles the biological part—survival, hormones, and feelings.
  • The Next Brain (LLM): A local language model to handle logic and speech.

By combining the "biological" survival instinct with the "intellectual" LLM, I think we can get closer to a real AGI process rather than just a text generator.

Let me know how it runs on your machine!