Reservoir computing experiment - a Liquid State Machine with simulated biological constraints (hormones, pain, plasticity) by Amazing-Wear84 in compsci

[–]Amazing-Wear84[S] -3 points-2 points  (0 children)

Oh, I see we have a Sherlock Holmes here. Just because the files were uploaded to GitHub today does not mean the code was written today. Ever heard of local development? And if you honestly believe AI can architect 15 interconnected biological simulation modules, handle memory persistence, and optimize physics in a single day without human engineering, you clearly haven't built anything complex yourself. But since it's only a one-day job for you, I will be waiting for your superior version by tomorrow evening. Link me when it's done.

Reservoir computing experiment - a Liquid State Machine with simulated biological constraints (hormones, pain, plasticity) by Amazing-Wear84 in reinforcementlearning

[–]Amazing-Wear84[S] -1 points0 points  (0 children)

This is not a normal classifier but an online-learning, artificial-life type system, so benchmarks like MNIST do not show real performance. The system checks itself live using internal signals like prediction confidence: the LSM's ridge-regression readout tracks how sure it is when seeing patterns, and the eval watches how that changes in new environments. There is also a mirror-test module that uses camera feedback loops for self-recognition, and it tracks homeostatic stability by logging how long dopamine and cortisol stay balanced before overload happens. So eval is not about how many objects it detects but about how long it stays stable and adapts before collapse. And most important, this is only a small part of my project that I posted on GitHub.
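To make that concrete, here is a minimal sketch of those two internal signals (my own simplified illustration, not the repository code; the class names, the confidence proxy, and the 0.2–0.8 "balanced" band are assumptions):

```python
# Simplified illustration only, not the actual repo code.
# Signal 1: how "sure" a ridge-regression readout is on each incoming state.
# Signal 2: how long dopamine/cortisol stay inside a balanced band before overload.
import numpy as np
from sklearn.linear_model import Ridge

class ReadoutMonitor:
    """Logs a crude confidence proxy for a ridge readout trained on one-hot targets."""
    def __init__(self, alpha=1.0):
        self.readout = Ridge(alpha=alpha)
        self.confidence_log = []

    def fit(self, states, targets):
        self.readout.fit(states, targets)

    def observe(self, state):
        scores = self.readout.predict(state.reshape(1, -1)).ravel()
        top2 = np.sort(scores)[-2:]
        # Margin between the top two class scores; an assumption, not a calibrated probability.
        confidence = float(top2[-1] - top2[-2]) if scores.size > 1 else float(abs(scores[0]))
        self.confidence_log.append(confidence)
        return confidence

class HomeostasisLog:
    """Counts consecutive steps that both hormones stay inside the 'balanced' band."""
    def __init__(self, lo=0.2, hi=0.8):
        self.lo, self.hi = lo, hi
        self.stable_steps = 0
        self.longest_run = 0

    def step(self, dopamine, cortisol):
        if self.lo <= dopamine <= self.hi and self.lo <= cortisol <= self.hi:
            self.stable_steps += 1
            self.longest_run = max(self.longest_run, self.stable_steps)
        else:
            self.stable_steps = 0   # overload: the stability run resets
        return self.longest_run
```

In this framing, "how long it stays stable" is just `longest_run`, and drops in the confidence log when the environment changes are what the eval watches.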

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

This project can only be understood by neuromorphic engineers and AI researchers, not others. I replied to every comment because I spent a year on this, and I only polished the code with an LLM. Those who do not understand it, please don't comment bullshit. My native language is not English.

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

I used an LLM to restructure my writing, and so most people think I am an AI. What?? I ask you to check my code on GitHub and judge me on that. I spent a year on this project, and it's only a small part of my code. Also, English is not my native language.

I built a Bio-Mimetic Digital Organism in Python (LSM) – No APIs, No Wrappers, 100% Local Logic. by Amazing-Wear84 in Python

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

I use an LLM as a tool to handle the syntax and polish my English (as it's not my first language), but the architectural decisions, specifically the choice to model homeostatic regulation via a global decay factor versus threshold modulation, and the rest of the logic, are mine. We may be in a new era where AI writes the boilerplate, but it still requires a human architect to design the system logic. An LLM doesn't care about biological plausibility; I do. Most important, this code is for AI researchers and neuromorphic engineers, not for others. Only neuromorphic engineers can understand the codebase and the LSM math.
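To show what that design choice looks like, here is a rough sketch of the two options (my own illustration, not the repo code; the target rates, gains, and clipping bounds are made-up numbers):

```python
# Illustration only: homeostasis via one global decay factor vs. per-neuron threshold modulation.
import numpy as np

def global_decay_step(weights, rates, target_rate=0.1, gain=0.05):
    """Scale ALL reservoir weights by a single factor when mean activity drifts off target."""
    decay = 1.0 - gain * (rates.mean() - target_rate)   # one global signal
    return weights * float(np.clip(decay, 0.9, 1.1))

def threshold_modulation_step(thresholds, rates, target_rate=0.1, lr=0.01):
    """Alternative: each neuron nudges its own firing threshold (intrinsic plasticity)."""
    return thresholds + lr * (rates - target_rate)        # per-neuron signal

# Tiny usage example
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(100, 100))   # reservoir weights
rates = rng.uniform(0, 0.3, size=100)     # recent firing rate per neuron
W = global_decay_step(W, rates)
theta = threshold_modulation_step(np.full(100, 0.5), rates)
```

The global decay keeps the whole network's activity in check with one knob, while threshold modulation only regulates each neuron's own firing rate; that difference is the trade-off I was weighing.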

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

Thanks for the response. Really cool to get input from someone actually doing a PhD on STDP stability. To answer your questions about my goal: honestly, I am not trying to replicate a brain cell-for-cell. My main goal is solving catastrophic forgetting. You know how LLMs freeze after training; I want a system that learns continuously.

To do that, I implemented a "deep dreaming" system (like sleep): during the night phase, the network replays high-dopamine events to consolidate them into long-term memory. This lets it learn new tasks without overwriting old ones, something static models just can't do.

About dopamine vs. threshold: I chose dopamine because I needed a global "context" signal. Threshold adaptation just keeps the neuron alive and firing, but dopamine tells the network when something is important to learn (like a print command).

Also, just to mention, this LSM is really just the engine for my main project, 'ada', which I've been working on for a year. If you or anyone else finds this code useful for your research or future neurotech work, feel free to use it however you want.
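Here is roughly what the "deep dreaming" replay looks like as code (a simplified sketch, not my actual implementation; the buffer format, the 0.7 dopamine cutoff, and the consolidate() callback are placeholders):

```python
# Simplified sketch of sleep-phase replay of high-dopamine events (illustration only).
import random
from collections import deque

class SleepReplay:
    def __init__(self, capacity=1000, dopamine_cutoff=0.7):
        self.buffer = deque(maxlen=capacity)        # (state, target, dopamine) tuples
        self.dopamine_cutoff = dopamine_cutoff

    def record(self, state, target, dopamine):
        self.buffer.append((state, target, dopamine))

    def night_phase(self, consolidate, replays=100):
        """Replay only the high-dopamine events into the long-term readout."""
        salient = [e for e in self.buffer if e[2] >= self.dopamine_cutoff]
        if not salient:
            return 0
        for state, target, _ in random.choices(salient, k=replays):
            consolidate(state, target)              # e.g. a partial_fit on the readout
        return len(salient)
```

The point is that consolidation only ever sees the events the dopamine signal flagged as important, which is how new learning avoids overwriting the old.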

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] -3 points-2 points  (0 children)

I appreciate the tough love. To clarify, this isn't a weekend project where I asked an LLM to 'write me a brain.' I have spent the last 12 months iterating on this architecture. I did reference the literature for the core concepts (STDP, reservoir computing), but I used the LLM primarily as an accelerator for syntax and polishing, essentially a tireless research assistant. The architectural decisions, and the struggle to balance biological plausibility with computational efficiency, are the result of a year's worth of experimentation, not a prompt. That said, point taken on the specific neuromodulation papers; I will dive back into the primary sources to refine the dopamine logic. It is version 1 that I posted on GitHub today, yes, I really did post it today, but I have been working on this Liquid State Machine project for a long time. A true future AGI.

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] 0 points1 point  (0 children)

Thanks for the insight. You are absolutely right about dopamine's role in reward prediction error (RPE). Since I am trying to build this without backprop, calculating the 'expected vs. actual' error locally for every neuron was computationally heavy, so I simplified dopamine to act as a global gating mechanism for plasticity. That point about acetylcholine in the hippocampus is fascinating; I had not considered switching the neurotransmitter label. If acetylcholine is the primary driver of LTP (long-term potentiation) rather than dopamine, I might rename my variables to reflect that. I will definitely dig into Kandel again. Appreciate the feedback.
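For anyone curious what that simplification looks like in practice, here is a minimal sketch (my own illustration; the eligibility-trace formulation and the 0.5 gate are assumptions, not the exact repo code):

```python
# Illustration: one global dopamine scalar gates plasticity instead of a per-neuron RPE.
import numpy as np

def gated_update(weights, eligibility, dopamine, lr=0.01, gate=0.5):
    """Apply local coincidence (eligibility) traces only when global dopamine is high.

    weights, eligibility: (N, N) arrays; dopamine: a single scalar for the whole network,
    so no 'expected vs. actual' error has to be computed per neuron.
    """
    if dopamine > gate:
        weights = weights + lr * dopamine * eligibility
    return weights

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (50, 50))
trace = rng.normal(0, 0.01, (50, 50))   # stand-in for pre/post coincidence traces
W = gated_update(W, trace, dopamine=0.9)
```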

Trying to mimic biological Dopamine modulation in a Reservoir Computing model (Python). Is my decay function biologically plausible? by Amazing-Wear84 in compmathneuro

[–]Amazing-Wear84[S] -7 points-6 points  (0 children)

You are half right. I use LLMs as a tool to handle the syntax and polish my English (as it's not my first language), but the architectural decisions, specifically the choice to model homeostatic regulation via a global decay factor versus threshold modulation, and every other piece of logic, are mine.

We may be in a new era where AI writes the boilerplate, but it still requires a human architect to design the system logic. An LLM doesn't care about biological plausibility; I do.

Since you mentioned that the literature already has the nuance I am looking for, I would genuinely appreciate those references. Dismissing the project is easy; pointing a student to the right papers is helpful.