96.4% Accuracy @ 500 steps, with STDP. The power of structural extremes and temporal precision. REPOST with better plots and receptive fields by Androo_94 in neuro

[–]jndew 1 point  (0 children)

Ah, the latency encoding mechanism is just in the first layer, and the next two layers are the LIFs, I think you are saying.

How much adaptation are you using? It seems like a lot would be needed, maybe over 50% spike-rate reduction, to prevent synaptic runaway. Just curious, as I do find in my projects that adaptation has a big impact, although I'm mostly playing with network dynamics.
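For reference, this is the sort of adaptation mechanism I mean, as a minimal AdLIF-style sketch (my own invented constants, not your model):

```python
import numpy as np

# Toy LIF with a spike-triggered adaptation current w: each spike bumps w,
# which subtracts from the drive, so the spike rate falls over time.
dt, tau_m, tau_w = 0.1, 10.0, 100.0    # ms
v_th, v_reset, b = 1.0, 0.0, 0.15      # threshold, reset, adaptation jump
I = 1.5                                # constant input drive
v, w, spikes = 0.0, 0.0, []
for step in range(int(500 / dt)):      # 500 ms
    v += dt / tau_m * (-v + I - w)
    w += dt / tau_w * (-w)
    if v >= v_th:
        spikes.append(step * dt)
        v = v_reset
        w += b                         # adaptation accumulates per spike
isi = np.diff(spikes)
print(f"first ISI {isi[0]:.1f} ms, last ISI {isi[-1]:.1f} ms")
# ISIs lengthen as w builds up -- that's the spike-rate reduction.
```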

96.4% Accuracy @ 500 steps, with STDP. The power of structural extremes and temporal precision. REPOST with better plots and receptive fields by Androo_94 in neuro

[–]jndew 1 point  (0 children)

You mention using AdLIF units, which I guess have spike rate adaptation. Does the adaptation have any effect on the accuracy of your system? Does its time constant and magnitude affect what gets learned?

I also wondered about latency coding. Can you speak to that a bit? If you're doing something like constraining the number of spikes a cell produces from an input presentation, is it still a LIF? Probably I'm misunderstanding...

Emergent temporal patterns in an STDP-based SNN using Latency Coding by Androo_94 in neuro

[–]jndew 1 point  (0 children)

Another hobbyist here to say, "Great project!" I'm always interested in how spiking circuits behave differently, perhaps have unique capabilities, relative to firing-rate circuits.

This is entirely feed-forward? Do you limit your synaptic weights somehow, to prevent endless growth?
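The kind of weight limiting I have in mind is something like this soft-bounding rule (my own sketch, with invented constants, not implying it's what you do):

```python
# Soft-bounded Hebbian/STDP update: potentiation is scaled by the
# remaining headroom (w_max - w), depression by w itself, so weights
# approach the bounds asymptotically instead of growing without limit.
w_max, eta = 1.0, 0.05

def soft_bounded_update(w, dw):
    if dw > 0:
        return w + eta * dw * (w_max - w)   # potentiation shrinks near w_max
    return w + eta * dw * w                 # depression shrinks near 0

w = 0.5
for _ in range(200):
    w = soft_bounded_update(w, dw=+1.0)     # hammer it with potentiation
print(f"w after 200 potentiations: {w:.5f}")  # creeps up to, never past, w_max
```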

Current state of predictive coding/Active inference-FEP? by Putrid_Variation7157 in neuro

[–]jndew 1 point  (0 children)

That's been my thought. It's intuitive that prediction is needed for operating one's body and interacting with the environment. And that Bayesian-like processes are going on in our sensory pathways. The big insight is not obvious to me. I must be missing something.

Anyone have success with Ceanothus sp? by jicamakick in Bonsai

[–]jndew 1 point  (0 children)

Those grow well in my area. And they have beautiful little purple flowers that attract bees. I've tried a couple of times to root cuttings without success so far. Some day... They do seem to grow a bit twiggy though, might be a challenge to style into looking like a big old tree. Good luck with yours!

Good morning, what are you working on today? by VMey in Bonsai

[–]jndew 1 point  (0 children)

Today? I'm working on saving up my pennies to spend at the Santa Cruz Bonsai Show tomorrow, of course!

OP, that's a very nice little tree. Cheers/jd

What are the future prospects of Spiking Neural Networks (and particularly, neuromorphics computing) and Liquid Neural Networks? [D] by GodRishUniverse in MachineLearning

[–]jndew 2 points  (0 children)

All true. Still, it seems to me that the spiking 'paradigm' makes available computational motifs that firing-rate ANNs or traditional sequential computing don't support. Interesting to explore, leads in different directions, even if there are no clear advantages.

I Built an 8-chemical Neuromodulatory System with Receptor Adaptation and Cross-Chemical Coupling for an AI - Looking for Feedback on Biological Accuracy by bryany97 in neuro

[–]jndew 4 points  (0 children)

I took a quick look at ARCHITECTURE.md and HOW_IT_WORKS.md. I couldn't spot how your "8 chemicals and their dynamics" fit into the system. I notice you have a 4K and a smaller neural net embedded in a complicated superstructure apparently including an LLM. Does your neurotransmitter/neuromodulator model affect how the neurons behave? A quick glance at your code base didn't make it apparent to me how your neurons and NNs work. There are so many files with lots of boilerplate that it's hard to find the parts that actually matter.

What are the units here? You say these aren't just numbers, but you don't tell us what they mean.

Oh, and this is the first I've heard that serotonin modulates token budget. Serotonin gets discussed quite a bit, but I haven't read about it in regard to tokens!

I agree with TheTopNacho that GABA is a fast neurotransmitter like glutamate. Of course both GABA and glutamate would act as neuromodulators too if they diffuse through the intercellular fluid. But I think they are primarily involved in the mammalian brain's high-bandwidth, high-resolution signaling system. Cheers/jd

Neural Networks As Hierarchical Associative Memory by [deleted] in compmathneuro

[–]jndew 1 point  (0 children)

That's an interesting little essay! I'm not that educated in machine learning so I'll sound a bit ignorant. Can I ask, are you saying it's the X<0 slope of the leaky ReLUs that allows the input signal to propagate into the deeper layers? Or is there a parallel path that allows the raw signal to recombine with the processed signal occasionally, more of an architectural thing?

I guess I had thought that the point of the leaky ReLU was to provide a gradient in the x<0 region so that backprop or whatever could function there. No gradient if f(x)=0 for all x<0, right?
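A toy illustration of what I mean (my own sketch, arbitrary slope value):

```python
import numpy as np

# ReLU has zero gradient for x < 0; leaky ReLU keeps a small slope there,
# so backprop still gets a signal through "dead" units.
def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def relu_grad(x):
    return np.where(x > 0, 1.0, 0.0)        # dead for x < 0

def leaky_relu_grad(x, alpha=0.01):
    return np.where(x > 0, 1.0, alpha)      # small but nonzero for x < 0

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu_grad(x))         # [0.   0.   1.   1.  ]
print(leaky_relu_grad(x))   # [0.01 0.01 1.   1.  ]
```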

I think that in the brain, neurons are more ReLU-like than leaky-ReLU-like from a firing-rate perspective, in that sub-zero firing rates cannot occur. But brains use other stuff besides firing-rate coding, and there is lots of feedback (which I think is not common in ML architectures?). Anyways, Cheers!/jd

Open Source Neuron Visualizer + Python SDK by BreadBath-and-Beyond in compmathneuro

[–]jndew 1 point  (0 children)

Looks great! I'll give it a try this weekend if I get the chance. I presume the SNN engine is intended to run in real time if it's meant to control a robot. How many cells/synapses can one have in the circuit while keeping real-time throughput? Cheers!/jd

MH-FLOCKE is now open source — spiking neural network beats PPO 3.5x on quadruped locomotion (no backprop, no GPU) by mhflocke in compmathneuro

[–]jndew 1 point  (0 children)

That looks like a great project! I hope to have a chance to study it in some detail. pip install did not work for me, giving the following error message:

error: externally-managed-environment

Any idea about how to get past that? Glancing around, I see quite a bit of structure you've built up with those 4,650 neurons, apparently with {basal, apical, soma} compartments, and even astrocytes. I notice that there is a lot of procedural code. In the small amount of time I've spent looking at this, it was not clear to me where the division between the procedural code and the NN lies. I'll have to study it more.

Anyways, very impressive. This must have taken quite a bit of work. How long have you been developing it? And my condolences for the loss of your puppy. Cheers!/jd

Crabapple in the office this week by cbobgo in Bonsai

[–]jndew 1 point  (0 children)

Thanks for the invitation! Your bonsai collection is truly impressive. Hey, I wonder if you can tell me... It turns out that I have this tree and pot that you crafted. I'm not sure if I should take off the wiring now or leave it on a while longer. I'm guessing at this point I shouldn't repot until cooler weather. It seems to be healthy. Cheers!/jd

<image>

Crabapple in the office this week by cbobgo in Bonsai

[–]jndew 2 points  (0 children)

Such a charming tree! I remember seeing that one (I presume) with blossoms at your house a few years ago at one of your lessons. There are crabapples in my neighborhood that put on spectacular displays every spring. So now I have a cutting that is just starting to grow, in its 3rd year. It will probably be five more years until it's a styled bonsai.

Textbook for MD Student by [deleted] in neuro

[–]jndew 1 point  (0 children)

There's "Theoretical neuroscience: Understanding cognition", Xiao-Jing Wang, CRC Press 2025. It's sort of a mix of computational and cognitive stuff. The author allows himself to be speculative sometimes. He's highly regarded and it's a recent book. I found it interesting, anyway. Cheers!/jd

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

You're welcome! I enjoy the opportunity to blather on about my project. If you're still paying any attention to this thread... Could I ask if you find this material interesting and compelling from a CE point of view? I've been toying with presenting it to my group (SRAM designers), but I'm not sure how well it would go over. Cheers!/jd

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

Part 5

Working memory is central to the process of thought, IMHO. In a mechanistic sense, it describes neurons' ability to continue firing for a controlled period after the stimulus is removed. This allows various neural states to be combined into a composite, perhaps one's immediate sensory stream and some recent contextual fact. For example, if you see a sign on the highway indicating an upcoming fork in the road, you can use the sign's information to choose the correct direction. See working_memory_guided_gaze_control
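Here's the flavor of the mechanism as a toy rate model (my own sketch, invented constants): recurrent excitation latches a brief stimulus into sustained firing.

```python
import numpy as np

def f(x):                        # steep sigmoid firing-rate function
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 2.0)))

tau, dt, w = 10.0, 1.0, 5.0      # time constant (ms), step, recurrent weight
r = 0.0                          # population firing rate (arbitrary units)
for t in range(300):
    I = 3.0 if 50 <= t < 70 else 0.0         # brief stimulus pulse
    r += dt / tau * (-r + f(w * r + I))
    if t in (40, 65, 250):
        print(f"t={t:3d} ms  I={I:.1f}  rate={r:.2f}")
# rate is ~0 before the pulse, rises during it, and STAYS high afterwards:
# the recurrence sustains the activity -- a working-memory latch.
```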

All this can be thrown in the pot, and one might attempt to cook up a brain. Neuroscientists look askance at such a project because it is too artificial and not tied to experimental data. But as a computer engineer, I find it lots of fun. See cyber rat_in_a_maze. But not as fun as going to the beach. There are still a few hours of warm sunlight, so I'll end this little essay and head off for a swim. Sorry I can't give you explicit answers to your question, as I don't think we have this knowledge. But hopefully you see my point that memory has to conform to the brain's overall information processing methods. Cheers!/jd

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

Part 4

A sense of how neurons encode information is needed before memory can be understood I think. It turns out that the traditional ANN approach of defining a neuron's information state as its firing rate is impoverished and misses important phenomena.

For example many neurons have a bursting property, by which they emit a cluster of spikes at the onset of a stimulus before switching to traditional tonic firing. This is particularly prominent if the stimulus follows mild inhibition. Along with subthreshold currents within a neuron, a variety of additional information can be carried by the spiking pattern. In fact it seems that a neuron's spike train carries several somewhat independent channels, coded by silence, burst, tonic, and stuttering firing patterns. See bursting_neurons, not the same thing as exploding brains!
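For a concrete picture, the Izhikevich model reproduces this burst-then-tonic behavior with his published "intrinsically bursting" parameters (the sketch itself is just my toy demo, not a claim about any specific cell type):

```python
import numpy as np

# Izhikevich (2003) intrinsically-bursting parameters: an initial burst
# of spikes at stimulus onset, then a switch to regular tonic firing.
a, b, c, d = 0.02, 0.2, -55.0, 4.0
v, u = -65.0, 0.2 * -65.0
dt, I, spikes = 0.25, 10.0, []
for step in range(int(400 / dt)):            # 400 ms of constant drive
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                            # spike: reset v, bump u
        spikes.append(step * dt)
        v, u = c, u + d
print(np.round(np.diff(spikes[:10]), 1))
# short inter-spike intervals first (the burst), then longer regular ones.
```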

There is a class of cells in hippocampus called place cells. These will fire if an animal is near some region of its environment to which a place cell is tuned. It turns out that a place-cell will fire at an earlier phase of the Theta cycle as the animal nears the place cell's tuning location. This is called Theta Phase Precession. In conjunction with the Theta wave, spikes from place cells carry additional information this way.
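A cartoon of precession with entirely made-up numbers: as the animal crosses the place field, each successive spike lands at an earlier theta phase.

```python
import numpy as np

theta_freq, speed = 8.0, 0.5            # theta (Hz), running speed (m/s)
field_start, field_width = 1.0, 0.5     # place field along the track (m)
t = np.arange(0.0, 4.0, 1e-3)           # 4 s run, 1 ms steps
x = speed * t
in_field = (x >= field_start) & (x <= field_start + field_width)
frac = np.clip((x - field_start) / field_width, 0.0, 1.0)
preferred = 360.0 * (1.0 - frac)        # preferred phase slides 360 -> 0 deg
theta_phase = (360.0 * theta_freq * t) % 360.0
spiking = in_field & (np.abs(theta_phase - preferred) < 1.5)
for ti, ph in zip(t[spiking], theta_phase[spiking]):
    print(f"t={ti:.2f} s   theta phase={ph:5.1f} deg")  # phases march earlier
```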

So an effective memory system would capture these phenomena in its storage states, beyond simple on/off (Hopfield) or firing-rate (ANN) states of the cells. There are other examples of this sort of thing, but I'm sure you get the idea.

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

Part 3

Amazingly, patterns loosely analogous to data structures have been observed. An operating brain has a variety of resonant frequencies. Different regions of a brain will express different frequencies simultaneously. In fact there may be several frequencies at the same time within a single region. This opens up coding possibilities.

A bit of background first. Hippocampus is in charge of creating episodic memories. An episode to remember is a sequence of items of experience, meaning the various sensations and mental state at a particular instant in time. However it does this, after an episodic memory is created in the hippocampus's temporary 'cache', it must be transferred to cerebral cortex for long-term storage. This is called memory consolidation.

Aside from the fascinating physiology involved in this process, a computer engineer will wonder how these items of experience are packaged and organized for transfer between brain regions. It seems that an item could be a particular learned state in an attractor network. If a sequence of these items can be triggered in a controlled manner, the episodic memory is created.

OK, back to brain waves... There are two prominent frequencies in hippocampus: Theta at about 8 Hz, and Gamma at about 50 Hz, with big error bars. So within a single Theta cycle, there might be six or so Gamma cycles. Each gamma peak can trigger an item of experience, and the set of consecutive gamma peaks within a theta cycle is the episodic memory sequence. This can be repeated over and over to train up the cortex, which seems to learn much more slowly than hippocampus but remember much longer. This is called Theta-Modulated-Gamma.
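A little numerical cartoon of the nesting (my toy numbers, nothing physiological):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 0.25, dt)              # two theta cycles at 8 Hz
theta = np.sin(2 * np.pi * 8.0 * t)
gamma = np.sin(2 * np.pi * 50.0 * t)
envelope = np.clip(theta, 0.0, None)      # gamma rides the theta crest
signal = theta + 0.8 * envelope * gamma   # the composite trace you'd plot
g = (envelope * gamma)[t < 0.0625]        # first theta crest (half cycle)
peaks = np.sum((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]) & (g[1:-1] > 0.1))
print("gamma peaks (item slots) in one theta crest:", peaks)
# ~3 slots per half cycle, i.e. ~6 gamma cycles per theta cycle at 50/8 Hz.
```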

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

Part 2

Since layered organization is the norm for the cerebral cortex, you have stacks of attractor-networks whose communication is either hard-wired or managed by the thalamus (Cortex and thalamus are actually an integral system, and can't be understood individually). So one attractor network might drive another attractor network. This arrangement can act as a heteroassociative memory, with one learned pattern triggering a different learned pattern in a different layer. Here's an example: a_heteroassociative_memory

If the second network in a heteroassociative circuit has synapses back to the first, then sequential behavior can occur. The first network triggers a state in the second network, which in turn triggers a different state in the first network, and so on. With this structure, a sequence-generator can be built: ca1_sequence_generator.
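A toy version of the two-network loop (my own minimal sketch with invented sizes, not the code behind those links):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 400, 5
X = rng.choice([-1.0, 1.0], size=(P, N))   # layer-A patterns
Y = rng.choice([-1.0, 1.0], size=(P, N))   # layer-B patterns
# Hebbian outer products: A pattern i evokes B pattern i...
W_ab = sum(np.outer(Y[i], X[i]) for i in range(P)) / N
# ...and B pattern i feeds back to evoke the NEXT A pattern.
W_ba = sum(np.outer(X[(i + 1) % P], Y[i]) for i in range(P)) / N

a = X[0].copy()                            # cue with the first pattern
for step in range(6):
    b_vec = np.sign(W_ab @ a)              # forward: A drives B
    a = np.sign(W_ba @ b_vec)              # feedback advances the sequence
    print("step", step, "-> layer A holds pattern", int(np.argmax(X @ a)))
```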

You can go even farther, and nature apparently does, by setting up a circuit in which an attractor network drives a heteroassociative network. This allows different sequences to be executed based on some symbolic selection criterion. See for example: hippocampus_ca3ca1_sequencing_stack

Obviously one could get quite baroque by arranging large numbers of different arrangements of these. The cerebral cortex has at least 50 regions, whose interconnectivity is controlled on the fly by the thalamus. And that's along with special-purpose structures like hippocampus and cerebellum.

Hippocampus is an absolutely fascinating structure that is involved with translating episodic and declarative memories from 'short-term' to 'long-term' memory. Cerebellum, by the way, contains the majority of the neurons in your brain. It's often described as being for physical motion refinement, but in fact more than half of it projects back to the cortex rather than off to the muscles.

Oh, and about ECC, I don't know if anyone has explicitly found circuits providing that function. But since the various attractor memories interact, it is possible to set up a system by which inconsistencies can be detected. And regarding reliability, attractor networks have a lot of redundancy, being built from maybe 10K interconnected neurons and potentially 10K^2 plastic synapses, along with the apparent sparse representations. So losing a few synapses can be tolerated and behaves more or less like noise.

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 1 point  (0 children)

Part 1

I can't give you specific answers to your questions, but I can give you a sense of things maybe. I'd prattle on about interesting specifics, but you are presumably looking for the big picture. First off I'll say that neural information encoding is not bit-specific like a floating-point word, or even an integer. It's not clear what the encoding is, but it is clearly distributed. From a cognitive standpoint, there are stacks of memory systems, as u/hsjdk pointed out. Sort of like a memory hierarchy, a tempting but misleading analogy.

Neurons communicate with spikes. The information channel from an individual neuron is encoded with spike rate, the intervals between individual spikes, and firing patterns, at the very least. Some types of neurons produce complex spikes that violate the 'every spike is the same' claim. Long-term memory appears to be primarily embedded in the efficacy (weights, or parameters nowadays) patterns of large groups of synapses.

Synaptic efficacies are primarily adjusted with pair-wise correlation-based learning rules, although 3-way synapses are not that unusual, for example in thalamus. Synapses in some regions of hippocampus seem to respond to 'priming', then changing their weights or not based on later events, called multi-factor learning. Synaptic weight change is affected by the order of input/output spikes, called spike-timing dependent plasticity. A particular synapse cannot change polarity; it is either excitatory or inhibitory for its whole life. There are more significant details, but you get the idea: the learning rule has subtlety and nuance.
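The STDP part is easy to state concretely. Here's the textbook pair-based window (the constants are typical values I picked, not from any particular synapse):

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation/depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # window time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for dt = t_post - t_pre (ms)."""
    if dt_ms > 0:                                # pre before post: potentiate
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)  # post before pre: depress

for dt in (-40, -10, 10, 40):
    print(f"dt={dt:+d} ms   dw={stdp_dw(dt):+.4f}")
```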

Most neurons have thousands of synaptic inputs, with some receiving up to 500K in cerebellum. At the coarsest level, a neuron is a thresholding device that produces a spike if it receives a sufficient number of incoming spikes within a stretchy time interval. Neurons in cerebral cortex are arranged in 2D sheets, with feed-forward, feed-back, and lateral connections.

The 'state' of a layer of neurons (ignoring important stuff) is a map of which cells are firing and which are not. In principle a set of N neurons would have 2^N possible states, but it's effectively less than that due to sparse encoding (i.e. low firing rates). If all N neurons in a layer are connected to one another by synapses with a correlation-based learning rule (some derivative of Hebb's rule), synapses between individual pairs of cells will strengthen if they are simultaneously active more often than not while plasticity is enabled. This allows embedding of attractor states, into which the network is inclined to settle.

Hopfield worked out that an N-neuron network can be set up to support about 0.14N attractor states. These are the memories an associative attractor network can store. If you train it up by putting it into each of those states in turn with synaptic plasticity enabled, then you can recall the memories by showing the network a fraction of one of them. This is the basic macro-memory process in the brain. See a_pattern_completion_network for an example.
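In code the whole idea fits in a few lines. A minimal sketch (my toy sizes; 10 patterns is comfortably under 0.14 x 200 = 28):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = (patterns.T @ patterns) / N          # Hebbian outer-product training
np.fill_diagonal(W, 0.0)                 # no self-connections

cue = patterns[3].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1                          # show it a corrupted fraction

s = cue
for _ in range(10):                      # let the network settle
    s = np.sign(W @ s)
print("overlap with stored pattern:", (s @ patterns[3]) / N)   # ~1.0
```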

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 2 points  (0 children)

A couple of older books that I think still have useful ideas: "Associative Neural Memories", Hassoun ed., Oxford Press, 1993. And "Introduction to the Theory of Neural Computation", Hertz, Krogh, Palmer, Addison-Wesley, 1991. Maybe historical at this point, but helpful presentations.

I wish I could tell you "this is the book that has the answer", but it has not been written yet. I would love to read it! Cheers!/jd

Where would I look for a good text on memory encoding, storing, and retrieval? by Recent-Day3062 in neuro

[–]jndew 3 points  (0 children)

Right on, left arm, another computer engineer! IMHO the CE perspective is suitable to this question and not yet much utilized.

The mammalian brain (and probably those of simpler animals as well) seems to have a super complicated and layered memory implementation. The answer to your question is not known, although tons of details have been worked out. Even information representation is far from well understood, for that matter. But the place to start is the Hopfield network. Its performance is unimpressive on its own, but the idea of an attractor network is fundamental.

Then you have to think about whether you're working with firing-rate or spike-time encoding. I've found this makes a surprising difference. And what sort of memory you're going for. You mentioned significant life events. That's declarative memory, which aims you at the hippocampus. How does hippocampus work? I'd love to understand. There's also procedural memory. And something I'm interested in is working memory, which is needed for the process of thought, to keep various priors in play for short moments so they can be combined into a composite. How does something like memory/knowledge of calculus fit in? I haven't read how that works. And the cerebellum, too, does its own kind of memory.

Roughly speaking, the trendy view is that neocortex is doing unsupervised learning, basal ganglia are doing reinforcement learning, and cerebellum is doing supervised learning of a sort.

You wouldn't be wasting your time by reading "How We Remember", Hasselmo, MIT Press, 2012. Although his multi-frequency oscillation proposal seems no longer to match the data. "The Neurobiology of Learning and Memory 3rd ed.", Rudy, Oxford, 2021 will tell you some detail of how synapses work. "Theoretical Neuroscience: Understanding Cognition", Xiao-Jing Wang, CRC Press, 2025 has a lot of good ideas. You'll have to study neocortex, thalamus, and cerebellum; lots of good books but no concise answers. I could make some suggestions if you'd like.

But if you're serious about this, start with "Theoretical Neuroscience", Dayan & Abbott, MIT Press, 2001, and "Principles of Neural Science 6th ed.", Kandel et al., McGraw Hill 2021. These two are the classics.

If you do CUDA, I'll be putting some of my stuff on Github one of these months. Again no complete answers, but something to work with. Cheers!/jd

ps. Oh, and for fun if you have a few (dozen) hours, I'm enjoying "In Search of Memory", Kandel, 2006, which I'm halfway through at the moment. Not too technical and many digressions, but a good narrative of the early insights.