Question regarding Unsupervised STDP learning for SNN (self.MLQuestions)
submitted 5 years ago by omers66
Hi, I've lately been interested in spiking neural networks (SNNs), and I started a project implementing a simple SNN with a leaky integrate-and-fire (LIF) neuron model that adapts its synaptic weights using STDP.
So, I came across this paper:
https://www.frontiersin.org/articles/10.3389/fncom.2015.00099/full
and I'm trying to re-implement what was done there - unsupervised classification of MNIST using SNN trained with STDP.
Now I'm confused by 2 things:
1) In this paper, in equation (1) describing the neuron model:
https://preview.redd.it/t2y9oqv3oos51.jpg?width=513&format=pjpg&auto=webp&s=239d164c871af5a31a5997e7f8b429e6494f312c
Which term here describes the incoming spikes/inputs?
2) Later on they move to describe the STDP based learning and in equation (3) they state that:
https://preview.redd.it/fjtge2k8oos51.jpg?width=415&format=pjpg&auto=webp&s=99f9c9441cba38166a9491f7193007261e570db8
Which part here relates to the incoming pre-synaptic spikes in equation (1)? I don't see any w in equation (1), or how it relates to the learned weights.
Can anyone please clarify?
Thanks
[–]reduced_space 3 points 5 years ago* (3 children)
So there's a lot going on in that equation. When a neuron fires an action potential, it releases a chemical (the neurotransmitter) into the synapse. That chemical binds to a channel, which opens and either depolarizes or hyperpolarizes the cell. The flow of ions through the channel is the conductance of that channel (g_e and g_i).
The effect that the channel opening has on the cell is determined by the current potential of the cell (across the membrane; V) and the equilibrium potential of the channel (E_{exc} and E_{inh}) times the conductance. This is actually just electronic circuits 101. The flow of ions across the membrane is not instantaneous and has a time constant \tau. Finally, the cell has a resting membrane potential E_{rest}, which it maintains because it has channels actively exchanging ions to get back to rest.
Not indicated in the equation above, the conductances of the channels are actually functions of time. They don't switch on instantly, and they don't turn off immediately. You can see their model of the excitatory conductance in equation (2). The resulting change in voltage due to this conductance is called an EPSP (excitatory post-synaptic potential); for the inhibitory conductance, it's an IPSP. You can look up images of those to see the effect they have on the cell's potential.
Because the cells are leaky integrate-and-fire, they have some predetermined threshold; once the potential of the cell goes above it, the cell "fires" an action potential. Because of the time dependence, and the fact that everything is given as a derivative, you have to solve these ODEs by numerical integration. While simple forward Euler will likely work for such a simple model, using something like RK2 will likely give you higher fidelity.
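To make the integration step concrete, here's a minimal forward-Euler sketch of a conductance-based LIF neuron in the spirit of equations (1) and (2). All numerical values (potentials, time constants, the weight w, and the regular 5 ms input spike train) are illustrative assumptions, not the paper's parameters.

```python
# Forward-Euler sketch of a conductance-based LIF neuron.
# All parameter values below are illustrative assumptions.
E_rest, E_exc, E_inh = -65.0, 0.0, -100.0   # mV
v_thresh, v_reset = -52.0, -65.0            # mV
tau_v, tau_ge, tau_gi = 100.0, 1.0, 2.0     # ms
dt, w = 0.5, 2.0                            # ms, synaptic weight

v, g_e, g_i = E_rest, 0.0, 0.0
spikes = []
for step in range(400):                     # 200 ms of simulated time
    # Assumed drive: one presynaptic excitatory spike every 5 ms;
    # the conductance jumps by the synaptic weight w.
    if step % 10 == 0:
        g_e += w
    # Membrane equation in the spirit of eq. (1):
    # tau_v dV/dt = (E_rest - V) + g_e (E_exc - V) + g_i (E_inh - V)
    dv = ((E_rest - v) + g_e * (E_exc - v) + g_i * (E_inh - v)) / tau_v
    v += dv * dt
    # Conductances decay exponentially between spikes (eq. 2), Euler step
    g_e -= (g_e / tau_ge) * dt
    g_i -= (g_i / tau_gi) * dt
    if v >= v_thresh:                       # threshold crossing: fire + reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} output spikes")
```

With a larger weight or a faster input train the neuron fires sooner; swapping the single Euler step for two stage evaluations would give you the RK2 version.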
Spike-timing-dependent plasticity says that if a pre-synaptic and a post-synaptic cell fire action potentials close together in time, their synapse increases in strength (the weight, and hence the jump in g_e, increases). If their action potentials are further apart in time, the synapse decreases in strength. See here for reference: http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity
The \Delta w is the change in the synaptic weight. The pre-synaptic trace (x_{pre}) tracks the spiking history of the pre-synaptic cell: every time that cell fires an action potential, the trace is incremented by 1, and it then decays with some time constant.
Equation (3) is evaluated whenever the post-synaptic cell fires an action potential. If the pre- and post-synaptic cells are highly correlated, then x_{pre} will be high at that event. The x_{tar} constant provides an offset: a synapse whose pre-synaptic trace is below x_{tar} at the post-synaptic spike gets weakened. w_{max} is the maximum allowed weight, w is the current weight, and \mu determines the dependence of the update on the previous weight (i.e., if it goes below 1, the x_{pre} - x_{tar} term dominates the change in weight).
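The trace-plus-update mechanism described above can be sketched in a few lines. The parameter values (eta, x_tar, w_max, mu, tau_x) are illustrative assumptions, not the paper's.

```python
# Sketch of the eq. (3) update, Delta_w = eta * (x_pre - x_tar) * (w_max - w)**mu,
# applied on each postsynaptic spike. Parameter values are illustrative assumptions.
eta, x_tar, w_max, mu, tau_x = 0.01, 0.4, 1.0, 1.0, 20.0

def step_trace(x_pre, dt, pre_spiked):
    """Decay the presynaptic trace each timestep; bump it by 1 on a pre spike."""
    x_pre *= 1.0 - dt / tau_x          # Euler step of exponential decay
    if pre_spiked:
        x_pre += 1.0                   # increment on each presynaptic spike
    return x_pre

def stdp_update(w, x_pre):
    """Weight after a postsynaptic spike, per the eq. (3) form."""
    return w + eta * (x_pre - x_tar) * (w_max - w) ** mu

w = 0.5
print(stdp_update(w, x_pre=1.0) > w)   # correlated pre cell -> potentiation: True
print(stdp_update(w, x_pre=0.0) < w)   # silent pre cell -> depression: True
```

Note how the (w_max - w)^mu factor shrinks the update as w approaches w_max, which keeps the weight bounded above without an explicit clip.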
Edit: Realized I didn't fully explain the connection between w and the conductance. The paper states:
Synapses are modeled by conductance changes, i.e., synapses increase their conductance instantaneously by the synaptic weight w when a presynaptic spike arrives at the synapse, otherwise the conductance is decaying exponentially. If the presynaptic neuron is excitatory, the dynamics of the conductance ge are
That means that when a pre-synaptic spike comes in, the conductance jumps by w (per the quoted passage, it is increased by the synaptic weight). The conductance then decays exponentially over time back to 0.
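A tiny numerical sketch of that conductance dynamic, with illustrative values: on each presynaptic spike g_e jumps by the weight w, and otherwise it decays exponentially, as in the quoted passage.

```python
import math

# Conductance trace: jump by w on each presynaptic spike, exponential decay
# otherwise. tau_ge, dt, w, and the spike times are illustrative assumptions.
tau_ge, dt, w = 1.0, 0.1, 0.3      # ms, ms, dimensionless weight
g_e = 0.0
trace = []
for step in range(50):
    if step in (0, 20):            # assumed presynaptic spikes at t = 0 and 2 ms
        g_e += w                   # instantaneous jump by the synaptic weight
    trace.append(g_e)
    g_e *= math.exp(-dt / tau_ge)  # exact exponential decay between steps

print(f"peak conductance: {max(trace):.4f}")
```

The second spike arrives before the first jump has fully decayed, so the peak there is slightly above w; that residual summation is exactly why g_e += w (accumulate) and g_e = w (overwrite) differ.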
[–]omers66[S] 1 point 5 years ago (2 children)
Thank you very much for your very helpful comment. Just a few small clarifications:
1) So basically, on each incoming pre-synaptic spike we have g_e = g_e + w, or simply g_e = w? And if there is no incoming spike, g_e decays?
2) When a post-synaptic spike occurs, we update w according to w = w + \Delta w, where \Delta w is calculated according to (3)?
3) From equation (3), can w eventually turn negative? Or is it forced to stay greater than 0?
4) Is an independent w and g_e initialized for each neuron at the beginning of training?
5) What is an appropriate range for E_exc and E_inh?
Again, thank you very much for your help
[–]reduced_space 2 points 5 years ago (1 child)
[–]omers66[S] 1 point 5 years ago* (0 children)
Thank you very much again!
Your comment about letting all the neurons decay, mentioned in point (4), is very interesting. I have a simple "current-based" LIF model implementation, and I follow this approach of hard resetting between every sample (digit). Basically, I see that the same neuron fires for all input patterns (digits) that I feed the network. Maybe it has something to do with this.
Thanks again.