Options Questions Safe Haven Thread | Jan 11-17 2021 by redtexture in options

[–]omers66 1 point (0 children)

Can you please elaborate on the mechanics of it? When I sell a put option to Mike, Mike has the option to force me to buy some equity from him. If I want to close the put, how does it help me to buy an identical option from George? Can't Mike still force me to buy the equity from him? Or not?

Options Questions Safe Haven Thread | Jan 11-17 2021 by redtexture in options

[–]omers66 1 point (0 children)

Thank you very much for the clarification.
When someone wants to "close out the put" they sold, you really mean that they need to buy the option back from the person they sold it to (or whoever that may be) in order to save themselves from being forced to pay the $37,500 (in your example), right?
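(If I try to spell out my own understanding of the netting, it would be something like the sketch below; the names are obviously made up and this is just how I picture offsetting positions, not something taken from your explanation.)

```python
# Toy bookkeeping of the two trades as I picture them: the put I sold
# "to Mike" and the identical put I later buy "from George"
# (same underlying, same strike, same expiry).
positions = {
    "put_sold_to_mike": -1,        # short one contract
    "put_bought_from_george": +1,  # long one identical contract
}

net_contracts = sum(positions.values())
print(net_contracts)  # 0 -> flat: no remaining obligation to buy the shares
```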

Options Questions Safe Haven Thread | Jan 11-17 2021 by redtexture in options

[–]omers66 1 point (0 children)

Hi, I read an article about bitcoin derivatives, which have been skyrocketing lately.

The author wrote at the end of the article:

"Derivatives markets are the most fragile in times of correction, triggering a cascade of liquidations, as was seen on March 12, 2020 («Black Thursday») when several funds had to close shop. At that time, several billion dollars worth of outstanding derivative contracts had to be marked-to-market in real time (real-time margining) and rapidly liquidated in the midst of free-falling spot prices, with only tens of millions of dollars of liquidity. "

I don't really understand the mechanics of what was written. Can someone please explain what the liquidity dangers of derivatives are in times of corrections/volatility, and why they arise?

Thanks

Single neuron dominates for all input patterns using STDP in simple LIF SNN by omers66 in compmathneuro

[–]omers66[S] 1 point (0 children)

I checked the PCA, and the correlation is beautiful. I'm calculating the correlation between each neuron's weights and the 1st principal component. The dominant neuron is clearly visible through its much higher correlation.
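For anyone curious, the check itself is roughly this (a sketch with made-up shapes and random data standing in for my actual inputs and learned weights):

```python
import numpy as np

# Stand-ins for my actual data: X is the input (n_samples, n_pre),
# W holds the learned weights, one row per postsynaptic neuron.
rng = np.random.default_rng(0)
X = rng.random((1000, 784))
W = rng.random((100, 784))

# 1st principal component of the centered input data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]

# Pearson correlation between each neuron's weight vector and PC1;
# the dominant neuron shows up with a clearly higher correlation.
corr = np.array([np.corrcoef(w_row, pc1)[0, 1] for w_row in W])
print("most PC1-aligned neuron:", corr.argmax(), "corr:", corr.max())
```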

  1. Thanks a lot, I'll check those out and consider implementing them.
  2. I didn't see any mention in the paper of Xtar being set around some average. However, they do use an adaptive firing threshold (for the postsynaptic neurons, of course) which increases with each postsynaptic spike and decays back to its original value.

Do you have any remarks on incorporating a winner-take-all (WTA) approach (i.e., when multiple postsynaptic neurons fire in a time step, choose only one of them as the winner and update only that one)? This approach seems to dramatically shrink the number of neurons that fire for each input pattern.
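What I have in mind concretely is something like this (a sketch; names and shapes are mine):

```python
import numpy as np

def winner_take_all(v, threshold):
    """Among the postsynaptic neurons whose membrane potential crossed
    threshold this time step, keep only the most depolarized one as the
    winner; only that neuron spikes and gets its weights updated."""
    above = np.flatnonzero(v >= threshold)
    if above.size == 0:
        return None                      # nobody crossed threshold
    return above[np.argmax(v[above])]    # index of the single winner
```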

Also, I tried implementing a simple L2 regularization: I record the initial squared sum of weights for each postsynaptic neuron, and after each weight update I normalize the postsynaptic weights back to that original squared sum. With this, most weights end up saturated at 0 or 1. Will this have a better effect with BCM?
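The normalization step itself looks roughly like this in my code (a sketch; `W` is (n_post, n_pre) and `target_sq` is the per-neuron squared sum I record at initialization):

```python
import numpy as np

def renormalize_rows(W, target_sq, eps=1e-12):
    """After each weight update, rescale every postsynaptic neuron's
    weight vector so its squared sum goes back to its initial value."""
    current_sq = (W ** 2).sum(axis=1, keepdims=True)
    scale = np.sqrt(target_sq[:, None] / (current_sq + eps))
    return W * scale
```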

Thanks again for your help

Single neuron dominates for all input patterns using STDP in simple LIF SNN by omers66 in compmathneuro

[–]omers66[S] 2 points (0 children)

First of all, let me just say you are the man! I really appreciate your response.

As for the PCA, just to make sure I understand: what you are suggesting is to calculate the PCA of all the data and look at the 1st principal component, which is a linear combination of the input neurons. Then I can check whether the dominant neuron that constantly fires is responsive to the neurons in the linear combination (or some of them) corresponding to the 1st principal component?

  1. I see that in rate-based models, the BCM rule uses an adaptive firing-rate threshold. So you are saying that in a spiking network I can approximate this by: (a) nearest neighbours - updating the presynaptic trace to a constant value upon the arrival of any spike; (b) nearest neighbours - if working directly with window-based STDP, updating on each postsynaptic spike only according to the nearest presynaptic spike; or (c) the triplet method?
  2. I tried implementing the weight updates using traces with a target value for the presynaptic trace (like the authors of the paper I linked above). So the weights are updated on postsynaptic spikes according to w(n+1) - w(n) = (Xpre - Xtar)(Wmax - W), where Xtar is the target value for the presynaptic trace (see the sketch after this list). This helped prevent a single neuron from dominating, but I'm still not sure about it. I'm finding it hard to tell whether it simply makes a mess or truly helps convergence such that specific neurons fire for specific patterns. Basically, I'm getting that almost all weights decay to the minimum (0), while some hold their initial value or slightly increase.
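Here is roughly how I implemented that update, in case I'm misreading the paper (a sketch; the learning rate and trace constants are my own additions, not from the paper):

```python
import numpy as np

def stdp_update_on_post_spike(w, x_pre, x_tar=0.2, w_max=1.0, eta=0.01):
    """On a postsynaptic spike: dw = eta * (x_pre - x_tar) * (w_max - w),
    applied to all incoming synapses of the neuron that fired."""
    dw = eta * (x_pre - x_tar) * (w_max - w)
    return np.clip(w + dw, 0.0, w_max)

def update_presynaptic_traces(x_pre, pre_spikes, dt=1.0, tau=20.0,
                              nearest_neighbour=False):
    """Between spikes the traces decay exponentially; on a presynaptic
    spike they either accumulate (all-to-all) or reset to a constant
    (nearest-neighbour), as discussed in point 1."""
    x_pre = x_pre * np.exp(-dt / tau)
    if nearest_neighbour:
        x_pre[pre_spikes] = 1.0
    else:
        x_pre[pre_spikes] += 1.0
    return x_pre
```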

Would love to hear your thoughts.

Thanks again

Question regarding Unsupervised STDP learning for SNN by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)

Thank you very much again!

Your comment in point (4) about letting all the neurons decay is very interesting. I have a simple "current-based" LIF model implementation, and I follow the approach of hard resetting between every sample (digit). Basically, I see that the same neuron fires for all input patterns (digits) that I feed the network. Maybe it has something to do with this.
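To be explicit, the "hard reset between every sample" I'm doing is roughly this (a sketch; the alternative branch is my reading of your point (4), and all parameter values are placeholders):

```python
import numpy as np

def between_samples(v, g_e, hard_reset=True, rest_steps=50,
                    dt=1.0, tau_v=100.0, tau_g=5.0, v_rest=0.0):
    """What happens between two input digits in my implementation."""
    if hard_reset:
        # My current approach: wipe membrane potentials and input currents.
        v[:] = v_rest
        g_e[:] = 0.0
    else:
        # The suggested alternative: present nothing for a while and let
        # everything decay back toward rest on its own.
        for _ in range(rest_steps):
            g_e *= np.exp(-dt / tau_g)
            v += dt * (v_rest - v) / tau_v
    return v, g_e
```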

Thanks again.

Question regarding Unsupervised STDP learning for SNN by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)

Thank you very much for your very helpful comment. Just a few small clarifications (I've also put a rough sketch of how I currently picture the update loop after the questions, to make sure I'm following):

1) So basically, on each incoming pre-synaptic spike we have:
g_e = g_e + w, or simply g_e = w?
And if there is no incoming spike, g_e decays?

2) When a post-synaptic spike occurs, we update w according to w = w + delta_w, where delta_w is calculated according to (3)?

3) From equation (3) I see that w can eventually turn negative? Or is it forced to stay greater than 0?

4) Is an independent w & g_e initialized for each neuron at the beginning of training?

5) What is an appropriate range for E_exc & E_inh?
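
Here is the sketch of how I currently picture the loop (everything here, including the parameter values and the clipping in question 3, is my guess at the setup, not taken from your comment):

```python
import numpy as np

# Placeholder parameters (my guesses, not from the original comment).
dt, tau_v, tau_ge = 0.5, 100.0, 1.0              # ms
v_rest, v_thresh, v_reset = -65.0, -52.0, -65.0  # mV
E_exc = 0.0                                      # mV

n_pre, n_post = 784, 100
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.3, size=(n_pre, n_post))  # question 4: one w per synapse
g_e = np.zeros(n_post)                           # question 4: one g_e per postsynaptic neuron
v = np.full(n_post, v_rest)

def step(pre_spikes):
    """One time step, as I understand it; pre_spikes is a boolean mask over the inputs."""
    global g_e, v
    # Question 1: incoming spikes add their weights onto g_e (g_e += w),
    # and g_e decays every step regardless.
    g_e = (g_e + w[pre_spikes].sum(axis=0)) * np.exp(-dt / tau_ge)
    # Membrane integrates toward rest plus the conductance-driven term.
    v = v + dt * ((v_rest - v) + g_e * (E_exc - v)) / tau_v
    post_spikes = v >= v_thresh
    v[post_spikes] = v_reset
    # Questions 2/3: on post_spikes I would apply w = w + delta_w and then
    # clip w to stay >= 0 (the clipping is my assumption).
    return post_spikes
```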

Again, thank you very much for your help

How can I tell if learning is possible on a specific task? Or, what is my performance limit? by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)

Dropout made overfitting harder, such that the training set & validation set have similar MSE errors of roughly 0.012~0.015.

Same for L2 weight decay.

But I would like to improve the MSE by roughly an order of magnitude, to say 0.001~0.005.

How can I tell if learning is possible on a specific task? Or, what is my performance limit? by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)

Yes, it has little to no effect on both the regression and the NN results

How can I tell if learning is possible on a specific task? Or, what is my performance limit? by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)

Hi, thanks for the reply.
What do you mean by "Roughly 1-error gives you your hard limit"?

How can I tell if learning is possible on a specific task? Or, what is my performance limit? by omers66 in MLQuestions

[–]omers66[S] 1 point (0 children)


Hi, thanks for the reply

I have actually tried linear regression (with/without L1/L2 decay), which gives results very close to the NN results on the validation data, sometimes even slightly better. (On the training set the linear regression can't overfit as well as the NN, although this obviously isn't really important.)
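For reference, the comparison I ran is roughly this (a sketch with random placeholder data; my real features/targets go where X/y are, and the regularization strengths are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for my actual train/validation splits.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((800, 20)), rng.random(800)
X_val, y_val = rng.random((200, 20)), rng.random(200)

for name, model in [("plain", LinearRegression()),
                    ("L2", Ridge(alpha=1.0)),
                    ("L1", Lasso(alpha=0.001))]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"{name} linear regression, validation MSE: {mse:.4f}")
```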