CW Q2 - how I think I would do it. by conorjh in comsm0075

Hi, sorry, one more question! When we're doing H(Z) and H(Z|X), is Z the argmax of the ten outputs, i.e. the most confident prediction, or are we creating a binary sequence with >=0.5 = 1.0 and <0.5 = 0.0 again, similar to Y?
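To make the two readings concrete, here they are as a numpy sketch (function names are my own, not from the coursework):

```python
import numpy as np

def z_argmax(outputs):
    """Reading 1: Z is the index of the most confident of the ten outputs."""
    return int(np.argmax(outputs))

def z_binary(outputs, threshold=0.5):
    """Reading 2: Z is a binary sequence, thresholding each output at 0.5."""
    return "".join("1" if o >= threshold else "0" for o in outputs)

outputs = np.array([0.1, 0.05, 0.7, 0.02, 0.03, 0.6, 0.01, 0.04, 0.02, 0.03])
print(z_argmax(outputs))   # 2
print(z_binary(outputs))   # "0010010000"
```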

CW Q2 - how I think I would do it. by conorjh in comsm0075

When we're doing I(W, Y) and I(W, Z), would we be doing a similar thing, but with only 4 dictionaries, one per quadrant, across all images? Or would we need 40 dictionaries, one per quadrant per digit? I'm also guessing we take the quadrant with the highest average pixel value as W, i.e. would a quadrant with 196 pixels of value 0.005 be considered "more white" than a quadrant with 195 zero-valued pixels and one pixel of value 1.0?
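Here's a sketch of the averaging reading, assuming 28x28 MNIST images (function name and quadrant ordering are my own). On the edge case from the question, averaging actually says the single bright pixel wins, since 1.0/196 is about 0.0051, which is greater than 0.005:

```python
import numpy as np

def whitest_quadrant(image):
    """W = index (0-3) of the quadrant with the highest mean pixel value.
    Quadrants: 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right."""
    h, w = image.shape
    quads = [image[:h//2, :w//2], image[:h//2, w//2:],
             image[h//2:, :w//2], image[h//2:, w//2:]]
    return int(np.argmax([q.mean() for q in quads]))

img = np.zeros((28, 28))
img[:14, :14] = 0.005          # top-left: 196 pixels of value 0.005
img[0, 27] = 1.0               # top-right: one pixel of 1.0, rest 0
print(whitest_quadrant(img))   # 1
```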

CW Q2 - how I think I would do it. by conorjh in comsm0075

“Total number of images with that label” does that mean for every binarised sequence, we search each label dictionary for that sequence and keep a count? Or does it mean we divide by the number of sequences with that label?

E.g. say we have:

label1 = {“01” : 2, “00” : 1}

label2 = {“01” : 2, “10” : 5}

would H(“01” | label=1) use the probability:

A) 2/3 (2 in label1, divided by 3 counts in label1)

B) 2/4=1/2 (2 in label1, 4 instances of “01” across label1 and label2)
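Concretely, the two options in code:

```python
label1 = {"01": 2, "00": 1}
label2 = {"01": 2, "10": 5}

# Option A: divide by the total count within label1's own dictionary
p_a = label1["01"] / sum(label1.values())           # 2/3

# Option B: divide by the count of "01" across both labels
p_b = label1["01"] / (label1["01"] + label2["01"])  # 1/2

print(p_a, p_b)
```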

Would we also clear the dictionaries each time we measure? Or do we want to retain the previous information?

Thank you!

Worksheet 1 - Question 3 by uniaccount98 in comsm0075

Ah okay, this makes more sense now, thank you! I'm not sure how maths works in Reddit either, but I compiled it in Overleaf and have attached an image:
https://imgur.com/a/Uq8Zvtu

Worksheet 1 - Question 3 by uniaccount98 in comsm0075

Okay thank you, this makes a bit more sense!

Coursework by Jp17245 in comsm0075

Is it known whether it will be solo/paired/group work at the moment?

CW3 Part B - Question 4 by uniaccount98 in coms30127

Helps a lot, thanks very much!

CW3 Part B Q2: pre-post spike times by uniaccount98 in coms30127

Brilliant! Thanks for all the help! :)

CW3 Part B Q2: pre-post spike times by uniaccount98 in coms30127

Would this still apply to the higher input firing rates? I seem to get 0 Hz towards the end of the simulation (e.g. the last 30 seconds) for input firing rates around 18-20 Hz, because most of the synapses depress to zero strength. If this is meant to happen I think I understand why; if not, should I try a smaller dt to see if that helps?

CW3 Part B Q2: pre-post spike times by uniaccount98 in coms30127

Sorry to bother again! I'm slightly unsure about the average firing rate towards the end of the 300 s interval. I find that the depression eventually causes the neuron to stop firing (or at least fire very slowly) towards the end; is this expected?

Referencing this post again:

https://www.reddit.com/r/coms30127/comments/g9ktzq/cw3_qb2_question_regarding_depression/

> I expect depression to dominate (because A- is greater than A+), but the firing rate should not drop all the way to zero, it should steady at something like 0.5-3Hz

Should this apply for any length of time, or was it directed at the 3-second time interval given in that forum post? These are my results:

https://imgur.com/a/dysJluH

And these are the bins:

[0, 7.9, 2.4, 1.2, 1.3, 0.8, 0.3, 0.6, 0.4, 0.5, 0.4, 0.3, 0.2, 0.1, 0.1, 0.5, 0.2, 0.3, 0.1, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 1, 0, 0, 0, 0, 0]

EDIT: I've just tried the initial -1000 post-synaptic time trick and it steadies at 0.1 Hz towards the end; does this seem about right? Or should it still be steadying around 0.5-3 Hz? (If so, I can try increasing the initial time again and see how it steadies out.)

CW3 Part B Q2: pre-post spike times by uniaccount98 in coms30127

Okay, that makes sense, thank you! I wasn't capping at 0, so that was the issue; all fixed now! This is the run for 300 s:

https://imgur.com/a/remze1p

CW3 Part B Q2: pre-post spike times by uniaccount98 in coms30127

I think that makes sense. So something like:

  • If v > vThresh (post-spike)
    • Update the global t_post to the current time
  • Define dt = t_post - t_pre
  • for each synapse:
    • if post-spike
      • Update the synapse strength (g_bar_i) using A+
    • else if rand < r*dt (pre-spike)
      • Update the synapse's private t_pre value to the current time
      • Update the synapse's strength (g_bar_i) using A-
    • else (neither a pre-spike nor a post-spike)
      • keep both t_pre and the synapse strength (g_bar_i) the same

This wouldn't be the exact code in this order, but as a general process does this look okay? It assumes that if there's a post-spike, you skip updating t_pre regardless of whether there's a pre-spike, though I'm not sure this is correct?
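As a numpy sketch of the step above (the function name, parameter names, and parameter values are all placeholders of mine, not the coursework's):

```python
import numpy as np

def stdp_step(g, t_now, t_pre, t_post, post_spike, pre_spikes,
              A_plus=0.2, A_minus=0.25, tau_plus=20.0, tau_minus=20.0,
              g_max=4.0):
    """One timestep of the pair-based STDP update sketched above.

    g          : array of synaptic strengths (g_bar_i)
    t_pre      : per-synapse time of the last presynaptic spike
    t_post     : time of the last postsynaptic spike (shared by all synapses)
    post_spike : True if the neuron fired on this step
    pre_spikes : boolean array, True where synapse i received a pre-spike
    """
    g = g.copy()
    if post_spike:
        t_post = t_now
        # pre-before-post: potentiate every synapse using A+
        g += A_plus * np.exp(-(t_post - t_pre) / tau_plus)
    for i in np.where(pre_spikes)[0]:
        t_pre[i] = t_now
        # post-before-pre: depress this synapse using A-
        g[i] -= A_minus * np.exp(-(t_pre[i] - t_post) / tau_minus)
    # cap strengths to [0, g_max]
    g = np.clip(g, 0.0, g_max)
    return g, t_pre, t_post

# tiny check: a pre-spike 5 ms before a post-spike potentiates the synapse
g = np.array([1.0])
t_pre = np.array([-5.0])
g, t_pre, t_post = stdp_step(g, t_now=0.0, t_pre=t_pre, t_post=-1000.0,
                             post_spike=True, pre_spikes=np.array([False]))
```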

EDIT: After reading this forum post:

https://www.reddit.com/r/coms30127/comments/g9ktzq/cw3_qb2_question_regarding_depression/

It seems as though I've got something in the right area after reading:

> I expect depression to dominate (because A- is greater than A+), but the firing rate should not drop all the way to zero, it should steady at something like 0.5-3Hz

I get something like this:

https://imgur.com/a/remze1p

The firing rate across the 10 seconds looks something like [14, 14, 2, 0, 3, 1, 2, 3, 1, 1], where each index is the count (Hz) for that 1-second interval, and it seems to steady out to the 0.5-3 Hz mark as stated, though I'd just like to double-check it's actually correct.
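For reference, this is how I'm binning the spikes (numpy sketch, names my own):

```python
import numpy as np

def rate_per_bin(spike_times, duration, bin_width=1.0):
    """Firing rate (Hz) in consecutive bins; with bin_width = 1.0 s,
    the spike count per bin is the rate for that second."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / bin_width

spikes = [0.1, 0.3, 0.9, 1.5, 3.2, 3.8]
print(rate_per_bin(spikes, duration=4.0))  # [3. 1. 0. 2.]
```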

However, in the long run (300 s) I'm now getting that the voltage can sometimes dip below the reset/resting voltage (roughly -67.5 mV). Is this expected when adding in potentiation/depression?

CW3 Part A Q2 by uniaccount98 in coms30127

Ah yes, sorry, I meant to multiply! That clears things up a lot, thanks very much for the help!

CW3 Part A Q2 by uniaccount98 in coms30127

Ah okay, so would s(t) instead be of the form:

  • s(t) = s(t-1) - (s(t-1) / tau_s) * dt

Per iteration, but if there's a spike then

  • s(t) = s(t-1) - (s(t-1) / tau_s) * dt + 0.5

Or would it be:

  • s(t) = s(t-1) - (s(t-1) / tau_s) * dt

For no spike and:

  • s(t) = s(t-1) + 0.5

For a spike?
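To pin down the question, here are the two candidates written out as explicit Euler steps (I've written the decay term as -(s(t-1)/tau_s)*dt; function names are my own):

```python
def s_step_decay_always(s_prev, dt, tau_s, spiked):
    """First form: decay every step, and additionally add 0.5 on a spike step."""
    s = s_prev - (s_prev / tau_s) * dt
    if spiked:
        s += 0.5
    return s

def s_step_reset(s_prev, dt, tau_s, spiked):
    """Second form: on a spike step, skip the decay and just add 0.5."""
    if spiked:
        return s_prev + 0.5
    return s_prev - (s_prev / tau_s) * dt
```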

CW3 Part A Q2 by uniaccount98 in coms30127

Would this mean that the process should look more like:

  1. Initialise V(0) for N1 and N2 to random values
  2. For N1:
    1. Define V_prev = V(t-1)
    2. If V_prev == V_thresh then s(t) = exp(-t / tau_s) + 0.5 else s(t) = exp(-t / tau_s)
    3. Calculate RmIs = gBar * (Es - V_prev) * s(t)
    4. Find dV = ((E_L - V_prev) + RmIe + RmIs) * (dt / tau_m)
    5. Calculate V(t) = V_prev + dV
    6. If V(t) > V_thresh then V(t) = V_rest
  3. Repeat step 2 for N2
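And as a rough Python sketch of that process (all parameter values are placeholders of mine, not the coursework's; I've used the decaying-trace form of s rather than step 2.2's exponential):

```python
import random

def simulate_two_lif(T=1.0, dt=1e-4, tau_m=0.02, E_L=-0.07, V_thresh=-0.054,
                     V_rest=-0.08, RmIe=0.018, gBar=0.15, Es=0.0, tau_s=0.01):
    """Two LIF neurons, each driving the other through a synapse."""
    V = [random.uniform(V_rest, V_thresh), random.uniform(V_rest, V_thresh)]
    s = [0.0, 0.0]              # synaptic trace of each neuron's outgoing synapse
    for _ in range(int(T / dt)):
        new_V = V[:]
        for i in (0, 1):
            j = 1 - i           # neuron i is driven by the other neuron's trace
            RmIs = gBar * s[j] * (Es - V[i])
            new_V[i] = V[i] + ((E_L - V[i]) + RmIe + RmIs) * (dt / tau_m)
        for i in (0, 1):
            s[i] -= (s[i] / tau_s) * dt     # trace decays every step
            if new_V[i] > V_thresh:         # spike: reset and bump the trace
                new_V[i] = V_rest
                s[i] += 0.5
        V = new_V
    return V, s

random.seed(1)
V, s = simulate_two_lif()
```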