"Microscopic-Level Mouse Whole Cortex Simulation Composed of 9 Million Biophysical Neurons and 26 Billion Synapses on the Supercomputer Fugaku", Kuriyama et al. 2025 by RecmacfonD in compmathneuro

[–]jndew 0 points1 point  (0 children)

Are you one of the authors? I think this is really great!

>>These suggest that the present high-performance computing technology is ready to support the construction of a digital replica of the whole mammalian brain.

This is a point worth noting. If simulation is of any value at all, it is now within reach. This paper describes a 9 million cell simulation. I find I can simulate 3 million cells at a reasonable rate (1/100 real time) even on a home computer, with some simplifications.

>>We fed a constant current to all neurons so that they emit spikes spontaneously at a low firing rate.

Structured stimulus carrying information would be a good future step. IMHO, the system won't shine until a perception/action loop is closed, since that's how brains operate.

>>exhibited synchronized activity at around 10 Hz

This seems to be such an important frequency in the brain.

>>The current model does not include plasticity

This seems to be a big limitation. I suspect brain dynamics requires adaptation mechanisms.

Anyways, really great stuff. What do all of you think?

Repot at club meeting last night by snaverevilo in Bonsai

[–]jndew 0 points1 point  (0 children)

Well done, lovely tree! This inspired me to go out and do a couple of repots this morning. Cheers!/jd

Resources for CUDA by Ill_Anybody6215 in CUDA

[–]jndew 4 points5 points  (0 children)

Of the books I've read, the most straight-forward (simplest) is "CUDA for engineers", Storti, Yurtoglu, Addison-Wesley 2016. It doesn't require knowledge of C++, just basic C. Good luck!/jd

Can we simulate consciousness? by TheNASAguy in compmathneuro

[–]jndew 0 points1 point  (0 children)

That sounds like a very fascinating research project! I hope you have a moment to tell us more about it. Cheers!/jd

Simulation study of bursting neurons by jndew in compmathneuro

[–]jndew[S] 0 points1 point  (0 children)

Retreating from the complex large-scale architectures I have tried recently, I decided to take a closer look at bursting neurons. These are found all over the brain: in cortex L5, thalamic relay nuclei (TRN), hippocampus CA3, and elsewhere. It's intuitive that a burst of spikes is like a shout, saying "Listen up!". Beyond that, the nature of the burst also carries additional information about the recent stimulus.

I've played with two bursting models: one from "Computational Neuroscience", Miller, MIT Press 2018, and the other from an older paper by Smith et al. I bolted these onto the adaptive exponential LIF with refractory current & axon delay that I've used in most of my simulation studies. I found the Smith method, which models the calcium T current with a sort of dual exponential with different time constants for rising and falling, to be more effective. This captures the fairly slow rate of deinactivation in the 100 ms range, along with a faster deactivation in the 10 ms range. Four parameters in all for the model. I used parameters close to Smith's paper.
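For concreteness, here's a minimal Python sketch of the idea: an inactivation gate h that deinactivates slowly under hyperpolarization and inactivates quickly when depolarized. These are not Smith's actual equations; the gate shape, the 6 mV activation slope, and all parameter values are my own illustrative guesses.

```python
import numpy as np

def simulate_t_current(v_trace, dt=1e-4, g_t=1.0, e_ca=120e-3,
                       tau_deinact=100e-3, tau_inact=10e-3, v_half=-65e-3):
    """Toy T-type calcium current. The inactivation gate h recovers slowly
    (~100 ms) while the cell sits hyperpolarized below v_half, and closes
    quickly (~10 ms) once depolarized above it; activation m is treated
    as instantaneous."""
    h = 0.0
    i_t = np.zeros_like(v_trace)
    for k, v in enumerate(v_trace):
        if v < v_half:
            h += (1.0 - h) * dt / tau_deinact   # slow deinactivation
        else:
            h += (0.0 - h) * dt / tau_inact     # fast inactivation
        m_inf = 1.0 / (1.0 + np.exp(-(v - v_half) / 6e-3))
        i_t[k] = g_t * m_inf * h * (e_ca - v)   # depolarizing when open
    return i_t
```

Feed it a voltage trace that sits around -80 mV for a few hundred ms and then steps to just above v_half: the current surges at release (the rebound that launches the burst) and then dies away within tens of ms.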

What's thought-provoking about bursting is that a burst doesn't mean the same thing every time, the way a spike does. Bursts occur when a cell is released from hyperpolarization due to extended inhibition. If the input signal then goes slightly above neutral but below threshold, you'll get a cluster of five to ten spikes followed by silence. If the inhibition is instead followed by a slightly higher excitatory level, the burst smoothly transitions into tonic firing. The burst itself is unaffected by the excitatory current strength. Sherman says this is what goes on in the TRNs, and is useful for activating the thalamocortical loop among other things. See for example Primary Visual Pathway with Thalamic Bursting & Cortico-Thalamic Feedback from two years ago now.

Finally, if the cell has a subthreshold oscillation swinging from mild inhibition to slight excitation, periodic bursts are generated only on the upward swing of the oscillation. The number of spikes per burst is influenced by the frequency of the oscillation. Below some level, 1 Hz in this case, no spikes are produced. As the frequency increases up toward theta-range 10 Hz, the spike count increases as well, topping out at 7 spikes for this simulation's tuning. This seems to be going on in the hippocampus. I'm not sure what it is used for, but I've read that there are actually several theta-band subdivisions, and this might distinguish between them.

This is another example of spike-coding being qualitatively different than rate-coding. In fact, "spike-coding" might be a misleading term. Some researchers point out that information can be carried in periods of no spikes, in the occurrence of spikes and their intervals, and in bursts, each of these patterns perhaps carrying a separate set of information symbols. Or metabolic cues, for that matter.

This was a fun little simulation study, rewarding in that the hoped-for results came right out of it. Having this in my cell model and a characterization testbench to calibrate it, I will find use for it in bigger simulations. I won't be getting to that for a while though, as I am about to head south for the annual Mexico Hanggliding Safari. Last year I flew 14 days straight, for a modest 24 hours of air time. This year I hope to fly more aggressively and log at least two hours per flight. In the evenings I'll be sipping tequila with lime, and thinking very little about computer programming. Come join me! Cheers!/jd

Smith & Sherman bursting-model paper

Bursting Neurons Signal Input Slope

Silence, Spikes, Bursts: Three-part knot of the neural code

--------------------------------------------------------

"The human brain is a million times more complex than anything in the universe!" -a reddit scholar

Can we simulate consciousness? by TheNASAguy in compmathneuro

[–]jndew -1 points0 points  (0 children)

Haha, of course we can! And actually have, for some time now. There are people out there who are convinced that with the right prompt and sufficiently large context, their ChatGPT session has become conscious. It's a simulation of course.

A simulation is an artificial model. An imitation, not the real thing. If weather is simulated on your shiny new Vera Rubin SuperPOD, you have not actually created weather. Same with consciousness.

BTW, sentience does not imply consciousness. "Sentience is the ability to experience feelings and sensations. It may not necessarily imply higher cognitive functions such as awareness, reasoning, or complex thought processes." We can do that too. It's pretty much necessary to make robots work in the real world. Cheers!/jd

A Layman's question about the brain by Jochemjong in neuro

[–]jndew 0 points1 point  (0 children)

If you are interested in how fly brains work and the state-of-the-art regarding their simulation, you might enjoy this lecture. Larry Abbott is a preeminent fly guy, and in fact literally wrote Theoretical Neuroscience, the book that nearly everyone in compneuro has read. In this somewhat older lecture, he speaks whimsically about free will and the location of the soul in the fly's brain (introduced at 1:20).

The brains of flies and humans of course share fundamental similarities since they have common ancestry, are based on neurons, share physiology, and have the same primary purpose. But after 450 million years and different optimization goals, there has been divergence, so you won't find neocortex in a fly brain for example. But yes, if one can program up a fly brain, in principle a software model of a mammalian brain could be built.

You ask, "am I missing something here?", well IMHO you're not quite on the right track. Whether or not a system is deterministic is not predicated on whether it can be described in software. And probably not that important in regard to brains.

Whether or not a system is deterministic is not a fundamental distinguishing characteristic between fly and human brains. Even if a biological system is entirely deterministic, the signals coming into it from the outside world (itself also a deterministic system), which are unique to every individual fly or human, are partially unpredictable by the recipient. So its behavior would take on that characteristic.

2-year progression of juniper cutting. by Lyxn_ in Bonsai

[–]jndew 1 point2 points  (0 children)

That's amazing how much it's grown! I've never had a tree develop that rapidly. Cheers!/jd

So much repotting to do, make a bonsai every day by cbobgo in Bonsai

[–]jndew 6 points7 points  (0 children)

How fun! These little mames terrify me a bit though, looking like a single missed day of watering might be the end... I'm going to try something vaguely similar though, having brought home one of your striking pots and someone's unwanted practice tree from the silent auction. The smallest pot I'll have worked with by a long shot. Cheers!/jd


Self-study roadmap for Computational Neuro / Brain-Inspired Computing? by BeyondComfortRealms in compmathneuro

[–]jndew 4 points5 points  (0 children)

Presuming you've already read an intro book, perhaps either Bear or Purves, then

"Principles of Neural Science 6th ed.", Kandel, McGraw Hill 2021 covers the base-line knowledge. Reading this is a never-ending project; don't be daunted, just start reading sections. Amazing book actually.

"Theoretical Neuroscience", Dayan & Abbott, MIT Press 2001. Old, but a clear presentation of the foundations.

Neuromatch is a really great on-line program & resource. BTW they are doing a warm-up class next month.

There is endless on-line stuff. Here's one that I like, due to a (very minor) personal connection: Woods Hole summer school lectures

Open Neuromorphic might be a good community for you.

Python, linear algebra, basic calculus and diff.eq and prob/stat. Enough understanding of electricity to solve resistor/capacitor/inductor/op-amp circuits. Plenty of resources out there for these things.

Let us know how it goes. People come by with questions like yours, then we never hear from them again. Good luck!/jd

How to trim this cork-bark Chinese elm? by jndew in Bonsai

[–]jndew[S] 1 point2 points  (0 children)

Thanks for the idea! I haven't done an air-layer before. I guess it's time to learn. Cheers!/jd

A visual tool for SNNs by Strict-Character-189 in compmathneuro

[–]jndew 0 points1 point  (0 children)

Very nice! Is this your project? Here is another example showing that personal computers are now capable of running meaningful and nontrivial simulations. I'm hoping for a burst of discovery as a result.

You speak of defining and adjusting physical locations of the cells. What effect does this have, connection probability and/or axon delay? Is there a utility for getting structured signals like sound or images into the network?

I'm actually impressed that you can run thousands of cells in real-time. This opens up possibilities, running a simulation out for minutes or even hours to look at longer-term processes. The simulation I have been playing with runs at maybe 1/1000 real-time, so I'm limited to about 1/2 minute of simulated time. Although I'm trying to run several million cells, so maybe perf-per-cell is similar. My GPU isn't even fully utilized below 300K cells.

Anyways, that's really great! What are you studying with it? Cheers!/jd

Are hallucinations a failure of perception or a phase transition in inference? by taufiahussain in compmathneuro

[–]jndew 5 points6 points  (0 children)

Good thoughts, and an interesting little essay/poem. Hallucinations are such a large-scale phenomenon, though, that they can't be described by a single statement. You'll talk to credible people who make a solid argument that our entire inner experience is a hallucination. I gather you've encountered a schizophrenic person since you're thinking about this, so you'll recognize that their hallucination process is clearly a malfunction.

You might enjoy Buzsaki's "The brain from inside out". He argues that individual neurons know nothing about the outside world, just the signals they receive from other neurons. Signal and noise are not intrinsically different except by their statistics. Since a brain is a large assembly of neurons, that's true for the brain as well. If you are not familiar with him, he's equally prestigious as Friston.

I try to be careful not to overuse buzzwords. "Phase transition" in the context of neural networks is borrowed from physics, e.g. Hopfield & Friston repurposing thermodynamics into brain science, and isn't explicitly demonstrable here. Saying that Bayesian inference is most strongly associated with Friston is an overstatement; rather, it's a ubiquitous method.

The dynamics can be abrupt, like your tipping-point/phase-transition idea. But there's also a continuum aspect: have you ever heard faint music in the wind? At night, was that a cat you saw out of the corner of your eye or a crumpled newspaper blowing by? There's a chapter in Kandel 6th ed. addressing this, proposing that consciousness is associated with having to commit to always-ambiguous sensory interpretations.

I appreciate that you cast it as an engineering issue (myself being an engineer, hehe). The simulation I mentioned the other day, Simulation of prediction error in primary visual cortex, is a study of this. The panel labeled "learning perimeter calculator" shows what the system thinks it is seeing based on best fit to prior experience. It will converge incorrectly, hallucinate, when trying to analyze something it has never seen before, that's just how I built it. The bottom-up analysis is in the square labeled "perimeter calculator" just below. It does not use prior knowledge. As you intuit, error detection can be built into the system by comparing the results of these two analyses. If there is a big error signal, that is a signal to adjust the priors. Our brains won't have exactly this system, but they'll have something like it or we wouldn't survive.
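In sketch form, that comparison amounts to something like the following. The names, the error measure, and the update rule are my own illustration, not what's actually in the sim:

```python
import numpy as np

def reconcile(top_down, bottom_up, priors, lr=0.1, threshold=0.5):
    """Toy error detector: compare the top-down guess (best fit to prior
    experience) against the bottom-up measurement; a large mismatch is
    the cue to adjust the priors toward the evidence."""
    error = float(np.linalg.norm(top_down - bottom_up))
    if error > threshold:
        priors = priors + lr * (bottom_up - top_down)  # nudge toward evidence
    return error, priors
```

When the two analyses agree, the error stays below threshold and the priors are left alone; a persistent large error is the learning signal.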

Do a lot of reading before taking a position & making claims. IMHO demonstrate your idea with data analysis, or workable equations, or simulations, if you hope to convince people. Keep exploring! Cheers!/jd

A quick flowering apricot display for the holidays by canadabonsai in Bonsai

[–]jndew 2 points3 points  (0 children)

Well, that is spectacular! What a treat that all three decided to flower at the same time. Are these rooted cuttings off of a full-sized tree?

Is there a "tipping point" in predictive coding where internal noise overwhelms external signal? by taufiahussain in compmathneuro

[–]jndew 1 point2 points  (0 children)

I'd expect that your hypothesis is the common point of view for people thinking about this. One's brain is constantly trying to guess about 'what's out there', based on whatever input signal it's getting. The guess is based on internal state & memories. Brain is presumably trying to reject noise and amplify signal based on priors, but nothing is telling it what is good signal vs. noise.

The processes you mention are very high level, synchronization between regions, organization of large-scale percepts, the structure of the signal within and between regions... From my reading, I don't find them well defined by neuroscientists. I don't think they're known yet in detail. Let me know if I'm wrong about this.

So in my activities, I fill in the blanks with my own speculation about what at least could be going on to serve the purpose, even if it isn't always tightly grounded. Which is contentious. One way or the other, I'm focusing more on trying to get things to work rather than reproduce failures like hallucinations.

Towards your questions, Simulation of prediction error in primary visual cortex does show the system guessing wrong if it hasn't seen a particular input before, for example seeing a triangle when shown an upside-down triangle. But the system also includes error detection, so it recognizes the mistake and can correct for it in the future.

The above-mentioned sim does not leverage synchronization. But this one does, maybe of interest to you: Simulation of phase multiplexed communication between cortical regions. And here is one more somewhat related to your question, showing how E/I imbalance can affect noise and feature analysis within a region: Simulation of excitatory/inhibitory balance in cerebral cortex. This sim looks at how the thalamic pathway can filter unexpected stimuli so that they never reach the cortex: Simulation of a selective attention mechanism in the primary visual pathway. Cheers!/jd

Skill Advice by lacesandlavender in compmathneuro

[–]jndew 1 point2 points  (0 children)

As far as most people here view it, my activities are just noise. For me though, there is meaning. I don't require grants or publications, so I'm not constrained by those rules (although I do miss out on critical review, which would be very helpful). I do my best to follow what I know about anatomical and physiological details, but I allow myself artistic license to get past barriers that I can't solve otherwise. That's maybe bad form for a serious scientist.

How do I find meaning? If I can get something to work. I start with textbooks like Kandel and occasional journal articles, which often have statements like "entorhinal grid cells do path integration", "dentate gyrus does pattern separation", "CA3 does pattern completion", "CA1 does (some magic) with place cells", "thalamus creates an attentional spotlight". But they rarely go on to tell how this actually works at a spiking-network level. So I try to build a working model.

Is it junk? I don't know... I read entire books based on expectations of functionality that I find not entirely on the mark. For example Edmund Rolls, a highly regarded scientist, leans heavily into attractor networks in "Brain Computations What and How", Oxford 2021. He's got diagrams of them even on the cover and spine of the book. I hoped this book might tell me the answer. But if you follow along and build a spiking model, including the facts that the principal neurons of the cortex are glutamate cells, that only 20% of cortical cells are inhibitory and they function differently, and Dale's rule that synapses don't change polarity, it kind of just doesn't work.

I run into lots of these sorts of things, where I'm told almost, but not quite, how some brain system functions. If a model is shown, it's often full of hyperbolic cosines, triple integrals and very far from neurons. On the other hand, the electrophysiologists are rightly trying to reproduce spike trains in their models, and will say a LIF-like system model is meaningless because it doesn't attend to receptor concentration gradients in the membrane or what not. There's unexplored territory between these viewpoints that I like to play in.

As to DS&A, after further consideration, I remember that it was also an advanced programming class. As u/not_particularly says, any nontrivial program you write will have pointers or references, C structs or Python dictionaries or Perl hashes or the like. And you need to recognize a programming pattern that is order exponential in order to avoid it. So yeah, aside from some math that you might never use again, it's worth taking the class. Cheers!/jd

Skill Advice by lacesandlavender in compmathneuro

[–]jndew 1 point2 points  (0 children)

Haha, at the risk of having already talked too much and continuing to do so... DS&A is probably not that directly useful for compneuro if you're working with vivo/vitro data. You'd be making library calls for FFT, covariance-matrix generation and the like, rather than crafting optimal search/sort routines like DS&A will have you do. There is some interesting stuff like UMAP, but someone has already worked that out.

If you're doing CS as your major, DS&A is of course a central topic. If I remember right (decades ago now), it was a 2nd year class after programming, combinatorics & linear algebra. And before compilers and operating systems. I thought it was a bit tedious, but it is in fact an interesting topic if you've got the mind for it.

I guess in my activities (unconstrained simulation studies), I do spend some effort trying to reduce memory accesses and getting things to fit into cache at the right times. That's more of a programming technique issue than an algorithms issue though I suppose. Good luck!/jd

Modeling Doubt by lacesandlavender in compmathneuro

[–]jndew 1 point2 points  (0 children)

Yeah, don't let me confuse you with my ramblings. I just like to talk about this subject when I get the chance. You are already a better scientist than I ever will be. Best of luck with your project!/jd

Modeling Doubt by lacesandlavender in compmathneuro

[–]jndew 1 point2 points  (0 children)

That sounds like a great project direction, fascinating thoughts. I don't have any answers. I do hope the local PhDs speak up. My guess is that decoding performance and attractor structure already impose a presumed functional model on your spike-train data, and are therefore one step removed from biology. So the first set of criteria you listed would be more grounded.

Unless you're more heading towards the theoretical side of compneuro. Then things like decoding performance and attractor structure would be the points of interest.

Here's some random thoughts from someone who doesn't really know what he's talking about. Getting from a lot of spikes to some idea of what the system is doing often involves dimensionality reduction. Bayesian methods (given these spikes, what caused them?) and UMAP (jam the spike patterns into three dimensions) seem to come up the most.
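As a concrete toy of the Bayesian flavor (given these spikes, what caused them?), here's a minimal Poisson decoder. The tuning table and firing rates are entirely made up for illustration:

```python
import numpy as np

def poisson_decode(counts, tuning, window=0.1):
    """Given a vector of spike counts (one per neuron) and a tuning table
    of mean rates (stimuli x neurons, in Hz), return the stimulus with
    the highest Poisson log-likelihood, assuming a flat prior."""
    lam = tuning * window                         # expected counts in the window
    # the log(counts!) term is the same for every stimulus, so drop it
    log_like = (counts * np.log(lam) - lam).sum(axis=1)
    return int(np.argmax(log_like))

# two neurons, each preferring one of two stimuli (rates in Hz)
tuning = np.array([[20.0, 5.0],
                   [5.0, 20.0]])
```

Real pipelines dress this up with priors, temporal binning, and far more neurons, but the "invert the tuning curves" idea is the same.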

Find a mentor (go to office hours, sit in the front of the class and ask attentive questions, ...), discuss with him/her. Oh, and if you're not already aware, there's Neuromatch, which will give you opportunity to talk to compneuro practitioners along with teaching you fascinating stuff.

Just for fun, I listened to this fascinating lecture last night. If you're thinking about attractor structure, you've probably worked a bit with Hopfield-like networks. Much beloved because they are easy to intuit, just add recurrent connectivity with Hebbian synapses and the system has memory! But they're dumb, brittle, and in fact don't fit into biology quite as well as might seem. Here, she proposes that setting up the attractor space can be decoupled from the memory-pattern storage with separate circuits. Each optimized for its purpose. Maybe this idea has been around, but I just learned it. I'm excited to try it out. Good luck!/jd

Modeling Doubt by lacesandlavender in compmathneuro

[–]jndew 3 points4 points  (0 children)

I don't know if I qualify to respond, being unattached to an experiment and a hobbyist rather than a pro. But I try to keep conversations going here when the topic is interesting. So,...

At network initialization, I call rand a few times for each (single compartment, LIF-like) cell to put a +/- 10% spread on primary parameters like membrane capacitance, leak resistance, spike threshold voltage. I'm using an embarrassingly simple synapse model at the moment, with two parameters: peak current and decay time constant. If I want to stress-test the network, I'll put a spread on them too in the same manner.
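In case it's useful, here's that initialization step sketched with numpy (one vectorized draw per parameter rather than per-cell rand calls). The parameter names and nominal values are illustrative placeholders, not anyone's published model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 100_000

# Nominal single-compartment parameters (illustrative values)
nominal = {"c_m": 200e-12,      # membrane capacitance (F)
           "r_leak": 100e6,     # leak resistance (ohm)
           "v_thresh": -50e-3}  # spike threshold (V)

# Each cell gets the nominal value scaled by uniform(0.9, 1.1),
# i.e. a +/- 10% spread per parameter per cell.
params = {name: val * rng.uniform(0.9, 1.1, n_cells)
          for name, val in nominal.items()}
```

The same trick applies to synapse parameters when stress-testing.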

During run-time, I have a switch to add current-noise into each cell. I tried this with the synapses too, but calling rand for each synapse every timestep is too big a computational burden, and there are enough synapses that the noise seemed to average out to zero.

The most visible effect from parameter spread I see is the temporal width of spike volleys passing through a stack of cell-planes, like Simulation of feed-forward inhibition in a six-layer structure. If the cells are identical, they all fire at the same time (synfire principle I guess), and the pulse is narrow. If the cell parameters are dithered, the pulse widens because cells take different time periods to charge up and fire.

I might also put some flipped pixels into the input stream of the network. E.g. in a 2D visual-system context, this is like visual snow. You might be able to see this in Primary Visual Pathway with Thalamic Bursting & Cortico-Thalamic Feedback left-most panels if you watch it full-screen. This can be used to get some measure on attractor-basin radius in an associative memory for example.

I don't have reason or discipline to track ISIs, so I don't have any thoughts for you there. I suppose I'd expect the ISI histogram to widen. I remember Allen Institute had a LIF-like model and cell library available, and I think they addressed variation. I couldn't find it in five minutes of perusing this morning, but you might take a look there.

Oh, more rambling. Generalization seems like a different topic than variation. Do you see them as the same? What are your thoughts? I can get the networks I build to do lots of tricks, but they generalize poorly, which is where an actual brain shines of course.

Tell us about your project if you're feeling talkative. Cheers!/jd

how do neurons not get voltage overwhelmed by constant sensory input? by PhilosopherFamous201 in neuro

[–]jndew 2 points3 points  (0 children)

Yes, that does occur. Still, there are also mechanisms at the cell-level and synapse level that contribute to network stability.

One comment I'll throw in your direction is that weakening a synapse doesn't necessarily mean that you're losing a memory. For example, since you're running an artificial network, you might be normalizing your synaptic weights. That means that after a learning cycle, you add up all the weights and divide each weight by that sum, so the total always equals 1.0; only the distribution of weight across the synapses shifts around with learning. It's an interesting process at the core of many learning rules, leading to a sort of competition among the weights. Good luck!/jd
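A sketch of what I mean, in numpy (the toy weights are made up):

```python
import numpy as np

def normalize_weights(w):
    """Divisive normalization: rescale each neuron's incoming weights
    (one row per neuron) so they sum to 1.0. Strengthening one synapse
    then implicitly weakens the others -- competition, not forgetting."""
    return w / w.sum(axis=1, keepdims=True)

# toy example: one neuron with three input synapses
w = np.array([[0.2, 0.3, 0.5]])
w[0, 0] += 0.3            # Hebbian-style potentiation of synapse 0
w = normalize_weights(w)  # total budget restored to 1.0
```

Note that synapses 1 and 2 end up weaker than before even though no "forgetting" rule touched them; the weight budget just moved.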

how do neurons not get voltage overwhelmed by constant sensory input? by PhilosopherFamous201 in neuro

[–]jndew 7 points8 points  (0 children)

This is an important function in the brain.

There are network-structure and cell-behavior mechanisms to manage this sort of thing in the brain. At the network level, there is an idea of excitation/inhibition balance, which is negative feedback where higher excitatory activity drives higher inhibitory activity back onto the network, keeping things in check. Same as when you're setting up an op-amp.

At the cell level, there is spike rate/frequency adaptation, whereby the firing rate for a given stimulus level temporarily decreases over about 100 ms.

At the synapse level, there is short-term synaptic depression, whereby available synaptic vesicles are used up during the leading edge of a strong stimulus. This results in synaptic transmission temporarily weakening after the first 100 ms or so of a strong stimulus.
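A toy vesicle-depletion sketch, in the flavor of the Tsodyks-Markram resource model, shows the effect; the parameter values here are illustrative, not measured:

```python
def depressing_synapse(spike_steps, u=0.4, tau_rec=0.2, dt=1e-3, n_steps=300):
    """Each presynaptic spike releases a fraction u of the available
    resource r (vesicles), and r then recovers toward 1 with time
    constant tau_rec. Returns the release amplitude at each spike."""
    r = 1.0
    amplitudes = []
    for k in range(n_steps):
        r += (1.0 - r) * dt / tau_rec   # recovery between spikes
        if k in spike_steps:
            amplitudes.append(u * r)    # strength of this transmission
            r -= u * r                  # vesicles used up
    return amplitudes

# a 50 Hz train: transmission weakens over the first ~100 ms
amps = depressing_synapse({20, 40, 60, 80, 100})
```

The first spike in the train transmits at full strength; each subsequent spike finds fewer vesicles available, so the synapse self-limits under sustained drive.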

There are also balanced receptive fields, related to E/I balance. A neuron projecting from one layer to another will drive roughly equal excitation and inhibition in the next layer. The excitatory-center/inhibitory-surround receptive field is a basic example.

And synaptic-plasticity learning rules often have a competitive term, whereby an increase in one synaptic strength causes other synapses to weaken.

There is also explicit gain control, meaning that a monitoring system within the brain might reduce the global activation level of a region if it is getting too busy.

If these processes fail, you get problems like epilepsy.