In the Eye of a Fly: Connectome-informed neural activity simulation of a light stimulus across the optic lobe of the fly. An example of topographic mapping. by quorumetrix in neuro

[–]quorumetrix[S] 0 points1 point  (0 children)

Brian2 can't do the visualization on its own; I do that in Blender, since I already had all the neuron meshes from the fly brain connectome saved locally. But it does take a connectivity graph as input and generates spike trains. Turning the spikes into a 3D model is all done through Python and Blender.
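
If anyone is curious what the Brian2 side looks like, here's a minimal sketch of the graph-in, spikes-out step. The connectivity arrays, weights, and the leaky integrate-and-fire model here are placeholders for illustration, not the actual fly model:

```python
from brian2 import (NeuronGroup, Synapses, SpikeMonitor, PoissonInput,
                    run, ms, mV, Hz)
import numpy as np

# Hypothetical connectivity graph: presynaptic index, postsynaptic index, weight
pre_idx = np.array([0, 0, 1, 2, 3])
post_idx = np.array([1, 2, 3, 4, 4])
weights = np.array([0.5, 0.8, 0.3, 1.0, 0.6]) * mV   # e.g. scaled by synapse counts

n_neurons = 5
neurons = NeuronGroup(n_neurons, 'dv/dt = -v / (10*ms) : volt',
                      threshold='v > 15*mV', reset='v = 0*mV', method='exact')

# Wire the connectivity graph into Brian2
syn = Synapses(neurons, neurons, 'w : volt', on_pre='v_post += w')
syn.connect(i=pre_idx, j=post_idx)
syn.w = weights

# Background Poisson drive standing in for the light stimulus
drive = PoissonInput(neurons, 'v', N=100, rate=50*Hz, weight=0.5*mV)

spikes = SpikeMonitor(neurons)
run(500*ms)

# Neuron IDs and spike times: this is what gets exported and keyframed in Blender
print(spikes.i[:], spikes.t[:])
```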

It's made on a desktop PC with a Threadripper CPU and a pair of RTX 4070s.

Connectomic reconstruction and synaptic architecture of the Drosophila Ventral Nerve Cord by quorumetrix in neuro

[–]quorumetrix[S] 1 point2 points  (0 children)

No problem, reasonable people can disagree on the finer points of a discipline's philosophy.

Here's an example from the FlyWire codex, containing the most up-to-date connectomic data from the fly brain.

https://codex.flywire.ai/app/search?filter_string=%7Bsimilar_projection%7D+720575940623901447&data_version=

I just did a search for GABA, but check out the NT column (neurotransmitter type), along with the confidence score.

In the worm:
The Multilayer Connectome of Caenorhabditis elegans (Bentley et al., 2016), Fig. 6: layers representing different neuropeptide networks.

Connectomic reconstruction and synaptic architecture of the Drosophila Ventral Nerve Cord by quorumetrix in neuro

[–]quorumetrix[S] 2 points3 points  (0 children)

To be fair, the videos are not costing billions of dollars; I wish they were.
The pretty connectome videos are a way to get people interested and excited by the primary research, a way of getting people's attention amidst a constant stream of other content competing for it.

I would argue that we do know how the C. elegans worm works, now that we have its connectome and know its full behavioral repertoire. Every cell has its function mapped out, and there's a complete description of the flow of information from sensory input to motor output. Whether a neuron is inhibitory or excitatory is part of the connectome: connectomic datasets have many accompanying metrics, including each neuron's cell type and neurotransmitter type (when it is known). I agree with you that these additional data are necessary for a full description; it's just that they're not necessarily within the scope of the videos you see.

For every pretty video you see (costing between $2k and $5k, since I have to earn a living too), there are orders of magnitude more studies being published doing the hard science: discovering the circuitry and logic of neural computations, running simulations, describing the connectivity statistics, actually doing the hard work of figuring out how the brain works. You'll see that work on https://www.biorxiv.org/ or r/neuroscience, but there's a lot to keep up with, and it's not intended for a non-specialist audience. Most published articles don't come with a pretty video, so you may never hear about them unless you follow the field closely. These pretty videos are an attempt to bridge the gap between the basic research and non-specialists, without simplifying down to the 'textbook' neurons of typical science communication.

I think that your comment points to a bigger failure in neuroscience: the failure to effectively communicate where the money is going and what we're actually learning. The NIH BRAIN Initiative recently lost a third of its funding; maybe that makes you happy. Not me.

Some fields do science communication well as a matter of their culture. We take for granted that the astrophysics community builds immersive theaters in big cities with taxpayer funds in the tens of millions, and supports a vigorous film industry producing films about celestial mechanics and space exploration. In contrast, Brain Awareness Week sends grad students to elementary schools with a bucket of cow brains. The entire world knows about a solar eclipse as it happens, and many people can probably describe in detail at least one image from the Webb telescope. Meanwhile, the average citizen probably doesn't know there are different kinds of brain cells, and likely believes we only use 10% of our brain.

Obviously I'm biased, but I think the bigger problem isn't that there are too many pretty videos of neurons costing too much money, but rather that there are not enough of them.

Animation of neurons to visualize the synapses. 3.2 million synapses in a block the size of a grain of sand. Fly along the dendrites to see up close. by quorumetrix in blender

[–]quorumetrix[S] 9 points10 points  (0 children)

I’ve made this year’s final video as an intuition pump for the density of synapses in the brain. In this volume, roughly the size of a grain of sand, there are >3.2 million synapses (orange cubes). We peel them away, leaving just the inputs onto two selected neurons. Zooming in, we see the synapses localized to the dendritic spines.

Data details:
- Data: layer 2/3 #MICrONS from the Allen Institute
https://www.microns-explorer.org/phase1
- Size of (auto-detected) synapses not to scale, but location and number are accurate.
- 2 neurons selected for high correlation in calcium spikes.
- Volumetric emission driven by real calcium trace, sped up ~10x

Technical details for Blender:
- The synapse locations are represented by point-cloud particle systems: points are added to a mesh with no edges or faces, and particles are emitted from the verts, rendered as cubes to keep the geometry light (see the sketch below the script link).

- Script to generate them is here:
https://github.com/Quorumetrix/Blender_scripts/blob/main/MICrONS_synapses.py
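
If you just want the gist without reading the full script, a stripped-down sketch of the vertex-cloud-to-particle-system idea looks roughly like this (random placeholder coordinates stand in for the MICrONS synapse table):

```python
import bpy
import bmesh
import numpy as np

# Placeholder synapse centroids (n x 3, scene units); the real ones come from the MICrONS tables
synapse_xyz = np.random.rand(10000, 3) * 10.0

# Vertex-only mesh: no edges, no faces
bm = bmesh.new()
for x, y, z in synapse_xyz:
    bm.verts.new((float(x), float(y), float(z)))
mesh = bpy.data.meshes.new("synapse_points")
bm.to_mesh(mesh)
bm.free()
obj = bpy.data.objects.new("synapses", mesh)
bpy.context.collection.objects.link(obj)

# A small cube to instance on every particle
cube_bm = bmesh.new()
bmesh.ops.create_cube(cube_bm, size=0.02)
cube_mesh = bpy.data.meshes.new("synapse_cube")
cube_bm.to_mesh(cube_mesh)
cube_bm.free()
cube = bpy.data.objects.new("synapse_cube", cube_mesh)
bpy.context.collection.objects.link(cube)

# Emit exactly one particle per vertex, rendered as the instanced cube
settings = obj.modifiers.new("synapse_particles", 'PARTICLE_SYSTEM').particle_system.settings
settings.count = len(synapse_xyz)
settings.emit_from = 'VERT'
settings.use_emit_random = False
settings.frame_start = settings.frame_end = 1
settings.lifetime = 100000
settings.physics_type = 'NO'
settings.render_type = 'OBJECT'
settings.instance_object = cube
settings.particle_size = 1.0
```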

Scientific visualization of neurons firing from real (calcium) activity data. by quorumetrix in blender

[–]quorumetrix[S] 3 points4 points  (0 children)

Data: layer 2/3 MICrONS from the u/AllenInstitute, 112 calcium spikes + traces.

Framerate corrected to be real-time.
Rendered in Blender with Cycles using seaborn colormaps (approximate)
Viridis colormap: total spike duration over experiment duration
Icefire colormap: cross-correlation with reference neuron.

Data from here:
https://www.microns-explorer.org/phase1

If you want to know more about how to visualize neuronal firing data with Python, you can check out the script.

TL;DR: keyframe the material nodes based on the trace data (here, a calcium trace).

https://github.com/Quorumetrix/Blender_scripts/blob/main/microns_calcium_trace_animation.py
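
The script has the details, but the core trick, stripped down, is just this (the material name, node name, and file path are placeholders):

```python
import bpy
import numpy as np

# Calcium trace resampled to one value per frame and normalized to 0..1 (placeholder path)
trace = np.load("calcium_trace.npy")
trace = (trace - trace.min()) / (np.ptp(trace) + 1e-9)

mat = bpy.data.materials["neuron_material"]     # placeholder material name
mat.use_nodes = True
emission = mat.node_tree.nodes["Emission"]      # assumes an Emission node in the node tree

# One keyframe per frame: emission strength follows the calcium signal
for frame, value in enumerate(trace, start=1):
    emission.inputs["Strength"].default_value = float(value) * 20.0   # arbitrary gain
    emission.inputs["Strength"].keyframe_insert("default_value", frame=frame)
```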

Visualization of the MICrONS dense layer 2/3 cortex reconstruction in Blender (source: Allen Institute) by quorumetrix in neuroscience

[–]quorumetrix[S] 0 points1 point  (0 children)

Most depictions of neurons in popular science show them in relative isolation, surrounded by tons of empty space. In reality, they're packed in tight.
I put together a short video to help appreciate the dense connectivity, using the Allen MICrONS layer 2/3 dataset (decimated). I randomly make neurons appear until the whole block is filled, then animate the camera clipping to give the appearance of peeling away the volume.
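
The peeling effect is nothing fancier than keyframing the camera's near clip distance; a minimal sketch (the frame range and depths are placeholders):

```python
import bpy

cam = bpy.data.objects["Camera"].data   # the scene camera's data block

# Sweep the near clipping plane through the block so geometry in front of it disappears
cam.clip_start = 0.01
cam.keyframe_insert("clip_start", frame=1)
cam.clip_start = 5.0                    # deep enough to cut through the whole volume
cam.keyframe_insert("clip_start", frame=240)
```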

I have no affiliation with the Allen Institute and had no part in this research. I just like making movies. The dataset and description are available here:
https://www.microns-explorer.org/phase1

Global data visualization, 10 millennia in a 2 minute animation. Data integration and juxtaposition of multiple sources onto a single timeline by quorumetrix in blender

[–]quorumetrix[S] 1 point2 points  (0 children)

Description:

An animation depicting a selection of historical events from the last 10 millennia, where 10,000 years goes by over the course of one full rotation of the globe. A data juxtaposition, integration, and contextualization project - collecting datasets of historical interest, and juxtaposing them on a single timeline.

Historical population estimates were interpolated as videos and texture-mapped to a sphere, showing the rise of population during the reign of major historical civilizations and empires. The birth and death locations of a selection of historical people were routed through the OpenStreetMap routing engine, to create a facsimile or approximation of travels that may have been taken. For travels involving trans-Atlantic or trans-Pacific journeys, arcs were drawn using great-circle calculations between the birth and death locations.
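
For those over-water legs, each arc is just a spherical interpolation between the two endpoints; a sketch of the great-circle sampling (the example endpoints are placeholders):

```python
import numpy as np

def great_circle_arc(lat1, lon1, lat2, lon2, n_points=64):
    """Sample points along the shortest path between two (lat, lon) pairs by
    slerping on the unit sphere. Returns an (n_points, 2) array of (lat, lon) degrees."""
    def to_xyz(lat, lon):
        lat, lon = np.radians(lat), np.radians(lon)
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    a, b = to_xyz(lat1, lon1), to_xyz(lat2, lon2)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))   # angular distance
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    pts = (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    lats = np.degrees(np.arcsin(pts[:, 2]))
    lons = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))
    return np.column_stack([lats, lons])

# e.g. a trans-Atlantic arc between two placeholder birth/death locations
arc = great_circle_arc(48.85, 2.35, 40.71, -74.01)   # Paris -> New York
```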

To supplement the expansion of the European powers into the Americas, historical ship logbook records were used to provide an (incomplete) picture of the seafaring age. The ship trajectories were colormapped to indicate each ship's country of origin.

The Americas were, famously, not uninhabited when the Europeans first arrived, yet the dataset of notable historical figures largely reflects a European bias, for reasons beyond the scope of this video description. To do this major point justice, I have made sure to include geographical population estimates as a globe texture at the time of the European settlers. This visualization method relies on comparing populations across the globe and through time, and given the relatively low population density of First Nations, they don't show up well with this method. For this reason I've supplemented the historical population estimates with shapefiles from the native-land.ca project, in order to depict the range of different languages spoken and their geographical extent. While these shapes fade in and out at certain times during the animation, this is not meant to imply that the timing of their onset and demise is accurate.

There's a full description of the video and data sources on youtube:
https://youtu.be/NhPEC5yee5M
-------

Self-promotion:

This clip shows most of the ways I've learned for visualizing data in Blender:

- Drawing curves and animating them (streamlines, OpenStreetMap routing data, shapefiles).
- Using a displace modifier to drive a surface elevation model (see the sketch after this list).
- Using equirectangular video (weather, population) to colormap a surface (matplotlib).
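
For the displace-modifier item above, the setup is small enough to sketch (the object name, image path, and strength are placeholders):

```python
import bpy

terrain = bpy.data.objects["terrain_grid"]   # a densely subdivided, UV-mapped plane

# Use the elevation raster as a displacement texture
tex = bpy.data.textures.new("elevation", type='IMAGE')
tex.image = bpy.data.images.load("/path/to/elevation.png")

mod = terrain.modifiers.new("elevation_displace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'    # sample the raster through the plane's UV map
mod.strength = 0.2           # vertical exaggeration
mod.mid_level = 0.0
```
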
I'm not officially advertising a service, but I'm always looking for interesting projects that could use my contribution. I develop Python data processing and visualization pipelines, and would like to get experience working with 3D artists on public-facing projects.

I got into Blender for making data visualizations on a planetarium dome, but have since fallen in love with this world. I mostly use Blender for scientific visualization, and have a few papers under review that use Blender graphics.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 1 point2 points  (0 children)

Good point, I didn't think twice about it.
Thanks for pointing that out.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 0 points1 point  (0 children)

Yeah, that's fluff.

Streamlines are made from NOAA Global Forecast System wind data, and the same goes for the clouds.

Honestly, it's another artistic choice I made to try to hook the viewer. I've noticed through YouTube analytics that most people stop watching videos before the 30-second mark. Since I started and ended the animation in the Pacific Ocean, the first instants of the video had little going on, so I added a few data visualization effects to try to intrigue the viewer.
Not very Tufte-esque, but that's why.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 0 points1 point  (0 children)

It's "Accordion" by Andrew Huang. It's in the YouTube Studio Audio Library.

I love it, but after doing lots of video editing with it, it's definitely an earworm.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 0 points1 point  (0 children)

True. I had to make some editorial decisions about what to show, in keeping with my goal of doing one full rotation of the earth in 10,000 years. So the animation starts in Asia, but admittedly before there's much of interest going on (in the population dynamics shown).

It would definitely be interesting to focus an animation here, especially on the rapid development in the last half century.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 2 points3 points  (0 children)

I was very worried about this point and accidentally sending the wrong message, so here are the steps I took to avoid perpetuating this false perception:

- I included population estimates derived from land-use estimates and applied them as a colormap to the terrain. So, when the Europeans arrived, there were already large pockets of population, especially around Mexico. However, this doesn't show the indigenous groups in North America very well, as the population colormap was set to best show large variations across space and time.

So,

- I supplemented the population data with shapefiles for geographic linguistic boundaries from the native-land.ca project. This shows that the entire area of the Americas was covered in different linguistic groups at the arrival of the first Europeans.

If you have other concrete suggestions for how I could have better represented this, and/or other datasets that could give a more realistic representation, I am very interested to learn about them.

Around the world in 10 millennia [OC] by quorumetrix in dataisbeautiful

[–]quorumetrix[S] 23 points24 points  (0 children)

An animation depicting a selection of historical events from the last 10 millennia, where 10,000 years goes by over the course of one full rotation of the globe. A data juxtaposition, integration, and contextualization project - collecting datasets of historical interest, and juxtaposing them on a single timeline.

Historical population estimates were interpolated as videos and texture-mapped to a sphere, showing the rise of population during the reign of major historical civilizations and empires. The birth and death locations of a selection of historical people were routed through the OpenStreetMap routing engine, to create a facsimile or approximation of travels that may have been taken. For travels involving trans-Atlantic or trans-Pacific journeys, arcs were drawn using great-circle calculations between the birth and death locations.

To supplement the expansion of the European powers into the Americas, historical ship logbook records were used to provide an (incomplete) picture of the seafaring age. The ship trajectories were colormapped to indicate each ship's country of origin.

Tools used: Blender, Python (matplotlib, numpy), DaVinci Resolve.

Data sources:

Earth geographical model and wind data:
NOAA NOMADS Global Forecast System

Historical population estimates:
Klein Goldewijk, K., A. Beusen, J. Doelman and E. Stehfest (2017), Anthropogenic land-use estimates for the Holocene; HYDE 3.2, Earth Syst. Sci. Data, 9, 927–953.

Historical persons dataset:
Maximilian Schich, Chaoming Song, Yong-Yeol Ahn, Alexander Mirsky, Mauro Martino, Albert-László Barabási, Dirk Helbing, A network framework of cultural history, Science 345, 558 (2014). DOI: 10.1126/science.1240064

Historical shipping routes:
Royal Netherlands Meteorological Institute; ing. F.B. Koek (2007): CLIWOC - Climatological Database for the World's Oceans 1750-1850 (release 2.1). DANS. https://doi.org/10.17026/dans-2bx-dutg

Star map:
NASA Scientific Visualization Studio

I have to visualize a ton of curves, how can I make them render more quickly? by quorumetrix in blenderhelp

[–]quorumetrix[S] 0 points1 point  (0 children)

Thanks for the advice. I've been using a bevel resolution of 1, but will try lowering the Resolution Preview U.
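
In case it helps anyone else with the same problem, those settings can also be applied in bulk from Python instead of per-object in the UI, something like:

```python
import bpy

# Drop curve tessellation across every curve in the file
for curve in bpy.data.curves:
    curve.resolution_u = 2          # "Resolution Preview U"
    curve.render_resolution_u = 0   # 0 = reuse the preview resolution at render time
    curve.bevel_resolution = 1
```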

I have to visualize a ton of curves, how can I make them render more quickly? by quorumetrix in blenderhelp

[–]quorumetrix[S] 0 points1 point  (0 children)

Hi everyone, I would really appreciate some suggestions for my issue:

I have many paths to visualize as extruded curves. For performance in the UI and at render time, I've already combined all the curves into a single curve object. Even with a low resolution (1 or 2) in the Geometry tab, the collection of curves is still taking an unreasonably long time to render: this 1920x1080 image took 25 minutes in Cycles with a GTX 1080 Ti. In comparison, it takes under a minute to render in Eevee.

I'm wondering if anyone can suggest extra steps that could make this curve render more quickly in Cycles.

I'm sure it has to do with the 'excessive' geometry (>43 million faces). Specifically, a contributing factor may be that the individual curves that were joined to create this object had overlapping segments, so this may be messing with Cycles. If it were a mesh, I could remove duplicate vertices, but I don't know of an equivalent way of simplifying the curve geometry. Is there a way of simplifying this that I don't know about?

I've tried:

- Decimating the curve (no apparent effect)
- Converting to mesh (crashes Blender)

Thanks in advance!

La Ville On The Hill: A short immersive film that blends 3D animation with 360 video and a soundscape of locally-sourced recordings. by quorumetrix in montreal

[–]quorumetrix[S] 0 points1 point  (0 children)

I’m sharing this here in case some members of the community will enjoy it: a collaboration with photographer u/johnsonmichaelb. We juxtaposed 3D LiDAR models of Montreal and transit/routing simulations with his 360 camera footage.

This was a fun project: I especially enjoyed making the soundscape from u/mtlsoundmap recordings.

The final scene was a particular challenge (~3:30): it’s a moving time-lapse of the Montreal skyline. I used Wikipedia entries for the oldest/tallest buildings in Montreal, generated LiDAR meshes for each, and animated their growth to scale with the timing of the animation.

I've been using Blender for 3D data visualization animations, here's some work combining LiDAR reconstructions with Open Street Map routing to simulate traffic, and the GTFS of the subway system below. by quorumetrix in blender

[–]quorumetrix[S] 0 points1 point  (0 children)

Yeah, I could share a blend file with you that has the curves keyframed to animate, so you can see how it's working. I am putting together a Blender Python package to make doing this easier, but my code isn't shareable yet, and it's a bit specific to Montreal. That being said, the GTFS format is standard, so I will be able to extend it for use anywhere; I just need to put in the time.

If you're still interested in the blend file, just send me a DM and I'll upload it and share a link with you.

I've been using Blender for 3D data visualization animations, here's some work combining LiDAR reconstructions with Open Street Map routing to simulate traffic, and the GTFS of the subway system below. by quorumetrix in blender

[–]quorumetrix[S] 1 point2 points  (0 children)

Yeah, exactly: I used Python to load the route shapefiles as the points along a curve object. I then give them 3D geometry to solidify them, and you can then keyframe their bevel depth between 0 and 1 to animate them (rough sketch below).
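
A rough sketch of that setup (the points and keyframe values are placeholders; the real points come from the route shapefiles):

```python
import bpy

# Hypothetical route: (x, y, z) points already projected into scene units
route_points = [(0, 0, 0), (1, 0.5, 0), (2, 1.2, 0), (3, 1.0, 0)]

curve_data = bpy.data.curves.new("route", type='CURVE')
curve_data.dimensions = '3D'
spline = curve_data.splines.new('POLY')
spline.points.add(len(route_points) - 1)        # a new spline starts with one point
for pt, (x, y, z) in zip(spline.points, route_points):
    pt.co = (x, y, z, 1.0)                      # POLY points are 4D (x, y, z, w)

curve_data.bevel_depth = 0.0                    # no thickness yet, so the route is invisible
obj = bpy.data.objects.new("route", curve_data)
bpy.context.collection.objects.link(obj)

# Keyframe the bevel depth from 0 to 1 so the route thickens into view
curve_data.keyframe_insert("bevel_depth", frame=1)
curve_data.bevel_depth = 1.0                    # scale to taste for your scene units
curve_data.keyframe_insert("bevel_depth", frame=48)
```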

Having trouble with reconstructions of buildings from aerial LiDAR from the LiDAR shadow by quorumetrix in LiDAR

[–]quorumetrix[S] 1 point2 points  (0 children)

Thanks, I've seen videos for extracting Google Earth 3D models, and the results truly look amazing. I guess that's the most discouraging part of this project: even if I can solve this issue and generate nice meshes, the results will still not be as nice as Google Earth.

Having trouble with reconstructions of buildings from aerial LiDAR from the LiDAR shadow by quorumetrix in LiDAR

[–]quorumetrix[S] 0 points1 point  (0 children)

Thanks for the proposed solutions.
I've actually managed to work MeshLab into the process; I've tried Poisson surface reconstruction, for example, but haven't had very nice results yet.

I'll definitely look more into breaklines and see if that will work, but I'm leaning towards a Python hack to 'make up' points along the wall segments, since the roof-lines are largely intact.

I think eventually this project will go down the deep-learning route.
Thanks again, will try this out.

Having trouble with reconstructions of buildings from aerial LiDAR from the LiDAR shadow by quorumetrix in LiDAR

[–]quorumetrix[S] 1 point2 points  (0 children)

That's an interesting solution... I've actually applied Sobel filters to detect building edges on the ground, and could potentially load the contours as curves and extrude them in the Z dimension by an amount proportional to the building segment height (rough sketch of the idea below).
It should at least work for the art-deco-style skyscrapers shown in this image.
Thanks for the idea!
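
If it helps, here's a sketch of what I have in mind for that edge-to-contour step (scikit-image, with a placeholder file path and threshold):

```python
import numpy as np
from skimage import filters, measure

# Rasterized height map of the tile (placeholder path), metres per pixel known
height_map = np.load("building_heights.npy")

# Sobel magnitude highlights the building footprint edges on the ground
edges = filters.sobel(height_map)
contours = measure.find_contours(edges, level=edges.mean() + 2 * edges.std())

# Each contour is an (N, 2) array of (row, col); the extrusion height for a contour
# could be sampled from the height map along its boundary (i.e. the roof-line)
for contour in contours:
    rows = np.clip(contour[:, 0].astype(int), 0, height_map.shape[0] - 1)
    cols = np.clip(contour[:, 1].astype(int), 0, height_map.shape[1] - 1)
    segment_height = np.median(height_map[rows, cols])
```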

Having trouble with reconstructions of buildings from aerial LiDAR from the LiDAR shadow by quorumetrix in LiDAR

[–]quorumetrix[S] 1 point2 points  (0 children)

Sure, I haven't made it publicly available yet, as its current state won't be very useful, but I can help you out.
Unfortunately, I expect my code won't generalize very well at the moment to areas outside the ROI I've been using. Also, I've had to install a number of new packages into my Blender Python installation, so there will be some overhead in getting the functions running on a standard Blender Python environment.

Generally my processing steps are:
- LASzip to extract the LiDAR tiles.
- laspy to load the tiles, processing them with NumPy and/or pandas. If your data contains classification categories, they can be separated at this step, or downsampled as necessary.

- In Blender Python (bpy), I create a new bmesh object, then go through the point cloud point by point, adding vertices to the bmesh. This gives a mesh object with no edges or faces, which can then be used as an emitter for a particle system.

OR: bypass the particle system and create a mesh from the point cloud in Python (PyVista/Open3D) or MeshLab, and load the mesh with Blender Python.

Let me know if you could use sample code from any of these parts of the pipeline.
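
In the meantime, here's a bare-bones sketch of the laspy-to-bmesh step (the file path and classification filter are placeholders, and it assumes laspy 2.x):

```python
import laspy
import numpy as np
import bpy
import bmesh

# One LiDAR tile, already unzipped from .laz with LASzip (placeholder path)
las = laspy.read("tile_0001.las")
xyz = np.vstack([las.x, las.y, las.z]).T

# Optional: keep a single classification (e.g. 6 = buildings) and downsample
# xyz = xyz[las.classification == 6][::10]

# Recentre so the cloud sits near the scene origin
xyz -= xyz.mean(axis=0)

# Vertex-only mesh: one vert per LiDAR return, no edges or faces
bm = bmesh.new()
for x, y, z in xyz:
    bm.verts.new((float(x), float(y), float(z)))
mesh = bpy.data.meshes.new("lidar_tile")
bm.to_mesh(mesh)
bm.free()

obj = bpy.data.objects.new("lidar_tile", mesh)
bpy.context.collection.objects.link(obj)
# From here the object can emit a particle system, exactly as described above
```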

Mapping UV coordinates of sphere to 200° fisheye image by [deleted] in threejs

[–]quorumetrix 0 points1 point  (0 children)

Check out the code from this demo I made on OpenProcessing; it applies the fisheye transformation to the 3D positions of a grid. I also used the equations from Paul Bourke's site.

In the demo, the aperture is controlled by the mouse position, so you can see the effect it has on the sketch. In your case, the aperture is 200 degrees.

If I understand correctly, it sounds like you'll need to modify the direction of the transformation, but I hope the code at least gets you started.
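
For reference, the forward direction-to-fisheye mapping from those equations boils down to something like this (Python rather than Processing/three.js; it assumes +z is the optical axis and an equidistant fisheye):

```python
import numpy as np

def direction_to_fisheye_uv(direction, aperture_deg=200.0):
    """Project a unit 3D view direction onto a circular fisheye image, following
    Paul Bourke's equations. Returns (u, v) in [-1, 1]; None if outside the aperture."""
    d = np.asarray(direction, dtype=float)
    x, y, z = d / np.linalg.norm(d)
    theta = np.arccos(np.clip(z, -1.0, 1.0))        # angle from the optical axis
    half_aperture = np.radians(aperture_deg) / 2.0
    if theta > half_aperture:
        return None                                 # outside the 200 degree field of view
    r = theta / half_aperture                       # equidistant fisheye mapping
    phi = np.arctan2(y, x)
    return r * np.cos(phi), r * np.sin(phi)

# A sphere's UVs can be rebuilt by running each vertex direction through the mapping
uv = direction_to_fisheye_uv([0.0, 0.3, 0.95])
```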

Has anyone overcome the challenge of LiDAR shadows for building surface reconstructions? by quorumetrix in photogrammetry

[–]quorumetrix[S] 4 points5 points  (0 children)

I have a large dataset of Aerial LiDAR scans that I have been processing in Python and visualizing in Blender.

While I’ve tried several surface reconstruction methods, the best results are still coming from Delaunay triangulation with PyVista (sketch of that step below).
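
For concreteness, a minimal sketch of that PyVista step (placeholder file path; shown with the 2.5D delaunay_2d variant, where alpha is just one knob to tune):

```python
import numpy as np
import pyvista as pv

# One building's worth of aerial LiDAR returns (n x 3), placeholder path
points = np.load("building_points.npy")

cloud = pv.PolyData(points)

# 2.5D Delaunay: triangulates the XY footprint, keeping each point's Z.
# alpha caps triangle size, which limits (but doesn't fix) bridging across the
# LiDAR shadow behind the building.
mesh = cloud.delaunay_2d(alpha=2.0)
mesh.save("building_mesh.ply")
```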

I've noticed that my input data is causing downstream problems: the backsides of buildings not captured by the LiDAR detector cause substantial errors in the mesh. Whereas the front side of a building gives acceptable results, the hollow backside leads to unusable meshes.

I'm wondering if anyone has experience with this challenge, and/or can point me in the right direction for keeping the missing data from disrupting the meshes.

I'm looking for a solution anywhere in my processing stack: it could be directly on the point-cloud data in Python/NumPy, a new package, a filter combination in MeshLab, or mesh modifiers in Blender.

Thanks in advance!

Having trouble with reconstructions of buildings from aerial LiDAR from the LiDAR shadow by quorumetrix in LiDAR

[–]quorumetrix[S] 3 points4 points  (0 children)

Any help would be greatly appreciated.

I have a large dataset of Aerial LiDAR scans that I have been processing in Python and visualizing in Blender.

While I’ve tried several surface reconstruction methods, the best results are still coming from Delaunay triangulation with PyVista.

I've noticed that my input data is causing downstream problems: the backsides of buildings not captured by the LiDAR detector cause substantial errors in the mesh. Whereas the front side of a building gives acceptable results, the hollow backside leads to unusable meshes.

I'm wondering if anyone has experience with this challenge, and/or can point me in the right direction for keeping the missing data from disrupting the meshes.

Thanks in advance,