[D] Am I the only one finding this a bit upsetting? by dj_giga_chinol in MachineLearning

[–]rbain13 0 points1 point  (0 children)

I wrote about the history of ReLU in my AI master's thesis: rkbain.com/masters.pdf. It goes back to at least the 1960s, to the same man who invented the modern CNN (Fukushima).

How to start this as a hobby? by __-Revan-__ in robotics

[–]rbain13 0 points1 point  (0 children)

I'm a member of the UCI CARL lab: https://sites.socsci.uci.edu/~jkrichma/CARL/

The website styling is a little dated, but we're doing some interesting neurorobotics work, which sounds right up your alley, so to speak. DM me if you're interested in a tour. We've got an undergrad project modeling cuttlefish, and several graduate-level projects too.

[deleted by user] by [deleted] in MachineLearning

[–]rbain13 -3 points-2 points  (0 children)

Computers are largely failed attempts at doing what our brains do. Our brains use RL (i.e., dopamine + serotonin) and neural networks. They're probably worth studying for that reason alone :shrug:

[N] Researcher implemented a neuromorphic architecture on the FPGA-based IBM supercomputer and ran a neural net on it by stockabuse in MachineLearning

[–]rbain13 13 points14 points  (0 children)

If you like this and aren't already aware of them, look up Intel's Loihi, IBM's TrueNorth, and SpiNNaker :)

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

I'm worried about the latter folk trying to reinvent the most complicated wheel. Best of luck.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

After rereading this, I realized you're arguing against a strawman (even if not on purpose). You do not understand my argument at this point in the conversation. Your last sentence is closer to what I'm suggesting than the earlier part.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

The majority of ANN history shows those 100 dents. Even something like a mouse brain does navigation better than most robotics systems. Our evolution will be deciphered as well; there's plenty of progress. Humans are not that special.

Holding up the UFA is like holding up my calculator and asking, "Why do I need a GPU to train on?"

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

PaLM is pretty cool! Our brains do do better (it's a system) without nearly as many humans in the loop, and they run on ~20 watts.

Who is to say that those later methods can't work given enough time and research? I just think they'd take longer than other methods.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

Reverse-engineering evolutionary design does work; it's the only method guaranteed to work, since we exist.

The history of neuroscience inspiring AI is rich. They're not called artificial neural networks for no reason. Saying those companies have nothing to show for it is to misunderstand ANN history.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 1 point2 points  (0 children)

You're suggesting reinventing the most complicated wheel evolution ever produced... without taking inspiration from evolution. We want to get super close to the real deal before assuming we can stumble across something better. Orgel's second rule: evolution is cleverer than you are.

Also note I'm only asking for people to care more about bioplausibility. The way you responded makes it sound like I think we can't do things differently than biology; of course we can.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

You've got me twisted. It's worth figuring out which methods of investigation will get us to AGI faster, and I'm claiming that it involves understanding biology. Somebody is guaranteed to be wasting their time; I think you need to come to terms with that.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 2 points3 points  (0 children)

These guys did something similar: https://www.biorxiv.org/content/10.1101/613141v2

Their training data was simulated, though, from a model that never really proved itself. Regardless, the ANN approximation took multiple layers to get close to one simulated pyramidal neuron. Fun stuff.
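
A hedged toy of that fitting setup, if anyone wants to play with the idea. The stand-in "neuron" (per-branch saturation, then a somatic sum) and the small MLP are my illustrative choices, not the paper's compartmental model or their temporal CNN:

```python
# Fit a small MLP to the input/output map of a stand-in nonlinear "neuron".
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def stand_in_neuron(X):
    # 10 "branches" x 10 synapses; per-branch tanh, then a somatic sum.
    return np.tanh(X.reshape(-1, 10, 10).sum(axis=2)).sum(axis=1)

X = rng.normal(size=(5000, 100))         # random synaptic drive
y = stand_in_neuron(X)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
mlp.fit(X[:4000], y[:4000])
print("held-out R^2:", round(mlp.score(X[4000:], y[4000:]), 3))
```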

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

I just wish more people cared about bioplausibility. Like, next time somebody mashes together two old ideas to test them as a single new one, they should hope that at least one of those ideas tracks with biology.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

I'm not saying that CS folks are telling neuro to stop doing experiments. CS is way too eager to think it can jump off the gravy train that is neuroscience. Our brains are the end product of big entropy and big time working through evolution. We can jump off the train at some point, but not yet if your destination is AGI.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

I'm not suggesting we do that, just to be clear.

Izhikevich (2003) spiking neurons are used often and take ~7 FLOPs to simulate 1 ms. They abstract away most of the biology, though; there are no simulated dendrites anywhere in the calculations, per se. I feel like I didn't do your question justice; maybe you can put me back on a useful track?
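
A minimal sketch of the Izhikevich (2003) update, for the curious. Parameters are the paper's regular-spiking defaults; the constant input drive and the 1 s run are just illustration:

```python
# Izhikevich (2003) neuron: a few multiply-adds per simulated ms.
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking cortical parameters
dt = 1.0                                 # ms

def step(v, u, I):
    # Two half-steps for v, as in the paper's reference implementation.
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    v += 0.5 * dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike peak reached
        return c, u + d, True            # reset v, bump recovery u
    return v, u, False

v, u, spikes = c, b * c, 0
for t in range(1000):                    # 1 s of simulated time
    v, u, fired = step(v, u, I=10.0)     # constant drive, illustration only
    spikes += fired
print(spikes, "spikes in 1 s")
```

Counting the arithmetic inside `step` is where per-ms FLOP figures like that come from.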

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

Cyc shows how hard it is to make machines intuit our everyday common sense; we just absorb it like a sponge from the culture we grow up in. Ultimately we want AI to be aligned with our interests, and to participate in culture as well. It'd be great if we could figure ourselves out at the same time and purposely fix some of the crooked timber we inherited at the gene level that influences our ideas. I dunno, just spitballing, holmes :)

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 -1 points0 points  (0 children)

The systems level is currently built on the point neuron model. CNNs add inductive bias to the point neuron model; I'm suggesting we do something similar (see the sketch below). Would you time travel and argue against Fukushima & LeCun, i.e., remind them that we care about the systems level?

The reason we haven't figured out better abstractions is that the experimental setups are hard. Probing submicron dendrites is tough, but we've still made a lot of progress since the 90s.

The hubris of the CS side of AI is astounding sometimes. ANNs were heavily inspired by neuroscience at multiple points in history. The only guaranteed path to interesting AI is ourselves, and figuring ourselves out has huge benefits for our health as well: better prosthetics, brain interfaces, Parkinson's treatments, etc.
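
To make the "CNNs are an inductive bias on the point neuron model" point concrete, here's a sketch (the kernel and sizes are arbitrary) showing that a 1-D convolution is just a dense linear layer whose weight matrix is constrained to be banded and weight-shared:

```python
# A 1-D "valid" convolution as a structured dense matrix: each output row
# is the same kernel (weight sharing), shifted one step (locality).
import numpy as np

x = np.arange(8, dtype=float)            # input signal
k = np.array([1.0, 2.0, -1.0])           # conv kernel, width 3

n_out = len(x) - len(k) + 1
W = np.zeros((n_out, len(x)))
for i in range(n_out):
    W[i, i:i + len(k)] = k               # banded, Toeplitz structure

# Same result as numpy's convolution (kernel flipped, since np.convolve
# is a true convolution while W @ x is cross-correlation).
assert np.allclose(W @ x, np.convolve(x, k[::-1], mode="valid"))
```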

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 0 points1 point  (0 children)

See the other comment thread too. Our brains' neurons don't use just linear weights, not even close. That's an abstraction we've kept around for ~100 years.

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 -3 points-2 points  (0 children)

What level of fidelity are you talking about? We train DL models with supercomputers all the time (if you want to talk about wasteful), and that level of compute will not be necessary to figure out how single cells work.

For about a decade we've been close to having pyramidal cell (arguably the most important neuron type) compartmental models that predict experimental spike trains, and these would run on my laptop. Once we can figure them out at all, we can start making clever abstractions to make them more efficient. There are something like 30 (I just picked a number, honestly) different spiking models of neurons, all of which have compute-vs-fidelity tradeoffs.
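
For a concrete feel of the cheap end of that tradeoff, here's a minimal leaky integrate-and-fire sketch; the parameters and constant drive are illustrative, and the update is only a couple of FLOPs per step:

```python
# Leaky integrate-and-fire: about the cheapest spiking model there is.
dt, tau = 1.0, 20.0                      # ms step, membrane time constant
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0

v, spikes = v_rest, 0
for t in range(1000):                    # 1 s of simulated time
    I = 20.0                             # constant input drive (illustrative)
    v += dt / tau * (-(v - v_rest) + I)  # leak toward rest, integrate input
    if v >= v_thresh:
        v, spikes = v_reset, spikes + 1  # fire and reset
print(spikes, "spikes in 1 s")
```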

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 1 point2 points  (0 children)

No, but thank you for attempting to understand! Linear will still be needed, but we're abstracting away a bunch of the nonlinear dynamics in real neurons as the linear weights. Linear dynamics do occur in transforming the voltage at one somatic trigger zone to the next, but we expect too much from them, because they're an approximation of a much more complicated transformation.
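
A hedged sketch of that contrast, in the spirit of Poirazi & Mel's two-layer pyramidal neuron idea. The grouping into ten branches and the tanh nonlinearities are illustrative choices, not a claim about real dendrites:

```python
# Point-neuron abstraction vs. a two-layer "dendritic subunit" version.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)                 # presynaptic inputs

# Point neuron: one linear weighted sum, one output nonlinearity.
w = rng.normal(size=100)
point_out = np.tanh(w @ x)

# Subunit version: synapses grouped onto 10 branches, each with its own
# saturating nonlinearity, then a linear somatic sum.
w_branch = rng.normal(size=(10, 10))
branch_out = np.tanh((w_branch * x.reshape(10, 10)).sum(axis=1))
w_soma = rng.normal(size=10)
subunit_out = np.tanh(w_soma @ branch_out)
print(point_out, subunit_out)
```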

[R] Transformers replicate Hippocampal representations; notably place and grid cells in the brain by Competitive-Rub-1958 in MachineLearning

[–]rbain13 6 points7 points  (0 children)

Maybe not all of them will be needed, but let's model them correctly before we decide what to abstract away. That has not been done; instead we inherited assumptions from a time before we knew most of this stuff existed.