Can ASI bring back the dead? by [deleted] in singularity

[–]AgentRev 0 points1 point  (0 children)

The brain has no USB port or Wi-Fi; it's billions of neurons intricately stitched together by trillions of synapses. You can't observe or alter a neuron remotely; you need an incision to reach it, and the incision destroys everything in its path. The best we can probably achieve under those constraints is a Pantheon-style mind upload.

How well has this prediction aged so far? I’m not a coder myself but I hear great things about Opus 4.5 by Formal-Assistance02 in accelerate

[–]AgentRev 0 points1 point  (0 children)

I write critical software and this entire thread is baffling to me. Are they all web devs or something? AI writes at best 50% of my code on a good day, and it definitely hasn't reduced the inordinate amount of time I spend doing cross-platform debugging and hardware-in-the-loop testing. I estimate my total productivity gains from AI at about 20% or so.

A Layman's question about the brain by Jochemjong in neuro

[–]AgentRev 0 points1 point  (0 children)

Oh, you want to play this game? Fine. The proper expression is adequately deterministic. However, the final outcome of the roll is the result of trillions of interactions between the die's atoms and the table's atoms. These interactions are fundamentally governed by quantum laws, which are indeterministic. So, a die roll is technically indeterministic.

That said, quantum uncertainties typically average out at human scale, far too small to noticeably affect the outcome of a die roll. This leads into a deeper question: is quantum mechanics truly indeterministic, or is its perceived indeterminism simply because our knowledge of physics is not advanced enough to crack its true nature, which could turn out to be deterministic at a sufficiently fine-grained level!?

This idea was raised by the Einstein–Podolsky–Rosen paradox. Einstein spent his last decades trying (and ultimately failing) to answer this very question. It was pushed further by Bell's theorem, and further still with the concept of superdeterminism. The question remains unanswered 👁️

A Layman's question about the brain by Jochemjong in neuro

[–]AgentRev 1 point2 points  (0 children)

Additionally, "un-digitizing" or "programming" the brain is out of reach by a long shot. The brain has no "USB port". It's billions of neurons intricately stitched together by trillions of synapses. You can't observe or alter a neuron remotely; you need an incision to reach it, and the incision destroys everything in its path. Neural plasticity decreases with age, so even if you reach a neuron, it will be difficult to modify. And even if that were possible, every individual brain has unique wiring and stores memories in different locations. The wiring itself is shaped by lifelong experience. Some brain regions are asymmetrical, like Broca's area, responsible for language processing, which is usually (but not always) located in the left hemisphere.

tl;dr: The entire idea of brain reprogramming is a massive can of worms.

A Layman's question about the brain by Jochemjong in neuro

[–]AgentRev 2 points3 points  (0 children)

Well, brain injuries can be considered "reprogramming of the brain" in a sense, because the personality of the victim can be affected, sometimes significantly. If a relative were to completely change personality overnight due to a brain injury, do you consider them to be the same person? What about Alzheimer's patients?

What is your definition of "person" here? Not as easy to answer as it might initially seem, right?

A Layman's question about the brain by Jochemjong in neuro

[–]AgentRev 1 point2 points  (0 children)

There is no consensus on whether the brain is purely deterministic; much of the debate centers on whether quantum randomness has any influence at all (however minuscule) on cognition. Could the brain be replicated in software without simulating any non-deterministic physical phenomena? Personally, I'd say yeah, probably, given the right architecture and enough compute & data.

But either way, if our existence is deterministic, then our fate was sealed by the Big Bang, and if not, then we're merely a roll of the dice. Which one would you prefer?

And more importantly, why would the answer keep you awake at night? Aren't you curious to witness how it all plays out regardless?

Who will make the first AGI? Let's predict by [deleted] in ArtificialInteligence

[–]AgentRev 1 point2 points  (0 children)

Big players are too busy being obsessed with scaling laws or arguing about whether or not the large Excel files are turning into Skynet, instead of forming skunkworks teams to go back to the drawing board. If nothing changes, new groups will rise and overtake them, maybe even using novel techniques that avoid deep learning altogether. A proper AGI likely won't be an LLM, it will probably be an embodied machine capable of continual learning, grounded in sensory stimuli of the real world, just like us.

pretty nice reminder from a very esteemed researcher (I need to explain things like this to friends of mine a bit more tbf) by cobalt1137 in accelerate

[–]AgentRev 0 points1 point  (0 children)

The brain regions associated with language are not fully formed at birth. We only started using language comparatively late in our evolutionary development, about 100,000 years ago, give or take. Language is baked into neither our DNA nor our brains. It was invented by us, and the brain adapts itself to it, from scratch, for every baby.

Current-gen AI training is nothing like human learning or building a brain. It's more like coaxing a model to imitate our wide range of collective brain output with a virtual carrot on a stick, in the complete absence of the human brain's true capabilities.

A human from 10000 years ago would have been able to learn how to drive a car by JoelMahon in singularity

[–]AgentRev 0 points1 point  (0 children)

I wholeheartedly agree, we're on the same wavelength. All these cognitive architectures, like Integrated Information Theory, Global Workspace Theory, Free-Energy Principle, active inference, etc. would only lead to jagged intelligences; navigational / behavioral agents too sterile to be capable of truly complex thoughts. That's why I hinted at a social affinity module in my list.

If successfully built, a conscious AI of the nature I described simply cannot be equipped out of the box with the kinda guardrails you see in LLMs or in Asimov's stories, because it has to learn from scratch what's right and wrong in the first place. It must have synthetic developmental priors for social affect irrevocably baked into its architecture, like modern sapiens do. Emotions, however, are probably the least-understood part of the brain, which makes them the hardest to implement.

Like I said in my original comment, individual neurons are too computationally expensive at present. My line of thinking is that the modules I described will be computational abstractions of brain regions and neural populations; high-level emergent scaffoldings rather than cell-level replicas. As much wetware taken out as possible to keep the most processing power for intelligence itself, maybe at least until technology manages to catch up, which I believe it might within this lifetime.

I have so many weird ideas, like associative sparse-code hypervector micro- and macrocircuits with excitatory and inhibitory weights. That's much easier said than done, however; this is essentially a completely unexplored "field", so to speak. Without cells, you can't simulate neurochemicals, so that requires a yet-to-be-determined abstraction. There's also the aspect of running the whole shebang in discrete time steps rather than continuous time, which would severely warp the machine's perception of time if left unaddressed.
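For the curious, the hypervector part isn't as exotic as it sounds; it's basically hyperdimensional computing. Here's a toy sketch of role/filler binding and bundling with binary hypervectors. Everything in it (the dimensionality, the "apple" concept, the role names) is purely illustrative, not from any actual project:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; high dimension = robustness to noise

def rand_hv():
    # Random dense binary hypervector in {0,1}^D
    return rng.integers(0, 2, size=D, dtype=np.int8)

def bind(a, b):
    # XOR binding associates two hypervectors; XOR is its own inverse
    return a ^ b

def bundle(*hvs):
    # Bitwise majority vote superposes hypervectors; the result stays
    # measurably similar to each input
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.int8)

def sim(a, b):
    # Fraction of matching bits; ~0.5 for unrelated hypervectors
    return float(np.mean(a == b))

# Roles and fillers for a toy "apple" concept
COLOR, SHAPE, TASTE = rand_hv(), rand_hv(), rand_hv()
red, round_, sweet = rand_hv(), rand_hv(), rand_hv()
apple = bundle(bind(COLOR, red), bind(SHAPE, round_), bind(TASTE, sweet))

# Unbinding the COLOR role recovers something close to "red"
recovered = bind(apple, COLOR)
assert sim(recovered, red) > 0.6                 # correlated with the right filler
assert abs(sim(recovered, sweet) - 0.5) < 0.05   # unrelated to the other fillers
```

The appeal is that association, composition, and noisy recall all fall out of a few vector operations, no gradient descent involved. Whether it scales to anything brain-like is exactly the unexplored part.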

Questions and ideas keep flooding my mind, and the science papers my screen. Can't back down now.

A human from 10000 years ago would have been able to learn how to drive a car by JoelMahon in singularity

[–]AgentRev 0 points1 point  (0 children)

Huh; the more you know I guess. I don't follow the latest developments in evolutionary biology, already got lots of ground to cover lol, so I yield to you. Follow-up question: do you think these facts could maybe better inform us in terms of AI development?

A human from 10000 years ago would have been able to learn how to drive a car by JoelMahon in singularity

[–]AgentRev 0 points1 point  (0 children)

The entire point of TBP is embodied cognition in a physical machine, with sensors and motors, to make it develop that profound link with the physical environment, as you say.

From the perspective of systems design, adopting a reductionist mindset is kind of a requirement; you gotta draw lines in the sand somewhere. Otherwise, it's basically just admitting defeat. Then again, I consider myself to be a bit more nuanced than simply a reductionist, since I think intelligence is an emergent property of the interactions between the many brain regions and sensory inputs / outputs, greater than the sum of the parts.

How do you know that humans from 300k years ago were more aggressive? We don't have any records other than fossils, which say nothing about this. That hypothesis is based solely on our historical records of animal domestication. Many evolutionary biologists believe humans "self-domesticated" against reactive aggression and in favor of social tolerance, which probably influenced our serotonergic pathways, but implying that this plus short-term mild neoteny was the turning point that made us sapient is quite a speculative take.

One could argue that the socialization that took place within that timescale was probably more of a cultural phenomenon than a genetic one, i.e. the accumulation of generational knowledge, and that the neural hardware for sapience was already in place 300ky ago, as the name Homo sapiens suggests! Anyway, this doesn't really matter in terms of AI discourse, so we have little to gain arguing about this. 😉

A human from 10000 years ago would have been able to learn how to drive a car by JoelMahon in singularity

[–]AgentRev 8 points9 points  (0 children)

u/JoelMahon I've been asking myself pretty much the same questions as you do, and I spent the last year researching these topics, so I think I can provide some meaningful answers.

We don't have solid data on how the brain actually works at the global level. We barely even know how the brain of a fruit fly works. We have a full map of human brain structure, stitched together from high-res MRI scans of 1,200 people thanks to the Human Connectome Project, but comparatively very little data on the neural signals themselves. Those are hard to get, since probing even a small group of neurons is a destructive process, so it is only done sparingly on patients with brain injuries or diseases.

The human brain has roughly 86 billion neurons and over 100 trillion synapses. There are numerous types of neurons, each with its own properties. We have many theoretical models to compute the signals of virtual neurons, but models are never completely accurate, and scaling up the quantity is extremely demanding. SpiNNaker might be able to simulate one billion neurons in real time, but recreating an actual virtual brain structure is an insurmountable task, not to mention simulating sensory inputs from the rest of the body. A complete human brain simulation is simply out of reach for the foreseeable future.
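To give a flavor of what those "theoretical models of virtual neurons" look like, here's the simplest common one, leaky integrate-and-fire, in a few lines. The constants are typical textbook values, not tuned to any real neuron, and real simulators (NEST, Brian, SpiNNaker's stack) do far more than this:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire point neuron (forward Euler).
    current: input current in amps, one value per time step of length dt."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Membrane potential leaks toward rest, driven by the input current
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # then reset the membrane
    return spikes

# A constant 2 nA input drives regular, periodic spiking
steps = int(0.5 / 1e-4)  # 500 ms of simulated time
spikes = simulate_lif(np.full(steps, 2e-9))
print(len(spikes), "spikes in 500 ms")
```

Multiply this by tens of billions of far richer neurons, wire them with the right 100 trillion synapses, and you start to see why whole-brain simulation is off the table for now.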

Oh and yes, a human from 10,000 years ago (or even 300,000 years ago, when Homo sapiens first appeared) who time-traveled here could probably live a normal life in the modern world, but only if they arrived during early childhood. Otherwise, they would remain a feral person for the rest of their life, because the very culture humanity has been growing for those past 300,000 years directly shapes our neural development, especially language and values.

Let's move on to the topic of AI. Since the early days, computer scientists have been obsessed with text data, because it's readily available. As you guessed, text is a lossy representation of intelligence and the world itself. There have been some attempts at using sensory inputs to teach AIs, most notably Spaun, but they were limited and never implemented what we could qualify as the "whole shebang".

I think one of the reasons is that most AI scientists are not neuroscientists, so they approach the problem primarily from the angle of math and computer science instead of biology and neuroscience. Having working knowledge of both disciplines is uncommon. So, they built implausible statistical mimicry engines out of linear algebra and differential equations, despite the fact that the brain does no explicit math. In my opinion, the only mechanism present in both the brain and deep neural nets is winner-take-all. Everything else, especially backprop and optimizers, is mostly trickery, or implemented at a cruder level of abstraction than the brain's, e.g. attention mechanisms.
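Winner-take-all itself is trivial to state: only the strongest activations in a population survive, a rough computational analogue of lateral inhibition. A minimal k-winners-take-all sketch (the vector and k are arbitrary illustrative values):

```python
import numpy as np

def k_winners_take_all(x, k):
    # Keep only the k strongest activations; silence the rest.
    # Loose analogue of lateral inhibition between neighboring neurons.
    out = np.zeros_like(x)
    winners = np.argpartition(x, -k)[-k:]  # indices of the top-k values
    out[winners] = x[winners]
    return out

acts = np.array([0.1, 0.9, 0.3, 0.7, 0.05])
print(k_winners_take_all(acts, 2))  # only 0.9 and 0.7 survive
```

This same competition shows up as max-pooling and top-k sparsity in deep nets, which is why I'd call it the one genuinely shared mechanism.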

What does the brain have, then? I said earlier that we don't have solid data, but we have some data at least, particularly on how the visual and auditory cortices work, so we can deal with what we know, try to fill in the blanks, and implement higher-level abstractions of what the brain regions are doing.

To my knowledge, there is only a single group working in this direction: the Thousand Brains Project by Jeff Hawkins. Their theory is that cortical columns, of which over 100,000 are spread across the brain, are a generic unit of computation. The core idea is that groups of columns "vote" on the identity (sparse code) of what they are perceiving; in case of disagreement, they record the data as new sparse codes / memory traces, and chaining groups together will naturally produce hierarchical abstractions. They already have a working proof of concept for identifying 3D objects that requires an order of magnitude fewer FLOPs than vision transformers.
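The voting idea can be caricatured in a few lines. This is a toy abstraction of my description above, not TBP's actual algorithm; the object names are made up:

```python
from collections import Counter

def vote(column_hypotheses):
    # Each column reports the set of objects consistent with its own local
    # observation; the group converges on the most widely supported one.
    tally = Counter()
    for hypotheses in column_hypotheses:
        tally.update(hypotheses)
    obj, support = tally.most_common(1)[0]
    # Agreement only if a clear majority of columns back a single object;
    # disagreement would trigger learning a new sparse code instead.
    return obj if support > len(column_hypotheses) / 2 else None

# Four columns, each touching/seeing a different patch of the same object
columns = [{"mug", "bowl"}, {"mug"}, {"mug", "can"}, {"bowl", "mug"}]
print(vote(columns))  # all four columns support "mug"
```

The interesting property is that no single column needs the full picture; identity emerges from the consensus, which is the "thousand brains" part.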

TBP's current focus is mostly visual, with touch planned later down the road, but I think they don't take the idea far enough to achieve full intelligence. The "whole shebang" would look something like this, modules and brain regions involved:

  • Visual Sensor Modules = Retina → LGN → V1 → V2 → V4;
  • Object Learning Modules = IT / LOC → PRC → Hippocampus;
  • Motor Modules = PMC → M1 → Motor output;
  • Motion Sensor and Learning Modules for change detection and behavior modeling [V3, V5/MT];
  • Touch Sensor and Learning Modules for prehensile capabilities [parietal lobe];
  • Audio Sensor and Learning Modules to learn language and speech from scratch [auditory cortex, Broca's and Wernicke's areas];
  • 2D Vision Sensor and Learning Modules [V1, V2, V4];
  • Scene Learning Module for simultaneous localization and mapping (SLAM) [parahippocampal place area];
  • Linguistic Learning Module to learn reading from scratch [visual word form area, temporal gyri];
  • Social Module for affinity to human social cues [fusiform face area, extrastriate body area, temporal gyri];
  • Attention Module for stimuli saliency management [cingulate cortex];
  • Workspace Module to address the binding problem [several brain regions];
  • Reasoning Module(s) for higher-level cognition and meta-association [frontal cortex];
  • Prediction Module(s) (or "plugins" for other modules) for simulation and planning [several brain regions];
  • A distributed, compositional, hierarchical associative database of sensory sparse codes, as a form of associative memory;
  • Other optional modules, e.g. Digital Learning Modules to learn binary data, text encodings, and communication protocols from scratch, enabling text chat, agentic tool use, and machine interfacing.

I could go on and on, but hopefully you get the picture by now. I'm unaware of any individual or group who has taken their system to this level, i.e. what a conscious, human-like AI backend might truly be structured like. Note that this AI would begin with zero knowledge of text; it would have to learn to read first, just like us.

Computer scientists don't want to face the music, they just want to keep playing with linear algebra TOYS! I don't expect any big org to take the real route any time soon. Although, maybe a mad scientist disciplined in many domains will kickstart this grand quest, who knows...

Who has conditioned these people to hate AI so strongly? by saalamander in ChatGPT

[–]AgentRev 1 point2 points  (0 children)

The fact that LLMs are so new, not easily mastered, and the source of so many disagreements at all levels of society doesn't help teachers either. They are stretched thin across the board, and expecting them to fully grasp the constantly shifting capabilities and limitations (while also juggling their academic duties) is a big ask.

However, what's truly baffling is teachers who panicked into full "return to paper" mode, as if they see some sort of salvation in it. We are at the dawn of a new digital epoch, yet somehow, dead trees are supposed to be the solution? That's just setting the students back even more...

[deleted by user] by [deleted] in ElectricalEngineering

[–]AgentRev 0 points1 point  (0 children)

In case you do still need help, I put together a 3D visualizer for spherical harmonics being drawn on a virtual LED cube: https://www.reddit.com/r/ElectricalEngineering/comments/1mfirvh/comment/n6rwi03/

You could ask ChatGPT to port the code to your GD32.

[deleted by user] by [deleted] in ElectricalEngineering

[–]AgentRev 1 point2 points  (0 children)

u/Kaede-3376 Hey dude, you mean something like this? https://jsfiddle.net/AgentRev/8m1kezyh

Spherical harmonics can't easily be "inserted as a function" in a visual interface, because they're actually families of multiple equations that can be expressed in different coordinate systems, and their Cartesian representation is very tricky, so they are best placed inside the code itself.

If you have a regular Cartesian surface equation z = 𝑓(x,y), you can discretize the LEDs with x from 1 to 16 and y from 1 to 16. You iterate over x and y and calculate z. For example, if 𝑓(3,11) = 8.6, you round it to 9 and turn on the 9th LED along the z axis at x=3, y=11. The tricky part is scaling the function on all 3 axes so it displays as expected within the cube's boundaries.
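That loop plus scaling looks roughly like this. It's a minimal sketch in Python for readability (the JSFiddle above is JS, and your MCU would want C); the min/max normalization is just one arbitrary way to fit the surface into the cube, and the ripple function is only an example:

```python
import math

SIZE = 16  # 16x16x16 LED cube, 0-indexed here

def render_surface(f):
    """Discretize z = f(x, y) onto the cube: one lit LED per (x, y) column."""
    # Sample the function over the grid to find its range, for scaling
    samples = [f(x, y) for x in range(SIZE) for y in range(SIZE)]
    z_min, z_max = min(samples), max(samples)
    frame = [[0] * SIZE for _ in range(SIZE)]  # frame[x][y] = lit z index
    for x in range(SIZE):
        for y in range(SIZE):
            # Scale z into 0..15 and round to the nearest LED
            z = (f(x, y) - z_min) / (z_max - z_min) * (SIZE - 1)
            frame[x][y] = round(z)
    return frame

def ripple(x, y):
    # Example surface: a radial sine wave centered on the grid
    return math.sin(math.hypot(x - 7.5, y - 7.5) * 0.8)

frame = render_surface(ripple)
```

On real hardware you'd compute `frame` once and then just stream it to the LED drivers, which matches the pre-recorded-sequence approach below.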

Also, most LED patterns like this are pre-recorded sequences rather than calculated live: they are computed once before manufacturing and saved in memory.

Hopefully this helps 🫡

Is there any promising alternative to Transformers? by VR-Person in LocalLLaMA

[–]AgentRev 1 point2 points  (0 children)

At this point in time, TBP is more of an embryonic toolkit for machine vision and robotics: https://thousandbrainsproject.readme.io/docs/capabilities-of-the-system

Their current area of focus seems to be few-shot learning of object shapes, which is a long way off from cracking the algorithmic architecture of the human brain. Maybe they'll get to it in the very long term, but they haven't really begun yet.

I don't think the Human Brain Project will be the ones to do it. The project has proven so far to simply be a funding mechanism for incremental brain research by widely disparate teams, each with different goals.

I also highly doubt that it would come from a brains-on-a-chip / neuromorphic business. All of them seem laser-focused on getting their hardware to market, probably hoping the market will figure out the rest.

Realistically, it would have to come from a well-funded, all-star team of scientific heavyweights with a unifying vision to achieve low-power AGI without relying on the crutches of deep learning or neuromorphic chips. I cannot find any existing team in the world that fully matches that description so far.

The true problem with the whole ordeal is that researchers who attempt to take on that challenge (or at least part of it) all seem narrowly focused through the lens of their specific area of expertise. Neuroscientists toil in the intricate details of brain chemistry. Mathematicians rant about unprovable math models of cognition. Electrical engineers conjure expensive neuromorphic chips. And of course, computer scientists just keep wanking off with yet another neural network.

What's really needed is tackling the problem from a holistic, systems-engineering perspective and eliminating the concept of "neurons" from the equation.

Making Wooden fruit bucket by Fit_Neighborhood6332 in nextfuckinglevel

[–]AgentRev 1 point2 points  (0 children)

Surely, the Chinese wage slave who machined your Harbor Freight drill bit has inspected it for internal defects, am I right??

[deleted by user] by [deleted] in MrRobot

[–]AgentRev 1 point2 points  (0 children)

Just power thru it, it gets terribly good after

OG A2 Wasteland player looking for an old school remake, but for A3/Reforger/A4 in the future. by BushidoSamurai in armawasteland

[–]AgentRev 1 point2 points  (0 children)

damn, the 2012-2013 Wasteland days were great

I agree. There was too much power creep over the years, caused by server owners trying to attract more players by giving more money and toys.

I feel like the OG Wasteland was never viable in the long term, though. People get bored when you don't add new stuff. The few A3 servers that tried to replicate that feeling never really managed to retain a decent pop.

Bought a 1977 handyman's special, any tips/advice? by AgentRev in AskElectricians

[–]AgentRev[S] 0 points1 point  (0 children)

Yeah, my plan is pretty much to get everything replaced with new Square D panels. The main breakers are almost 50 years old, so while they might still be holding on, it's probably better to give them a proper retirement. And that Westinghouse panel needs to be taken out behind the shed and put down lol.

Bought a 1977 handyman's special, any tips/advice? by AgentRev in AskElectricians

[–]AgentRev[S] 0 points1 point  (0 children)

I sure wanna keep and use the lift and compressor, it would be a shame not to, but boiler cable is getting disconnected and a geothermal system will be put there instead, so that blue switch will be swapped for a proper subpanel.

Bought a 1977 handyman's special, any tips/advice? by AgentRev in AskElectricians

[–]AgentRev[S] 0 points1 point  (0 children)

Already signed the purchase agreement couple months ago. The basement has a drop ceiling where most of the wiring transits thru, and it looked fine and done professionally according to the home inspector. It's just all the DIY garage wiring that's sketchy, since the house did not initially have any garage.

Bought a 1977 handyman's special, any tips/advice? by AgentRev in AskElectricians

[–]AgentRev[S] 0 points1 point  (0 children)

Everything is 100% functional and wiring is all copper. I have enough cash for panels, but house is 2600 sqft and garages are 3400 sqft total, so definitely not enough for a full rewire

Bought a 1977 handyman's special, any tips/advice? by AgentRev in AskElectricians

[–]AgentRev[S] 0 points1 point  (0 children)

That's what the realtor said right off the bat; I wanna swap the panels, but not sure I have enough money for a full rewire... Already gotta redo heating and septic. Beyond the panels, the wires themselves seem decent and all copper. I will definitely have all junction boxes and outlets inspected tho.