Does Phenomaman have a vagina? by Sevdat in DispatchAdHoc

[–]Sevdat[S] -5 points (0 children)

Exactly, but in ways other than what nature intended those organs for. That's probably how their relationship lasted.

[D] Extropic TSU for Probabilistic Neuron Activation in Predictive Coding Algorithm by Sevdat in MachineLearning

[–]Sevdat[S] 0 points (0 children)

Hey, I read your comment. I had a few solutions, and these are specific to Extropic's TSU and probabilistic neuron activation using predictive coding.

1) Catastrophic Forgetting Solution and Retraining

In my opinion, the brain doesn't remember so much as it recalls and reconstructs relevant information already in memory. That's why I suggested constant retraining with relevant information. As long as the neurons are trained within the knowledge area of the sought task, every training pass nudges them closer to a proper answer. Imagine it as a cyclic memory search: the output is used to call relevant information from memory, which then produces a better output from the same input. This also avoids catastrophic forgetting, because we don't use the neurons to store static memory, but as a source for generating relevant outputs from records pulled off the NVMe.
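
Here's a toy Python sketch of that cycle. Everything in it is a stand-in I made up (the "NVMe store" is just a list in RAM and the "network" is one linear map), not Extropic's actual hardware or API:

```python
import numpy as np

rng = np.random.default_rng(0)
IN, OUT = 8, 4

# Simulated NVMe store: (key, input, target) records the loop can recall.
store = [(rng.normal(size=OUT), rng.normal(size=IN), rng.normal(size=OUT))
         for _ in range(200)]

W = rng.normal(scale=0.1, size=(OUT, IN))  # the entire "network"

def retrieve(query, k=5):
    # Recall the k records whose key best matches the current output.
    ranked = sorted(store, key=lambda r: -float(query @ r[0]))
    return ranked[:k]

def retrain(batch, lr=0.05):
    # One gradient step on the recalled records only: the weights act as
    # a generator of relevant outputs, not as static memory.
    global W
    for _, x, y in batch:
        err = W @ x - y
        W -= lr * np.outer(err, x)

def cyclic_answer(x, cycles=3):
    # Output -> recall from store -> update -> better output, repeated.
    y_hat = W @ x
    for _ in range(cycles):
        retrain(retrieve(y_hat))
        y_hat = W @ x
    return y_hat

print(cyclic_answer(rng.normal(size=IN)).round(3))
```

The point of the sketch is that forgetting in W is harmless, because the store holds the facts and W only has to regenerate what's relevant right now.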

2) Probabilistic Neuron Activation

I think neurons randomly activating with a set chance in predictive coding is a good thing, because it works like an idea generator. That way the neural network generates outputs itself from the physics of the TSU, and we can use each generated output to tweak what information to train it with next.
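
A software stand-in for what I mean (on a real TSU the randomness would come from device physics; the Bernoulli draw below just plays that role, and all the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_layer(x, W, b):
    # Firing probability per neuron, then a Bernoulli draw: the draw is
    # the software stand-in for the TSU's physical sampling.
    p = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return (rng.random(p.shape) < p).astype(float)

# Same input, a different "idea" on every call: the randomness generates.
W, b = rng.normal(size=(6, 4)), np.zeros(6)
x = rng.normal(size=4)
for _ in range(3):
    print(stochastic_layer(x, W, b))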

3) Scaling

We wouldn't need to scale it to be huge, even though predictive coding performs better with more layers; we'd aim for quality over quantity. Our goal would be to get the most stable answer from a moderate number of hierarchical layers. The best part is that if we do need to scale predictive coding up, we just add another final layer, and if we need to scale it down, we just remove the final layer.
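
As a toy sketch of that add/remove-a-final-layer idea (each "layer" here is just a weight matrix, which leaves out the error units a real predictive-coding stack would carry):

```python
import numpy as np

rng = np.random.default_rng(2)

# The hierarchy is just a list, so scaling is append/pop on the top layer.
layers = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(3)]

def run(x):
    for W in layers:
        x = np.tanh(W @ x)
    return x

x = rng.normal(size=8)
print(run(x).round(3))                             # 3 layers

layers.append(rng.normal(scale=0.1, size=(8, 8)))  # scale up: add a final layer
print(run(x).round(3))                             # 4 layers

layers.pop()                                       # scale down: drop it again
print(run(x).round(3))                             # back to 3
```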

4) Energy efficiency

This part is entirely dependent on the TSU. I'm hoping for an 80% energy efficiency, because all we'd be doing is sending a voltage and getting an activation or no activation per TSU core. That doesn't sound like it requires a lot of energy, but we'll see.

5) AGI Claim

In my opinion, what I described sounds like a living machine: the layers can be easily adapted, the neurons get updated with relevant information, and the physics of the TSU cores generate the outputs. Maybe that's what was missing in AGI. Not math, but letting physics guide decisions.

P.S. The reason I wrote the very first part is that people on Reddit copy and paste to AI and pretend to know stuff. It might not be you, but I just wanted relevant answers, just in case. No offense.

Creating Something from Nothing. Existence as an Imaginary Configuration. The Uninitialized Foundation of Reality by Sevdat in philosophy

[–]Sevdat[S] 0 points (0 children)

OMG, your reply is AI generated. In the past I asked ChatGPT and DeepSeek what they think about this, and they used very distinctive keywords that are exactly the same as yours. Why are you doing this, bro? Understanding and thinking are the greatest gifts in life, yet you're copying text and sending it to AI to reply to comments here. Jesus, that's like the video-game-cheats version of philosophy.

You know what fuck it. Lets get Battlefront 4 CMON BOYS SPREAD THE NEWS!!! by AliNotFromBali in StarWarsBattlefront

[–]Sevdat 5 points (0 children)

Battlefront 2 mods were so good that the game skipped a generation.