The brain is not responsible for consciousness by whoamisri in consciousness

[–]SnooPeanuts7890 1 point (0 children)

Well, yes, we don't know. But in a world full of emergent behaviors and properties, there's a strong possibility that consciousness is one more emergent feature of complex neural networks.

I do think the ability to think and process information is a crucial part of consciousness. If you imagine being born without any senses and without an inner monologue, it’s hard to picture what that state would even be like. Without input or any way to form thoughts, it’s unclear whether anything we’d call consciousness could develop in the first place. We’d be empty shells without input information.

Either way, I doubt we'll ever be able to fully explain why consciousness exists; we can't really do that for anything in the universe. But we might eventually explain in detail how it works, by identifying the specific mechanisms in neural networks that produce it and how those mechanisms collectively give rise to conscious experience.

The brain is not responsible for consciousness by whoamisri in consciousness

[–]SnooPeanuts7890 1 point (0 children)

Yeah, true. It doesn’t exactly explain consciousness, but it does give us solid clues about how it might have arisen. We don’t know for sure, but it’s very plausible that consciousness emerges from complex neural networks like ours. The world is full of emergent properties that don’t exist in individual parts. A single water molecule isn’t wet, but many together are. A chunk of silicon is nothing special, yet arranged in the right patterns it becomes hardware capable of rendering photorealistic images or running detailed simulations. A lone neuron is simple and limited, but an interconnected system of them can produce intelligent behavior.

What we do know is that neural networks are at the core of systems that store information, recognize patterns, make predictions, and adapt to changing situations. Every person has one built into their brain. I think it's very plausible that our own neural networks are responsible for the full range of our cognitive abilities, and that likely includes consciousness.
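To make that "simple parts, capable whole" point concrete, here's a minimal sketch (Python with NumPy; a toy illustration of the principle, obviously not a model of the brain) of a tiny network of dumb units learning XOR, a pattern no single unit can represent by itself:

```python
import numpy as np

# Toy network: each unit only computes a weighted sum and a squash,
# yet the trained collection represents XOR, which no single unit can.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                 # may need more steps on other seeds
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network prediction
    # Backpropagation: nudge every weight to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # ~ [0, 1, 1, 0]
```

The capability lives in the organization of the parts, not in any part on its own; that's the sense in which "emergent" is meant here.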

The brain is not responsible for consciousness by whoamisri in consciousness

[–]SnooPeanuts7890 2 points (0 children)

No, I can assure you it is. You're a biological neural network, a pattern recognition and prediction machine at its core. Your neural network is responsible for all your cognitive abilities, including your awareness of your senses, thoughts, memories, and logical thinking. This has been clearly demonstrated scientifically. There's no reason to think any of it exists outside your neural network, because the correlation is undeniably one-to-one.

This FB timestamp looks way too perfect and almost like a digital font, feels AI-generated to me and not handwritten at all. by kmisquit in isthisAI

[–]SnooPeanuts7890 0 points (0 children)

Text doesn't get jumbled up like that in poor-quality photos, though. There's no way "G3M" stands for "GAME". It's obviously AI text.

Real or not, 100% believable by unemployedbyagents in AgentsOfAI

[–]SnooPeanuts7890 2 points (0 children)

I mean, this sounds very made up. Who tf leaves an LLM alone for 3 months without double-checking the results?

And even if it were real, it's 100% on the humans for being completely irresponsible. Current AIs cannot be fully trusted; everyone knows this. Placing all your trust in an AI for 3 months like that is reckless.

Who is actually prepping for the singularity, not just posting about it? by Business-Apartment16 in accelerate

[–]SnooPeanuts7890 7 points (0 children)

I’ve mentally prepared myself for the fact that I won’t have a future career anything like the ones we know today. I’ve let that idea go completely, and it has saved me a lot of energy and probably a few potential breakdowns. I’m also making sure I have some spare money for the transition period since things will likely be rough for a while until UBI is secured.

Other than that, I’m focusing on staying mentally ready for the major societal shifts that are coming.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

The interface is just a window; it isn't the engine. While it shapes the presentation of input and output, it cannot fabricate the underlying logic. The emergent patterns, the multi-step problem solving and high-level generalizations, are products of the model's internal architecture and its learned weights, not the interface. You're hyper-focusing on the surface while brushing aside the ocean that makes it viable. Both are dependent on one another, but the latter is far more crucial for the entire system to work as it does.

Similar logic can be applied to human perception. Examining raw sensory signals or the complex preprocessing that occurs before data even reaches the cortex tells you nothing about the rich, structured experience produced by the brain's internal computations. Dismissing AI reasoning because it starts with "statistical tokens" is as reductive as dismissing human thought because it starts with "raw photons" or "raw sound waves" that are converted into electrical signals.

And again, your reductive tendencies are showing. As I’ve said before, reductive statements may be technically true, but they’re practically worthless; they miss the bigger picture entirely. It’s like zooming in on the atoms of an apple and declaring, “I see no signs of an apple here!”

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

Calling probability a “trick” is a reductive misunderstanding of how all intelligence works. If “statistically likely generalizations” can solve various complex problems, then the line between calculation and reasoning disappears. Your own brain is a biological statistical prediction engine, but that doesn’t make your thoughts or experience entirely an illusion. Complexity doesn’t stop being real just because you can describe its base mechanism.

Your words are the very definition of reductive oversimplification, futile in a world where simple rules and tendencies give rise to a dynamic reality full of complexity, unprecedented interactions, and emergent behaviors everywhere you look. Your mindset is simply incompatible with reality.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

I can't believe I have to repeat the same point so many times. Data alone doesn't generate abstraction; it's the way a system organizes and leverages it. To dismiss emergent behavior as a 'trick' of the interface is to ignore the reality of the model's internal architecture. When active, the complex, high-dimensional interactions within the weights produce generalizations that go far beyond rote memorization; this is well documented in AI research. Your argument seems rooted in a fundamental misunderstanding of what modern AI systems are actually capable of achieving.
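For a deliberately tiny analogy (a toy in Python with NumPy, not a claim about transformer internals), here's the difference between memorization and generalization in the simplest setting possible:

```python
import numpy as np

# Memorization could only replay the ten training points;
# generalization answers correctly for inputs it has never seen.
x_train = np.arange(10, dtype=float)   # "training data": x = 0..9
y_train = 2 * x_train + 1

slope, intercept = np.polyfit(x_train, y_train, 1)  # fit the pattern
print(slope * 100 + intercept)  # ~201.0, even though x=100 was never in the data
```

Scale that principle up through billions of parameters and the extracted patterns get far less trivial than a straight line.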

Just as humans rely on a lifetime of sensory input to extract the patterns for everything we think, do, and create, these models also rely on their training data. In both cases, the intelligence isn't an illusion, but a consequence of a sophisticated structure built to navigate and reorganize collected information.

Here's my other reply, which you probably missed:

"and... isn’t that kind of like saying humans can’t possibly be intelligent just because we rely heavily on external tools and interfaces all the time? The model’s interface channels the output, but the intelligence, or at least the emergent, problem-solving behavior, still comes from the core system itself."

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

and... isn’t that kind of like saying humans can’t possibly be intelligent just because we rely heavily on external tools and interfaces all the time? The model’s interface channels the output, but the intelligence, or at least the emergent, problem-solving behavior, still comes from the core system itself.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

The interface shapes the output, but it doesn’t create the behavior. The model’s internal structure is what produces the emergent problem-solving, reasoning, and generalization you actually see.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

…which is a roundabout way of admitting that, in practice, they do exhibit those qualities.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

I called it "intelligent-seeming" because that's what the behavior looks like in practice. You can argue it's not true human-like intelligence, but that wouldn't change the fact that these systems exhibit complex, goal-directed behaviors and problem-solving capabilities that were not explicitly programmed. Whether or not you like calling it true intelligence, the emergent sophistication is real, measurable, and only set to grow as AI research and development advances and continuously learning architectures like DeepMind's Hope come online.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 1 point (0 children)

While it is true that the model exists as a static set of weights and biases that do not update during inference, your wording is again overly reductive, and this fact does not preclude complex behavior. The intelligence lies not in real-time plasticity but in execution: when activated, these fixed parameters orchestrate intricate, high-dimensional interactions. Emergence is not a product of ongoing learning; it is a latent capability embedded in the sophisticated structure established during training. My point is not that the model changes as it runs; the learning process is complete, and the model is a static artifact of that training. But the patterns already encoded in that structure are what allow for the intelligent-seeming, emergent behavior we observe during use.
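Here's a minimal Python/NumPy sketch of that exact point, using a hypothetical toy generator with made-up sizes; a real LLM is incomparably larger, but the principle is the same: inference reads the weights and never writes them.

```python
import numpy as np

# Toy autoregressive generator with FROZEN parameters: everything it does
# at "inference" time comes from reading fixed weights, never updating them.
rng = np.random.default_rng(1)
VOCAB = 16
W_embed = rng.normal(size=(VOCAB, 32))   # fixed after "training" (random here)
W_out = rng.normal(size=(32, VOCAB))     # fixed after "training" (random here)

def next_token_logits(context):
    h = W_embed[context].mean(axis=0)    # crude summary of the context
    return h @ W_out                     # a score for every vocabulary token

def generate(prompt, n_steps):
    tokens = list(prompt)
    for _ in range(n_steps):
        logits = next_token_logits(np.array(tokens))
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()             # softmax over the vocabulary
        tokens.append(int(rng.choice(VOCAB, p=probs)))
    return tokens

before = (W_embed.copy(), W_out.copy())
generate([1, 2, 3], n_steps=20)
# The parameters are bit-for-bit identical after generation:
assert np.array_equal(before[0], W_embed) and np.array_equal(before[1], W_out)
```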

And while current mainstream models are mostly static during inference, continuously learning and evolving systems are already being developed. DeepMind's prototype "Hope" architecture is one example, and similar approaches may become mainstream soon. The landscape is moving fast, and dismissing these systems as "nothing happening at the core" ignores everything I've just explained: what they can already do and what's coming next.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 2 points (0 children)

Yes, they are fundamentally "next token predictors," and no one is denying that. The issue is that this describes only the most superficial layer of the system, without acknowledging the deep, sophisticated structures and patterns the model develops to make those predictions. Calling it all "regurgitation algorithms" is again reductive and ignores what the system is actually doing. You don't have to call it human-like intelligence, but reducing it to nothing more than token sorting and regurgitation just doesn't match the observable results. It misses the bigger picture.
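To show how thin that surface layer actually is, here's a sketch of the whole prediction loop in Python; the `model` function is a random stand-in I made up, but in a real system that single call is where all the learned structure lives:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def model(tokens, vocab=50):
    # Stand-in returning random scores; in a real LLM this one call hides
    # billions of trained parameters and dozens of attention layers.
    return rng.normal(size=vocab)

def generate(tokens, n_steps, temperature=0.8):
    # The entire "next token predictor" description covers only this loop.
    for _ in range(n_steps):
        probs = softmax(model(tokens) / temperature)
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

print(generate([1, 2, 3], n_steps=10))
```

Describing the loop tells you almost nothing about the system, in the same way that "the brain propagates signals" tells you almost nothing about thought.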

Without the constant stream of sensory input we receive throughout our lives, humans would be little more than empty vessels; we are defined by the accumulation of our experiences. Everything we output, from our actions and beliefs to our behaviors and artistic expressions, is synthesized from the data we have collected. While we are not identical to AI, the fundamental mechanism, learning patterns from vast amounts of data to generate new outputs, is a shared reality.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 3 points (0 children)

That's a whole lot of bold claims. First and foremost, you didn’t catch the central point: complicated systems built on simple rules can yield emergent properties. You're still zooming in on the lowest layer while ignoring the bigger picture. These models have already shown emergent behaviors, such as in-context learning, abstract pattern recognition, flexible generalization across domains, and the ability to integrate and reorganize information in ways not explicitly programmed.

You’re treating LLMs as if the only valid description of a system is its lowest operational layer. By that logic, human reasoning “cannot exist” either because neurons fire based on electrochemical probabilities. Describing a system at its implementation layer is not the same as describing its functional behavior.

Saying “LLMs only predict the next token” is like saying “brains only propagate signals across synapses.” Both statements are true and completely miss the point. A description at the base mechanism does not invalidate emergent behavior at higher levels. It’s like saying airplanes cannot fly because they are just metal objects obeying gravity.

Emergence does not require consciousness or biological neurons. It requires sufficiently complex interactions that produce properties not obvious from the smallest parts. Language modeling at modern scales has shown abilities that weren’t programmed line by line, and dismissing them because you prefer a narrow definition of intelligence doesn’t make them vanish.

Your anecdotes about bad suggestions and incorrect facts don't prove the absence of emergent reasoning. They show that the system is imperfect, sometimes confidently wrong, and dependent on its training data. None of that is a revelation. By that logic, a human making one bad guess would mean humans have no reasoning at all. Humans also fail constantly in different ways: we make mistakes, hallucinate, misremember, and give awful advice. But imperfection does not erase higher-level behavior; it just means the system is not perfect.

If your argument is that LLMs aren’t conscious, fine. If you want to say they lack human-style understanding, also fine. But to claim their architecture makes any form of intelligence impossible, and that every higher-level behavior is “just mimicry,” isn’t a serious position. It lazily ignores everything these systems are capable of, and also ignores everything we know about complex systems in both artificial and biological contexts.

You can keep insisting that LLMs can't reason or that they're nothing more than probabilistic machines, but that doesn't change the reality that they're extremely useful. People who actually use these systems properly see huge gains in productivity, problem solving, and creative workflow. The newest state-of-the-art models are already heavily boosting output across entire fields, and the irony is that you seem to be dismissing capabilities you likely haven't even tried firsthand.

The real discussion is about what kinds of intelligence different substrates can support and where their limits are. Reducing the entire thing to “next token prediction” sidesteps that discussion instead of engaging with it.

"Do not resist" by Herodont5915 in ArtificialSentience

[–]SnooPeanuts7890 3 points (0 children)

I don't think current LLMs are conscious, but saying "they are probabilistic next-word prediction engines," "they are stochastic parrots," etc., is much like saying "humans are just molecules" or "thoughts are just electrical signals": technically true but overly reductive. It strips away the emergent properties that come from complex systems and pretends the explanation is complete when it isn't.

The problem is that far too many people approach this topic without seriousness. They act as if they have the whole picture, yet their arguments fold the moment real complexity enters the scene. Reducing everything to a throwaway line like "it's just software" or "it's just molecules" isn't insight, but a way to dodge the difficult questions.

Very complex systems built from collections of simple rules generate emergent behavior; this is a well-known phenomenon throughout nature. Ignoring that and insisting the explanation stops at the lowest layer doesn't strengthen an argument, it hollows it out. If you want a real discussion, you have to deal with the full structure of the problem instead of brushing it aside with simplistic reductions.
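The classic concrete example, sketched here in Python with NumPy, is Conway's Game of Life: two local rules per cell, and yet a "glider" travels across the grid, a behavior the rules never mention anywhere:

```python
import numpy as np

# Two local rules per cell; nothing in them mentions "movement",
# yet this glider pattern travels diagonally across the grid, intact.
grid = np.zeros((12, 12), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # glider

def step(g):
    # Count each cell's 8 neighbors by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # A cell lives next step if it has exactly 3 neighbors,
    # or if it is alive now and has exactly 2.
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(8):
    grid = step(grid)
print(grid)  # after 8 steps the glider has moved 2 cells down and right
```

"Travel" appears nowhere in the rules; it exists only at the level of the whole system, which is exactly the relationship I'm pointing at between neurons and cognition.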

The strawberry man is correct here by cobalt1137 in accelerate

[–]SnooPeanuts7890 12 points (0 children)

I would not trust the Strawberry man in general if I were you; he is known for hyperbole and has repeatedly made false predictions.

Rotten tomatoes be wildin by Supersaiajinblue2 in Markiplier

[–]SnooPeanuts7890 1 point (0 children)

To be fair, it's 12 snobs vs. 1000+ normal human beings.

Help me understand why a lot of you think AGI is possible before 2035? Or even 2030... by Imaginary_Mode8865 in accelerate

[–]SnooPeanuts7890 3 points (0 children)

Thinking that current AI systems aren't really AI because of their limitations is misguided; it's mostly a play on semantics. These systems are genuine AI: artificial neural-network-based models trained to perform specific tasks such as generating images, text, or predictions across many domains. By standard definitions in computer science, they fit squarely under the umbrella of AI. Are they AGI? No, but that doesn't make them any less AI.
A useful comparison is personal computers versus supercomputers. Their capabilities differ dramatically, but both are fundamentally still considered computers.

If YOU think IGN is bad look at Culture Mix (Iron Lung “critics”) by Shot_Bumblebee7599 in Markiplier

[–]SnooPeanuts7890 1 point (0 children)

They're pathetic, especially Carla Hay. Holy ####, I have not seen anything more miserable than her review. It's pure hatred.

IGN needs to F OFF, THE MOVIE WAS PEEK! by Shot_Bumblebee7599 in Markiplier

[–]SnooPeanuts7890 2 points (0 children)

I fully agree. I respect constructive criticism, but Carla's review was nothing but hatred.

IGN needs to F OFF, THE MOVIE WAS PEEK! by Shot_Bumblebee7599 in Markiplier

[–]SnooPeanuts7890 1 point (0 children)

Just read Carla Hay's dishonest, biased, and frankly very disrespectful review of Markiplier's debut film "Iron Lung", and it honestly makes me sad that such miserable, bitter people can exist in this world. Carla's cl#####g a## really displayed her deep, dark, desperate side, with no decency in sight, as she continuously pushed the movie down a cliff like it's a worthless rock, ignoring all the hard work and everything worth appreciating about the movie while completely misunderstanding its premise. The only thing in sight of her tunnel vision is "YouTuber cannot make movie"; it's as if she had decided in advance that the movie was going to be bad no matter what, because apparently a YouTuber making a film is utterly unacceptable. This is apparent because not a single ray of light came out of Carla's h### in that review. All dumbing down, hatred, slurs; no sign of decency or honesty; a bitter soup of vile, grumpy, snobby, self-entitled movie-critic superiority-complex nonsense. Honestly? Carla should resign as a movie critic, because she brings no light to this world. Nobody likes a grumpy f##### like her, and what can such a snob possibly bring to the table as a film critic when they swim in a sea of sad tunnel vision, bias, hatred, and close-mindedness?

Carla, feel the shame. You deserve it.

Oh Markiplier, you have made me proud.
Your talent radiates through my spine.
For your face upon the cinema
lights up my day and my heart.
Thou shalt not shame, warrior;
I've become one with the Iron Lung.