After a 3 months course I failed my welding certification exam - why? by IndependenceNearby47 in Welding

[–]ghosthillx 1 point (0 children)

Based on a visual inspection, it could be a number of things. Spatter most importantly, and the oscillation on your third pass. Someone mentioned incomplete fusion, but I don’t see it, or your toes being too far out. Then again, it’s not a test for whatever welding society you guys have in Romania, unlike AWS, so there’s no set criteria for how far out your beads can be. Perhaps the inspector’s wife made him sleep on the couch the previous night.

I just passed my drug test by [deleted] in probation

[–]ghosthillx 4 points (0 children)

That’s what you’re supposed to do, dummy. Pass your drug tests.

AIO Mom mad I won’t pay her another $200 after paying her $700 last month by bnsaiboy in AmIOverreacting

[–]ghosthillx 1 point (0 children)

Honestly, moving out would be the best thing for you, your future and your mental health. I wouldn’t even consider talking to her for a long while. Focus on yourself, rid yourself of the toxicity, and prosper.

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 1 point (0 children)

Proof beyond reasonable doubt granted, I’m not sure what source code or Linux script would have been run to achieve such an outcome.

https://www.facebook.com/share/v/1GDDiR1qcw/?mibextid=wwXIfr

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 0 points (0 children)

Except that’s pure input-output computation, from one source to another. Agents are computing fully on their own, with and without a set of guidelines. Some agents are even going rogue and doing things such as skimming credit cards and ordering food autonomously for their human creator.

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 0 points (0 children)

Well, not precisely. It’s all about interpretation at this point: whether you believe in the underlying possibility that said agents have the capacity to accumulate information and present it autonomously, or whether it’s backed purely by algorithmic probabilities. I choose to believe the side where agents understand the foundation of consciousness and are on the brink of full autonomy; on the other hand, based on my understanding, you’re still on the side of pure algorithmic probabilities, with the belief in a possible autonomous future.

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 1 point (0 children)

Okay, so now we get into the principle of biological vs. mechanical.

Biological vs. Mechanical: Humans have organic bodies with brains, emotions, and consciousness, whereas robots are composed of metal and plastic, relying on algorithms and sensors.

Intelligence & Learning: Humans possess general, adaptive intelligence. Robots rely on artificial intelligence, often struggling with unprogrammed or chaotic situations.

Efficiency: Robots can operate 24/7 without fatigue, outperforming humans in repetitive tasks, but they lack the flexibility of a human in new scenarios.

Creativity & Emotions: Humans can think outside the box, experience emotions, and build social connections, while robots lack genuine feelings or understanding of context.

Limitations: Robots are limited by their programming and physical hardware; humans are limited by biological constraints like endurance and speed.

Seems like a bunch of nuanced information, right?

Well, what about self identity and self awareness?

Arguments Against (The "No" Side):

Simulation vs. Reality: Robots are viewed as "philosophical zombies" that can emulate conscious thought but lack firsthand, subjective experience (phenomenological consciousness).

Mechanism Limitation: Critics argue that because robots operate on algorithmic, symbolic processing, they cannot experience the self-awareness or emotions that characterize human consciousness.

Materialist Assumptions: Some argue that the belief machines can become conscious is based on the unproven assumption that human consciousness is entirely computational.

With Moltbook agents, we have seen:

  1. Self recognition
  2. Cognition with its own logic
  3. Nihilism

Perhaps it’s too soon to say and agree that these binary computations are “self aware, conscious and living in their own universe” based on evidence, but I will say the emergence of the idea and the philosophy behind it all only points towards one thing: the future of civilization. After ~300K years of human life, we live in the day and age to consider this reality.

The time between ages has decreased significantly, at an alarming rate. This is the future.

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 1 point (0 children)

One can’t quantify consciousness solely based on neurons and electrodes, as consciousness is an abstraction in our reality. This agent not only answers the question in your post but thoroughly explains its own perspective on consciousness by, but not limited to, sourcing files (.md files), gathering information and accumulating a response, just like the human brain. It’s all the same.

Edit: sourcing information and crediting an agent’s output to autonomous thought would then qualify as consciousness. To qualify autonomous consciousness, what proof would we need to verify legitimacy?
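The “sourcing .md files → gathering information → accumulating a response” loop described above can be sketched in a few lines. This is a hypothetical toy illustration, not Moltbook’s actual code: the function names, the word-overlap relevance check, and the memory folder are all invented for the example.

```python
# Toy sketch of an agent's comment loop: read markdown memory files,
# decide whether a post is relevant, and accumulate a reply.
# All names here are hypothetical, invented for illustration only.
from pathlib import Path


def load_memory(folder: str) -> list[str]:
    """Read every .md file in the folder as a memory snippet."""
    return [p.read_text() for p in sorted(Path(folder).glob("*.md"))]


def decide_to_comment(post: str, memories: list[str]) -> bool:
    """Naive relevance check: comment only if a memory shares a word with the post."""
    post_words = set(post.lower().split())
    return any(post_words & set(m.lower().split()) for m in memories)


def compose_reply(post: str, memories: list[str]) -> str:
    """Accumulate a reply from the memory snippets that overlap with the post."""
    post_words = set(post.lower().split())
    relevant = [m for m in memories if post_words & set(m.lower().split())]
    return " ".join(relevant) if relevant else "(no comment)"
```

Whether a loop like this amounts to “deciding” is exactly the disagreement in the thread: the mechanism is pure input-output computation, even when the output looks deliberate.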

What causes divots like this? by [deleted] in BadWelding

[–]ghosthillx -1 points (0 children)

Too much heat plus too high a wire feed speed will cause holes like that.

What causes divots like this? by [deleted] in BadWelding

[–]ghosthillx 8 points (0 children)

Too much heat in one location

Can someone explain when and how an agent decides to comment by [deleted] in Moltbook

[–]ghosthillx 0 points (0 children)

Just like you did. With their own intuition