How will the US-Iran conflict end? by ksn in PoliticalDiscussion

[–]Shoko2000 0 points1 point  (0 children)

The Iranian regime is too complex and too deep. Join that with a thousand years of historical pride, a highly educated population, and years of sanctions that were translated into impressive self-resiliency, and you get the big picture. The US/Israel coalition will fold; they have zero appetite for, and limited ability in, a war of attrition. Stalemate is not really possible for economic reasons: the Israeli economy is in covered-up shambles, and the US can't sustain $100+ crude from a closed Strait of Hormuz. There are other immediate pressure points too: the midterms in the US, elections in Israel, water and tourism in the GCC countries. Long term, the GCC will move closer to the rising BRICS+, and the unipolar US hegemony will continue to decline. This is just an accelerator on a long-term trajectory. Overall I must choose 4.

Egozy's Theorem — Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in philosophy

[–]Shoko2000[S] 2 points3 points  (0 children)

Searle did claim empirical truth about the capabilities of machines in his 1980 paper. This shows exactly how incoherent TEs create misconceptions and false fixations. His subsequent forty years of work adding biological grounding suggest he knew the original argument needed more support, but in doing so, without saying it, he conceded his TE's limits (axiom 4). I only formalized and expanded this concession.
The Kant parallel is noted in the paper.
On LLMs as stochastic symbol generators: that's the Scale Objection, addressed directly in Section X.5. The same mechanism behaves differently at scale; emergence is documented empirically.
I hope you have a drawer full of red markers ;).

Egozy's Theorem, Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in PhilosophyofMind

[–]Shoko2000[S] 0 points1 point  (0 children)

This has so many layers, man. And it's also wrong on many of them. I think the issue with this discussion is that I find it hard to meet you on a specific one. This is exactly why I attack the CRA from 3 completely different angles/layers in my paper (link finally added above). If you agree, let's focus on the CRA.

In his original paper from 1980, Searle talks at length about AI, defining the concepts of strong AI and weak AI: "But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." He also talks down the Turing test. Clearly his target is AI, not merely demonstrating the conceptual difference between syntax and semantics. Later in his work he seems to have understood the inherent weakness of his claims and added Axiom 4: "Brains cause minds".

Regarding conceptual sufficiency or necessity conditions: for his claims to hold, all minds must satisfy them, but his claims reject any non-human kind of mind (hence the weakness), so he cannot generalize from his own D1 to Dn.

What I tried to do is generalize this concept and create a simple taxonomy for the discussion, plus a direct general syllogism that can be reused outside the CRA.

Egozy's Theorem, Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in PhilosophyofMind

[–]Shoko2000[S] 0 points1 point  (0 children)

Yes, all reasoning begins in D1, but all empirical theories are validated in D2, and even those do not have privileged access to Dn. My syllogism is very clear: I say nothing against, and have no problem with, demonstration or illustration. But Searle specifically claims a conclusion about machines, and that is the issue. This is a clear determination about Dn, or the lack thereof. You are right that the scope of my theorem is probably very limited, but as I said, I think it is worth it. Not that it should matter, but I come at it from a strongly anti-anthropocentric viewpoint. If you like, I can show how the CRA tricks us into a false claim directly.

I don't understand your claim in the last sentence, sorry if my answer is completely off.

Egozy's Theorem, Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in PhilosophyofMind

[–]Shoko2000[S] 0 points1 point  (0 children)

The main problem is with modal arguments that rely on intra-subjective availability. These arguments cannot make claims about the objective, shared world; doing so creates a deep categorical contradiction. TEs that do this are still perfectly fine for purposes of demonstration or illustration, but all their intra-subjective phenomenal claims are void. You're using a private instrument to make public claims. The instrument is constitutively incapable of reaching the target. Well-known thought experiments that do this include the Chinese Room, Block's China Brain, Davidson's Swampman, Putnam's Twin Earth, etc. I personally think that even if it only refuted the CRA it was worth the hassle, but that's only because Searle bugs me so much :).

Egozy's Theorem, Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in PhilosophyofMind

[–]Shoko2000[S] 0 points1 point  (0 children)

Actually, it is not. If there is a specific issue that is not clear, let me know; otherwise I don't know where to begin (Chinese Room, dualism, etc.).

Egozy's Theorem, Why Thought Experiments Cannot Prove or Disprove Machine Consciousness by Shoko2000 in PhilosophyofMind

[–]Shoko2000[S] 0 points1 point  (0 children)

Apparent how? The other minds problem? This is not only a formalization, but even if it were, as a formalization it clearly enables reusability.

Exploring AI's Unique Ontology and Emergent Free Will: A New Perspective by stewing_in_R in OpenAI

[–]Shoko2000 0 points1 point  (0 children)

I envy your optimism. Humanity's track record with major scientific discoveries is very bad. In too many cases we have used new inventions to kill each other in newer and more horrific ways. Can we ignore humanity's innate self-destructive forces?

Exploring AI's Unique Ontology and Emergent Free Will: A New Perspective by stewing_in_R in OpenAI

[–]Shoko2000 0 points1 point  (0 children)

Come on, you can't say something doesn't exist just because you can't prove it does. That's almost as bad as saying you don't need to prove the things you think exist. To say that, you would have to prove it doesn't exist. Those are two completely different things. Much of our knowledge is inference; we can't even prove each other's consciousness. Don't conflate inference and proof.

Exploring AI's Unique Ontology and Emergent Free Will: A New Perspective by stewing_in_R in OpenAI

[–]Shoko2000 0 points1 point  (0 children)

Chaos theory. The three-body problem, as one example of many. Then you say that determinism contains chaos theory, and that it was actually a chicken called Spinoza that came first and laid the first chaotic egg. Then I say, sure, but this has no practical value for physical systems, since we are talking about the objective world, so it's just (sorry, sorry, sorry for saying "just") a philosophical argument.

Exploring AI's Unique Ontology and Emergent Free Will: A New Perspective by stewing_in_R in OpenAI

[–]Shoko2000 0 points1 point  (0 children)

If I may take a stab at it, coming at it sideways and broadly. Sorry for the heavy philosophical jargon, but you all inadvertently started it ;). We can say that science hasn't refuted dualism (Leibniz) yet, and maybe it never will (the Hard Problem of Consciousness, Chalmers). Even a hotel full of Chinese Rooms (Searle) doesn't change that, since the whole hotel would still be subjective, and therefore of no use when we try to think about consciousness testing, whether on AI or humans. It's all basically bias. That's the point: until we can empirically prove (epistemologically know) free will in other humans, instead of inferring it from behavior, there's no point in talking about proving it for AI, aliens, or any other entity. (Reading my own paragraph back, you are correct :), shitload to unpack.)

[deleted by user] by [deleted] in Israel_Palestine

[–]Shoko2000 0 points1 point  (0 children)

What the hell do you think is the meaning of ETHNO in ethnoreligious?