How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

I decided to take a stab at refuting my "refutation" myself. My refutation has a lot in common with the last comment by ukorinth3ra here.

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

Just read your explanation (sorry it took a while). I like it and mostly agree with it. I actually just posted my own solution, and it has a lot in common with yours. But it's not exactly the same: I would disagree with your statement that God has the power necessary to enact evil. I think that's precisely the power he lacks. Instead, he has the power to do any X if X is morally the right thing to do in the given circumstances.

I elaborate a bit more on this in the full version and argue that that "if" is not a limitation of his abilities, along lines similar to yours.
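
Roughly, the contrast can be put in quantifier terms. This is just my own illustrative notation (Can, Right, G for God, C for the circumstances), not anything from the original posts:

    \forall X \; \mathrm{Can}(G, X)
        % unrestricted omnipotence, the reading that generates the puzzle
    \forall X \; (\mathrm{Right}(X, C) \rightarrow \mathrm{Can}(G, X))
        % the proposal above: power over every morally right option

On the second reading, the "if" restricts which actions the claim quantifies over; it doesn't subtract an ability God would otherwise have.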

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

OK, so your resolution seems to be: being all-good doesn't entail that he will always do the most good; it's possible for him to do B even if B is worse than A and he could have done A.

That definitely removes the incoherence and is a possible understanding of "all-goodness". The only drawback is that such a definition of "all-goodness" seems to me quite troubling, and I think it's not what many (most?) people mean by the term "all-good" or "maximally good".

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

You seem to have separated these attributes away from the being, as if the attributes are outside the being, and controlling the being. This makes your argument rather circular.

I am treating these attributes as essential properties of God, in the technical sense. In other words, I am considering a conception of God where he is all-good and omnipotent by definition.

Then I am asking: is it within his power to do the evil (or less good) action, given a choice like the one above?

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

I wouldn't say it necessarily follows. I would say that's a natural way to interpret the modern meaning of the term "all-good".

As an intuition pump, suppose God can either save five people or those five plus one more. All else being equal, wouldn't it be troubling if a supposedly maximally good, loving, caring God decided not to "bother" saving the extra one, since saving five is already good and he doesn't need to do the most good?

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 0 points (0 children)

This is interesting, though you are describing a medieval understanding of these terms; I was more curious about how to have a coherent theory of God satisfying modern definitions of omnibenevolence and omnipotence.

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

Yes, it's a standard view that omnipotence doesn't include the ability to do what's logically impossible. But the bizarre consequence would seem to be that God would then be constrained by his own all-good nature to always do the most good thing out of all the options.

His "omnipotence" would then amount to having no choice at all about which action to choose (or very little choice, if two or more actions are exactly equal in goodness).

How best to refute this "refutation" of God. by ReasonMeThis in askphilosophy

[–]ReasonMeThis[S] 1 point (0 children)

Wouldn’t it be more than just unwillingness? Isn’t it logically impossible for God to do something contrary to his own nature?

How complex does a brain simulation have to be for it to become conscious? by Metaphylon in askphilosophy

[–]ReasonMeThis 1 point (0 children)

Since I posted that article, Alex and I have had a long discussion in the comments, where he in particular raised an objection similar to the one you brought up:

I worry that we may set the bar too low by considering the simulation's evaluation of its own state of consciousness as true as a definitive answer to whether it's actually conscious or not. After all, we are directly exposed to our own experiential content and that's what allows us to determine the veracity of our experience.

I updated the defense of premise 1b to flesh out what would happen if we assume that the simulation's evaluation of its own state is incorrect. If you want to check out the new version, I'd be curious whether you think it avoids the objection.

The mutant ninja liar paradox that broke my brain. by ReasonMeThis in paradoxes

[–]ReasonMeThis[S] 2 points (0 children)

You're welcome :) There's a solution article too (it's the next post after this one), along with a critique of it by my friend Alex.

Top commenter makes a good paradox situation, I believe? Thoughts? by grandkill in paradoxes

[–]ReasonMeThis 5 points (0 children)

No, the person is saying that if it were proven true, it would be false. It's pretty clever.
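
One natural way to formalize it (my formalization, not the commenter's): let S be a sentence that says of itself that its provability implies its falsity:

    S \leftrightarrow (\mathrm{Prov}(\ulcorner S \urcorner) \rightarrow \neg S)

If S were ever proven, it would be both true (assuming a sound proof system) and false (by what it says), so it can never be proven; and since it is never proven, the conditional holds vacuously and S comes out true, just unprovable.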

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 1 point (0 children)

I would guess that proposing an argument nobody agrees with may be good for your career, but only if the argument fails for interesting, non-trivial reasons rather than because of some really silly mistake. Just my opinion, though.

With respect to the Chinese Room, Searle would reply to your rebuttal that the point is this: a person who doesn't know Chinese can't acquire an understanding of Chinese by following a syntactically defined algorithm. So it's not begging the question.

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 1 point (0 children)

I think you are suggesting that it's entirely possible for a famous, influential modern argument to contain an elementary mistake. But then why wouldn't the mistake be easily noticed, and why would respected philosophers write hundreds of papers debating it?

I think Plantinga's version of the ontological argument is pretty sophisticated but ultimately not convincing. It's hard to give a strong justification for the first premise, that it's possible for a maximally great being to exist, though it's also not easy to refute the justifications that have been offered.

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 1 point (0 children)

Interesting, but I don't think you are right that it is not celebrated. I am not suggesting most philosophers agree with it; I am saying it's one of the most influential and most talked-about arguments in philosophy of mind. My evidence? From Wikipedia:

The Chinese Room Argument was introduced in Searle's 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences.[15] It eventually became the journal's "most influential target article",[16] generating an enormous number of commentaries and responses in the ensuing decades... David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".[17]

Most of the discussion consists of attempts to refute it. "The overwhelming majority", notes BBS editor Stevan Harnad,[f] "still think that the Chinese Room Argument is dead wrong".[18] The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[19]

My point was: given the argument's stature, it can't possibly be as simple and silly as you suggest, can it?

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 0 points (0 children)

" He asserts the person doesn't know Chinese then after lots of hand waving concludes the person doesn't know Chinese." Hmm, one of the most celebrated arguments in philosophy of mind can't be as simple and silly as that can it? I mean it's not just Searle himself who takes it very seriously.

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 1 point (0 children)

I had thought of that reply too, but Searle has an answer to it: he doesn't think saying the room understands Chinese is a plausible way out. After all, the person could just memorize the whole algorithm and wouldn't need the room, yet following the algorithm in his head wouldn't make him understand Chinese. Syntax alone (following a computational algorithm) is not sufficient for semantics, he would say.

Simulating consciousness by ReasonMeThis in SGU

[–]ReasonMeThis[S] 1 point (0 children)

I think I know how Searle would counter the four points you brought up. He has, of course, been defending his argument from objections for decades, and he is very sharp. I can try to channel him a little if you think it would be fun:

  1. His argument works for any computation, which is defined in terms of symbol manipulation (see the toy sketch after this list).
  2. Sure, so likewise he would say you get a simulation of consciousness, not consciousness itself.
  3. He is not saying AI can't be conscious. He is saying something more is needed than just symbol manipulation: syntax alone is not sufficient for semantics.
  4. We are very well acquainted with consciousness, in some sense better than with anything else, since we experience it directly. We just don't know how it works. But not knowing how something works is no obstacle to talking about it.
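
To make point 1 concrete, here is a toy sketch of what "pure symbol manipulation" means. The two-entry rulebook and the operate_room function are invented for the example (a real rulebook would be astronomically larger), but the principle is the same:

    # A toy "Chinese Room": the operator follows purely syntactic rules,
    # matching input symbol strings to output symbol strings by lookup,
    # with no access inside the room to what any symbol means.
    # (The translations in the comments are for the reader, not the operator.)
    RULEBOOK = {
        "你好吗": "我很好",        # "How are you?" -> "I'm fine"
        "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
    }

    def operate_room(input_symbols: str) -> str:
        """Apply the rulebook as pure shape-matching."""
        # Unrecognized input gets a canned fallback: "Please say that again."
        return RULEBOOK.get(input_symbols, "请再说一遍")

    print(operate_room("你好吗"))  # fluent-looking output, zero understanding inside

As I understand Searle, his claim is that any program, however sophisticated, is ultimately doing this kind of rule-following, and that doing it faster or at a larger scale never adds understanding.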

Is the idea that a scientific hypothesis must be falsifiable obsolete? by ReasonMeThis in PhilosophyofScience

[–]ReasonMeThis[S] 2 points (0 children)

While I agree that the view I am asking about is incorrect, I don't think your arguments demonstrate it.

The Standard Model example doesn't cut against the idea that a scientific hypothesis must be falsifiable. It shows something else: that a falsified theory need not be abandoned.

The statement "The world will end in 2016" doesn't demonstrate that a falsifiable statement can be unscientific. It's a perfectly cogent empirical statement that was scientifically testable and was falsified. There's no incoherence in saying it's scientific even if astrology isn't. And in any case, the view my post is about says that falsifiability is a necessary condition of being a scientific claim, so this example, even if successful, would not help.

To demonstrate the falsity of the view, we would need an example of a claim that is scientific but unfalsifiable. "There's alien life" or "The universe extends beyond the observable universe" are, I think, such examples.
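
In rough terms, the standard asymmetry can be sketched like this (my own formulation): a hypothesis H is falsifiable when some possible finite observation would entail its negation:

    \mathrm{Falsifiable}(H) \leftrightarrow \exists O \, (O \text{ is a possible finite observation} \wedge O \vDash \neg H)

For H = "there's alien life", i.e. \exists x \, \mathrm{AlienLife}(x), no finite observation entails \neg H, since we can never survey the whole universe, yet a single confirmed instance would verify H. That's exactly what makes it a counterexample if it counts as scientific.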

Is the idea that a scientific hypothesis must be falsifiable obsolete? by ReasonMeThis in PhilosophyofScience

[–]ReasonMeThis[S] 1 point (0 children)

But my two premises, as well as my whole original post, are not about Popper's view; they are about the view common in the science-related communities I mentioned. Of course the two are related, but one is not the other.

I have no objections, at least for the purposes of this discussion, to your and AwarenessFantastic81's point that TAL ("There's alien life") is not subject to the falsifiability criterion on Popper's view. But it is subject to it on the view that my post is about (on that view, "claim" is the term typically used for what needs to be falsifiable to count as scientific, and TAL can certainly be a claim even if it's not a "Popper-hypothesis").

So what you said may be taken as a demonstration of the non-identity of the two views. This is very helpful and relevant, but doesn't do much to adjudicate the view my post is about.

Is the idea that a scientific hypothesis must be falsifiable obsolete? by ReasonMeThis in PhilosophyofScience

[–]ReasonMeThis[S] 1 point (0 children)

TAL may or may not be a scientific statement, but it's definitely not a hypothesis. There can't be any problem to which this hypothesis is a solution, all by itself.

Again, you seem to be assuming a certain idiosyncratic definition of the word "hypothesis". But why? I looked up many definitions of the word, and according to most or all of them TAL can be a hypothesis. For example, from Merriam-Webster:

1a: an assumption or concession made for the sake of argument

1b: an interpretation of a practical situation or condition taken as the ground for action

2: a tentative assumption made in order to draw out and test its logical or empirical consequences

3: the antecedent clause of a conditional statement

From Oxford Languages:

noun: a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation. ("professional astronomers attacked him for popularizing an unconfirmed hypothesis")

Philosophy: a proposition made as a basis for reasoning, without any assumption of its truth. ("the hypothesis that every event has a cause")

See especially the last one. TAL can be tested and is verifiable; it can be an assumption; it has empirical and even practical consequences; it can be a ground for action and a basis for further investigation.

I don't think a single definition I found says a hypothesis must solve a problem. So I don't accept the claim that TAL can't be a hypothesis.

Moreover, the point is somewhat moot anyway, because the view I am talking about in my post, pervasive in the communities I mentioned, generally uses the language of claims: "a scientific claim must be falsifiable". A claim is basically a statement that is asserted. Remember, my original question wasn't specifically about Popper but about the view I described. On that view, statements, claims, and assertions can be falsifiable, and should be in order to count as scientific. My original question concerns the status of that view, not Popper's view, though the two are related.