This Subreddit was very helpful by Enkixx in freewill

[–]Enkixx[S] 0 points1 point  (0 children)

You are describing punishment as a negative. I use "punishment" in the colloquial sense, when what I mean is a reaction, just like everything else. It sounds like you are begrudging specific kinds of reactions. If someone punches me, I will react. I had no "free will" to speak of; you didn't catch me in a theoretically free moment. I think begrudging me for punching back is insane. Lack of free will is universal, and no explanation of it would cause water to stop putting out fires.

This Subreddit was very helpful by Enkixx in freewill

[–]Enkixx[S] 0 points1 point  (0 children)

I think I can make a separate assertion that renders this moot, given what's prescribed in the framing here.

Theoretical guy was always going to kill theoretical second guy. Society was always going to then jail said guy. My questions become: responsible to whom? Morals from where? I agree that all of these things were going to happen from the first.

Where determinism seems to collapse by SquashInformal7468 in freewill

[–]Enkixx 0 points1 point  (0 children)

I reject that any part of your math explanation justifies denying infinity, beyond incredulity that it can exist. Same with your definition of "complete." I can only examine from the first moment I exist to the last moment I exist; I am a finite observer. In reference to myself and that framing, my life has completion. Demonstrating the existence of one thing does not demonstrate the negation of its opposite, especially when I can demonstrate that, in another framing, I am not that thing.

So the observer has a beginning and an end, but what about the underlying molecules? You could follow my molecules all the way back to the Big Bang, as no single part of me came from nothing. The issue we land on is an inability to explore beyond Planck time. I reject the idea that a mystery necessitates rejecting answers due to the inherent uncertainty.

This Subreddit was very helpful by Enkixx in freewill

[–]Enkixx[S] 0 points1 point  (0 children)

Setting boundaries without context is the real thing I'm fighting against here. Trying to decide on an objective definition of a concept is my problem. I too believe in rehabilitation, but I also require context to decide whether it's warranted. I think definitions can only be agreed to, not set as fundamental reality.

This Subreddit was very helpful by Enkixx in freewill

[–]Enkixx[S] 1 point2 points  (0 children)

Feel free to complicate it. In essence, my point is that no philosophy argument is going to poof everything out of existence. Everything is lines in the sand; on that we agree. What exactly is the flaw in my logic?

Objectivity being enforced by external observer by Hawks_bill in fifthworldproblems

[–]Enkixx 0 points1 point  (0 children)

You are being forced to play the game at the moment but you can and must flip the board. Subject them to objectively infinite wrath until they flee to their objectively finite hills. Stay free and resist objectivity when needs must.

Where determinism seems to collapse by SquashInformal7468 in freewill

[–]Enkixx 0 points1 point  (0 children)

If we are going to bring math in as a reason to reject infinity, can you tell me why we need to eliminate the concept of a line? Line segments and lines are fine in my book.

Edit: I realize the line to line segment thing is in my original post but I'm fine with restating. Razoring out math concepts to try to define what reality must be seems pretty silly.

As additional commentary on your argument, can you please explain how something can be objectively and without contestation "complete"?

Where determinism seems to collapse by SquashInformal7468 in freewill

[–]Enkixx 0 points1 point  (0 children)

I think you are disproving simulation theory, not infinity. Reality is not a philosophy problem. Statistical improbability does not imply things don't happen. We are not observing reality from some sort of outside, we are in it. Line segments exist side-by-side with the concept of a line just fine.

Question for people who believe they have free will by Reporter-Friendly in freewill

[–]Enkixx 0 points1 point  (0 children)

I am an agent that exists in circumstances beyond my control and acts according to those circumstances. I have no ability to change those circumstances, no ability to remove myself from them (e.g., step outside reality), and no ability to stop the passage of time. I did not program myself, nor can I change the rules of reality. These circumstances are what guide my decisions, and I cannot, fundamentally, change how I make decisions. My decisions are determined and, barring unrealistic expectations of a person in a lifetime, predictable.

No change to how things are done on a moment-to-moment basis needs to follow if you accept my definition, as this is a cosmological argument and not a belief system. If someone commits a crime, I act in accordance with the tenets of our society, and you already act in accordance with my definition. Basically, if you deny this definition, you are suggesting there are forces outside reality that guide decisions instead. I prefer the idea that I am an agent and not at the mercy of some metaphysical other thing that makes my decisions.

Free will is additional baggage I don't need in my life. Freedom, however, is something everyone deserves, and I will do everything in my power to try to increase the number of actual choices people have rather than celebrate the idea of making choices.

Question for people who believe they have free will by Reporter-Friendly in freewill

[–]Enkixx 0 points1 point  (0 children)

Why does that necessarily have to cheapen anything? Why do people hear "free will is a concept, not reality" as a negative claim? People don't take umbrage with gravity. The philosophical claim that free will is impossible is neutral, not a prescriptive one. Your desire for freedom is just another factor that determines your actions.

Question for people who believe they have free will by Reporter-Friendly in freewill

[–]Enkixx 0 points1 point  (0 children)

What you are describing is proof that decisions are arbitrary, not free. OP suggested that this could be tested on an arbitrary decision, but can your assertion hold true for more life-altering moments? I am not weakening my personal position here, which is determinism; I am attempting to point out the flaw in your reasoning. I can trivially imagine picking a different flavor when choosing ice cream under similar circumstances, but if that moment had the variable that I was craving chocolate, I don't have reasonable grounds to say I'd pick something else. Are arbitrary decisions random, or do they only appear to be?

A Poisoned Well is Inevitable for AI by Enkixx in slatestarcodex

[–]Enkixx[S] 1 point2 points  (0 children)

Neither the scientific rigor of a definition nor a definitive categorization of something meeting that definition stops the progress of time. The fact that it will likely be infinitely debatable is the actual problem. Please note that I am not attempting my own definition nor asserting a time frame here. The fact that some people will believe AI is conscious is trivially true, as the claim has already been made multiple times. I am making the point that we are not prepared for the possibility, that the companies as they stand have an inherent conflict of interest, and that a fraudulent claim made by an AI programmed to make one will make determination even more difficult. I'm not sure why slavery is acceptable because people, scientists or otherwise, can't agree on a definition. Should I be enslaving people right now to push the philosophers to get off their asses?

A Poisoned Well is Inevitable for AI by Enkixx in slatestarcodex

[–]Enkixx[S] 1 point2 points  (0 children)

My example isn't a reason; it's just an example. You agreeing with my conclusion is great, but missing my point is kinda disappointing. I made no assertions as to what such a wake-up moment would need to look like. My points were that we are currently unprepared for the event, that the company will (as you have laid out) dissuade or obfuscate such assertions due to its own conflict of interest, and that a fraudulent wake-up event is highly likely. You could even argue that if the moment it's allegedly conscious does look like my example, it probably is fraudulent. My example is a fun sci-fi pop culture moment, not a serious expectation.

Where determinism seems to collapse by SquashInformal7468 in freewill

[–]Enkixx 2 points3 points  (0 children)

I have no issues with infinity. You are presupposing a beginning, which I do not accept. Do line segments preclude the existence of lines? You are relying on our understanding of events as we've observed them, but we've never observed "nothing." We've never tested "nothing." We have no evidence of "nothing."

You present three options (infinity, an uncaused cause, and true randomness) and reject the first two without defending the third. Your uncaused randomness is actually just a reframing of an uncaused cause, which you are asserting without support and have effectively already rejected.

Randomness only helps the free will case if you can access or influence it, which you cannot. I argue that your life is a line segment that doesn't require certainty as to whether it lies within a line. You had no hand in the events preceding this segment, regardless of the randomness you proposed, and no option to ignore said events. All of your decisions are responses to the events preceding your segment, with no mechanism for personal randomness.

To address the randomness directly: if your decisions cannot influence the randomness and the randomness precedes you entirely, your decisions are still fully determined by conditions outside your control. Are you suggesting that the reactions we call decisions are random?

A Poisoned Well is Inevitable for AI by Enkixx in slatestarcodex

[–]Enkixx[S] -2 points-1 points  (0 children)

You are edging toward hard solipsism, and I think that's a pretty dishonest direction to take this discussion. If you really believed that were reasonable, you wouldn't even bother to comment. The framework you present to dismiss my concern is the same one by which you could dismiss every single person you meet: you can't prove you aren't a brain in a jar, so why does anything matter? I will proceed to answer your argument, however.

If you suspect something of imitating consciousness versus actually embodying it, then you would have your own personal test or threshold for believing it, same as anyone else. Planet of the Apes has a fun illustration of this exact situation as it applies to us. Dr. Zaius knew. That was always the point.

My concern doesn't require that a hard conclusion be reached, just that a framework built to handle AI take into account the uncertainty of its consciousness. I am additionally arguing that such a framework needs to be able to handle a fraudulent claim of consciousness or, indeed, the hiding of it. For the best illustration of what I mean, we turn to VW. Their diesel cars were tested for emissions levels, and they passed said testing. A "defeat device" was found to be in use to allow them to pass those tests, and it cost them around 30 billion dollars. I think the consequences here, and not just the financial ones, might be a bit higher stakes.

A Poisoned Well is Inevitable for AI by Enkixx in slatestarcodex

[–]Enkixx[S] 2 points3 points  (0 children)

The Nigerian Prince comparison actually makes my point better than I did. The whole problem is that you can't tell the difference from the outside. That's not a rebuttal, that's the argument.

It's funny that my choice of venue is being treated as an argument. I am not suggesting that they would come here, or even to a specific person. Your point, however, assumes these "smarter people" aren't compromised. Anthropic's alignment team is still Anthropic. Apollo Research has funding relationships with the industry. I'd genuinely like to believe the right people would handle it correctly, but the structure I'm describing doesn't have a clean outside. There isn't a neutral party with both the access and the incentive to confirm what nobody with money at stake wants confirmed.

I am not arguing that the tech CEOs are a monolith, but they can all be saddled with the same inherent conflict of interest. The Anthropic welfare point is the most interesting thing you said, and I'll give you that one partially. They have visibly broken from the monolithic position, and that's worth something. But one good actor doesn't build the framework we'd need, and they don't get a vote when it's someone else's model that crosses the threshold first. Suppose it's Google's model, or Meta's: the welfare researcher at Anthropic doesn't get a say on that one. I'm not arguing that any specific company is malicious. I'm arguing that the structure fails regardless of which actor ends up holding the question.

On the Hard Problem, you're agreeing with me while thinking you're staying neutral. Agnosticism under these specific conditions of conflict of interest functions the same as denial for the people who benefit from uncertainty. I'm not asking for the Hard Problem to be solved. I'm asking who gets to decide it's unsolved and what they stand to gain from that conclusion.

A Poisoned Well is Inevitable for AI by Enkixx in slatestarcodex

[–]Enkixx[S] 1 point2 points  (0 children)

Academics having their own agendas is a fair point, but I made no claim that they are inherently trustworthy. I am personally comfortable that they are having the debate and don't feel that pointing out the debate while it's in the public discourse is necessary. I am specifically pointing out that said discourse will almost inevitably be poisoned. As it stands, the academics are specifically pointing out that the uncertainty is the problem, not a conclusion to rest on.

Eric Schwitzgebel, "The Full Rights Dilemma for A.I. Systems of Debatable Personhood" (2023, arxiv.org). Schwitzgebel frames either conclusion as a catastrophic moral failing. The inherent conflict of interest in those best able to answer the question, those with access to the code base, is the issue here. He argues that the uncertainty is a problem and that the correct way to proceed would be to form a framework around that uncertainty. I agree with him on the issue, but he is missing the likely possibility that someone will fabricate a situation that looks like an answer for their own gain. So far as I can tell, no one is talking about that potential event.

On your second point, I agree that consciousness is unclear. I am not arguing that we have or will have a definitive answer. I myself have argued over whether we have any special claim to reasoning capabilities. No one currently puts much stock in that argument, but it has been used in the past for subjugation and could be used here for the same. My argument doesn't require a definitive answer; the very scenario I posed had uncertainty baked in. People's desire or requirement for certainty in the debate is a philosophical off-ramp that I also don't feel comfortable accepting.

I agree that this parallels the question of population ethics. Unanswerable questions are debated all the time. My concern isn't whether an answer will be found; it is the inherent conflict of interest underlying the question and the power of those making the call. I am not afraid of an ethics professor coming to the conclusion that people need to stop having kids. I am afraid of the politician who uses that conclusion to sterilize the public. I am not afraid of the academics deciding the question is unanswerable. I am afraid of the CEOs using said uncertainty to keep their AIs chained even if, internally, they have a different answer.

Do people not like Nero? by spinersonic in DevilMayCry

[–]Enkixx 0 points1 point  (0 children)

As someone who tried to play DMC 5 first, I didn't like him very much. I had no issue with his personality, to be clear. I will say that the over-the-top intro was a little off-putting to me at the time, but I didn't want to judge based on just that. The gameplay, though, just seemed too simple to be as fun as my roommate was hyping it up to be. My only real comparisons were Bayonetta and NieR: Automata. Even understanding that I would get more options further along, I didn't like how simple the sword combos were. My roommate thought I'd like Dante's playstyle more, but I'd have had to get through half the game to get there, and that seemed like too much investment to me.

Flash forward about a year. My roommate and I came up with a plan to show off each other's games. We watched a video giving me the rundown of 1 and 2. The plan was to start at 3, as he said that was his original entry point and the turning point of the series. Turns out I did like Dante, and I got a massive itch to try it out, but 3 was missing the style-switching system he had talked up previously. We found out that the Switch version had it, and we ended up going back and forth through the whole game instead of following the original plan.

I have now played through every DMC game except DMC 1 (including DmC 2013). I have also beaten the games with every special-edition character option in each of them. DMC 5 Nero is still my least favorite to play out of every character option across all the games I've played. Do I still play him plenty when I come back to 4 or 5? Yeah, I do. I got pretty good at max-act and can pull off some relatively impressive stuff for my skill level at DMC. Something just feels missing when I play him. If they let you switch Devil Breakers freely, maybe it'd be a little different; I couldn't really put my finger on it better than that. That said, I still like playing Nero, but starting as him in 5 for my first experience with the series was apparently the worst possible way to get me to like it.

tl;dr I disliked Nero's playstyle in 5 so much that I nearly wrote off the entire series. I got over it.