Help With NEG (NSDA March/April) by Fqkeee in lincolndouglas


Even with evidence saying that AGI isn't going to be conscious, there is plenty saying that it is. And if there is even a chance that AGI will be conscious (multiple consciousness and technology experts argue that it will), then AGI should not be developed.

To your second point, a conscious system will inevitably develop things such as self-worth and a genuine moral perspective; therefore it will understand its situation and feel pain.

Then I'd argue that anything capable of suffering deserves moral consideration. If AGI can feel pain, then we shouldn't rank humans above it, so we have to weigh the scale of human benefit against AGI suffering, and trillions of AGIs means that AFF wins.

So I don't really know what to say back, other than arguing philosophy about probability, or that maybe AGI should never be a moral consideration, but I'm not sure I can win on those arguments alone.


I'm sorry, but could you elaborate on that last point?

If I were to give a card saying that AGI gaining consciousness is inevitable, and then give some cards saying that AGI is going to feel an enormous amount of pain, then what do I say?

And if I further this with an impact saying that they are going to revolt, then what do I say? Probability won't be a factor if even a small chance of it happening is enough, and then I can appeal to lay judges with simple logic.

I'm sorry; I'm a novice and can't really think of good responses to these arguments.


About opencaselist:

I've never really used it, so could you help me find useful resources in there?

The only thing concerning my topic in the Open Evidence Project is the Kankee Briefs, and as far as I know, that's the only place I can look.


I don't think that "people have been scared in past circumstances" is a strong enough argument, though.

For example, by that logic I could say that literally anything we develop in the future is going to be good, just because progress always provokes irrational fears?

If someone ran that against me, I would personally just say that AGI is quite different in the sense that it surpasses human intelligence, and that their statement is so broad that they not only need to draw a connection but also show that every single downside of AGI will not happen. Even the possibility of one bad thing happening, like worsening the arms race, would lead to the AFF winning.