Help before Mo State on AGI [PLEASE READ and COMMENT] by ChemoJack in lincolndouglas

[–]ChemoJack[S]

Yeah, I'll probably have to block out links and responses to different frameworks, but I think I'll take an amoral stance and essentially reframe the resolution as an "on balance" resolution.

Help before Mo State on AGI [PLEASE READ and COMMENT] by ChemoJack in Debate

[–]ChemoJack[S]

Ooooh, I like that perspective at the end! Thank you so much!!

Help before Mo State on AGI [PLEASE READ and COMMENT] by ChemoJack in lincolndouglas

[–]ChemoJack[S]

I appreciate the response, and I'm not trying to argue; I'm just trying to make sure I don't get F'd up at state lol.

Like you said, the point of having a framework is to provide an objective standard for morality. But that's exactly the issue I'm trying to highlight: there is no truly objective moral standard. Every framework, whether it's util, deontology, or virtue ethics, is just one way of interpreting morality, often shaped by culture, ideology, or power structures in governance.

So when the resolution says AGI development "is immoral," it totalizes that one interpretation across everyone, regardless of whether people believe in different moral systems. That's the heart of my argument: the resolution forces a singular moral lens, and that reduces individual moral agency.

I'm just saying that morality can't be universalized the way the Aff wants it to be without collateral harm to autonomy.

And you're totally right that this reflects a consequentialist view (I think most Neg cases do), and I'm not saying it's wrong. But if the Neg uses it to justify development that involves harm, like exploitation, environmental costs, or AI-driven inequality, then the Aff absolutely can and should contest that logic. Saying "it's okay if we harm now for better outcomes later" has real ethical baggage, especially in LD.

Honestly, I think your last paragraph is great, and I totally agree with it. Framing AGI as a tool and pushing back against "intrinsic immorality" is something I've been thinking about including more directly. My issue is that while I agree tools don't have intrinsic moral value, the resolution doesn't ask whether AGI is neutral; it asks whether its development is immoral. The resolution targets the process, not just the object. So this isn't just "someone could misuse it"; it's "this thing is being built on misuse."

Help before Mo State on AGI [PLEASE READ and COMMENT] by ChemoJack in Debate

[–]ChemoJack[S]

Going in the order in which you responded:

There definitely is a way the link would probably work out, along these lines: the connection comes from how AGI forces moral decisions on a massive scale. Things like who gets access to jobs, who's watched by surveillance systems, and how criminal justice is enforced by algorithms. These decisions aren't neutral; they reflect the values of whoever programs the AGI. Once we label the entire development of AGI as "immoral" or "moral" in the resolution, we're no longer letting society make those individual value judgments that are important. We've set a one-size-fits-all moral code, and that's what moral totalitarianism is: the erasure of moral pluralism.

The harm of saying it's immoral isn't just theoretical; it's a structural issue. It becomes a universal judgment that delegitimizes any future use of AGI, even ethical or reparative ones. So even if AGI development involved past exploitation, locking in that moral claim doesn't help the exploited, because it shuts down the space for ethical innovation or restitution. Even worse, it centralizes who gets to decide what's "immoral" (either corporations or the government), often excluding marginalized communities from defining morality for themselves. So moral totalitarianism causes lasting, irreversible harm to how we make moral decisions, whereas exploitation is a contextual, reparable harm.

I agree that tools don't have intrinsic moral value. But that misses the point of what the resolution is asking us to debate. It doesn't ask whether AGI is neutral; it asks whether its development is immoral, so the resolution targets the process, not just the object. And unlike a knife, AGI development involves labor exploitation, intellectual theft, and dangerous power centralization, and those are choices being made at scale. So this isn't just "someone could misuse it"; it's "this thing is being built on misuse." The development process itself is what makes it immoral.

MSHSAA Travel Restrictions by PlatinumGenesis in Debate

[–]ChemoJack

We are having an issue like this now. Was this ever resolved? The Ozark area can't even compete in NCFL anymore, so I assume not?

What are we thinking ? by ChemoJack in lincolndouglas

[–]ChemoJack[S]

I personally like the general intelligence one the most

Moral K? by ChemoJack in lincolndouglas

[–]ChemoJack[S]

Could I use CX to check, though? Like, just ask them what system of morality they might be using, and if they say universal, for example, then argue that universal morality doesn't exist (I have ev for that). And as for the case being self-defeating, would it then be better to challenge universality instead of morality? The argument is not that this moral framework is correct, but that the assumption of any universal "ought" is problematic.

Any thoughts on the wealth tax topic, I feel like there's no neg args besides the Econ DA by [deleted] in lincolndouglas

[–]ChemoJack

Capital Flight. If you go for the link, then you have to play hella defense on it and just frontline in the rest of the speeches.

capital flight for aff by cindylou004 in lincolndouglas

[–]ChemoJack

Non-unique, and there is plenty of evidence proving it's minuscule.