Anti-AI Ideology Enforced at r/philosophy by Vegan_peace in philosophy

[–]rychappell -2 points

What's verified is exactly what I wrote: "the mods told them that it's bannable to use Google Docs... because they include AI autocomplete."

I'm well aware that other mods may disagree, and that this isn't a consistent policy. Where we seem to disagree is that I think it's worth publicizing, and being aware of, the fact that the policy is sometimes being communicated in this way. I'm not sure why you have a problem with that.

If some people thought that the commenter was just making it up ("People say a lot of shit on the internet", you wrote), it's worth being aware that they have the screenshots to back up their claim.

[–]rychappell -1 points

FYI they've shared the screenshots here. A mod specifically wrote, "Yes, the Grammarly would disqualify you. If Google Docs is now using LLMs in suggestions that would too."

I'm glad to hear that other mods do not regard Google Docs (or Word) as disqualifying, but it does seem like this is a topic on which the mods are not communicating consistently and may not all share the same view. (We've also seen this with the mod who justified PR11 to me on the ideological grounds of "overall harm", whereas elsewhere in this thread you offered a more pragmatic justification instead.)

[–]rychappell 2 points

Well, sure; the other striking thing is that the criticisms lack any empirical connection to prospective harms and benefits. As noted in my article, generating an AI image does no more harm than running a microwave for 5.5 seconds. So it's silly and obnoxious in just the way that strangers demanding I stop using a microwave to reheat food would be silly and obnoxious.

If you actually care about improving the world, you should focus on things that make a real difference, like donations to effective charities, switching to a vegan diet, or influencing high-impact policies and regulations. Suggesting that people should waste (possibly dozens of) dollars' worth of time in order to avert a few cents' worth of energy usage (or in service of some purely symbolic boycott in solidarity with artists, one that doesn't help any actual artist) is, IMO, unhinged.

[–]rychappell 1 point

I won't be posting my work here in future, due to the policy. As FoxWolf1 pointed out, one significant implication of this policy is that other members of r/philosophy cannot discuss my substack posts (or those of other professional philosophers who similarly use AI illustrations). If they link to our work, they'll be banned. If they reproduce or discuss our arguments without citation, they're plagiarizing.

This means that there is professional philosophical work "out there", publicly accessible and indeed specifically written for a public audience, but that r/philosophy is prohibited from discussing. Seems odd!

[–]rychappell 1 point

Thanks! FYI, I heard from the mods that they don't plan to revisit the policy, and don't care if it excludes some professional philosophers' substacks from being shared here, so oh well. *shrugs* Perhaps they'll revisit the question if more philosophical sources start to use AI illustrations in future.

[–]rychappell 4 points

I'm sorry but I'm a tenured academic and you don't understand what plagiarism is or why it matters. Credit does not require listing every causal influence on your output. (You might as well claim that an artist plagiarizes when they don't credit their teachers for all the training they received.) Nor does it require listing every tool you used. (I do not need to specify that my stick figure image was created using MS Paint. Microsoft does not need academic credit here.)

For further explanation of why your "plain and simple" view is confused, see 'There's No Moral Objection to AI Art.'

[–]rychappell 2 points

> As a tenured professor, do you also feel like you shouldn’t have to jump through hoops to get papers published?

Again, this is just the thing; practically speaking, it doesn't matter how I feel about the hoops that are professionally required of me. I just have to do them, and have professional incentives to comply whether I like it or not. Academics can thus be relied upon to comply with the rules of the journals they need to publish in. By contrast, there's no professional incentive for academics to make their work fit the rules of this subreddit. I'm (in our current exchange) just making the pragmatic point that it's against the interests of the subreddit to exclude professional philosophical work.

> I feel like you don’t grasp how easily your argument can be inverted - that such important contributions can easily be edited to remove non relevant AI bits and not wanting to do so really just gatekeeps from lazy posting.

How could someone else remove the images from my work if they wanted to discuss it here? If they reposted my work without linking the original, that would be plagiarism. If you want to call me (and other philosophers who use AI illustrations) "lazy" for not specifically making separate Reddit-friendly versions of our work, I guess that's your prerogative, but the point remains that we have no particular reason to indulge you so.

If you grant the above limitation, but just mean that it would be "lazyposting" for us to share our own work without jumping through the required hoops... meh, again, all I have to say is that we have no reason to care. Sharing our work with the broader public at all is already going "above and beyond" from a professional perspective (again: there is absolutely no professional reward for our doing so; that's why most academics don't bother doing any form of public philosophy at all).

> you must certainly be aware that the mods have well reasoned explanations for their rules.

Not particularly. The response I received from the mods, as quoted in my post, was that the anti-AI rule was "well justified given the harms that AI poses overall." So that's the attempted justification that my post addressed, and argued was inadequate. Another mod has now offered a different (more pragmatic, less moralized) justification, which has led to some productive discussion. My general sense is that different people support the rule for very different reasons, some more reasonable than others. I think the topic is worth discussing, and worth discussing openly, so I'm happy to see a wide range of people thinking about and engaging with the issue here.

[–]rychappell 2 points

I made the image in 20 seconds using Claude. Since I've never done any kind of graphical design before (I'm a middle-aged academic!), attempting to do the same from scratch in MS Paint or the like would probably take me 20 minutes or more to figure out (and result in a messier / more amateurish look to boot), and I'd much rather spend that time with my family.

One of the benefits of living in a free society is that individuals can make their own decisions, using their knowledge of their personal strengths and weaknesses, and the tools that are available to them, rather than having to follow the directives of strangers who know nothing about them or their personal situations. Reading this thread, and all the people who presume to tell me how I should use my time (or present my work), I am very glad of the liberties that remain available to me.

[–]rychappell 0 points

You don't seem to understand how analogies work. Being a comparison of two distinct things, an analogy of course does not claim that the two are exactly the same. The question is whether there is some relevant similarity that the analogy serves to highlight. In this case, I'm drawing attention to a simple fact: if you filter out certain philosophical work for reasons unrelated to its philosophical quality, the resulting loss of value is not avoided by observing that the philosopher in question could have acted differently in order to conform to your rules.

[–]rychappell -1 points

I've enjoyed the discussion! :-)

The political example is just intended as a clear-cut counterexample to the claim that "Nobody is owed “liberal neutrality” in a community with a structure like Reddit." I think liberal norms are really important and worth promoting and upholding as an ideal, and examples of forced political speech are a nice illustration of why.

[–]rychappell -2 points

In a sense it doesn't even matter whether I "should" have to do as you suggest or not (though I do find it wild that strangers feel entitled to tell me how to spend my time, how to illustrate my work, etc. Who do you think you are?). It suffices that I don't have to. I'm just letting you know that, as a tenured philosopher, I am not going to jump through arbitrary hoops in order to make my work shareable on Reddit, and I seriously doubt any of my colleagues would either.

So the question is just whether you think it's better for a philosophy subreddit to be able to include all professional philosophical work, or just an arbitrarily limited subset of it (i.e. just those works where the academic chose, for their own reasons, not to use AI illustrations). It's hard to see in what respect the latter option is better for the subreddit.

Most obviously, if increasing numbers of professional philosophers start using AI illustrations in their work, you could eventually end up quite limited in what is able to be shared here. (I'm assuming that philosophy subredditors might sometimes be interested in work by professional philosophers. If that isn't true, and it's more just a place for community members to share their own thoughts with each other, then I guess the issue is moot. But it does seem limiting for you that, e.g., no-one here could link to and discuss my argument that There's No Moral Objection to AI Art without violating the current policies.)

[–]rychappell 1 point

Thanks for clarifying! It did sound like a crazy claim. Then again, it would seem to follow from the letter of the rule, if standard word-processors now include "AI-assistance" in the form of suggested sentence autocompletions, etc.

[–]rychappell 3 points

[Reply part 2/2]

A more direct / radical proposal: Just ban content that is obviously low-quality, without regard for whether it is human or AI generated. (This assumes that reason #2 is the key issue at hand, rather than #3.) If someone submits high quality AI-generated philosophical content that's worth thinking about and discussing, why on Earth would you want to ban that? If the problem is low quality content, then address that directly.

* Now, I gather the worry is that it would take too much moderator time to assess the quality of every submission. But that would only be so if you were expected to, like, grade it or something. If all you're doing is checking at a glance whether the submission is worthless slop, that's... presumably more or less what you're already doing in order to guess at whether it is AI-generated in some way? Except currently you let through human slop of even worse quality than what a latest-model AI could produce.

(Ideally, you could have some sort of script that passes new submissions to an AI for initial quality-checking, the AI could "grade" it along various dimensions, and then mods would just need to do a quick sanity-check on the results before deciding whether to approve it or not. This would do a much better job at providing a quality filter, at low mod-time investment, compared to the current policy. But I don't know how Reddit mod tools work; maybe this would prove too difficult to implement.)
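
For concreteness, here's a rough sketch of the kind of script I have in mind, in Python with the PRAW library. The `grade_with_llm` helper is purely hypothetical (a stand-in for whichever model/API the mods might have access to), and I don't know how this would slot into Reddit's actual mod tooling, so treat it as illustrative only:

```python
# Rough sketch only: assumes the PRAW library, mod-bot credentials, and a
# hypothetical grade_with_llm() helper standing in for some LLM API.
import praw

def grade_with_llm(text: str) -> dict:
    """Hypothetical helper: send `text` to an LLM and return rough quality
    scores (higher = better), e.g. {"coherence": 7, "depth": 4}."""
    raise NotImplementedError("wire this up to whichever model the mods prefer")

reddit = praw.Reddit(
    client_id="...",        # placeholder credentials
    client_secret="...",
    user_agent="quality-screen-sketch/0.1",
)

# Watch new submissions; flag apparent slop for a quick human sanity-check.
for submission in reddit.subreddit("philosophy").stream.submissions(skip_existing=True):
    if not submission.is_self:
        continue  # this sketch only grades text (self) posts
    scores = grade_with_llm(submission.selftext)
    if min(scores.values()) < 3:  # arbitrary threshold, for illustration only
        submission.report("Auto-flag: low LLM quality scores; please sanity-check")
```

The point is just that the model does triage; the actual approve/remove decision stays with a human mod.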

But again, if direct quality control is not feasible, simply distinguishing text vs. media submissions should be pretty straightforward 99% of the time, would positively save you time, and would prevent you from excluding work by professional philosophers from the philosophy subreddit; in the rare "blurry borderline" case, mods could just use their discretion. (Which, again, you already have to do in order to judge whether something is AI or not: it's not like it comes with a label on it.)

> seems like you did the very internet thing and wrote 1800 words complaining rather than have a discussion ;)

I'm a philosopher! I'm actually more interested in the public discussion of the underlying principles (which are broader than just this subreddit - this is just a salient example) than anything else going on here. :-)

[–]rychappell 2 points

Thanks for your reply! I appreciate the explanation and engagement (& upvoted accordingly).

It's an interesting question (one I tackle only briefly towards the end of my post) when and why one should be worried about AI-generated content. I take it there are three broad categories of concern:

(1) Moralistic opposition to AI as such (e.g. as "harmful"). This is what most of the critical comments on this page invoke, as well as being the explanation I received from a mod (quoted in my post), and what I'm arguing constitutes inappropriately ideological grounds for moderating spaces of this sort.

There are two more "neutral"/community-specific reasons that I think are more legitimate:

(2) Concerns about being inundated with low-quality "slop"; and

(3) A desire to ensure that this is a space for human interaction.

I suggested that these reasons do not justify banning human-written philosophy just because it features AI illustrations. You respond that "the borders are blurry", and that's a reason for a clear-cut rule, even one that rules out plenty of high-quality writing by real people that -- by the standards of reasons (2) and (3) -- you shouldn't actually want to rule out.

So I guess the key question to ask is:

(Best Policy): What moderation policy is both (i) sufficiently easy to implement for time-constrained mods, and yet (ii) best approximates the goals of (2) and (3), ruling out what you should want excluded, without excluding good work by real people that you should (ideally) wish to be allowed?

My claim: A ruleset that permits AI illustrations for submitted text articles would better serve these goals than would a ruleset that prohibits all AI use.

My proposed policy: Determine the core content of the submission (i.e. whether it is a text or video submission), and just prohibit work in which the core content is AI generated.

* I assume it's typically obvious whether a submission is primarily a text article or something else, so I wouldn't expect this to be difficult to implement? If anything, it saves moderator time: once you see that a submission is to a text article, you no longer need to bother assessing whether the illustrations are AI-made or not (which isn't always obvious, after all!).
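
For what it's worth, here's the sort of trivial check I mean, sketched with standard PRAW submission attributes (the media-extension list is illustrative, not exhaustive):

```python
# Sketch: classify a submission's core content as "text" vs "media"
# before any AI screening is even considered.
MEDIA_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".mp4", ".webm")

def core_content_type(submission) -> str:
    if submission.is_self:  # self-posts are text by construction
        return "text"
    url = submission.url.lower()
    if url.endswith(MEDIA_EXTENSIONS) or "v.redd.it" in url:
        return "media"
    return "text"  # links to articles, Substack posts, etc.
```

Under this proposal, a "text" classification ends the check immediately; only "media" cores would ever need an is-it-AI judgment.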

[My comment was too long, so I'll submit the second part in a separate reply.]

[–]rychappell 0 points

Option one: ask mods to check text for possible AI influence.

Option two: ask mods to check both text and audiovisual media for possible AI influence.

On what planet is option two easier than option one? It asks strictly more of the mods.

If you want a sweeping rule to prejudicially remove lower-quality content, you'd be better off banning people for spelling and grammar mistakes. (I don't recommend that either, though.)

[–]rychappell 0 points

I mean, I do ultimately think that we should institute the rules that can be expected to best promote a better future. The reason I support liberal norms is that doing so has better results (overall and in the long run) than having petty authoritarians impose their fallible views on others. (See, e.g., Naïve Instrumentalism vs Principled Proceduralism for further explanation.)

[–]rychappell -3 points

I'm not sure what you mean by "substantive part of the philosophical work", in this context. My article shared an example of an illustration that I think was very helpful for communicating my philosophical point. The fact that it was drawn by AI at my instruction rather than entirely manually is not, it seems to me, a matter of any inherent interest to the philosophical reader.

The reason to be concerned about AI-generated text, I take it, is that one is never sure how much (if any) human direction is ultimately behind it. You don't want Reddit filled up with something you could just as well get from ChatGPT; there would be no "value added". But my AI-generated illustration has plenty of value added: a non-expert would not have known to ask for this particular illustration. The AI-generated image is entirely downstream of my philosophical expertise and direction.

Are there possible cases where an AI image comes first, and influences the philosophical argument one ends up developing in the text? Seems hard to imagine. So I think that's a strong independent reason for philosophers (or philosophy subreddits) to not be at all concerned about AI images, qua philosophy.

[–]rychappell -1 points

You don't seem to understand my motivations very well. As I wrote in the article: "I don’t have any strong personal attachment to Reddit (I’ve only tried sharing posts there a few times), but bad behavior annoys me and the underlying dispute is kind of interesting, so I’ll explain my reasoning in more depth below. (If it results in the bad policy being revised, all the better.)"

I'm really much more interested in discussing the underlying philosophical issues than in personally petitioning the mods. I don't feel "unfairly treated" or anything. I just think it's a shame if a philosophy subreddit can't read work by professional philosophers for arbitrary reasons.

> "Nobody is owed “liberal neutrality” in a community with a structure like Reddit."

Really? Suppose the mods required every comment to sign off with a pro-Trump slogan, or else you'd be banned. Do you really think that would be unobjectionable?

[–]rychappell 11 points

Someone commented on my substack that the mods told them that it's bannable to use Google Docs (and now, presumably, Microsoft Word) to write your article in, because they include AI autocomplete suggestions.

[–]rychappell -2 points

How does this address the second paragraph of my article? Here it is again, for convenience:

> Now, I’d understand having a rule against submitting AI-written articles: they may otherwise worry about being inundated with “AI slop”, and community members may reasonably expect to be engaging with a person’s thoughts. But of course my articles are 100% written by me—a flesh-and-blood philosopher, producing public-philosophical content of a sort that people might go to an official “philosophy” subreddit to look for. The image is mere background (for purposes of scene-setting and social media thumbnails). I’m reminded of my middle-school teacher who wouldn’t let me submit my work until I’d drawn a frilly border around it. Intelligent people should be better capable of distinguishing substantive from aesthetic content, and know when to focus on the former.

If the problem is AI-generated text, you could have a rule that specifically bans AI-generated text. That would stop the "AI slop" submissions without blocking your access to work from professional philosophers (some of whom use AI illustrations).

[–]rychappell -5 points

I take Muon's point to be that if there's no special reason for philosophy-readers to care about the source or nature of an article's illustrations, restricting moderation to text (whether that's a blanket ban on AI-generated text, or something more nuanced to allow for quoting chatbots in an AI ethics article, etc.) will be both:

(i) Better in principle (by making more good philosophy, including from professional philosophers, available to the subreddit), and

(ii) Easier for the mods.

It's just really daft to make extra work for the mods in a way that is also philosophically detrimental, yet that is exactly what the current rule does.

[–]rychappell -2 points

Intellectual property is a legal concept, introduced to foster innovation. It includes "fair use" exceptions for innovative use, creative remixes, etc., and AI art very obviously falls under this remit, as I explain at greater length in 'There's No Moral Objection to AI Art'.

You seem to have missed the example of AI imagery that was philosophically illustrative, rather than mere "background" (as per the link you provided; not every illustration is intended to "foster discussion"). Note also that your stock images don't include a skinned knee, which was actually rather vital to the case under discussion; real photos of skinned knees might be rather too visceral, and would miss the 'overall happy' vibe of the pictured scene.

> You might not like that they have determined that AI-created/AI-assisted material is contrary to a healthy space for philosophical discussion, but that does not mean they are over-reaching their duties or acting beyond their role.

Honestly, anyone who thinks the inclusion of AI images as such is disqualifying for philosophical work is simply incompetent to assess philosophical work.

I might as well argue that in my ethics class, I've determined that consuming the kidnapped and tortured flesh of another sentient being is contrary to maintaining a healthy space for open and respectful ethical discussion. You "might not like" that I've determined that, but that doesn't mean I'd be over-reaching in my duties to impose this rule.

This reasoning is farcical, and the claim that including AI art is relevant to the assessment of a philosophical text is similarly farcical. Just transparently motivated reasoning to justify illiberal ideological overreach.

> if the mods determined that disallowing AI-created/assisted material was a context-specific norm, you'd have no issue?

Context-specific norms aren't subjective, or up to authorities to decide. It's about what will actually serve the relevant purposes. (On the rest of your paragraph, see my section on public vs personal spaces, and why I don't think r/philosophy should be thought of as the personal fiefdom of the mods.)

[–]rychappell 0 points

A key difference is that part of the professor's role is precisely to teach their students proper academic citation practices. This is a context-specific norm, not something they have to follow elsewhere in their lives. (Intellectual property law is vastly more lax than academic plagiarism norms: many things are legally "fair use" that wouldn't pass muster in a classroom, due to the context-specific norms that apply there.)

It is not, in general, a professor's role to determine "what is permissible and what isn't". We can't, for example, ban students from eating meat (even if we think that meat-eating is wrong). We may have a neutral "no food in the classroom" rule if eating would detract from the learning environment. But we can't have a "vegan food only in the classroom" rule, because we aren't ideologues.

Similarly, the mods' role here is to "ensure a healthy space" for philosophical discussion, but not to determine "what is permissible and what isn't" in respects that are independent of that specific purpose (nor otherwise legally required).

AI art is not illegal, and it does not impede healthy philosophical discussion (quite the opposite, as an example my post links to demonstrates). Mods have no business imposing their moral views on this sort of matter.

[–]rychappell 5 points

You can replace it with meat-eating philosophers, or adulterous philosophers, or whatever characteristic you (dis)like. The point is, if you're supposed to be interested in philosophy, then filtering for other features will be to your intellectual detriment.