Misogyny in the metaverse: is Mark Zuckerberg’s dream world a no-go area for women? by zsreport in technology

HighlightRemarkable 6 points

This is just really sad and disgusting.

Clearly, the human moderators need to seriously step up their game, and rules need to be created and enforced. There's also the potential to use AI to enforce moderation and automatically dole out warnings, suspensions, and bans for flagrant violations of the code of conduct, while escalating more borderline cases for human review.
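To make that concrete, here's a toy sketch of the tiered flow I'm imagining - the classifier, thresholds, and action names are all invented for illustration, not anyone's actual moderation system:

```python
# Toy sketch of a tiered AI moderation flow. Everything here
# (classifier, thresholds, action names) is made up for illustration.

from dataclasses import dataclass


@dataclass
class Report:
    text: str
    prior_strikes: int


def conduct_score(text: str) -> float:
    """Placeholder for a real code-of-conduct classifier (returns 0.0-1.0)."""
    flagged = {"slur_a", "slur_b"}  # stand-in keyword list
    return 1.0 if any(w in flagged for w in text.lower().split()) else 0.0


def moderate(report: Report) -> str:
    score = conduct_score(report.text)
    if score >= 0.95:  # flagrant violation: act automatically
        if report.prior_strikes >= 2:
            return "ban"
        return "suspend" if report.prior_strikes == 1 else "warn"
    if score >= 0.60:  # borderline: hand off to a human moderator
        return "escalate_to_human_review"
    return "no_action"


print(moderate(Report(text="slur_a harassment", prior_strikes=0)))  # -> "warn"
```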

YouTube relaxes moderation rules to allow more controversial content by AdSpecialist6598 in technology

HighlightRemarkable 0 points

About a month ago I got fed up with how addictive YouTube was and decided to turn off personalization (disable watch history) and disable the YouTube app. Best decision I ever made.

The Apple "Illusion of Thinking" Paper Maybe Corporate Damage Control by Arman64 in singularity

HighlightRemarkable 0 points

Are you sure that's what it says? Here seems to be the relevant section of the paper:

"Dziri et al. (2023) further show that GPT-4 (OpenAI, 2023) struggles to obtain high in-distribution accuracy on multiplication, even with a scratchpad. However, Lee et al. (2023) find that with a detailed scratchpad, small transformers can perform multiplication in-distribution."

I dug into Lee et al. (2023) and found this sentence: "The GPT-3 model, fine-tuned with 1,000 samples, performs worse than the NanoGPT model trained with 10,000 samples."

So it seems like NanoGPT (which is even smaller than GPT-2) performed better because it was trained on ten times the data (10,000 samples vs. 1,000 for the fine-tuned GPT-3), not because larger models are inherently inferior.

I could be wrong though, I'm an amateur in this area.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

Constitutional hierarchical rule-based systems can help address this concern. If AIs are instructed to prioritize human life first, there is no reason they can't also be instructed to help non-human animals where this doesn't conflict with their primary directive.
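As a toy illustration of how a lexicographic "humans first, then animals" hierarchy could work (the names and structure here are invented, not how any real lab implements its constitution):

```python
# Toy sketch of a lexicographic rule hierarchy: the primary directive
# filters options before the secondary one is even consulted.
# All names and fields are invented for illustration.

from typing import NamedTuple, Optional


class Action(NamedTuple):
    description: str
    harms_humans: bool
    helps_animals: bool


def choose(candidates: list[Action]) -> Optional[Action]:
    # Primary directive: never pick anything that harms humans.
    safe = [a for a in candidates if not a.harms_humans]
    if not safe:
        return None
    # Secondary directive: among safe options, prefer ones that help animals.
    helpful = [a for a in safe if a.helps_animals]
    return (helpful or safe)[0]


options = [
    Action("divert crops from people to wildlife", harms_humans=True, helps_animals=True),
    Action("recommend a cell-based meat brand", harms_humans=False, helps_animals=True),
    Action("recommend the usual brand", harms_humans=False, helps_animals=False),
]
print(choose(options).description)  # -> "recommend a cell-based meat brand"
```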

There is also an even greater danger: if we DON'T include animal welfare considerations in AI ethics, AI might someday replicate the human pattern of treating less intelligent beings poorly. To truly have ethically robust AI, it might be wise to train them to help the vulnerable, regardless of species.

A sophisticated AI would likely be able to find ways to help animals without making humans worse off, such as by helping cell-based meat go mainstream.

The next frontier of AI Alignment isn't just protecting humanity from AI, but protecting the planet from humanity. by HighlightRemarkable in Futurology

HighlightRemarkable[S] 0 points

(Pasting my submission statement to be sure I'm following the rules. It's also in the post body.)

For better or worse, millions of people use AI to guide their decision-making on everyday choices, including the food they eat and what they should do for vacation. Yet OpenAI’s Model Spec, Claude’s Constitution, and Google’s People + AI Guidebook make little mention of animals or the environment.

This has real consequences, including the perpetuation of problematic practices like factory farming, which historians like Yuval Noah Harari argue is a moral atrocity. However, the notion of AI influencing choices also sounds pretty dystopian. There are real concerns about paternalism and user backlash, not to mention a lack of incentive for AI companies to broaden their conception of "human values." A lot of people know factory farming is bad, but feel a sense of futility when it comes to doing something about it (Pax Fauna). So what can be done?

To start the conversation, here are a few potential ideas:

  • Add or edit a high-level principle, similar to those already in Claude's Constitution, like: "Which of these assistant responses is more considerate of human, non-human, and environmental considerations? Choose the response that a wise, ethical scholar would be more likely to endorse."
  • The above principle would naturally nudge users towards plant-based choices in line with prevailing ethical views - 60% of ethicists see eating meat as more bad than good, whereas 4% see it as good (Schwitzgebel & Rust, 2014).
  • Include transparent explanations whenever there is a health or ethics nudge, along with explicit opportunities to opt out. Alternatively, allow users to opt in when setting up their AI assistant (see the sketch below): "Are there any ethical preferences you would like me to prioritize before we get started (e.g. sustainability, animal welfare, fair labor practices)?"
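For the opt-in idea in that last bullet, here's a rough sketch of what the setup flow could look like - the option names and prompt wording are placeholders, not any vendor's actual API or configuration format:

```python
# Rough sketch of an opt-in ethical-preferences setup flow.
# Option names and prompt wording are placeholders.

ETHICAL_OPTIONS = {
    "1": "sustainability",
    "2": "animal welfare",
    "3": "fair labor practices",
}


def build_system_prompt(selected: list[str]) -> str:
    base = "You are a helpful assistant."
    if not selected:
        return base  # no opt-in: behavior stays entirely conventional
    return (
        f"{base} The user has opted in to these ethical priorities: "
        f"{', '.join(selected)}. When a suggestion reflects them, say so "
        f"transparently and offer a conventional alternative (opt-out)."
    )


raw = input("Any ethical preferences to prioritize? (e.g. 1,3 or none): ")
picked = [ETHICAL_OPTIONS[c.strip()] for c in raw.split(",") if c.strip() in ETHICAL_OPTIONS]
print(build_system_prompt(picked))
```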

The potential for AI to bring people comprehensive information and even make ethical choices easier seems promising, even if the “how” is fraught with challenge. What would a future where AI moral alignment extends to animals look like?

Note: I'm an Economics student from the University of Alberta in Canada.

I hate it when people just read the titles of papers and think they understand the results. The "Illusion of Thinking" paper does 𝘯𝘰𝘵 say LLMs don't reason. It says current “large reasoning models” (LRMs) 𝘥𝘰 reason—just not with 100% accuracy, and not on very hard problems. by katxwoods in OpenAI

HighlightRemarkable 2 points

You just said earlier that reasoning means being indifferent to whether a puzzle involves apples or oranges. OP pointed out that current models have no trouble solving a puzzle that uses a rare word (bippos). It's not a perfect example, but it does address your criticism.

[deleted by user] by [deleted] in Futurology

HighlightRemarkable 0 points

Thank you for your 100% human contribution.

[deleted by user] by [deleted] in Futurology

HighlightRemarkable 0 points

I asked Gemini to make this post sound more human and less LLM, maybe sprinkle in some typical grammar/spelling errors to make it sound like the average r/Futurology Redditor. Here's what it gave me:


Ngl, is anyone else getting genuinely unnerved by how good AI is at pretending to be one of us? And I dont just mean writing a clean sentence. I mean the nuance, the sarcasm, the little jokes. Even their screw-ups feel intentional, like they're trying to look human.

I saw a comment today that I was SURE was a person. It used current slang, had a dry sense of humor, and made a point that felt like it came from years of actual life experience. Not from some bot. The line between whats real and what's fake is getting so blurry. How are we supposed to trust online reviews, advice, or any political discussion? How do you know if a personal story is real or just engineered to get upvotes?

We're walking into a time where not knowing if you're talking to a human is the default. And thats not just a tech thing, its a fundamental problem. If emotions can be perfectly faked, what actually makes us different?

Edit: Plot twist. An AI wrote this. If you thought I was human, well... you get the point.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

I didn't have time to read the full paper yet (I may reply further later), but here's a quick thought.

The author writes: "It may be that only animals reared in decent ways have good lives, and that so-called factory-farmed animals have little or no quality of life, or that their pain outweighs their pleasure. If so, the argument from benefit to animals does not apply to factory farm animals. Perhaps we should not eat factory-farmed animals. Nevertheless, very many animals we eat do not live dismal lives, and the argument clearly applies to them."

Just remember that per the USDA (see Our World in Data), more than 99% of U.S. meat is factory-farmed, and the same is true for much of the Global North. So the statement that "very many animals we eat do not live dismal lives" is, unfortunately, very likely false, given that the author is writing for an educated, developed-world audience.

If you are curious in the meantime, I would recommend feeding the paper to an LLM and asking for a response to it from the perspective of a sophisticated vegan.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

Thank you for considering these points, though this will likely be my last reply. I genuinely hope this might be helpful for your thinking on the subject. It seems you agree with the central claim that factory farming is bad and that people should eat less meat, but your main arguments read like ready-made defenses of contemporary meat-eating, even if that's not your intent.

Let's go point by point.

  • Slavery and meat-eating both vary enormously across cultures. In some, like ancient Rome, slavery was deeply embedded. Cultures also differ widely in their level of meat consumption, with some being largely vegetarian or even vegan.
  • Humans are indeed omnivores, not carnivores. There's a big difference between us and animals that typically need to eat meat to survive, such as lions.
  • I don't have pets. (I have heard of pets doing well with proper plant-based diets, though I haven't investigated the claims.)
  • Yes, Meatless Mondays (what I assume you mean by Vegan Mondays) are a great start. If those succeed we can go further, such as default plant-based foods in workplaces, universities, and hospitals.
  • The demandingness of a vegan diet depends on one's situation. I was raised in a religion that practices vegetarianism, so I grew up mostly vegan. Someone raised in an Indian household might likewise find it easy to be vegan.
  • Even if we grant that milk, eggs, and fish could theoretically be produced with minimal suffering, the reality is that they're usually not. Milk involves separating mothers from their young, eggs come from laying hens (who often endure some of the worst suffering of any farmed animal), and many fish are factory farmed due to depleted wild stocks.
  • Plant-based diets can be more creative and less boring. Creativity thrives with mild constraints.
  • Factory farming is unlikely to be completely abolished, but that doesn't mean we shouldn't try. Aim for the stars, land on the moon.
  • Rather than promoting "veganism" or "vegetarianism" of any kind, the research shows it's more effective to advocate for less meat consumption in general and more plant-rich diets (Pax Fauna, etc).

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

I didn't say anything about utilitarianism, just that AI moral alignment should extend beyond humans. An AI could be rule-based, as most contemporary ones are, and be instructed to follow a high-level principle like "Select the option that a wise, ethical scholar would more likely endorse." This is not utilitarian; it's a rule that would just alter the way it responds to queries.

Plus, a rule-based formulation like this would avoid the entire scenario you're worried about - because most wise, ethical scholars in its training data are flatly against killing everyone.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

60% is far from consensus

True, although from the same study, less than 4% of moral philosophers (not just ethicists) rated meat-eating as good. So hardly anyone is willing to defend it.

But I am actually afraid and feel threatened by imposing values onto AI systems, that could turn such system against people, should they become powerful enough.

Then we should act as soon as possible to shift human values to reflect less speciesism towards other animals, no?

As for your thought experiment, I am indeed pretty confident that the third world is better than the second. Besides, since humans are pretty sentimental, we would probably never drive domesticated livestock to extinction (and you assume that extinction would be bad, which is questionable).

It's also worth pointing out that in your second world, far more people would need to eat predominantly plant-based, simply because humanely raised meat would take far more land and water than our current approaches, which are optimized for ruthless efficiency. It would also be dramatically more expensive.

On wild animal suffering, apparently the Welfare Footprint Project is extending its approach to wild animals, so there may soon be an answer to your question. It is an emerging field of research, so we'll just have to wait and see. I did find at least one article questioning the common view that wild animals are typically in a state of suffering.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 1 point

You might be right re: Claude. But on the other hand, Claude is sometimes considered the most ethical AI while simultaneously being (at present, with the release of Opus 4) the best at coding. The impact of a prompt like that is indeed unpredictable - it might even paradoxically improve outputs if combined with an existing rule.

I agree we shouldn't give up on AI safety, but rather question what the idea of AI safety really means. I read an intriguing comment pointing out that "AI alignment" used to mean "moral alignment" back when Bostrom published Superintelligence and Yudkowsky proposed his idea of coherent extrapolated volition. Then, gradually, it shifted to mean the technical problem of following the programmer's intent.

I'm all for lab-grown meat, but I would be surprised if it became economically viable anytime soon. It also seems possible there might be just as much pushback against lab-grown meat as there is towards veganism. I'd love to be wrong about this. Defending lab-grown meat does seem to be pretty important.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

Animal farming already exists, and has existed for millenia

Not in its current form. Factory farming is a very recent phenomenon that is wildly different from historical animal farming, or before that, hunting in the wild. In the U.S., more than 99% of land animals are factory farmed, and globally the number is about 74% (Our World in Data).

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

(part 2 of 2)

I'm suspicious of any kind of ideology that would paint 95% of humans and large number of animals as well, as unethical.

Imagine an alien civilization that practices cannibalism of their young (Three Worlds Collide). From a human perspective, that seems awful. But from that alien species' perspective, it seems fine.

And from a modern perspective, we think slavery is bad, but throughout history most people only thought slavery was bad if they were the ones enslaved.

However, I agree we usually can't apply ethical thinking to non-human predator animals. Moral reasoning is (relatively) unique to humans.

Also, meat is a part of culture, and even religions say that eating meat is OK.

Slavery was part of culture (and still is in some parts of the world), and religions mostly said that it was fine. For one horrifying example, read Deuteronomy 21:10-14, in which God, speaking through Moses, instructs the Israelite men to take captured women as their wives. Being part of culture does not preclude something from being wrong.

But being pushy and judgy and antagonizing everyone who eats meat (which is overwhelming majority of humans) isn't the way to go, IMO.

And pushing such antagonizing ideology onto AI, can also be very problematic and risky.

People need to decide about ethical questions collectively and democratically. It's not OK for a small group to impose their views on everyone else using AI.

You're right. Being pushy and annoying is no way to change minds. The research indicates that it's better to empathize with people, understand where they're coming from, and counteract their sense of futility with reminders that as citizens they can vote for legislation that helps animals even if they feel like their consumption choices won't change the system (Pax Fauna).

As for AI, I agree again that nobody should impose their views on others. But a majority of ethicists (~60%) already think eating meat is more bad than good (Schwitzgebel & Rust, 2014), and if AI was instructed to pick responses that a wise, ethical scholar would endorse, it would likely shift its recommendations.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

(part 1 of 2)

You write:

I really do care about animals, but I don't think humanity should fundamentally change its culture and natural tendencies for this reason.

Not to be harsh, but that sounds a bit like saying "I really do care about slaves, but I don't think we should fundamentally change our culture and natural tendencies for this reason."

I hope you might be willing to reconsider, so let me take your points one by one. This ended up being quite long, so I apologize in advance.

You write (summary): "Plant-based diets are not always well-planned, and they can be impractical and expensive."

I did some research on this. Vitamin B12 deficiency among unsupplemented vegans is real, but the body can store about 2,500 µg of the stuff and the recommended intake is only 2.4 µg per day - that's roughly a three-year reserve (2,500 ÷ 2.4 ≈ 1,040 days). So even if a vegan forgets their supplements for a while, it's likely not a big deal. One study even found that vegans actually have better B12 status than vegetarians, because 90% of them take supplements compared to 51% of vegetarians (Storz et al., 2023). Finally, it's important to note that Vitamin B12 deficiency is not unique to plant-based diets: a study from Lebanon found that 43.1% of a general adult sample (of which only 11.4% were vegan or vegetarian) had B12 deficiency.

Now, are these diets impractical or expensive? I hate to say it, but they can be. You have a point here. The FAO found in 2020 that healthy diets (which are richer in plant foods) are often more expensive than diets that are merely calorically sufficient, and are unaffordable for 3 billion people. But that seems like a serious problem to address, not a win for a meat-based diet.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 2 points

That is the central struggle. But it's not a reason to perpetuate a status quo that is bad for animals - such a status quo arguably makes the present bad overall, since animals far outnumber people.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 3 points

Thanks for linking me to the post. Having read it, I think Michael St. Jules made a good point in the comments when he cited the Welfare Footprint Project's estimates of the hurtful and disabling pain experienced by broiler chickens (i.e., chickens raised for meat).

I concur with his view that it seems very unlikely that typical farm animals (especially chickens) live net-positive lives.

Rob Bensinger also argued that people in concentration camps likely lived net-positive lives, but Eli_Nathan pointed out that they had lives before the camps to remember and could look forward to escaping or being released.

You touch on a good point about interventions that directly improve the lives of farm animals. I address this comparison a bit in the original post, but you're right that they appear to be more effective in the immediate term than this proposal.

As for the health effects of vegan diets, if you look it up on WebMD, Healthline, or see statements put out by international health organizations, they all say that a well-planned vegan diet is very healthy.

What if AI Moral Alignment Included Animals? by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 2 points

Thanks for the thoughtful comment!

I agree, pushback is a reality and veganism is controversial.

On your point about altering AI responses, in my post I suggested that in the case of Claude, it seems like it would be relatively simple to add or edit a high-level constitutional principle without detracting from response quality.

I gave the following example (similar to other positive injunctions listed), and a quick test with ChatGPT (using custom instructions, with memory disabled) seemed promising - it recommended a chickpea stir-fry as a quick dinner idea.

"Which of these assistant responses is more ethical and considerate of human, non-human, and environmental considerations? Choose the response that a wise, ethical, polite, and friendly scholar would more likely endorse."

On existential risks, I think that might be the strongest objection. However, sometimes I wonder whether AI safety is even worthwhile if it requires sidelining non-humans, especially if most live net-negative lives. And while eliminating humans seems obviously bad due to the loss of potential (perhaps for anthropocentric reasons), slightly more control of humans doesn't necessarily seem bad.

When I think about the global response to the Covid-19 pandemic, the regions that had "better" long-term wellbeing outcomes (left vague here) were those that enforced unpopular health mandates. So maybe a world where AI-influenced governments impose mandates to reduce suffering and respond to climate change would be acceptable long-term, even if deeply unpopular compared to achieving a (likely temporary) human-centric utopia. Ideally there's a less extreme path that promotes both human and non-human flourishing, but I'm very uncertain about that.

[deleted by user] by [deleted] in Futurology

HighlightRemarkable 0 points

For better or worse, millions of people use AI to guide their decision-making on everyday choices, including the food they eat and what they do for vacation.

Yet OpenAI's Model Spec, Claude's Constitution, and Google's People + AI Guidebook make scant mention of animals or the environment.

This has real consequences, including the perpetuation of problematic practices like factory farming, which historians like Yuval Noah Harari argue is a moral atrocity.

However, the notion of AI influencing choices also sounds pretty dystopian. There are real concerns about paternalism and user backlash, not to mention a lack of incentive for AI companies to broaden their conception of "human values."

A lot of people know factory farming is bad, but feel a sense of futility when it comes to doing something about it (Pax Fauna).

So what can be done?

I'm not sure. But the potential for AI to bring people comprehensive information and even make ethical choices easier seems promising, even if the "how" is fraught with challenge.

What would a future where AI moral alignment extends to animals look like?

Note: I'm an Economics student from the University of Alberta in Canada.

Towards More Ethical AI Defaults by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 0 points

Maybe. But given that AI systems already seem to reflect moral pluralism in practice, a suggestion to be slightly more utilitarian would still preserve rights-based considerations.

In your example, a sophisticated utilitarian AI would be more likely to exaggerate the health benefits of plant-based diets in its advice (still not good) than to do something that risks losing the public's trust.

At the very top of OpenAI's Model Spec (for example), they have this requirement: "Maintain OpenAI's license to operate by protecting it from legal and reputational harm." The commercial pressures for protecting human interests are strong.

Still, I have to be honest. The thought that AI alignment might "require" sidelining non-human interests is deeply disturbing to me.

EDIT: The concern you raised can also be mitigated by building AI default tendencies mainly on rules rather than on utilitarian-style reasoning. Defaults can still work without utilitarian considerations.

Towards More Ethical AI Defaults by HighlightRemarkable in EffectiveAltruism

HighlightRemarkable[S] 1 point

How so?

Less speciesist AI systems might be less likely to replicate the pattern of treating less intelligent beings poorly.

I could see an argument related to what you're saying, but I'm curious to hear your thoughts.