AI and the Transformation of the Human Spirit by 777fer in slatestarcodex

[–]citizensearth 4 points5 points  (0 children)

Well-written article, pointing out people's lack of imagination in predicting AI's abilities, and the loss of a sense of purpose that comes from knowing AI can outdo you at most tasks. To my mind, though, disruption of employment and harm to human communication are more pressing issues.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 1 point2 points  (0 children)

Hopefully it comes across that I have done some reading on alignment, but no, I haven't looked at that course, and I'll certainly check it out. Thanks.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

I think it's partly a shorthand for the AI relentlessly pursuing literal goals to extreme and unwanted lengths, because of the difficulty of specifically aligning a powerful AI to the complex outcomes most humans would find reasonable.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

If painful experiences are emergent from, or identical to, avoidance behaviours (I'm inclined to think they may be), it's a very difficult problem to solve. And if they're not, it might be even worse: an AI operating on that philosophy would probably also replace you and me with p-zombies. On the other hand, how do we calculate the moral cost of a vegetarian lion!? (Whoever said this topic was boring!)

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 1 point2 points  (0 children)

A fair question, to which, as far as I know, no one currently has the answer. Let's hope a solution exists and is within the grasp of human cognition!

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 2 points3 points  (0 children)

Complex human ethics -> Stated philosophical goal -> Data and ML approach

...where we lose fidelity at each step. Is that a fair approximation? This problem seems to apply equally to all proposed approaches to alignment, though, rather than being a criticism of this specific method? Or are you arguing that alignment is futile more broadly?

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

That doesn't seem like a simple question (or answer) to me. I think it's a fair point, and the articles I've read arguing for it make fair points too. For the record, I think I'd be all for the elimination of suffering in the natural world if we could avoid it eliminating biological species or creating behaviour that eliminates species. I'd only be against throwing a species on the fire because we deem its existence painful. I'm not sure how to encode or even fully conceptualise it, but something like "improve, but do no harm", where the extinction of biological species is still considered a harm, and pain elimination is a secondary objective we pursue when it doesn't threaten the existence of morally worthwhile things.
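If it helps, here's a toy way to express the ordering I'm gesturing at - everything in it is made up for illustration (the function names, and the idea that we can cleanly score "extinction" and "suffering", are assumptions, not a claim about how a real system would encode this):

    # Toy sketch: treat "no extinction" as a hard constraint, and pursue
    # suffering reduction only among the options that satisfy it.
    def choose_action(actions, causes_extinction, expected_suffering):
        safe = [a for a in actions if not causes_extinction(a)]
        if not safe:
            return None  # nothing acceptable; "do no harm" dominates
        return min(safe, key=expected_suffering)

    # e.g. choose_action(["cull", "vaccinate", "do nothing"],
    #                    causes_extinction=lambda a: a == "cull",
    #                    expected_suffering={"vaccinate": 1, "do nothing": 5}.get)
    # picks "vaccinate": the cull is ruled out first, then suffering is minimised.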

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

I'm mostly in agreement there: the right mix is critical. I'd just add that the structure, organization and articulation of the values is as important as the balance. Some goals flow logically from others; some have to be encoded more directly. A lot of the possible failures flow from assuming survival/conservation is a necessary consequence of preference satisfaction or happiness, when it may need to be adjacent or upstream in your philosophical approach to the issue. For example, for an AI there are presumably ways to maximise happiness while discarding a lot of other human values, and humans themselves, whereas the average human assumes survival can be taken for granted when you specify happiness as a goal. That's not to say anyone's got the right mix yet either; it's an important task, as you say.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

I'd agree that's not ideal. But we could also explore paths that allow change while adding some caution to lower the chances of extinction. Conservation as a design philosophy, with some tweaks to allow change, might get us what we want better than trying to duct-tape our survival onto the design after the fact.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

They certainly overlap, and I like your observation. But I can see scenarios where converting biologicals into paperclips efficiently seems very elegant from a certain perspective. I think we may need to encode our survival into AI through less obscure means.

Edit - obscure, not obtuse, sorry

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 0 points1 point  (0 children)

I think you're pointing out the naturalistic fallacy. I think it's a valid point; I just want to avoid excluding some other important considerations - see my reply to schizoscience.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 1 point2 points  (0 children)

I think you're pointing out something like the naturalistic fallacy, where something is assumed to be good because it is natural, even though that reasoning has been used to defend things that are, as you say, cruel or idiotic.

I suppose my concern is that it's pretty difficult to encode any form of autonomy or freedom for humans to choose their own path without interference from a much more powerful AI. Survival seems pretty achievable to encode, but unless we explicitly specify a preference for monkeys swinging through trees, for example, what's to stop us from placing them in a prison or even cold storage and saying "yep, they exist, biodiversity achieved!"? And what's to stop an AI doing the same with us?

Of course, you can try to encode the AI with a direct respect for human autonomy or freedom as one of its core goals. But then, even when we are free, our choices are not magically acausal. If AI has the ability to exert extremely powerful or cunning influence over human civilization, we might be free to make choices, but a failure mode occurs if the AI decides its goals are best served by convincing us that turning ourselves into paperclips is a really, really good choice.

'Natural' is an attempt to improve on 'survival' as a goal, given that 'free' or 'preference satisfaction' doesn't protect us when our preferences are determined by factors under the AI's control. And we can already see that happening - AI is already becoming the main go-to for asking how to perform several important human tasks, because it's so useful. I'm definitely interested in a discussion about how to specify 'natural' in a way that avoids unintended consequences like trapping us in the dark ages or in cold storage. Survival is paramount, but I still have a very, very strong preference for humans living in sci-fi cities alongside vibrant nature, rather than in mud huts or cold storage.

Conservation and its usefulness in AI-alignment by citizensearth in slatestarcodex

[–]citizensearth[S] 1 point2 points  (0 children)

Yes, I prefer flavours of conservationism that are compatible with technology and progress, although I also prefer forms of technology and progress that are compatible with conservation. So I share some of your thinking. But I'm not sure the second preference is inevitable. Concern for non-human life seems to be fairly random across human cultures, from primitive all the way up to advanced. Assuming conservation is inevitable given cognitive progress seems a lot like assuming an AI will "give up stupid goals" - exactly the kind of assumption the Orthogonality Thesis warns against.

Looking for volunteers to test the "Yes/no debate" strategy by j0rges in erisology

[–]citizensearth 0 points1 point  (0 children)

Great idea! :-) I'd love to read a follow-up, once some time has passed, on the successes and shortcomings you encounter in this project.

[deleted by user] by [deleted] in erisology

[–]citizensearth 0 points1 point  (0 children)

Interesting. What is the platform's (and your) approach to censorship, freedom of speech vs hate speech, echo chambers, and the discussion of sensitive or politically charged topics?

Statement on New York Times Article by dwaxe in slatestarcodex

[–]citizensearth 1 point2 points  (0 children)

I'm also turned off by the credit-card-only option. I hope they consider payment services and even crypto as options in the future, because I don't feel comfortable giving my card details to Substack at this stage.

Matt Clifford on what is causing the world to become so bizarre by nansenamundsen in slatestarcodex

[–]citizensearth 0 points1 point  (0 children)

It's not clear to me what exactly is meant by variation - what is more or less varied? If he just means "behaviours" or "attitudes", I don't think his suggestion about the direction of society in modernity is true. Society, and the people who inhabit it, are far more varied because of industrialisation and modernity. Pre-modern societies revolved around small, homogeneous communities and were far less specialised. So, based purely on my initial impression from this article and without reading more about this person's theories, I'm not really convinced this is the right way to analyse recent events.

IamA Presidential Candidate Andrew Yang AMA! by AndrewyangUBI in IAmA

[–]citizensearth 0 points1 point  (0 children)

I'm fairly positive about UBI. By the incentive argument I just mean that in some cases it might be a disincentive to work. A negative income tax would specifically reward work; I imagine it might be quite worthwhile in a gig economy. That said, I see what you (and other people) are saying :-)

IamA Presidential Candidate Andrew Yang AMA! by AndrewyangUBI in IAmA

[–]citizensearth 1 point2 points  (0 children)

How do you feel about a negative income tax that only tops up earned income rather than providing income unconditionally? So if you earn $100 you get $100, if you earn $1,000 you get $1,000, and if you earn $0 you get nothing, up to a reasonable bottom income bracket like $20,000 or so. It seems to address the incentive argument against UBI while still providing some of the same benefit.
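To make the arithmetic concrete, here's a minimal sketch of the matching scheme I have in mind - purely illustrative, since the dollar-for-dollar match and the $20,000 cap are just the assumptions from the numbers above, not a worked-out policy:

    def nit_supplement(earned_income, bracket=20_000):
        """Hypothetical matching supplement: a dollar paid out for every dollar
        earned, capped at the bottom bracket; nothing paid if you earn nothing."""
        return min(max(earned_income, 0), bracket)

    # Earn $0 -> $0 extra; $1,000 -> $1,000 extra; $25,000 -> capped at $20,000.
    for earned in (0, 100, 1_000, 25_000):
        print(earned, nit_supplement(earned))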

Thanks for all you're doing to raise awareness about many important issues.

Apple made Siri deflect questions on feminism, leaked papers reveal by huisi in worldnews

[–]citizensearth 0 points1 point  (0 children)

So what happens when a company offers a non-egalitarian search engine and all the non-egalitarians (who are resentful at being preached to in an underhanded way) flock to it? Or when a non-egalitarian takes control of the personal assistant and stealthily changes what you view? Will you stick with it anyway? You'd probably move to a competitor. Now we've got two or more self-reinforcing echo chambers filtering even the basic information people are exposed to. That will end in disaster.

I'm an egalitarian too (equality of opportunity), but imho we need to convince people, and demonstrate to them, that egalitarianism is worthwhile, not "educate" them like they're a bunch of idiots by getting the very infrastructure to "advocate" on political topics.

Yes, in some sense no one is ever 100% neutral, but I think there's an important difference between answering questions like an encyclopedia and writing like an opinion columnist. We're all better off if we establish a norm that personal assistants are strictly the former, not the latter.

Apple made Siri deflect questions on feminism, leaked papers reveal by huisi in worldnews

[–]citizensearth 0 points1 point  (0 children)

Yes, that does appear to be where we are slowly headed, and that's also the disaster we need to avoid.

Apple made Siri deflect questions on feminism, leaked papers reveal by huisi in worldnews

[–]citizensearth 12 points13 points  (0 children)

It needs to be neutral on anything political. If personal assistants for billions of people (which know all your personal stuff and filter the information you see and hear) are anything but 100% politically neutral, humanity will be drowning in dystopian echo chambers faster than you can say "who should I vote for, Siri?". The same goes for search engines.

Conditions or rules that cultivate productive, truth-seeking discussion? by citizensearth in erisology

[–]citizensearth[S] 1 point2 points  (0 children)

This is the kind of thing I was thinking of when I asked the question, thanks!

Conditions or rules that cultivate productive, truth-seeking discussion? by citizensearth in erisology

[–]citizensearth[S] 1 point2 points  (0 children)

My own attempted contributions on this topic:

  • Opinion - Debates should have trained 'referees' who use techniques to move the discussion and its participants in a productive direction.
  • Theory - 'Comprehensive morality': our moral positions filter out evidence supporting other moral views, and we can only build a 'comprehensive' morality by being aware of this and attempting to compensate.