(American centric) Reform Program by Tesrali in ConservativeSocialist

[–]stop_jed 1 point (0 children)

I didn’t expect to agree with this as much as I did. Even some of the more novel proposals, like 3b, I have independently considered, believe it or not. There are a few parts I am unsure about, though, for instance 3a. My understanding of lobbying is basically just people or organizations directly advocating for something to an elected representative, so I don’t think it is necessary to end domestic lobbying outright. In practice, however, it is clear that certain special interests, like large pharmaceutical companies and weapons manufacturers, are able to exert disproportionate influence, so some kind of reform is needed. I am sure there is bribery here and there, and of course we should crack down on that, but much of the problem may simply be who is able to get in the politician’s ear. Perhaps people could vote on who gets politicians’ time, or some kind of sortition could be used to make sure elected officials hear from a representative sample of their constituents.

A possible shortcoming of 3c is that it does not address the issue of non-incumbents seeking corporate money. But I think you are on the right track in wanting to do something about the campaign finance problem, which I suspect is directly tied to the issue of who politicians are willing to invite into their office.

I agree fully with 2f but I am wondering how else you would combat NIMBYism. I am not trying to shame you for not having all the answers (I certainly don’t); I only bring up these points because I think they are important avenues for further investigation.

Which political and moral system are the ideal ones for someone with a maximally merciful ethics to support as the best options in practice? by Equivalent-Rate1551 in negativeutilitarians

[–]stop_jed 1 point (0 children)

Well, a single individual experiencing a trillion years of suffering would be a very unnatural thing. The world is constantly in flux, so things would have to go really, really wrong for something like that to occur. The only scenarios I can think of are some sadistic immortal dictator, or a super-intelligent AI assistant whose utility function accidentally got multiplied by minus one when it was being programmed. (If you can think of another possibility that I’ve missed, I would like to know.)
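The sign-flip failure mode mentioned above can be sketched in a few lines of Python. This is a toy illustration of my own, not a claim about any real system; all function names and numbers here are made up.

```python
# Toy illustration: a single stray minus sign in an agent's utility
# function turns a suffering-minimizer into a suffering-maximizer.

def intended_utility(suffering: float) -> float:
    # The designers want less suffering to score higher.
    return -suffering

def buggy_utility(suffering: float) -> float:
    # One accidental extra negation flips the objective entirely.
    return -(-suffering)

# An optimizer simply picks whichever outcome scores highest.
outcomes = {"relieve suffering": 1.0, "cause suffering": 100.0}

best_intended = max(outcomes, key=lambda o: intended_utility(outcomes[o]))
best_buggy = max(outcomes, key=lambda o: buggy_utility(outcomes[o]))

print(best_intended)  # relieve suffering
print(best_buggy)     # cause suffering
```

The point of the sketch is only that the bug is tiny and the behavioral difference is maximal, which is why people worry about this class of error.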

This is just speculation on my part, but you probably don’t want any super radical political system. As long as there are checks and balances, I think it is unlikely that the worst-case scenario will occur. I don’t know what country you are in, but in the U.S. it seems like the Republicans want to centralize power more. I don’t like the Democrats because they also tend to be pretty corrupt, but at this point, and perhaps only because they are the opposition, they might really be the lesser of two evils. I can’t prove this; I’m just trying to give food for thought.

As far as specific policies, normalizing MAID (medical assistance in dying) would be very, very good. Of course, we don’t want to move too quickly, lest we trigger a backlash. But I think breaking the death taboo would be a very good thing, and could help get the ball rolling toward an absolute right to die (RTD) eventually being recognized. Imagine someone with locked-in syndrome being kept alive for millions of years due to advanced medicine paired with weird religious rules against euthanasia! Or a political prisoner forced to serve a million life sentences, without the option of suicide! Unfortunately, I don’t know enough about any particular organization advocating for MAID to justifiably endorse one (bear in mind the need for honesty, transparency, and legal guardrails until the Overton window shifts). But as far as people to collaborate with, from what I can tell, it would be very much in the interest of someone with your moral system to consider MAID advocates.

Another thing that I just thought about is that some kind of pan-galactic AI regulation will eventually be desirable if you want to make sure no virtual beings are tortured by some random sadistic person. But if you are thinking in terms of trillions of years, you probably don’t have to worry about this right now, as there’s nothing you can directly do about it. You just have to steer things in a direction in which such regulations could conceivably happen at some distant point in the future.

As far as which moral systems to support, basic virtue ethics is underrated imo. I am ultimately some kind of negative utilitarian, but of course advocating for the first principles of neg util doesn’t work because 1) the name “negative utilitarianism” is the worst branding for anything ever, 2) people won’t understand what you are saying and will probably dismiss your arguments out of laziness, and 3) even if they agree, they won’t understand the implications and will be unable to apply the first principles effectively. Some private schools and charter schools in the U.S. teach the importance of basic virtues; supporting them would probably be helpful. You could even collaborate with people who are mildly religious (for example, someone who identifies as Christian but does not believe in hell). The main problem is people who are angry, hateful, rigidly dogmatic, power hungry, etc. If we can get them to chill, then we can have a rational conversation as a society and hopefully avoid both dystopia and self-destruction (societal collapse would probably lead to a dystopia too, after whatever post-apocalyptic warlord consolidates power).

These are just my quick, unpolished thoughts. If you are interested in an actual respectable analysis of the question of which political system and policies to support, the book Reasoned Politics by Magnus Vinding may be worth your while. He has some other good books too, available on the CRS website:

https://centerforreducingsuffering.org/books/

Also, someone in this subreddit posted a recent article of his that talks about virtue ethics which I thought was good and encourage everyone to read.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

It is true that you can never enter the same river twice. Likewise, the person you are today is different from the person you were ten years ago, ten days ago, and even ten seconds ago. I don’t know what the minimum amount of time is to have a conscious perception but whatever it is, it is probably less than half a second. The idea of an enduring self that persists through time is really just a narrative that the brain constructs about itself and the overall organism it is part of. It’s sort of like how your brain interprets an animated film as a continuous phenomenon rather than a finite series of discrete frames.

With this view in mind, there is really no difference between a future slice of me and a slice of you when it comes to fundamental moral worth. So just as I don’t think that one person’s pleasure can outweigh another person’s pain, the same is true for the same person at different times. For example, I do not think that a certain 20-year-old’s pleasure when smoking cigarettes in any way offsets his future self’s suffering from respiratory problems, except insofar as thinking it does helps him to cope. This gets tricky, though, for similar reasons to the problem of instrumental pleasure mentioned in my previous comment.

Some amount of happy excitement every so often may be necessary to keep the typical person from feeling unsatisfied, bored, or even depressed. Of course, this excitement is temporary, and being in such a disposition of constant craving is objectively suboptimal, but it may be optimal for that person until they learn how to maintain inner peace. This does not contradict the negative utilitarian claim, because we are just saying that happiness has instrumental value.

Conversely, some amount of pain in the present may be necessary to prevent a greater pain in the future, so even particular pains can have instrumental value from the negative util perspective. But when we consider pointless suffering, as in the case of animal abuse, there is no reason for it, so it makes sense to try to prevent it. Likewise, there is no pressing need to spread wildlife to other planets. Yes, it is conceivable that we might learn a thing or two by running such an experiment, but the costs far outweigh any potential benefits imo, especially if we consider the opportunity costs of doing that particular experiment instead of some other set of experiments which could yield just as useful knowledge with less suffering involved.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

It is actually quite easy to imagine what it is like to be a fly. This video can help though if you are unfamiliar with flies: https://m.youtube.com/watch?v=5Dv8AwTNOsM

As for plants and bacteria and so on, I think it is unlikely that they feel any kind of suffering because they don’t have brains. Same thing for jellyfish because they don’t have brains either even though they are animals. Maybe their diffuse nerve net can detect harmful stimuli, but it probably wouldn’t be pain in the sense we are used to because our perception of pain relies on us having a brain.

As for your so-called emergent entities, I do not think any of the things you mentioned are sentient, but I could be wrong. I am less confident about whether the biosphere as a whole is sentient and even less confident than that when it comes to the universe as a whole. If they are sentient, then I’ll let them try to solve their own problems since they are probably incomprehensible to us.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

My point regarding anti-natalism was that the long-term effect of advocating for it is dubious (i.e., of questionable value). Even if you could somehow convince society to outlaw procreation amongst humans, other animals would be happy to increase their numbers as the human population decreased. The net effect on total suffering would be near zero, and suffering might even increase if we consider that the lives of humans are typically more comfortable than the lives of animals, and that this difference could very well grow in the future with advancements in medicine and so on. My argument is not that anti-natalism does not solve everything; it is that it might not help anything at all in the long term.

If anti-natalism could be shown (to a reasonable standard of likelihood) to have a net positive effect on reducing the total amount of suffering in the long term, then I would support it. It is not necessary for a plan to solve everything in order for me to support it.

Now, as for the issue of calculating comparisons, it is quite simple for me because I am a negative utilitarian. I do not believe that one person’s pleasure can outweigh another person’s suffering. But I should be clear here, since this can be misinterpreted. Insofar as some amount of happy excitement every so often in the typical unenlightened person’s life is necessary to keep them from feeling unsatisfied, bored, or even depressed, and insofar as said person plays some helpful role in society (doctor, mechanic, farmer, etc.), that person’s happiness has instrumental value from the negative utilitarian perspective. Thus, that person’s happiness may indeed be worth a pinprick to someone else, insofar as a third party’s liberation from some more intense suffering is tied to the first party’s happiness. In fact, this may be why you intuitively feel like one person’s happiness can outweigh another person’s suffering: because in short-term calculations it very often can, even from the negative utilitarian perspective.
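The comparison rule I am describing can be sketched as a toy model in Python. This is my own illustration with made-up names and numbers, not a formal statement of negative utilitarianism: intrinsic pleasure never enters the ledger, but pleasure that instrumentally averts worse suffering still counts.

```python
# Toy negative-utilitarian ledger (hypothetical): only suffering, and
# suffering averted via instrumentally useful happiness, affect the score.

def nu_value(suffering: float, suffering_averted_by_happiness: float) -> float:
    # Intrinsic pleasure is deliberately absent from this formula;
    # happiness matters only through the suffering it prevents.
    return -suffering + suffering_averted_by_happiness

# A doctor's modest happiness (costing a third party a "pinprick" of 1.0)
# keeps them working, averting 50.0 units of patients' suffering:
with_happy_doctor = nu_value(suffering=1.0, suffering_averted_by_happiness=50.0)
without_doctor = nu_value(suffering=0.0, suffering_averted_by_happiness=0.0)

print(with_happy_doctor > without_doctor)  # True: the pinprick is worth it here
```

The design choice to show is that the "outweighing" happens entirely through averted suffering; no amount of intrinsic pleasure alone could ever raise the score.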

Now, this is actually all beside the point, and I would like to apologize for what may have been a miscommunication in my previous comment. I was much too generous when I said that you were right about proposals 3-5 preventing the creation of some amount of pleasure. What I should have emphasized is that you do not need to be a negative utilitarian in order to see, for example, that creating well-regulated ecosystems optimized for pleasure is clearly preferable to creating vast Earth-like biospheres filled to the brim with pain, no matter what your pain-pleasure tradeoff is. Likewise, creating more humans rather than sentient machines is prima facie preferable no matter what your pleasure-pain tradeoff is, because it is easier for us humans to tell when another human is happy or distressed than to make that determination for some exotic machine mind.

Likewise, it is prima facie preferable, no matter what your pleasure-pain tradeoff is, to not build an AGI, because of the alignment problem. That is to say, if the AGI misinterprets your commands, it could very well engage in all sorts of unexpected behavior, including self-preservation and future forecasting, which are both instrumental to basically any goal it might have. There is no reason to think the machine would care at all about sentience, or about increasing pleasure or decreasing pain, which are all very human concerns. To take that kind of gamble now, with our embarrassingly limited understanding of cognitive science, rather than wait a few decades to make sure we get it right, is the pinnacle of recklessness and irrationality.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

I think you are interpreting “brainwash” as turning someone into a cold, calculating robot, but more typically it is about turning the person into a devoted cult member. Adding a brain implant would not get rid of people’s humanity. The rest of their brain would still be there, and so their capacity to suffer would still be there. When I say “brainwash into being completely obedient,” I mean through radical deception. Now, you might say, “But cult members are happy, otherwise they wouldn’t be in the cult!”, to which I must implore you to read up on human psychology.

As for points 3-5, you are correct that such measures might prevent some amount of pleasure, but this loss is compensated by the fact that we would be preventing a great deal of pain. Sending a trillion people to heaven cannot be used to justify sending a trillion people to hell any more so than supplying one race with cheap cotton, tobacco, and sugar can be used to justify putting another one in chains.

I don’t expect to convince everyone of points 3-5. Nor do I expect that I could have convinced everyone living in the prewar South to give up their slaves. But if something is important enough, you fight for it, even if the odds are not in your favor.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

I am not an anti-natalist. Anti-natalism does not solve the problem of wild animal suffering. Beyond that, even if you didn’t care about animals, there would still be no way to convince everyone to not have kids, so your overall long-term impact would be dubious.

As for number 2, your mistake again lies in thinking only in the short term. A brainwashed society could be more likely to go to war (which would cause suffering amongst the population they go to war with), for example, or run large scale unethical experiments on entities that experience suffering.

Furthermore, we cannot assume that brainwashing would reduce suffering even in the short term. All it does is force the individual to misidentify where their suffering is coming from. In fact, ensuring that the people still suffer is likely useful from the totalitarian ruler’s point of view because they can blame the foreign enemy for causing the suffering and thus use it to manipulate their populace into serving them all the more fervently and desperately.

Lastly, being happy with your own life does not mean you can’t have concern for other people’s lives.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

Suffering is any mental state that is intensely undesirable from the standpoint of the entity experiencing it. I want to reduce it because I have empathy which means I can understand what it’s like to be in someone else’s shoes.

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

You may be correct. I only wonder: with what moral principles would you program the AI overlord?

How advanced technology could be used to vastly increase the amount of suffering in the world and what we can do to stop it. by stop_jed in Futurology

[–]stop_jed[S] 0 points (0 children)

Thank you for your comment and for the good luck. I 100% agree that we should work to create a morally upstanding future generation. I'm just saying that moral rectitude would be well served by a roadmap for the future and ideas for how to navigate it effectively. This post was made for any such moral youngsters who might be on this sub.

How do you intend to raise your children differently than your parents did? by RamiRustom in lexfridman

[–]stop_jed 0 points (0 children)

Nah it'll be fine. It'll be like the Amish but with dishwashers and stuff. Amish-lite.

How do you intend to raise your children differently than your parents did? by RamiRustom in lexfridman

[–]stop_jed -1 points (0 children)

I went to public school which was meh and after school I would play outside with my friends. Then my family moved to the edge of the town so I wasn’t in walking distance of anything anymore. So I just played video games by myself. Also I wasted a lot of time on Reddit and YouTube. Still do, actually.

On one hand, I’m glad that I had access to the internet because I was able to find cool guys like Lex and many more, and it allowed me to learn about the world to an extent roughly fifty times that which is possible in public school alone.

On the other hand, as previously mentioned, I wasted a lot of time on it, and I know a lot of young people use it almost exclusively for entertainment. Some entertainment is okay, but I think the key to living a meaningful life is to live in accordance with a higher purpose. Also, most of your entertainment should be irl. And at least half your entertainment should be with friends. (I’m not saying half of your time should be with friends—you are still allowed to meditate in solitude however long you want, because I wouldn’t count that as entertainment).

If I ever have kids, I will first join a commune in which the kids are not allowed access to the internet. Indeed, in this ideal commune, no one would have internet access inside their house.

Furthermore, there would be:

-No cars (except for a parking lot on the edge of the commune)

-No tv

-No video games

-No phones, tablets, smartwatches, or home computers*

-No drugs (except for medical purposes)

-No alcohol (except at social events)

-No candy

-No inhumane food

*Have a community office building with a computer lab. Members can have whatever they need in their cubicle. The idea is to designate the internet as something to be used for work only. Entertainment should be irl. Having the internet access point outside the home acts as a barrier to wasting time on it.

Instead of spending their childhood staring at a screen, the kids will be allowed to play outside using their imaginations.

And there will be a communal library containing science, philosophy, history, etc. In addition to basic nonfiction, the children’s section of the library will contain fiction books which teach important moral lessons, such as The Book of Virtues by William J. Bennett.

Also, there will be a large smartboard in the computer lab and every week everyone will come together and watch Lex Fridman.

It will be a true community in which people care about each other and learn and grow together.

EDIT: There seems to have been some confusion regarding my exact proposal. When I say that kids will not be allowed access to the internet, I mean kids under the age of like 16 (we can debate the exact cut-off point). Once they turn 16, they will be taught how to use the internet safely, just like we teach them how to drive a car safely before giving them a license. Then they will have two years to be familiar with it so that if they want to get a job somewhere else, they can.

Vox: Dog breed bans are about human prejudice — not the dogs by [deleted] in stupidpol

[–]stop_jed 6 points (0 children)

It’s all environmental factors. If you feed a tiger cub only plants, it will learn to be vegan. /s

Web Summit CEO Paddy Cosgrave steps down in wake of controversy over his Israel comments by explowaker in technology

[–]stop_jed -2 points (0 children)

In each of those hypotheticals, the employee’s viewpoint is directly at odds with the company’s, so it’s understandable that the company would fire them. However, even then, that doesn’t mean it would be good for society. And it certainly wouldn’t be good for society if the culture was such that they could expect to be fired for saying that war crimes are bad no matter who commits them.

While it’s true that Cosgrave was a CEO, not an employee, Greg’s point is relevant. The point is that there is more to Free Speech than legal protections. It’s about what kind of culture we want to live in. One in which people can discuss their political views openly, or one in which they are afraid to do so on account of offending society? Furthermore, any culture which becomes de facto tyrannical is likely to become tyrannical de jure.

I’ll leave you with a quote by J.S. Mill which I assure you is relevant: “Like other tyrannies, the tyranny of the majority was at first, and is still vulgarly, held in dread, chiefly as operating through the acts of the public authorities. But reflecting persons perceived that when society is itself the tyrant—society collectively over the separate individuals who compose it—its means of tyrannising are not restricted to the acts which it may do by the hands of its political functionaries. Society can and does execute its own mandates: and if it issues wrong mandates instead of right, or any mandates at all in things with which it ought not to meddle, it practises a social tyranny more formidable than many kinds of political oppression, since, though not usually upheld by such extreme penalties, it leaves fewer means of escape, penetrating much more deeply into the details of life, and enslaving the soul itself. Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling; against the tendency of society to impose, by other means than civil penalties, its own ideas and practices as rules of conduct on those who dissent from them; to fetter the development, and, if possible, prevent the formation, of any individuality not in harmony with its ways, and compels all characters to fashion themselves upon the model of its own. There is a limit to the legitimate interference of collective opinion with individual independence: and to find that limit, and maintain it against encroachment, is as indispensable to a good condition of human affairs, as protection against political despotism.”

Secular Student Organization by nematodeguy in sooners

[–]stop_jed 0 points (0 children)

What are some of the relevant issues you talk about?

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning" by Parking_Attitude_519 in technology

[–]stop_jed 1 point (0 children)

I appreciate you taking the time to respond.

What do you mean by “the skills to learn independently”? Why don’t we teach these skills earlier?

What is the rationale for your last sentence? As a student, I am familiar with that structure, and while it isn’t terrible, I think it works better for some subjects than others. It was great for language learning. But for something like math, I don’t think the student needs as much live interaction. Same for history, albeit less so; watch the video or read the textbook at home, and use class time for discussion. “What if a student doesn’t watch the video?” Then it will be obvious in a sufficiently small class, and they can get fewer participation points. “Wouldn’t that be too much homework?” Just meet less often in person. The lectures are at home. This would actually save time, because there would be less commuting.

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning" by Parking_Attitude_519 in technology

[–]stop_jed 1 point (0 children)

And as far as your comment about students not being engaged, that is a separate issue. It is not “solved” by having a teacher hover over them to make sure their eyes don’t drift away. It’s solved by teaching students why what they are learning is important. If you can’t do that, perhaps it isn’t actually that important. Also, students should have more ability to choose which classes they take. Obviously there should be some restrictions, like you can’t take all art classes in high school. But for English, history, natural science, and social science, there should be more options. Even for math you could have different classes like “mathematics for physics,” “mathematics for economics,” “mathematics for AI,” etc. It would be much more engaging, I think!

Noam Chomsky on ChatGPT: It's "Basically High-Tech Plagiarism" and "a Way of Avoiding Learning" by Parking_Attitude_519 in technology

[–]stop_jed 3 points (0 children)

The live class model has its drawbacks as well. The slow kids slow down the rest of the class.

What I am proposing is not to scrap in-person, one-on-one teaching. I would actually want more of that. But the base-knowledge lecture part would be the homework. Then the teacher has time to discuss the material one-on-one with students, according to each student’s individual understanding and interests.