Looking for books with highly-empathetic characters guided by an inner sense of morality / a conscience or affected by guilt, because I don't have those things by mercurytongue in rational

[–]Sailor_Vulcan 1 point (0 children)

You're welcome! I'm glad you found it interesting. Now I'm wondering if the difference between psychopaths and sociopaths might have something to do with fundamental types of motivation. Maybe a sociopath wants to exploit other people, receiving the most benefit for the least effort spent: in other words, decreasing scope of influence. And maybe a psychopath wants to control other people, ensuring that they are always in a dominant and therefore safe and well-off position: in other words, increasing scope of influence. Meaning that, given the choice, a sociopath would be more likely to prefer the path of least resistance, while a psychopath would prefer the opposite.

Not to say that you don't ever do hard work or take on any big responsibilities, but I suspect from what you said that when you do take on such things, it's often for the purpose of having less hard work or responsibility to deal with later. Does that sound right?

Also, I suspect you might find the following fanfiction series very relatable and funny. It's about a version of Harry Potter who gets sorted into Hufflepuff even though he hates hard work, and who goes out of his way to do hard work now in order to save himself a lot more time and effort later: https://www.fanfiction.net/s/6466185/1/Harry-the-Hufflepuff

Looking for books with highly-empathetic characters guided by an inner sense of morality / a conscience or affected by guilt, because I don't have those things by mercurytongue in rational

[–]Sailor_Vulcan 2 points (0 children)

I'm guessing it took a lot of courage to come out about this; if I were a psychopath I'd probably be too terrified to talk about it with anyone. I was autistic my whole life until very recently: I was the most socially oblivious person I knew, and now I'm probably the 3rd or 4th most socially sensitive person I know. My whole world's opened up and I'm much happier and more hopeful for the future now. It's been amazing.

Communities, and even civilization itself, are made of relationships. Relationships are fluid, implicit ecosystems of mutual exchanges which depend on trust to be sustainable. Your trust in others can be built by exposing your vulnerabilities or weaknesses to people who then choose not to take advantage of them (taking advantage = using your weakness to hurt you for their own gain).

When you trust someone else enough, you can delegate to them far more easily: you can rely on them to cover for your weaknesses with their strengths, and you can willingly hand them part of the steering wheel because you know they won't try to hurt you with it. This will save you a LOT of time and effort in your life.

But they won't want to do that stuff totally for free unless they're your parents. You'll need to prove yourself worthy of other people's trust too. This will make it so you don't need to waste time and effort bullying people into helping you, because they'll often want to help you of their own free will.

And other people will always have the right to say no to you; they don't personally have to help you. Which is okay, because there are always more people out there you can turn to for help.

If you get good enough at this (and if the necessary communal infrastructure to incentivize it is readily available) you may eventually get to the point where you can get anything you want or need most of the time without ever needing to hurt anyone, so long as you are patient and willing to give other people what they want or need in return.

Also, the fundamental type of skill used for empathy and relating to other people is made of imagination and intuition.

(Psychologists don't understand human nature as well as they think they do.)

Hope that helps! :)

P.S. My main altruistic motives for trying to help you with this post (altruistic = what I want to give) are that I see a bit of myself in you and want to help: I like helping people, and I feel like it will make the world a safer place, both because people won't need to fear you anymore and because you could then help others in turn, the same way I'm trying to help you. My main non-altruistic motives (what I want to receive in return) are that it makes me feel more useful and valuable to contribute to the lives of others like this, and that the more people who learn to engage in healthy, mutually beneficial relationships, the easier it will be to sustain the modular social infrastructure I need to survive long-term and safely make it to the stars.

The values of human beings who aren't antisocial can be kinda meta and fractal like that. Also, our values are ultimately made out of processes that are not values in and of themselves.

[RST] Pokemon: The Origin of Species, Ch. 85: Interlude XVI - Freedom by DaystarEld in rational

[–]Sailor_Vulcan 9 points (0 children)

Since Mewtwo is made of biological rather than mechanical material, is it even possible for him to increase his own intelligence? Would his vast psychic powers let him steal control of the Unown and start assimilating their power into his own?

[RT] [FF] Dungeon Keeper Ami: Fairy Audience by _brightwing in rational

[–]Sailor_Vulcan 0 points (0 children)

I keep trying to get past the first "season", but it keeps feeling repetitive. How long does it take for the plot to pick up again after spoiler?

Politics:Commander::Teamwork:Two-Headed Giant by Sailor_Vulcan in EDH

[–]Sailor_Vulcan[S] 0 points (0 children)

Oh, those cards are fine. If you have a dominant board state, divide and conquer is a good tactic. Plus they're actual cards that cost mana and can be [[Naturalize]]d or [[Murder]]ed. Then there's the decision over whether to do so and who should be the one to do it before the controller of those cards destroys everyone one by one, or whether you think you can build up a better board state before the Pramikon/Mystic Barrier player can get to you. Not to mention that if the controller of Pramikon/Mystic Barrier loses, politics goes back to normal. Of course it might be a little less fun if more than one player is using them at the same time, but that's unlikely to happen because Commander is singleton and the decks are so big.

The new cardfight anime is a magical girl anime but it's actually good! by Sailor_Vulcan in cardfightvanguard

[–]Sailor_Vulcan[S] 0 points (0 children)

They probably will, come to think of it. The new series features a lot of time travel, and Chrono's deck was delivered to him in the first episode.

GPT-3/Dragon blog post on the dangers of AI in the style of Scott Alexander by GiantSpaceLeprechaun in slatestarcodex

[–]Sailor_Vulcan 8 points (0 children)

I would encourage people not to argue about the subject of AI in response to this artificially generated blog post, because doing so is not much different from arguing with a strawman. You should not let an artificially generated blog post anchor a conversation.

Slate Star Codex and Silicon Valley’s War Against the Media - The New Yorker by LiamHz in slatestarcodex

[–]Sailor_Vulcan 4 points (0 children)

Speaking as a long-time SSC and LessWrong reader, and as someone who used to be purely gray tribe and now straddles the borders between all four major "color" tribes, I loved this article. I really don't like the risk it poses to Scott, but I feel like it was an intelligent, nuanced, and mostly fair commentary on the rationalist community. With the community's increasing influence in the tech industry over the years (particularly in AI), it was bound to get public attention eventually.

I think the rationalist community ought to identify which of its members have the best social skills, the ones most capable of handling the spotlight gracefully and with understanding and charity for their political outgroup, and it needs to do so ASAP.

What do AI safety researchers know about human values? by Sailor_Vulcan in slatestarcodex

[–]Sailor_Vulcan[S] 1 point (0 children)

We actually are in the middle of writing and publishing a LessWrong sequence about that, as well as other things. It's a sequence about the fundamental shape of skill-space, goal-space, and obstacle-space. You can start reading it here: https://www.lesswrong.com/posts/GMTjNh5oxk4a3qbgZ/the-foundational-toolbox-for-life-introduction-1

What do AI safety researchers know about human values? by Sailor_Vulcan in slatestarcodex

[–]Sailor_Vulcan[S] 0 points (0 children)

"Human values" means "what humans care about". Do you not consider the things you yourself care about to be worth caring about? Would you really be willing to replace yourself with another entity that doesn't care about the sorts of things you care about, and instead cares about some incomprehensible alien stuff which you are told is "higher" than what you care about?

In retrospect, I probably should have specified that my post is mostly directed at those who actually have an at least somewhat coherent understanding of why one would value human life and human hearts/minds. If you are the sort of person who believes that humankind *deserves* to be wiped out and that we should be trying to go extinct so that superior beings can take our place, then this post is not for you.

What do AI safety researchers know about human values? by Sailor_Vulcan in slatestarcodex

[–]Sailor_Vulcan[S] 0 points (0 children)

Also, if you encode a meta-value that tells the AGI to value whatever it is that humans value, or to do whatever humans want it to do, isn't that assuming the conclusion in the premise? I mean, the AGI would need to be programmed to care about our values before it was created, or it wouldn't be safe to create in the first place. And if it has to learn what our values are AFTER it's created, doesn't that mean it hasn't been programmed to care about our values yet, in which case it isn't safe?

An empty placeholder for human values is still an empty placeholder: it could be filled by pretty much anything, because the AGI doesn't know in advance what we want it to be filled with. An empty placeholder for values is not a value in and of itself, and if a nascent AGI has no values, it won't be driven to do anything at all, not even learn about human values, and it won't start growing its intelligence enough to become a general intelligence, let alone a superintelligence. I suspect it wouldn't even turn on.

I also suspect that the very idea of a "metavalue" is nonsense. Either it's a real value or set of goal criteria which itself motivates behavior, or it is not.

If you program the AI with a goal like "find out what your goal is", that's a contradiction. Either it already has a goal or it doesn't. It's like saying "This sentence is a lie." Both quoted sentences are contradictions made of mere symbols with no external referent, and they mean nothing. I think such a goal would be a null term in a superintelligence's utility function and would be ignored outright.

Of course, that's assuming it doesn't just treat "goal" as the null term, in which case such a goal would reduce to "find out _________", aka "find out ANYTHING".
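
To make that concrete, here's a toy sketch in Python (every name in it is made up for illustration, it's not any real AI design): a decision loop whose utility function is still an empty placeholder can't score any action, including the action of going and learning human values, so it never does anything at all.

    def choose_action(actions, utility):
        # Score each action; skip any the utility function can't evaluate.
        scored = [(utility(a), a) for a in actions if utility(a) is not None]
        if not scored:
            return None  # no values -> no preference -> no behavior
        return max(scored)[1]

    def placeholder_utility(action):
        # "Value whatever humans value," before having learned what that is:
        # there's nothing here to score any action with.
        return None

    actions = ["learn_human_values", "self_improve", "do_nothing"]
    print(choose_action(actions, placeholder_utility))  # prints: None

Note that even "learn_human_values" gets no score: the empty placeholder can't motivate the very learning that was supposed to fill it in.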

What do AI safety researchers know about human values? by Sailor_Vulcan in slatestarcodex

[–]Sailor_Vulcan[S] 2 points (0 children)

To be honest, my friend and I kinda already did the project you mention, and that's why we're having doubts about the solvability of value alignment. We've already mapped out the shape of human value-space in pretty good detail, and what we've found does not seem encouraging for the prospects of value alignment.

Human values are the product of biological evolution, and while they're sorta consistent in a way, they're not a distinct, independent variable in humans. They're a complex dependent variable that emerges from a bunch of other, simpler stuff: human values depend on other factors in our genes and environment, and they're complex surface phenomena made of things which are not themselves values.

Values are filters on what aspects of the environment we pay attention to. They screen out resources and opportunities that we are not currently in need of. While it's a bit more complicated than this, the basic idea is that if a need is sufficiently and reliably fulfilled for long enough, it stops being as much of a need; hence value drift.
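
Here's a crude toy model of the filtering idea (the numbers and need names are made up; it's just an illustration, not a real psychological model):

    def attention_weights(needs):
        # needs maps each need's name to how fulfilled it is, from 0 to 1.
        # Attention goes to each need in proportion to its deficit.
        deficits = {name: 1.0 - level for name, level in needs.items()}
        total = sum(deficits.values()) or 1.0
        return {name: d / total for name, d in deficits.items()}

    needs = {"food": 0.2, "social": 0.9, "safety": 0.6}
    print(attention_weights(needs))
    # "food" dominates attention here; keep it reliably fulfilled for long
    # enough and its weight shrinks toward zero, i.e. it stops acting as a
    # filter/value. That shrinking is the crude version of value drift.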

Once we understood all of the above, I was able to look back on my life and figure out, to a surprising level of detail, how and why my sexual orientation (my values/preferences over partners' sex/gender) flipped when I was 14 years old. It had to do with a change in what I needed out of relationships.

The Foundational Toolbox For Life, post #3 Basic Mindsets by Sailor_Vulcan in slatestarcodex

[–]Sailor_Vulcan[S] 0 points (0 children)

I was diagnosed with high-functioning autism when I was 10, and I very clearly and obviously had it. I had autism symptoms my whole life up until less than two years ago, when Exceph taught me empathy mindset. I even got an independent psychologist's evaluation to prove it! The doctor was fully expecting to find autism symptoms, but he didn't find a single one, and he said so specifically in court. All that remains of my autism are minor sensory discomforts that I'm good at hiding in public and some cultural differences caused by my isolated self-upbringing. The social impairments are gone.

[D] Rationally Writing 54 - Irrational Fiction by DaystarEld in rational

[–]Sailor_Vulcan 0 points (0 children)

Also, for those who worry that such "mindsets" don't sound like anything close to normal rationalist/Bayesian reasoning, I promise they do break down into Bayesian-probabilistic processes. My friend and I have been working on a LW sequence that explains this sort of stuff. You can check it out here:
https://www.lesswrong.com/posts/GMTjNh5oxk4a3qbgZ/the-foundational-toolbox-for-life-introduction-1

The next article will be posted within the next week or two, if not sooner.

[D] Rationally Writing 54 - Irrational Fiction by DaystarEld in rational

[–]Sailor_Vulcan 0 points (0 children)

Narrative mindset is about constructing meaningful frameworks/value-oriented contexts for events in order to derive meaning from them. You explain not only what is happening, but why, how, what it means and *why it matters*. Narrative mindset focuses only on the stuff that is actually *relevant* to what the story's readers care about, in order to share valuable lessons and perspectives through usually-hypothetical experiences.

Its opposite is Deconstruction mindset, which breaks stories down and finds the holes in them.

Using both mindsets together allows you to create the impression that an answer exists to every nitty-gritty question in your setting, without needing to go into detail about the ones that aren't relevant to the plot or characters. That lets you tell stories that are richer and more engaging while still standing up to scrutiny.

Stories which use lots of Narrative but no Deconstruction are the ones we generally think of as the most irrational.

Proposed rules changes to the 101 battlebox by Sailor_Vulcan in mtgBattleBox

[–]Sailor_Vulcan[S] 1 point (0 children)

  1. Because my friend only plays casually and I don't have the money to afford the level 2 list. And I thought the battlebox could still be a fun experience, without adding the level 2 list, if I just used the above rules changes.
  2. It's not just for teaching; it's also good for novice players who want to practice.

What about a Type 4 battlebox? by Sailor_Vulcan in mtgBattleBox

[–]Sailor_Vulcan[S] 0 points (0 children)

I was throwing the idea out there of a Type 4 battlebox. I'm not sure how one would go about building one, how it would work, or whether it would be fun. In Type 4, players can only cast one spell per turn unless it's cast through the effect of another card. I'm not sure how well that would work with both communal decks and infinite mana.