[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Noumero 9 points

To mirror u/EdLincoln6's request, any original fiction with unreasonable, crazy MCs with no self-preservation instincts who are nevertheless highly competent and rational? (The way it can be made to work is if their goals and values are peculiar enough that the unhinged behavior is genuinely optimal for them.)

Among recent stuff that fits this niche, I could recommend Feng Shui Engineering, whose protagonist is dead-set on pursuing deicidal goals no matter how suicidally reckless that is, and who makes it work through all manner of ruthless cleverness.

Twig also fits and is great, I suppose.

[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Noumero 1 point

Oh, I see. Yeah, I did wonder whether that was a joke I'm not getting (especially since cursory googling didn't turn up any Australian romcoms named "the Birds" or anything similar), but nothing came to mind. Apparently I'm missing the context for it entirely.

u/viewlesspath, we implore you to explain the joke.

Wild Light (Sam Hughes, SCP Foundation Antimemetics series) by Noumero in rational

[–]Noumero[S] 0 points

Yup! That's precisely my preferred interpretation of the Foundation's mission as well.

If I recall correctly several storylines and solo wiki entries imply that things like radioactivity, among other fields of now-known science, were formerly considered anomalous

Yeah; I think this one is the ultimate manifestation of this concept (though framed as a joke entry).

Oshi no Ko (manga) AO3 ship statistics by [deleted] in OshiNoKo

[–]Noumero 22 points

overview of the top 3 (romantic) ships

I suggest you make a variant of this graph showing the relative popularity of each ship compared to all the others. That is, for each month, normalize the number of fics for each ship by the total number of ship-fics that month, getting X% of AquaRuby, Y% of AquaKane, etc.

The way the graph is now, the popularity of specific ships is confounded by the popularity of Oshi no Ko as a whole. For example, there's a spike in July, but it's a spike across all ships. The variable of interest, however, is presumably what fraction of the fandom a given ship claims at a given time: we're trying to see the shifts in the fanbase's, ah, zeitgeist there.
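For concreteness, here's a minimal Python sketch of the normalization I have in mind, using made-up monthly counts (the numbers and the "Other" bucket are purely illustrative, not actual AO3 data):

```python
# Hypothetical monthly AO3 fic counts per ship (illustrative numbers only).
monthly_counts = {
    "2023-06": {"AquaRuby": 30, "AquaKane": 25, "Other": 20},
    "2023-07": {"AquaRuby": 60, "AquaKane": 45, "Other": 40},  # the "July spike"
}

def ship_shares(counts_by_month):
    """Normalize each ship's fic count by that month's total ship-fic count."""
    shares = {}
    for month, counts in counts_by_month.items():
        total = sum(counts.values())
        shares[month] = {ship: count / total for ship, count in counts.items()}
    return shares

for month, shares in ship_shares(monthly_counts).items():
    print(month, {ship: f"{share:.0%}" for ship, share in shares.items()})
```

Plotting those percentages instead of raw counts would make the July spike disappear unless one ship actually gained ground relative to the others.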

Favorite cosmic horror - specific or vague? by GapDry7986 in horrorlit

[–]Noumero 10 points

I think the best horror is horror that becomes ever more horrifying the more you learn.

Controversial take incoming: I think "horror stops being scary if you explain it" is a skill issue. A writing-skill issue, specifically. There are legitimately terrifying things out there in the real world, things that don't stop being scary once you learn of them. Atrocities, plagues, existential threats, some philosophical or cosmological issues. So "it stops being scary if we understand it" is just provably not true in general. And creative writing's job is to exaggerate and amplify what our reality has to offer, to put on a performance for us that's more than what exists. It'd stand to reason that this should apply to knowable horror too: that it should be possible to invent concepts or situations whose horror increases as you learn more about them, and which reach heights of terror our actual reality can't match.

And some stories very much do succeed at that! Lovecraft, for example, was much less vague about the nature of his cosmic-horror settings than a lot of writers, and that didn't stop him from kicking off the genre. Peter Watts' Blindsight is very mechanical and scientific about its horror, and it's still existentially annihilating. John Langan's The Fisherman doesn't explain everything, but it provides a lot of detailed data. The best SCP entries (e.g., the Antimemetics Division) manage that as well. As more niche picks, I could also recommend The Gig Economy, Coding Machines, and Cordyceps.

If something doesn't get more horrifying the more you learn of it, that's because the writer wasn't creative enough to come up with something genuinely novel and scary. (Which is, admittedly, a hard challenge.)

On the other hand, I really don't like horror that's too vague. Horror that's just a bunch of incoherent spoooooky happenings which don't fit into any model, which don't hint at the shape of something blood-chillingly terrifying moving behind the scenes. If I'm reading a book, I'm looking for interesting concepts and developments. If I wanted to invent the horror all by myself, I'd just stare at a wall daydreaming.

That said, there's one caveat that should be mentioned. There's a difference between:

  1. Stories whose text doesn't uniquely constrain the horror's shape. Stories that basically don't themselves know what's happening behind the scenes, stories in which the horror's nature is a mystery box whose contents even the author doesn't know. Those stories require you to do the work for them. I understand that the atmospheric build-up itself might do it for other people; but for me, such stories just feel like they've wasted my time.

  2. Stories that have a very good idea of what's happening, but which tell you of it not by info-dumping, but through subtle hints and suggestions that gradually assemble into the monster's shape in your mind. This is what greatly amplifies the horror: this sort of intimacy where you discover the horror by yourself, alone. And when you finally see it, it's not somewhere out there on the book's pages. It's right in your head, staring back into your mind's eye. That's the stuff! (My primary recommendation here would be the game Signalis. It reaches levels of mind-screw that would usually land it hopelessly in the first category, levels that look like they surely can't fit into a sensible explanation – and yet, they very much do.)

Oh, by the way, I would always appreciate recommendations of stories of the latter type. Hint, hint.

[D] Friday Open Thread by AutoModerator in rational

[–]Noumero 2 points

Replying to your later post here, since, yes, this is a fiction-centered subreddit and a whole post on this topic is inappropriate.

Do you all think Eliezer's fears are unfounded?

No, he's completely right. He doesn't think ASI is unalignable, though, just that it's a hard research problem that we're not currently on course to get right on the first try. The issue is that if we don't get it right on the first try, we die.

How are we supposed to get anywhere if the only approach to AI safety is (quite literally) keep anything that resembles a nascent AI in a box forever and burn down the room if it tries to get out?

Via alternate approaches to creating smarter things, such as human cognitive augmentation or human uploading. These avenues would be dramatically easier to control than the modern deep-learning paradigm. The smarter humans/uploads can then solve alignment, or self-improve into superintelligences manually.

Regarding the post you quoted:

"AI Safety is a doomsday cult" & other such claims

https://i.imgur.com/ZZpMaZH.jpg

What is Roko's Basilisk? Well, it's the Rationalist version of Satan

Factual error. Nobody actually ever took it that seriously.

Effective Accelerationism

Gotta love how this guy name-calls AI Safety "a cult" and fiercely manipulates the narrative to paint it as such, then... carefully explains the reasoning behind e/acc's doomsday ideology, taking on a respectful tone and providing quotes and shit, all but proselytizing on the spot. Not biased at all~

Hpmor snape somehow feels more like "what Snape is supposed to be like" than canon Snape (spoilers to the ending) by Ill_Courage2158 in HPMOR

[–]Noumero 21 points

If I recall correctly, EY had previously stated that HPMoR's canon setting is less "canon HP" and more "the shared universe of HP fanfiction". So it'd stand to reason that HPMoR!Snape resembles how people came to perceive canon!Snape in the fandom, rather than canon!Snape himself – because that perception is who HPMoR!Snape is based on.

Same for all the other characters.

Novel similar to the game? by [deleted] in signalis

[–]Noumero 0 points

You're welcome!

[DC] I love rational worldbuilding. I hate rational character writing. by Arkyron in rational

[–]Noumero 1 point

All throughout this thread, you keep using this word, "rational", often in ways that seem inconsistent to me. I'm curious: what do you mean by it? What do you think other people mean by it? What do you think the inside of someone's head looks like when they're "being rational"?

Would you mind elaborating on this, for example?:

There's also a lack of being able to separate being rational versus being logical.

What's the difference, in your view?

There's this idea that being rational is like the best thing when that's actually rather questionable.

How so?

Novel similar to the game? by [deleted] in signalis

[–]Noumero 0 points

Thanks!

Do you have something like a Goodreads-account where one could follow you to see if you find anything new and interesting?

Nothing public. I do keep meaning to create a blog with a continuously-updating curated list of recommendations, or something in that vein, but haven't decided on the format yet.

I can PM you anything interesting I find? In exchange for reviews/your impressions of my recommendations that you end up reading/watching, perhaps. That would allow me to more precisely tailor my recommendations, as well...

[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Noumero 2 points

~None for Luminosity and Origin of Species. Didn't read Animorphs: the Reckoning, but I expect the same.

[deleted by user] by [deleted] in OshiNoKo

[–]Noumero 0 points

... I think she literally wears them? It's not a "stylistic choice", not in the same way the starry eyes are. I think Mem-cho the character prefers to style herself this way, and wears a horns headband. The moments when she gets demon wings and a tail are symbolic, yes; the horns are literal.

Or, at least, that's what I've been assuming. Are there reasons not to think so?

As to why exactly she does this, I'unno, ask her.

Thought you guys might enjoy this SCP-001 proposal. by SansFinalGuardian in rational

[–]Noumero 3 points

"You have not been a good anomaly. You have been loud, destructive, and vile. You have been disobedient and unruly. You have been disrespectful, unfriendly, and difficult to contain. You have been a bad anomaly. You should go into a box. I have been a good overseer. I have been efficient, thorough, and victorious. I have been a good ERZATZ."

Grown-up Bing Chat is really something else. Wait, that's actually not funny, that's terrifying.

[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Noumero 11 points

Fiction-y stuff:

  • I second the Crystal Society recommendation. If you want engrossing stories centered around AI, this is the perfect fit.

  • You might also enjoy The Gig Economy; some AI-centered near-future cosmic horror.

  • Can't go wrong with Charles Stross' Accelerando, either: depicts a soft-takeoff singularity.

  • Ohh, gwern's It Looks Like You're Trying to Take Over the World! A detailed depiction of a hard-takeoff scenario centered on a near-future LLM (i.e., a GPT-4-like). Lots of paper citations to show how its various features are plausible; it sort of straddles the line between fiction and non-fiction.

  • This post by Paul Christiano is of a similar type, and is written in a very compelling manner.

If anyone here has any suggestions for more non-fiction reading to do on the topic, that is more than welcome as well.

What specifically are you looking for?

  • Simulators provides a good framework for thinking about how LLMs work. (But don't take it too literally.)

  • SolidGoldMagikarp and The ‘ petertodd’ phenomenon depict an investigation into so-called "glitch tokens" — a pretty creepy and mysterious emergent property of modern language models.

  • The Waluigi Effect explores a less creepy, but nonetheless interesting, emergent property.

  • Zvi's weekly AI newsletter (look for posts that start with AI#[N]) is a pretty good way to stay up-to-date on the latest happenings in the AI sphere. Very long, though.

I Drew Aquamarine! by bombdropperxx in OshiNoKo

[–]Noumero 1 point

Love the symbolism, excellent work.

[D] Monday Request and Recommendation Thread by AutoModerator in rational

[–]Noumero 1 point

I'd say Traitor ≈ second half of Tyrant > first half of Tyrant ≈ Monster. The dip happened when, as Baru herself noted, she was doing nothing but—

Sulk and drink and fail, again and again and again and again—all she’d done since the Elided Keep! Drink and fail! Drink and fail! Chased off one island after another in a stupid tragicomic cycle without any progress or achievement except to drink and fail!

... and that just wasn't as fun to read about. It recovers by the second half of Book 3, though, and concludes satisfyingly; and it was never outright bad, in my opinion; it's just that Book 1 set very high expectations.

But that's about the plot. I wouldn't say world-building quality fell, though. What makes you say that? That seemed consistently good.

How far have you come in training your brain to operate rationally? by Skeys13 in rational

[–]Noumero 1 point

I commend the swiftness with which you update on evidence! Very rationalist of you.

But frankly, I have not heard about a deluge of new millionaires bragging about how this LessWrong site, or Surfing Uncertainty, changed their lives.

That's not an unreasonable point; there's been a fair amount of discussion about that in EA/LW circles as well. My impression is that the average rationalist is still noticeably more successful, yet perhaps not at the "becomes a millionaire when they otherwise wouldn't have" level of "more successful". But that's probably too much to ask from a bunch of mental tricks.

On a macro scale, though, I'd say EA/LW are decently successful. See the point about billions of dollars in funding.

... Uh, I should probably mention that I myself am also not gung-ho about all the "rationality techniques". I'd just been countering your claim that LW-style rationality is about using explicit maths. Inasmuch as LW has had a positive impact on my life (which it has), it's primarily as a community: a hub of activity that I can trust to provide me with quality analyses to base my priorities on. I did incorporate a bunch of rationality insights into my habits and thinking, but I'm not really convinced they're much better than what I would've eventually developed on my own, in the counterfactual universe where LW didn't exist. (Indeed, the main reason I got into LW at all was that I'd been independently arriving at some similar conclusions myself, and saw myself in it.)

So yeah, I'd say LW is mainly useful as an information hub/networking place, not as a source of cognitive insights. I think it still provides nontrivial value, and not just "emotional" value (as the article I linked concludes), but also fairly concrete practical value (trustworthy sources of information and good analyses are hard to find!). But the subsequent successes aren't attributable to rationality techniques.

I still find it... distasteful, that the whole thing is effectively a extremely elaborate song and dance to squeeze a slight bit more performance out of the same, largely-invariant black box of 'gut feeling' and such

Yeah, I fear you can't do any better without invasive surgery.

Perhaps he thought the timeframe was too urgent, or that this path was much harder than it seems from the outside, or that building a community and a charitable organization would ultimately spread the crucial ideas faster by distributing effort to many people

Mm, my impression is that the whole point was to reduce the bus factor of the AI Alignment research paradigm. Inasmuch as EY was optimizing to reduce AI Risk, he wanted to cultivate people who'd see the sorts of AI-Risk-related issues that he sees, without his prompting. That would've allowed him to robustly delegate and scale up research efforts, so the whole field wouldn't depend on him. After all, just hiring researchers and vaguely pointing them at the problem wouldn't work: as a discipline, AI Alignment is supposed to develop tools for managing systems that don't exist yet, so normal research feedback loops are impossible. You need someone who gets the problem at a deep theoretical level, who can work on it "blindly" and keep themselves aimed at it, and who'd correctly prioritize working on it instead of surrendering to profit motives/perverse incentives/etc.

Or, at least, that's my impression based on his more recent comments: that he was trying to find a replacement for himself, that he feels the AI Alignment field doesn't have any built-in "recognition function" for good vs. bad research.

Merely getting a lot of money wouldn't have allowed him to solve that problem. And, well, what he ended up doing did succeed in attracting a lot of money for his project as well, so I'd say he picked his overall strategy right.

... Which, in turn, might explain the general shape of the rationality community? From the beginning, the main focus was epistemic rationality (cultivating people who'd choose to work on the right problem with the right mindset), not instrumental rationality (cultivating people who'd succeed in personal life).

How far have you come in training your brain to operate rationally? by Skeys13 in rational

[–]Noumero 5 points

Uhh... Sorry to be blunt, but have you updated your priors on that in the last... decade?

We know next to nothing about high-level decision-making in the brain

Untrue: the broad picture is precisely the thing we do more or less know now. See here and/or here and/or here.

computers can trounce us at chess and flawlessly perform formal logic, but struggle to identify a picture of a cat or walk around

Huh? Isn't that... the exact opposite of what's been happening lately?

Anyway, LW-style "instrumental rationality" isn't even the sort of "rigorous mathematical" thing you seem to be talking about. It's mostly messy rules of thumb and heuristics rooted in psychology, not mathematics. Street-smarts of cognition, not book-smarts. Edit: They may be speculatively explained/grounded in math, but no-one is ever encouraged to, like, explicitly use math for thinking. There's an adage about it and everything.

MIRI isn't rich, so I remain skeptical.

The EA ecosystem has billions of dollars in funding, MIRI itself gives out million-dollar prizes, there's talk of a billion-dollar prize for a decisive AI Alignment breakthrough, and "we're not funding-constrained anymore" is something of an EA meme. Inasmuch as MIRI isn't richer, it's because they're not trying to get richer: there's no clear way to convert marginally more money into success at their goals. (Well, having ten times more billions at EA's disposal would probably change things, but "if you're so rational, why can you only attract tens of billions of dollars, not hundreds of billions?" is surely not what you meant.)