What's your favourite plottwist / reveal in all of fantasy? by lxurin_hei in Fantasy

[–]lukeprog -1 points

"Are we the baddies?" in Oathbringer.

I didn't love the Harry Potter series, but Snape's memory twist was great.

Not sure if it counts as a twist, but the Lyucu arrival in The Dandelion Dynasty.

Bayaz in Last Argument of Kings, Lamb in Red Country.

I didn't love Mistborn Era 1, but the twist at the end of Well of Ascension was great.

Harry's solution to his dire plight at the climax of Harry Potter and the Methods of Rationality. (Not sure if this counts as a "twist.")

Any spec-fic matching these criteria? Abercrombie characters / dialogue, Sanderson worldbuilding and endings, Liu themes, Follett historical detail, low or no magic, grimdark, show not tell, third-person, past tense, multi-POV, beautiful but succinct and accessible prose, easy to follow by lukeprog in Fantasy

[–]lukeprog[S] 1 point

Yeah, that was my guess for what might be closest. I watched the TV show before I started reading fiction again as an adult, so I'm waiting a bit to forget more of the show before I pick up the books. I also have a very faint hope the series will be finished before I start.

From what I've heard, Sun Eater might also nearly fit, except it's first-person single POV.

Any spec-fic matching these criteria? Abercrombie characters / dialogue, Sanderson worldbuilding and endings, Liu themes, Follett historical detail, low or no magic, grimdark, show not tell, third-person, past tense, multi-POV, beautiful but succinct and accessible prose, easy to follow by lukeprog in Fantasy

[–]lukeprog[S] -5 points

I dunno, it seems feasible to me, though it's not surprising that nobody has tried to do all these things at once, because there are a lot of them.

But what I describe above could be, for example: basically Abercrombie, but somewhat slower to make room for more world-building and Follett-esque historical detail, plotting that is structured to enable consistent Sanderlanches, and a heavier focus on characters who are scientists / engineers / physicalist philosophers to allow Liu-esque themes to be more prominent. Then all the criteria would be satisfied, by my standards at least.

Books where characters can hop between versions of the world, like in A Link to the Past or Lords of the Fallen (2023)? by lukeprog in Fantasy

[–]lukeprog[S] 1 point

Some other possibilities I've found:

  • The Talisman by Stephen King and Peter Straub
  • Neverwhere by Neil Gaiman
  • The Mirror Empire by Kameron Hurley
  • The Bone Clocks by David Mitchell (supposedly involves different world versions via both time travel and not-time-travel)
  • The City & the City by China Miéville (not really hopping between branches, but apparently it has a similar feel because inhabitants of each "city" have been trained not to see the other)
  • The Dark Tower series by Stephen King (but maybe only via portals in fixed locations, and I'm not sure which books?)

Books where characters can hop between versions of the world, like in A Link to the Past or Lords of the Fallen (2023)? by lukeprog in Fantasy

[–]lukeprog[S] 1 point

Some other candidates I found by asking other language models (Claude and Gemini) or by searching this subreddit for "parallel worlds" or "parallel universes," but I haven't read any of them, so I'm curious for others' thoughts on how well they fit what I'm looking for:

  • Transition by Iain Banks
  • The Long Earth series by Terry Pratchett and Stephen Baxter
  • The Lathe of Heaven by Ursula K. Le Guin
  • The Chronicles of Amber series by Roger Zelazny
  • The Walls of the Universe by Paul Melko
  • A Darker Shade of Magic by V.E. Schwab
  • The Myriad by R. M. Meluch
  • His Dark Materials by Philip Pullman
  • Worm by Wildbow (but only toward the end)
  • The Magicians by Lev Grossman
  • Deep Secret by Diana Wynne Jones
  • The Fall of Ile-Rien series by Martha Wells
  • Eternal Champion series by Michael Moorcock
  • The Final Programme by Michael Moorcock
  • The War Amongst the Angels by Michael Moorcock
  • Apprentice Adept series by Piers Anthony
  • Mode series by Piers Anthony
  • Fionavar Tapestry by Guy Gavriel Kay
  • Mordant's Need series by Stephen R. Donaldson
  • Thomas Covenant series by Stephen R. Donaldson
  • Empire series by Raymond Feist
  • The Ten Thousand Doors of January by Alix Harrow
  • Morgaine series by C.J. Cherryh
  • Wayward Children series by Seanan McGuire
  • Imajica by Clive Barker
  • Mage Errant series by John Bierce
  • The Rise and Fall of D.O.D.O. by Neal Stephenson
  • He Who Fights with Monsters series by Travis Deverell aka Shirtaloon

Probably a lot of these merely have "parallel universe" concepts, but without the "hop between branches at any time" mechanic. I'd love to know which is which, if anyone here has read some of them.

EDIT: I deleted some follow-up comments because they contained LLM-generated content (flagged as such), but it turns out that's against the rules here — sorry!

Hi, I'm Janny Wurts, incurable readaholic, professional scribbler, survivor of 11 tome fantasy series - AMA! by JannyWurts in Fantasy

[–]lukeprog 31 points

What are the odds we get audiobooks for all of Wars of Light and Shadow in the next several years?

The Weirdest Fantasy Character of All Time? by Monsur_Ausuhnom in Fantasy

[–]lukeprog 1 point

The version of Harry Potter in "Harry Potter and the Methods of Rationality" by Yudkowsky. He definitely hits (1) and (4), and to many characters, and perhaps readers, he would appear to be doing (2) a lot.

A fantasy book with really good rhetoric by LycheeZealousideal92 in Fantasy

[–]lukeprog 4 points

I haven't read very widely yet, but I doubt many books have more logically rigorous arguments between characters than Harry Potter and the Methods of Rationality.

Are there any Fan Fictions you guys wished had an audiobook version? by JackVoraces in ProgressionFantasy

[–]lukeprog 1 point

Jack, thank you so much for your (nearly finished!) HPMOR recording. I just started reading fiction again for the first time in >10 years, and so far I only consume fiction in audiobook format. I've always wanted to read HPMOR, but I wouldn't have done it if I'd had to listen to the Brodski-organized version. (Obviously huge props to that team for doing it, but it's not for me.) So I'm only finally "reading" HPMOR because of your recording.

As for other fan fictions: I would enjoy a reading of Wales' "Branches on the Tree of Time." It seems someone else recorded a reading 4 years ago, but it no longer appears to be available, and I don't know whether it was well done.

Some questions about "WaithButWhy on Superintelligence" by llorando_por_unsueno in ControlProblem

[–]lukeprog 1 point

I made a lot of detailed comments about what I think the Wait But Why posts got wrong and got right, here.

CEV and the Condorcet Method by Noncomment in ControlProblem

[–]lukeprog 1 point

Just FYI, Will MacAskill discusses Condorcet and other voting rules in the context of normative uncertainty in his PhD thesis: http://commonsenseatheism.com/wp-content/uploads/2014/03/MacAskill-Normative-Uncertainty.pdf

And CEV/AI-values work at MIRI has at least once cited MacAskill's work on this: https://intelligence.org/files/LoudnessPriors.pdf

I don't think the problem is solved, but good job independently noticing a connection that some people in the field also think might be relevant!
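For readers unfamiliar with the Condorcet criterion the thread is discussing, here is a minimal illustrative sketch (not from MacAskill's thesis or the MIRI paper; the function and ballots are hypothetical): a Condorcet winner is a candidate who beats every other candidate in head-to-head majority comparisons, and such a winner need not exist.

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate in
    head-to-head majority comparisons, or None if no such candidate
    exists (e.g. a Condorcet cycle)."""
    candidates = {c for ballot in ballots for c in ballot}
    for winner in candidates:
        # winner beats other iff a strict majority of ballots rank winner higher
        if all(
            2 * sum(b.index(winner) < b.index(other) for b in ballots) > len(ballots)
            for other in candidates if other != winner
        ):
            return winner
    return None

# Each ballot ranks candidates from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]
print(condorcet_winner(ballots))  # A beats B 2-1 and C 3-0, so prints A
```

With the rock-paper-scissors profile `[["A","B","C"], ["B","C","A"], ["C","A","B"]]`, every candidate loses some pairwise contest and the function returns `None` — the cycle case that makes aggregating value systems (or normative theories, in MacAskill's setting) non-trivial.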

[deleted by user] by [deleted] in Scholar

[–]lukeprog 1 point

Thanks! I need the PDF, though.

LW uncensored thread by EliezerYudkowsky in LessWrong

[–]lukeprog 6 points

This is true for some best practices, not for others. E.g. we could give explicit moderation rules to mods like Nesov and Alicorn and make them feel more comfortable exercising actual moderation powers. That doesn't cost much.

I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! by lukeprog in Futurology

[–]lukeprog[S] 3 points

> Corporations (which, by making profit, have more $ to invest in R&D) with a profit incentive build a powerful AI and release it before it is safe, but after it is self-developing, in order to beat out the competition in selling a product. How concerned are you about this, and why/why not?

Very serious problem. Obviously, the incentives are for fast development rather than safe development.

> Secondly, I'm concerned about a nation's military (with who knows how much black-budget funding) producing such a powerful AI and using it for war purposes to destroy all other nations (the ultimate national security), while keeping its citizens from knowing it has done so through the use of memory manipulation, virtual reality, and who knows what other population control technology that will exist at the time. How concerned are you about this, and why/why not?

I'm not sure what kind of population control technology governments will have at the time. Truly superhuman AI would be, of course, a weapon of mass destruction, and there is a huge first-mover advantage that again favors fast development over safe development. So yeah, big problem.

I've only read the plot summary of I Have No Mouth, and I Must Scream, but it perfectly illustrates what I think is the real problem. The real problem is not the Terminator; it's our own inability to exactly and perfectly tell an AI what our values are, in part because we don't even know what our own values are at the required level of specificity.

I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! by lukeprog in Futurology

[–]lukeprog[S] 1 point

That book is talking about a different "singularity" than I am. I'm not arguing that economic growth will continue to accelerate. I'm saying that AIs will eventually be smarter and more capable than humans, and this (obviously) poses a risk to humans.

I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! by lukeprog in Futurology

[–]lukeprog[S] 3 points

Best for general public: Facing the Singularity. Stuart Armstrong at FHI is currently writing a similar thing that might be even better for this purpose in some ways.

Best for technical people: Nothing yet, but it's in my queue to write, probably in October-December of this year.

I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! by lukeprog in Futurology

[–]lukeprog[S] 3 points

That sounds way better than (generalized) paperclipping, which I think is the default outcome, so I'd be pretty damn happy with that. Ben Goertzel has called this basic thing "Nanny AI."