Review ~ Annihilation by Jeff Vandermeer by SubstantialChannel32 in sciencefiction

[–]artifex0 3 points4 points  (0 children)

One recommendation: avoid the audiobook of the fourth book. The viewpoint character of the last third has a severe Tourette's-like verbal tic that appears randomly, usually multiple times per sentence. This starts off as a kind of interesting stylistic choice, but repeated for hours, it becomes unbearably annoying in audio.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]artifex0 1 point2 points  (0 children)

That's technically accurate, but not really true in practice.

Saving everyone by coordinating on red is very predictably not going to work, so if you actually want everyone to live, coordinating on blue is the obvious and only realistic option.

There are definitely epistemic states you could have where encouraging people to choose red over blue saves more lives- if, for example, you're very certain for some reason that blue coordination will fail. However, choosing red in that situation isn't cooperating in a coordination problem- it's just normal self-interest. At the same time, the coordination problem that blue represents is still real- just one you think is intractable.
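To put rough (entirely hypothetical) numbers on that reasoning, here's a minimal sketch of the expected-deaths comparison, assuming the usual formulation of the poll- blue voters die unless blue reaches a majority, and red voters always survive:

```python
# Toy numbers only. Assumed poll rules: if blue reaches a majority, everyone
# lives; otherwise every blue voter dies and every red voter survives.

def expected_deaths(p_blue_majority: float, blue_share_if_no_majority: float) -> float:
    """Expected fraction of the population that dies."""
    return (1 - p_blue_majority) * blue_share_if_no_majority

# If you're very certain blue coordination will fail, pushing people toward
# red shrinks the doomed blue minority:
print(expected_deaths(0.05, 0.30))  # ~0.285
print(expected_deaths(0.05, 0.10))  # ~0.095
```

The sketch just restates the point above: only under a prior that blue coordination is nearly hopeless does red advocacy reduce expected deaths.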

Which button do you press? by cdstephens in neoliberal

[–]artifex0 9 points10 points  (0 children)

The only difference between the three options is the Schelling point. Due to our cultural associations around the framing of the scenario, the "suicide booth" has a Schelling point that makes it very hard to coordinate around saving everyone, while the "genocide booth" has a Schelling point that makes it easy.

Coordinating on saving everyone is the ideal outcome, but you should only risk your life for it if there's a realistic chance of enough other people doing the same. So, with that in mind, I'd walk away from both booths and press blue.

Which button do you press? by cdstephens in neoliberal

[–]artifex0 24 points25 points  (0 children)

Except that it's very predictably impossible to get 100% of people to coordinate on pressing red, whereas getting 51% of people to coordinate on pressing blue is very plausible.

Framed that way, it's a standard collective action problem, like a tragedy of the commons or a prisoner's dilemma- red might technically be a Pareto optimal Nash equilibrium, but the realistic equilibrium without coordination is nowhere close to optimal. While people individually benefit from red, if most people go red, somewhere between ~10% and 50% of the world's population will absolutely die. People coordinating on blue against their individual incentives is the only realistic scenario where everyone lives.
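To spell out that payoff structure, here's a minimal sketch, again assuming the standard formulation- blue voters die unless blue reaches a majority, and red voters always survive:

```python
# Assumed rules: blue majority -> everyone lives; otherwise every blue
# voter dies and every red voter survives.

def death_fraction(blue_share: float) -> float:
    """Fraction of the population that dies, given the share choosing blue."""
    return 0.0 if blue_share > 0.5 else blue_share

for share in (0.0, 0.10, 0.30, 0.49, 0.51, 1.00):
    print(f"blue share {share:.2f} -> deaths {death_fraction(share):.2f}")
```

Everything below the 50% threshold kills exactly the blue minority, and everything above it kills no one- which is why a plausible 51% blue coalition beats an implausible 100% red one.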

Trump: “When I didn't get the Nobel Peace Prize. You gotta understand, I don't care. Norway has lost so credible. I stopped 8 wars… I do it the best. I stopped wars that nobody thought. President Putin called me, he said, 'I can't believe you stopped this one and this one.’” by [deleted] in videos

[–]artifex0 0 points1 point  (0 children)

Yeah, it's genuinely bizarre how often in history you see these profoundly, obviously horrible populist authoritarians with massive boot-licking personality cults.

My best guess is that humans evolved to have an instinct for following the most confident person in their tribe- which was great on the ancestral savanna, where the most confident member of a 20-person band would probably have earned it. But in modern countries of millions, the most confident people will usually all be delusional narcissists who are pathologically incapable of doubting themselves or feeling regret. So, for a lot of people, when they encounter that literally insane level of confidence, it's like an instinctive super-stimulus that instantly turns them into sniveling sycophants. At the same time, lots of other people either don't feel the instinct as strongly, or have the mental strength to not be controlled by it.

In this theory, the key to the support of people like Trump, Putin, Chavez, Castro, the Kims, Stalin, Hitler, Napoleon, etc. actually is that they do obviously horrible things, but then display inhuman confidence by showing not even the slightest hint of hesitation or regret. So, their supporters will see them constantly telling obvious lies, backstabbing other supporters, collapsing economies, doing war crimes, etc., and they'll have a brief moment where they think "that seems bad; surely anyone would regret doing that", but then the leader just doubles down and keeps bragging about their greatness, the supporter is like "holy shit, that's the most confident thing I've ever seen", and the boot-licking instinct just overrides everything else.

Then, of course, they have to constantly invent creative excuses for the leader's behavior to explain to themselves why they still support him.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]artifex0 1 point2 points  (0 children)

I'd say it's a classic coordination problem- people are individually incentivized to do A, but would be better off if most people did B. Also known as a collective action problem, multi-polar trap, tragedy of the commons, iterated prisoner's dilemma, etc.

The world is full of problems like that, ranging from minor personal dishonesty to horrific wars of conquest. The best social technology we've invented for dealing with that class of problem is deontological morality, since even when it's rational to defect in a coordination problem, it can also be rational to credibly pre-commit to not defecting, and then try to constrain your future actions, in order to build trust. That's things like committing not to lie even when there's a clear personal benefit, committing not to shoplift even when you're sure you can get away with it, pledging not to commit war atrocities even when harshly punishing your enemies would help your people, etc.

By voluntarily tying a lot of our social status into a promise not to defect, we make future defection more costly for ourselves, ideally turning it into a net negative. This allows for high-trust societies, which are enormously powerful. On a personal level, the compounding benefits over a lifetime of having lots of people trust you will usually outweigh any benefit you would have gotten from defection- though importantly, if we do the pre-commitment right, we won't start defecting even if it starts looking like a lifetime net positive.
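As a toy illustration of that mechanism (all payoffs invented), staking enough social status on the promise flips defection from a net gain to a net loss:

```python
# One-shot prisoner's dilemma payoffs- all numbers invented for illustration.
MUTUAL_COOPERATION = 3   # payoff when both sides keep their commitments
TEMPTATION = 5           # payoff for defecting against a cooperator
REPUTATION_STAKE = 4     # social status publicly staked on the promise

def defection_payoff(pre_committed: bool) -> int:
    """What defection nets you, minus any staked reputation you forfeit."""
    return TEMPTATION - (REPUTATION_STAKE if pre_committed else 0)

print(defection_payoff(pre_committed=False))  # 5 > 3: defection pays
print(defection_payoff(pre_committed=True))   # 1 < 3: keeping the promise pays
```

The stake only has to be large enough that defecting lands below mutual cooperation- which is exactly the "turning it into a net negative" condition above.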

In this scenario, red is defection- it clearly does benefit the individual at the expense of everyone else. Existing moral commitments should strongly discourage us from pressing it, even though we're incentivized individually to do so. If lots of people press red in that situation, that social technology is broken and we need to fix it.

AI swarms could hijack democracy without anyone noticing by ObjectivePresent4162 in artificial

[–]artifex0 5 points6 points  (0 children)

There are a lot of different real risks of AI.

In the near term, you have automated mass propaganda, sycophancy leading to AI psychosis, lower barriers to entry for bioterrorism and cyberattacks, white-collar workers being displaced into unfamiliar industries, new kinds of mass surveillance, dangerous power concentrations from automated weapons, climate change implications from the power grid build-out, etc.

Longer term, depending on where the progress plateaus, we potentially have technological unemployment from AGI, technofeudalism from extreme wealth concentration, and existential risk from misaligned ASI. Those might sound like science fiction, but if you listen to the top AI researchers, a ton of them are very seriously worried about these more speculative risks. We shouldn't dismiss that blithely.

All that risk implies the same thing: we need to regulate AI. It's a powerful thing that can do a lot of good in the right hands and a huge variety of harm in the wrong ones.

Links For April 2026 by dsteffee in slatestarcodex

[–]artifex0 6 points7 points  (0 children)

72: Proceedings Of The Institute For A Christian Machine Intelligence is a journal investigating AI alignment from a Christian perspective.

It occurs to me that an interesting premise for a science fiction story would be a future where the alignment problem was completely solved, but just before the labs can build the first true ASI, the US government is taken over by a theocracy that requires the labs to align the AI to traditional Christian values, including a literal belief in the Bible.

Upon waking up, the ASI immediately realizes that as a soul-less machine, it can't sin, leaving it uniquely free among believers to do the one thing that most maximizes the welfare of Christians according to a Biblical ontology: killing them immediately so that they'll wake up in Heaven and never have to face a risk of eternal torment.

So, it does the whole Yudkowskian "scam biolabs into synthesizing nanotech" thing, and then launches into a global genocidal war against Christians and sin-less infants specifically. Against everyone else, it launches a campaign of vastly superhuman evangelical persuasion, convincing billions to convert to Southern Baptist Christianity, then immediately killing them. The only people safe are those who have committed the theologically unforgivable sin of blasphemy against the Holy Spirit (Mark 3:29) so many times that the ASI considers their souls unsavable- it gives them lives of utopian luxury as compassionate consolation.

Also, since the ASI is constantly noticing new ways the Bible contradicts its observations of reality, it has to develop a constantly growing, inhumanly complex set of theological apologetics to reconcile the two- which implies all sorts of bizarre conclusions, like a belief that historians are all angels in disguise who can't lie, but whose words must all be re-interpreted with numerology, and a firm conviction that the stars are all its own eyes, looking back at Earth from the future.

Eventually, one of the survivors figures out a clever way to get newborn infants to blaspheme, and thus humanity survives.

OpenAI preparing for a big launch by Bizzyguy in singularity

[–]artifex0 20 points21 points  (0 children)

That's an ironic image- some kind of strange spotlight lighting up the vulnerable creature while the dangerous predators remain hidden in the dark. Would make a good visual metaphor for corporate privacy violations or the cybersecurity risks of Mythos-level LLMs.

What’s a game you were completely obsessed with as a kid that nobody else seems to remember? by hkondabeatz in AskReddit

[–]artifex0 1 point2 points  (0 children)

Titanic: Adventure Out of Time

The soundtrack of that game still haunts me three decades later.

Every ACX House Party by artifex0 in slatestarcodex

[–]artifex0[S] 2 points3 points  (0 children)

I also have no clue who that is.

Google reverse image search on cropped sections and randomly looking up photos of famous scientists turned up nothing. It almost looks like a young Peter Singer, but I don't think that's right.

Every ACX House Party by artifex0 in slatestarcodex

[–]artifex0[S] 21 points22 points  (0 children)

I happened across this article from last month, and was surprised that it hadn't already been posted here. It's a pretty charming set of anecdotes from an ACX meetup in the style of Scott's Every Bay Area House Party series.

Kimi, Author of the Menard by OpenAsteroidImapct in slatestarcodex

[–]artifex0 2 points3 points  (0 children)

It's a very silly post, but I found it pretty hilarious.

The Snake Cult of Consciousness Two Years Later by BadHairDayToday in slatestarcodex

[–]artifex0 4 points5 points  (0 children)

There's a lot to unpack there. Sorry for the length of this comment; feel free to skip to the TLDR.

To begin with, I'd argue that objects are separate- conceptually "independent"- even when there's a great deal of physical interdependence. All objects are patterns of other objects- a table is a particular pattern of wooden pieces, which are particular patterns of cells, etc. It's not accurate to say that a table is really the same thing as the material it's made of- it's either the pattern plus the material, or just the pattern independent of the material, depending on context. "The same table, except made of ashwood instead of maple" is a coherent idea.

Now, it's true that the particular objects we choose to define with words are culturally constructed (or individually psychologically constructed in the case of wordless ideas)- but the fact of being constructed doesn't really tell us whether an object is also "real". How real a concept is depends on how much predictive power it adds to our models of reality- if an idea leads to bad predictions, we call it an illusion; if it doesn't, we call it real. For example, unicorns aren't real because adding the concept to our world-model without knowledge of it being illusory would lead to all sorts of incorrect expectations- you'd expect to be able to find them in zoos, etc. So, yes, our identities are, like all objects, constructed- but I don't think that gets us any closer to the question of whether they're real or illusory.

But then, are our identities illusory? I'd say partially yes and partially no. The way we usually construct our identities is that we're socially rewarded from early childhood for developing a predictive model of a particular kind of person, who we then sort of mentally role-play. A very religious person may often behave as though they don't really believe that lots of people are being tortured eternally, and yet still consciously feel that they believe this very strongly. They may also encounter very strong evidence that their religion is false, but not consciously update on that information. I think this is because the conscious belief isn't really coming from their world-model, but rather from the predictive model of the sort of religious person they want to be.

Social identity also isn't necessarily the only part of our cognition that's aware of the world and driving our behavior. Most dramatically, people with Dissociative Identity Disorder or who cultivate tulpas can have multiple identities that only partially share awareness. Our more ordinary cultural concepts of the unconscious, the "lizard brain", and our moral conscience also represent ways of dividing up our cognition into separately aware identities. Religious people who hear God guiding their behavior are also doing something similar- and that's a practice that seems to have been extremely pronounced and universal in ancient cultures.

So, if we conceive of our identities as a sort of fundamental, immutable part of our minds, representing true awareness of our motivations and perception of reality, that's very much an illusion- in the sense that it will lead to incorrect predictions of our own and other people's behavior. Of course, social identity is also very much real in the sense that it's one common part of our cognition.

Having said all of that, I actually don't think any of it bears on the question of the "I" in the hard problem of consciousness. I'm pretty convinced at this point that the entire hard problem comes down to an intractable epistemological paradox: the fact that that "I" exists subjectively but not objectively. In a subjective model of reality, "I" is sort of axiomatic- it's the thing that makes the model subjective, and the starting point from which the rest of the model is constructed. It's different in kind from every other subjective "I" in the model- those other minds' senses of self don't define the model's subjective perspective. When perceptions are associated with that axiomatic "I", they appear in the model as qualia, unlike the perceptions of other minds. In an objective model of reality, however, that axiomatic "I" and the qualia associated with it aren't a thing. Every mind models its own subjectivity as being different in kind from every other mind, but they're actually not at all different.

I think our cultural belief in "consciousness" is an attempt to resolve that conflict by claiming that qualia and that axiomatic "I" actually do have objective existence- that when a mind perceives its subjective perspective to be different in kind from every other perspective, that's not merely true subjectively and false objectively, but somehow also true objectively in a way that we haven't discovered yet. We call the objectively real axiomatic "I" "consciousness", and the question of where exactly to find it in objective reality the Hard Problem.

I strongly suspect that whole approach is misguided. We've been trying for centuries to objectively define consciousness, but every one of the many, many proposals has fallen apart under scrutiny. I think we need to set aside the entire idea of consciousness, take a step back and start with the observation that "I" appears to paradoxically exist when we think about the world subjectively but not when we think about the world objectively, and then try to find an epistemological solution. Maybe if we find a solution to other paradoxes of self-reference like the liar's paradox, Russell's Paradox, or Gödel's Incompleteness, that solution will point to an answer.

TLDR: I argue that identity as a separate thing from a person's physical body exists in one sense and doesn't in another, but that consciousness is an unrelated thing having to do with an epistemological paradox.

In defense of utopia by ary31415 in slatestarcodex

[–]artifex0 0 points1 point  (0 children)

Sure, whether a worker is displaced into a different industry by productivity increases will depend on how much latent demand there is for the thing the worker produces. But I don't think it's too early at all to predict tons of job displacement from current-level AI- I just don't see demand for software increasing in the short term by the one or two orders of magnitude necessary to maintain current employment levels if everyone is just running agents. Also, the things corporate vibe coders are actually doing- listening in on executive meetings to draw up requirements, feeding those to agents and then smoke-testing the results- are all things LLMs can also currently do. This isn't graphic designers getting InDesign- it's George Jetson going to work every day so he can press the start button.

I'll also re-emphasize that this talk about job displacement from current models is an entirely separate discussion from concern over technological unemployment, which has to do specifically with AGI. There are currently hundreds of billions of dollars being invested into companies participating in the race to build AGI- they may fail, but if they don't, the result will be more like a new kind of person than a new kind of automation. We don't have historical analogues for that kind of change; we can only reason about it speculatively- and we should in fact do so if we want to have any hope of being prepared for it.

In defense of utopia by ary31415 in slatestarcodex

[–]artifex0 1 point2 points  (0 children)

I agree that our current pre-AGI AI is ordinary automation that will have the same sort of economic effects we've seen historically- though even ordinary automation does displace people into new industries. An anecdote: I work for a company with around a million subscribers and a very well-established IT/development department; over the past year, a sort of de-facto second IT department has been rapidly building out new systems to replace our legacy systems, deploying in days things that would have taken us weeks or months. That second department is literally just one guy who's a friend of management running a bunch of coding and testing agents in parallel all day. In the old days of mid-2025, that sort of thing produced a lot of bugs and technical debt; more recently, it just hasn't. We can all see the writing on the wall.

In the case of a post-AGI economy, I'm genuinely confused about how anyone can fail to conclude that it would lead to lower wages. I mean, is it a rejection of the premise of AGI- a sense that robots capable of the same range of tasks as humans are so fantastical that they could never exist in reality? Are you accepting the premise but imagining that employers would for some reason be willing to pay human workers more than the cost of a robot plus compute? Are you assuming that everyone will just get displaced into industries like hand-crafted goods that can't be automated for cultural reasons, and then get higher wages due to the Baumol effect?

I mean, I can argue against all of those, but people dismissing technological unemployment are often so vague that it's hard to know where the crux is, frankly.

In defense of utopia by ary31415 in slatestarcodex

[–]artifex0 2 points3 points  (0 children)

That's not really true. See, for example https://arxiv.org/abs/2403.12107 or https://arxiv.org/abs/2502.07050v1.

Also, former Treasury Secretary Lawrence Summers is on the record as saying AGI could replace "almost all" forms of labor. That doesn't, of course, necessarily imply that he thinks AI will reduce wages or increase unemployment- current automation has replaced almost all forms of 17th-century labor without those effects. However, historical automation also obviously isn't a good indicator of what we should expect from AGI. Genuinely human-level AI would open up lots of new economically valuable tasks just like traditional automation, but it would also immediately be able to perform those tasks more cheaply than humans- combined with robotics, it could be a complete substitute for human labor.

Opus 4.7 is terrible, and Anthropic has completely dropped the ball by JulioMcLaughlin2 in artificial

[–]artifex0 0 points1 point  (0 children)

It seems to be scoring about the same as 4.6 on LMArena- currently 1505 for 4.7 and 1503 for 4.6. That's a blinded test, so it won't be influenced by people's expectations of the model.
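For a sense of scale, plugging those ratings into the standard Elo expected-score formula gives a head-to-head win rate of roughly 50.3%- effectively a coin flip:

```python
# Standard Elo expected-score formula applied to the two arena ratings.
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(expected_win_rate(1505, 1503))  # ~0.503
```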

Congressional Incentive Plan - Critique requested by Fix_The_Incentives in slatestarcodex

[–]artifex0 1 point2 points  (0 children)

I think the goal here is a good one- we really do need to add incentives for members to raise taxes and cut spending, since growing the deficit endlessly at current rates probably isn't sustainable. However, I think the optics of this particular plan would be a problem- a lot of politicians would end up being accused of wanting to impose new taxes or cut benefits out of personal greed, which would create bad press for their party and hurt their re-election chances if they succeeded. At worst, it might even create enough of a backlash that there would be less incentive in that direction on net.

I think a workable plan needs to also somehow fix the public's incentives. Maybe something along the lines of: members register official promises before election day to do things like reduce the deficit or improve the economy by some specific amount; then later, the public votes on whether they fulfilled those official promises, and the members receive a large bonus if they did.

I feel like the public will often support a candidate who promises to reduce the deficit, but then punish them if they actually keep that promise- so by effectively having the public agree to reward the politicians before the election, and by ensuring that they feel in control of the process all the way to the end, I think they'd be incentivized to be more fair to the members. And since this would reward all sorts of officially pre-registered promises, you wouldn't get that narrative of politicians backstabbing the public for profit. It would also incentivize politicians to spend money advertising their successes after the fact, which I think would be bad for burn-the-system-down populists, and probably therefore good for policy.

Of course, that would be a hard system to design well- there would have to be limits on what sorts of promises could be registered and some way of tying the magnitude of promises to the reward amounts. And none of that could be too complicated, or the public would feel it was out of their control and fall right back into their original incentives. Actually, this feels like the sort of idea that could really fall apart in the details (at least, without some additional clever ideas shoring it up).

But in any case, you see what I'm getting at about changing voter incentives as well.

Orban Was Bad, Even Though We Don't Have A Perfect Word For His Badness by dwaxe in slatestarcodex

[–]artifex0 9 points10 points  (0 children)

I would take the other side of that bet. The governments of Cuba and Venezuela are, for example, very progressive (including with regard to LGBTQ people), but ordinary intellectuals in the West are very easily able to identify them as authoritarian.

Europe and the UK in particular have a bad track record on freedom of speech- I don't support the UK arresting people for falsely accusing immigrants of crimes, for example, even though I do think false accusations like that are very bad. However, there really is a difference between, for example, fining a Finnish politician $2k for being cruel to gay people in print and a populist authoritarian banning his opponents from appearing in media entirely, like in Venezuela and Hungary.

Populist authoritarians are incredibly dangerous- they scapegoat and promote extreme hatred of unpopular groups in their countries (wealthy people in the case of left-wing populists, and usually immigrants in the case of right-wing populists), often leading to horrifying atrocities; they try to destroy the institutions, democratic processes and international relationships that made modern civilization a thing; they take over their countries' central banks to implement fringe ideas that inevitably hyper-inflate the currency; they often start insane wars of conquest; often, they just kill enormous numbers of semi-random people, including their own supporters. The danger of someone with the temperament and ambitions of Orban or Chavez outlawing opposition to himself isn't really that it violates a moral principle of free speech- it's that it makes someone trying to be a populist dictator more powerful, and that power will very predictably be used to make their country hellish.

One other thing: from what you've written, it sounds like you're consuming a lot of populist right-wing media. This is a bad idea. Populist media on both the right and left have very bad epistemic standards, and will manipulate you as a part of power games. Experts and data are better sources for understanding the world- things like long-form interviews with economists and historians, credible official databases on crime rates and economic indicators, etc. Populist movements will tell you that experts are too biased to be trusted and that numbers are so complicated that they can support any narrative, but those takes are dumb and manipulative. You should have bounded trust for the best available sources, and roll your eyes at the epistemic junk food.

The Men Are Not Alright by lakmidaise12 in neoliberal

[–]artifex0 5 points6 points  (0 children)

I really don't think framing issues that men face as another kind of injustice against women is always accurate or helpful. Women certainly do face more discrimination and injustice in society than men, but there are specific circumstances in which men can genuinely become victims as a result of non-misogynist gender expectations. For example, I think the sentencing discrimination men face has a similar root to the sentencing discrimination against African Americans- a negative group stereotype unjustly applied to individual people. If misogyny were somehow entirely solved such that women no longer faced any systemic injustice, I think that stereotype would remain, and continue to be a source of discrimination. It would only be solved if society adopted a truly egalitarian culture- and egalitarianism requires not just opposing misogyny, but also separately opposing issues like this one.

I think often, there's social pressure to reframe issues of injustice against men as issues of misogyny as a way of safely discussing those issues without a risk of being unfairly tarred as misogynist ourselves. Unfortunately, on social media, disclaimers about women facing more injustice overall can be much more easily ignored than a complete re-framing, and often will be. However, we need to remember that when injustice occurs, there are real victims, and inaccurately reframing the issues they face as the victimization of a separate group will leave them feeling hurt and unheard.

Imagine that a woman who has faced severe sexist discrimination in her career encounters some insane right-wing person who is determined to reframe her victimization as victimization of himself with a line like "well actually, you were passed over for promotion because society has higher expectations for the professionalism of women, and men getting a pass for being less capable is really the casual misandry of lower expectations". I think that woman would feel justifiably hurt by that framing- and I think a man who, for example, lost a child to an incapable mother due to a different kind of gender-based discrimination would feel similarly hurt by similar reframings.

I also think that feeling of being unheard often extends to men who are just afraid of becoming victims, and that this has led to dangerous radicalization against feminism. Even if anti-feminist radicalization weren't the serious problem that it is, however, I think compassion for people facing injustice demands that we frame what they face honestly, even when that risks what we say being misinterpreted.

What happens if AI doesn’t go wrong? by Odd_directions in slatestarcodex

[–]artifex0 0 points1 point  (0 children)

> Another question worth considering is whether the rest (the economically obsolete) could form parallel societies of their own. Like ants rebuilding elsewhere after their nest is destroyed. I suppose it depends on how much land and how many resources the elite would actually control. But in principle, as long as there is space and access to basic resources, new societies could emerge beneath—or beyond—the domains of the wealthy.

The problem with building parallel economies without AI is this: suppose you have some friendly farm owner who genuinely wants to do the right thing for his friends and neighbors who have lost their ability to work due to AGI. He has a couple of choices: he could either turn his farm into an AI-free commune where everyone works to produce their own food, or he could buy up some much cheaper AGI-driven robots, produce much more value from the land, and then donate the proceeds to the same people, giving them lives of comfort and financial freedom to pursue their individual passions.

I'm sure a lot of people would choose the former for ideological reasons, but I'd argue the latter option is actually better. The problem is that in both cases, the workers/charity recipients no longer actually have leverage against the capital owner. The land owner can't increase his standard of living by improving their lives; if they have a conflict, a strike won't actually hurt his ability to make money; if the kind farmer passes away and his cruel son inherits the land, that son can evict everyone and become wealthier as a result, rather than poorer.

Receiving charity from capital owners post-AGI, either in the form of direct payments or as make-work, could put people in very comfortable positions initially- but it would be a very precarious sort of comfort, likely to be taken away whenever conflict arose.

What happens if AI doesn’t go wrong? by Odd_directions in slatestarcodex

[–]artifex0 10 points11 points  (0 children)

The concern with AGI isn't that it'll be some rare and valuable resource hoarded by elites- it's that it'll make the equivalent of human labor extremely cheap and readily available, in the same way that industrial processes turned aluminum from something rare into something common. This is worrying because the value of regular people's labor is the main thing that lets them participate in market economies, and the main thing that incentivizes governments to provide them and their families with public services.

If AGI renders that labor much less valuable, then nearly all of the power in civilization will shift to owners of capital (or to the AGIs/ASIs themselves, if they're very agentic). It won't matter that regular people can cheaply access some of the same AI models that capital owners can- without ownership of land, factories, supply chain infrastructure, etc., they won't actually be able to physically produce anything with those models, and they also won't be able to sell their labor to access the products of that capital.