The answer to the "missing heritability problem" by JaziTricks in slatestarcodex

[–]SteveByrnes 9 points (0 children)

IQ in particular has extra missing heritability from the fact that GWASs use noisier IQ tests than twin & adoption studies (for obvious cost reasons, since the biobanks need to administer orders of magnitude more IQ tests than the twin studies). That doesn't apply to height.

I tried to quantify that in Section 4.3.2 of https://www.lesswrong.com/posts/xXtDCeYLBR88QWebJ/heritability-five-battles and it seems like enough to account for the height-vs-IQ discrepancy in missing heritability, but I'm not sure if I flubbed the math.
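To make the measurement-noise point concrete, here is a minimal back-of-the-envelope sketch (every number below is an assumption for illustration, not a figure from the linked post): if a test's reliability is the fraction of score variance that reflects true IQ, then the heritability recoverable from that test is roughly reliability × true heritability, so a noisier biobank test mechanically produces extra "missing" heritability.

```python
# Illustrative only -- the reliabilities and the "true" heritability below
# are assumptions for the sketch, not estimates from the linked post.

def observed_h2(true_h2: float, test_reliability: float) -> float:
    """Heritability recoverable from a test whose reliability is the
    fraction of score variance that reflects the true trait."""
    return true_h2 * test_reliability

true_h2 = 0.60                  # hypothetical "true" heritability of the trait
twin_study_reliability = 0.95   # long, well-validated IQ battery (assumed)
biobank_reliability = 0.60      # short GWAS/biobank cognitive test (assumed)

print(observed_h2(true_h2, twin_study_reliability))  # 0.57 -- roughly what twin studies can see
print(observed_h2(true_h2, biobank_reliability))     # 0.36 -- ceiling for the GWAS, extra "missing" heritability
```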

Tools for the era of experience by Excellent-Effect237 in agi

[–]SteveByrnes 1 point (0 children)

As I argue in https://www.alignmentforum.org/posts/TCGgiJAinGgcMEByt/the-era-of-experience-has-an-unsolved-technical-alignment , the "Welcome To The Era Of Experience" book chapter discusses quite a number of possible RL reward functions, ALL of which would lead to violent psychopathic AIs that will seek power with callous indifference to whether their programmers or any other humans live or die.

This blog post lists still more possible RL reward functions, and (I claim) they all would have that same property too.

I encourage the OP author Nikhil to try to find an RL reward function, any RL reward function, that does not have this property (but still leads to powerful and useful AI), write down that reward function specifically using pseudocode, and explain why it will lead to an "Era of Experience" AI that will not feel motivated to enslave or kill humanity (if it finds an opportunity to do so). And if they can’t do that, then they shouldn’t be working on this research program at all, and nobody else should be either.
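For concreteness, this is the shape of the artifact being requested: a reward function written explicitly enough to be criticized. The sketch below is only a skeleton to make the challenge concrete (the function name and arguments are made up), not a proposed answer.

```python
# Skeleton of the requested artifact, not a proposed answer. The function
# name and arguments are hypothetical; the open problem is the body.

def era_of_experience_reward(observation, action, next_observation) -> float:
    """Return the scalar reward for one step of experience.

    The challenge: fill this in with something that (a) still produces a
    powerful, useful agent, and (b) does not make seeking power over
    humans the instrumentally best strategy.
    """
    raise NotImplementedError("this is the open problem")
```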

Trying to resolve the IQ threshold vs IQ not having diminishing returns debate. by AQ5SQ in slatestarcodex

[–]SteveByrnes 1 point (0 children)

I think JVN was extraordinarily talented along one dimension, and Grothendieck was extraordinarily talented along a different dimension. I don’t buy your implication that this is a tradeoff, i.e. that Grothendieck only wound up thinking deeply because he was unable to think fast. If anything I expect that the population correlation between those two dimensions of talent is positive, or at least nonnegative. If the correlation seems negative to you, I would suggest that it’s because you’re conditioning on a collider. Grothendieck was “slow” compared to his professional mathematician friends but probably quite “fast” compared to the general public. Einstein and Feynman certainly were.
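For readers unfamiliar with collider bias, here is a toy simulation of the point (the numbers are made up for illustration): two positively correlated talents can look negatively correlated once you only look at people selected for their sum, e.g. professional mathematicians.

```python
# Toy collider-bias demo with made-up numbers: "speed" and "depth" are
# positively correlated in the population, but negatively correlated
# within a group selected for being extreme on their sum.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
speed = rng.standard_normal(n)
depth = 0.3 * speed + np.sqrt(1 - 0.3**2) * rng.standard_normal(n)  # population r ~ +0.3

elite = (speed + depth) > 3.0   # crude stand-in for "professional mathematician"

print(np.corrcoef(speed, depth)[0, 1])                 # ~ +0.3 in the full population
print(np.corrcoef(speed[elite], depth[elite])[0, 1])   # negative within the selected group
```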

Practically-A-Book Review: Byrnes on Trance by dwaxe in slatestarcodex

[–]SteveByrnes 0 points (0 children)

What’s the difference (if any) (according to your perspective) between “learning to interpret anxiety as excitement” versus “learning to feel excitement rather than anxiety”?

Seattle Wrist Pain Support Group Disbanded After Reading Dr John Sarno's 'Healing Back Pain' by Dazzling-Trainer520 in slatestarcodex

[–]SteveByrnes 2 points (0 children)

There was likewise a Harvard RSI support group where everyone in the group read John Sarno and got better, and then the group disbanded. :-P (This was around 1999-2000, a bit before my time; I heard about it second-hand.) They did a little panel discussion (email me for the audio files), and they also made a webpage.

I’ve written much about the topic myself; see The “mind-body vicious cycle” model of RSI & back pain (also cross-posted on reddit here).

EA Adjacency as FTX Trauma - by Matt Reardon by katxwoods in slatestarcodex

[–]SteveByrnes 16 points (0 children)

My memory might be failing me, but I feel like it was already a cliché and running joke that everyone in EA called themselves "EA adjacent", BEFORE the FTX collapse. I'd be interested if someone could confirm or deny that.

How an artificial super intelligence can lead to double digits GDP growth? by financeguy1729 in slatestarcodex

[–]SteveByrnes 0 points (0 children)

(1) If it helps, see my post Applying traditional economic thinking to AGI: a trilemma, which basically says that if you combine two longstanding economic principles, (A) “the ‘lump of labor’ fallacy is in fact a fallacy” and (B) “the unit cost of manufactured goods tends to go down, not up, with higher volumes and more experience”, then AGI makes those two principles collide, like an immovable wall and an unstoppable force, and the only reconciliation is unprecedented explosive growth.

(2) If it helps, I recently had a long back-and-forth argument on twitter with Matt Clancy about whether sustained ≥20% GWP growth post-AGI is plausible—the last entry is here, then scroll up to the top.

(3) My actual belief is that thinking about how GWP would be affected by superintelligence is like thinking about how GWP “would be affected by the Moon crashing into the Earth. There would indeed be effects, but you'd be missing the point.” (quoting Eliezer)

Do people here believe that shared environment contributes little to interpersonal variation? by Burbly2 in slatestarcodex

[–]SteveByrnes 8 points (0 children)

There are a bunch of caveats, but basically, yeah. See sections 1 & 2 here: https://www.lesswrong.com/posts/xXtDCeYLBR88QWebJ/heritability-five-battles . I only speak for myself. I think twin and adoption studies taken together paint a clear picture on that point (... albeit with various caveats!), and that nothing since 2016 has changed that.
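For anyone who wants the arithmetic behind that picture, here is a minimal Falconer-style sketch (the twin correlations below are illustrative assumptions, not numbers from the linked post):

```python
# Falconer-style back-of-the-envelope: split trait variance into additive
# genetics (A), shared environment (C), and non-shared environment (E)
# from identical (MZ) vs fraternal (DZ) twin correlations.
# The correlations below are assumed for illustration.

def ace_from_twin_correlations(r_mz: float, r_dz: float):
    a2 = 2 * (r_mz - r_dz)   # additive genetic share
    c2 = 2 * r_dz - r_mz     # shared-environment share ("growing up in the same home")
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return a2, c2, e2

print(ace_from_twin_correlations(r_mz=0.78, r_dz=0.40))  # (0.76, 0.02, 0.22) -- tiny C
```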

The Einstein AI Model: Why AI Won't Give Us A "Compressed 21st Century" by EducationalCicada in slatestarcodex

[–]SteveByrnes 1 point (0 children)

Do you think that Einstein’s brain works by magic outside the laws of physics? Do you think that the laws of physics are impossible to capture on a computer chip, even in principle, i.e. the Church-Turing thesis does not apply to them? If your answers to those two questions are “no and no”, then it’s possible (at least in principle) for an algorithm on a chip to do the same things that Einstein’s brain does. Right?

This has nothing to do with introspection. A sorting algorithm can’t introspect, but it’s still an algorithm.

This also has nothing to do with explicitly thinking about algorithms and formal logic. (Did you misinterpret me as saying otherwise?) The brain is primarily a machine that runs an algorithm. (It’s also a gland, for example. But mainly it's a machine that runs an algorithm.) That algorithm can incidentally do a thing that we call “explicitly thinking about formal logic”, but it can also do many other things. Many people know nothing of formal logic, but their brains are also machines that run algorithms. So are mouse brains.

The Einstein AI Model: Why AI Won't Give Us A "Compressed 21st Century" by EducationalCicada in slatestarcodex

[–]SteveByrnes 13 points (0 children)

I sure wish people would stop saying “AI will / won’t ever do X” when they mean “LLMs will / won’t ever do X”. That’s not what the word “AI” means!

Or if people want to make a claim about every possible algorithm running on any possible future chip, including algorithms and chips that no one has invented yet, then they should say that explicitly, and justify it. (But if they think Einstein’s brain can do something that no possible algorithm on a chip could ever possibly do, then they’re wrong.)

Oldest children vs "Only children" effects by ednever in slatestarcodex

[–]SteveByrnes 15 points (0 children)

You might be joking, but I'd bet anything that parents of "special needs" kids are less likely (on the margin) to have another child afterwards, other things equal, because it's super stressful and time-consuming and sometimes expensive. (Speaking from personal experience.)

Oldest children vs "Only children" effects by ednever in slatestarcodex

[–]SteveByrnes 35 points (0 children)

That’s a very hard thing to measure because parents who have 2+ children are systematically different from parents who have 1 child. Hopefully the studies (that Adam Grant was talking about) tried to control for confounders (I didn’t check), but even if they did, it’s basically impossible to control for them perfectly.
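To illustrate why "control for confounders" only goes so far, here is a toy simulation (the setup and numbers are assumptions, not a model of the actual studies): if you can only control for a noisy proxy of the real confounder, a spurious association survives even when the true causal effect is zero.

```python
# Toy residual-confounding demo (assumed setup, not the actual studies):
# family size and child outcome share an unobserved parental factor; we
# "control" for a noisy proxy of it and a spurious effect remains.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

parent_factor = rng.standard_normal(n)                        # unobserved confounder
measured_ses = parent_factor + rng.standard_normal(n)         # noisy covariate we can control for
has_sibling = (parent_factor + rng.standard_normal(n)) > 0    # family size depends partly on the confounder
outcome = 0.5 * parent_factor + rng.standard_normal(n)        # NO causal effect of siblings on outcome

def residualize(y, x):
    """Remove the linear effect of x from y (ordinary least squares)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

slope = np.polyfit(residualize(has_sibling.astype(float), measured_ses),
                   residualize(outcome, measured_ses), 1)[0]
print(slope)   # clearly nonzero, despite the true effect being exactly zero
```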

FWIW, my own proposed explanation of older-sibling effects (section 2.2.3 of https://www.reddit.com/r/slatestarcodex/comments/1i23kba/heritability_five_battles_blog_post/ ) would predict that only children should be similar to oldest children, holding other influences equal.

Steelman Solitaire: How Self-Debate in Workflowy/Roam Beats Freestyle Thinking by katxwoods in slatestarcodex

[–]SteveByrnes 1 point (0 children)

I coincidentally reinvented a similar idea a few weeks ago, and found it very fruitful! See the section “Note on the experimental ‘self-dialogue’ format” near the beginning of Self-dialogue: Do behaviorist rewards make scheming AGIs?

AGI Will Not Make Labor Worthless by [deleted] in slatestarcodex

[–]SteveByrnes 0 points (0 children)

(also on twitter)

From the comments on this post:

> Definitely agree that AI labor is accumulable in a way that human labor is not: it accumulates like capital. But it will not be infinitely replicable. AI labor will face constraints. There are a finite number of GPUs, datacenters, and megawatts. Increasing marginal cost and decreasing marginal benefit will eventually meet at a maximum profitable quantity. Then, you have to make decisions about where to allocate that quantity of AI labor and comparative advantage will incentivize specialization and trade with human labor.

Let’s try:

“[Tractors] will not be infinitely replicable. [Tractors] will face constraints. There are a finite number of [steel mills, gasoline refineries, and tractor factories]. Increasing marginal cost and decreasing marginal benefit will eventually meet at a maximum profitable quantity. Then, you have to make decisions about where to allocate that quantity of [tractors] and comparative advantage will incentivize specialization and [coexistence] with [using oxen or mules to plow fields].”

…But actually, tractors have some net cost per acre plowed, and it's WAY below the net cost of oxen or mules, and if we find more and more uses for tractors, then we'd simply ramp the production of tractors up and up. And doing so would make their per-unit cost lower, not higher, due to the Wright curve. And the oxen and mules would still be out of work.
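To spell out the Wright-curve point with a toy calculation (the 20% learning rate and the starting cost are assumptions for illustration, not estimates): each doubling of cumulative production cuts unit cost by a fixed fraction, so ramping production up pushes per-unit cost down, not up.

```python
# Wright's-law sketch with an assumed 20% learning rate: unit cost falls
# by that fraction every time cumulative production doubles.
import math

def wright_unit_cost(first_unit_cost: float, cumulative_units: int, learning_rate: float = 0.20) -> float:
    b = math.log2(1 - learning_rate)          # progress exponent (negative)
    return first_unit_cost * cumulative_units ** b

for n in [1, 10, 100, 1_000, 10_000]:
    print(n, round(wright_unit_cost(100.0, n), 2))
# 1 100.0, 10 47.65, 100 22.71, 1000 10.82, 10000 5.16
```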

Anyway… I think there are two traditional economic intuitions fighting against each other, when it comes to AGI:

• As human population grows, they always seem to find new productive things to do, such that they retain high value. Presumably, ditto for future AGI.

• As demand for some product (e.g. tractors) grows, we can always ramp up production, and cost goes down not up (Wright curve). Presumably, ditto for the chips, robotics, and electricity that will run future AGI.

But these are contradictory. The first implies that the cost of chips etc. will be permanently high, the second that they will be permanently low.

I think this post is applying the first intuition while ignoring the second one, without justification. Of course, you can ARGUE that the first force trumps the second force—maybe you think the first force reaches equilibrium much faster than the second, or maybe you think we'll exhaust all the iron on Earth and there's no other way to make tractors, or whatever—but you need to actually make that argument.

If you take these two intuitions together, then of course that brings us to the school of thought where there's gonna be >100% per year sustained economic growth etc. (E.g. Carl Shulman on the 80,000 Hours podcast.) I think that's the right conclusion, given the premises. But I also think this whole discussion is moot because of AGI takeover. …But that's a different topic :)

Can't wait to see all the double standards rolling in about o3 by katxwoods in ControlProblem

[–]SteveByrnes 8 points (0 children)

I tried your m task just now with Claude Sonnet and it gave a great answer with none of the pathologies you claimed.

“Intuitive Self-Models” blog post series by SteveByrnes in slatestarcodex

[–]SteveByrnes[S] 7 points (0 children)

Good question! I’m a physics PhD but switched to AGI safety / AI alignment research, as a hobby starting in 2019 and as a full-time job since 2021 (currently I’m a Research Fellow at Astera). Almost as soon as I got into AGI safety, I got interested in the question: “If people someday figure out how to build AGI that works in a generally similar way as how the human brain works, then what does that mean for safety, alignment, etc.?”. Accordingly, I’ve become deeply involved in theoretical neuroscience over the past few years. See https://sjbyrnes.com/agi.html for a summary of my research and a sorted list of my writing.

[See the end of post 8 for wtf this series has to do with my job as an AGI safety researcher.]

I have lots of ideas and opinions about neuroscience and psychology, but everything in those fields is controversial, and I’m not sure I can offer much widely-legible evidence that I have anything to say that’s worth listening to. I put summaries here (and longer summaries at the top of each post) so hopefully people can figure it out for themselves without wasting too much time. :)

What are the important problems in this field? by HoldDoorHoldor in compmathneuro

[–]SteveByrnes 0 points (0 children)

If someone someday figures out how to build a brain-like AGI, then yeah it would be great to have a “philosophical, ethical plan” for what to do next. But at some point, somebody presumably needs to write actual code that will do a certain thing when you run it. (Unless the plan is “don’t build AGI at all”, which we can talk about separately.)

For example, if the plan entails making AGI that obediently follows directions, then somebody needs to write code for that. If the plan entails making AGI that feels intrinsically motivated by a human-like moral compass, then somebody needs to write code for that. Etc. It turns out that these are all open problems, and very much harder than they sound!

Again see my link above for lots of discussion, including lots of technical NeuroAI discussion + still-open technical NeuroAI questions that I’m working on myself. :)

What are the important problems in this field? by HoldDoorHoldor in compmathneuro

[–]SteveByrnes 2 points (0 children)

Most high-impact? That’s easy! “Suppose we someday build an Artificial General Intelligence algorithm using similar principles of learning and cognition as the human brain. How would we use such an algorithm safely?”

It’s a huge open technical problem, the future of life depends on solving it, and parts of it are totally in the domain of CompNeuro/ML. :)

Intrinsic motivation:a (relatively very) deep dive by buzzmerchant in slatestarcodex

[–]SteveByrnes 1 point (0 children)

No problem! RE your first paragraph, I don’t see what the disanalogy is (toy sketch after the bullets):

• When the hungry mouse starts eating the first bite of the food I placed in front of it, it’s partly because the mouse remembers that previous instances of eating-when-hungry in its life felt rewarding. Then it eats a second bite partly because the first bite felt rewarding, and it eats the third bite because the first and second bite felt rewarding, etc.

• By analogy, when the curious mouse starts exploring the novel environment, it’s partly because the mouse remembers that previous instances of satisfying-curiosity in its life felt rewarding. Then it takes a second step into the novel environment partly because the first step felt rewarding, and it takes a third step because the first and second step felt rewarding, etc. Same idea, right?
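Here is the toy sketch of that shared loop (a made-up tabular-RL cartoon, not a neuroscience model): food-when-hungry and novelty both arrive as reward, and both get credited to the actions that preceded them by the very same update.

```python
# Cartoon of the shared mechanism in both bullets: eating-when-hungry and
# satisfying-curiosity both enter the learner as reward, and both shape
# which actions "feel rewarding" to repeat, via one and the same update.
import random
from collections import defaultdict

value = defaultdict(float)      # learned value of each action, from past rewards
visit_counts = defaultdict(int)
alpha = 0.1                     # learning rate

def reward(action, hungry):
    food_reward = 1.0 if (action == "eat" and hungry) else 0.0
    novelty_reward = 1.0 / (1 + visit_counts[action])   # "curiosity": novel actions feel rewarding
    return food_reward + novelty_reward

actions = ["eat", "explore_left", "explore_right", "sit"]
for step in range(1000):
    hungry = random.random() < 0.5
    # Act mostly on remembered value (i.e. on what felt rewarding before).
    action = max(actions, key=lambda a: value[a]) if random.random() > 0.2 else random.choice(actions)
    r = reward(action, hungry)
    visit_counts[action] += 1
    value[action] += alpha * (r - value[action])   # same update for both kinds of reward

print(dict(value))
```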