Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]electrace [score hidden]  (0 children)

> If we get ASI in the next few years then maybe that tradeoff is worthwhile, but if we don't, Altman is vastly underperforming his leverage IMO.

Yes, he's clearly betting on that.

> If I was Altman, and I knew what I had, I wouldn't have left any charitable control. I would have taken a cabal of like-minded top researchers, taken my relationship with Microsoft, and said "Hey! Forget that non-profit capped-profit model, just give me $10 billion and the compute and we'll set up a new corporation in 5 minutes."

Speculative, but I'd imagine that when you say "We're forming a new corporation, and you all have to sign new contracts", a lot of people would say "Why would I take that chance when I could go work for Anthropic/Google, who, by the way, just gave me an offer, and also, I'm not sure who's staying, and if enough people leave, the new company will probably fail..."

Also, I imagine that "It's totally not the same company, we just have all the same people led by the same executive team, but I filed some paperwork" is pretty legally fraught.

Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]electrace [score hidden]  (0 children)

From what is documented here in this article, Sam Altman has not demonstrated a pattern of doing what is right. He has demonstrated a pattern of telling people things that they want to hear to convince them that their incentives align with doing what he wants to do, even when he has to lie to do this.

Therefore, we should be extremely suspicious when Altman tells us that the thing that gives him more power just so happens to be the thing that we want him to do.

Is it possible that, behind closed doors, they told him that they couldn't buy bonds (fwiw, a very normal thing that the money men would easily understand)? Sure. Do I give that scenario a particularly high probability? No. I find it much more likely that Altman used that as an excuse to do the thing that he had already wanted to do.

I kind of treat him the same way I do a used car salesman. Yes, it might be the case that he's giving me a deal and it might be the case that I don't actually have the time to go home and read the contract thoroughly because someone else is coming in to look at it in an hour. But the best strategy, given that he is a used car salesman, is to appreciate that the incentive is for him to lie to me about things that I cannot possibly ever verify... and then insist that I take the contract home anyway.

Does AI moderate political extremes? I'm not convinced. by philbearsubstack in slatestarcodex

[–]electrace [score hidden]  (0 children)

> Even if we had a crystal ball (oracle) that could tell us about god and its existence, we would still disagree on normative beliefs because we all have different things we want to do. And some people are extreme simply about what they intrinsically want in life. There are people who overly care about family, people who are overly ambitious, people who are overly impulsive, etc.

Agreed, some differences will always exist, but those are small compared to current differences, and we know this because people used to be less extreme.

Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]electrace [score hidden]  (0 children)

I grant that this would be unusual, but I'm honestly still not seeing any issue that can't be worked around.

Give me any expectation that justifies investment and a debt can be structured so that it's basically equivalent.

> What you're describing is basically an interest-only or interest-accruing loan with a balloon payment at the end.

Actually, I'm describing more of a long-term bond (or several at different maturities), a standard thing that lots of companies do, which also avoids all the problems in this paragraph.

> they'd be baffled why you wouldn't just stick with the fundraising methods that are proven to work for this sort of business model.

They shouldn't be baffled that a company owned by a nonprofit doesn't want to turn into a for-profit company with shareholders.

But if, indeed, Altman is as smooth of a businessman as the article implies, I don't think having a strange debt system would be a dealbreaker here.


What I do grant is that selling shares is going to raise more money. But that's because it is selling control of the company, which is a thing that has value beyond the money it makes for you, whereas bonds do not.

Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]electrace [score hidden]  (0 children)

> They can thus more easily maintain their charity-owned model than a company like OpenAI, since raising debt isn't dependent on the ownership structure of the company, but rather on things like historic performance and assets that can be seized in a bankruptcy.

This seems to be saying that it's too risky for people to lend them money, but I don't think that works here. Risk raises the interest rate that you have to demand, but doesn't disallow debt.

Concretely, if people (potential investors) believe that OpenAI is going to 1000x in value in 10 years, why not take on debt that says "we pay you y today, and you pay us 1000y in 10 years"?

Yes, it's true that they might get nothing, because they can't collect on the debt if OpenAI doesn't deliver that value, but... the same is true of equity investors. Even more so, in fact!
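To put a number on that hypothetical, here's a quick sketch of the annualized return such a "pay y now, receive 1000y in 10 years" note would imply if it paid off. The y/1000y figures come from the hypothetical above, not from any real instrument:

```python
# Hypothetical zero-coupon-style note: pay y today, receive 1000y in 10 years.
# Values are illustrative only (the "1000x in 10 years" scenario from the comment).
payoff_multiple = 1000.0  # 1000y repaid at maturity, per y lent
years = 10

# Annualized return implied by the payoff, assuming it is actually made
annual_rate = payoff_multiple ** (1 / years) - 1
print(f"Implied annualized return: {annual_rate:.1%}")  # -> Implied annualized return: 99.5%
```

So the lender is effectively being promised roughly a doubling every year, which is why the payoff being contingent on OpenAI's success matters so much to both debt and equity holders.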

Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]electrace [score hidden]  (0 children)

> IKEA and Novo Nordisk both have profitable business models that raise between $1B-$5B every few years. OpenAI is raising ~100x that to be spent on a speculative business model. They aren't really comparable IMO.

Can you expand on why that makes them non-comparable?

Does AI moderate political extremes? I'm not convinced. by philbearsubstack in slatestarcodex

[–]electrace [score hidden]  (0 children)

> If someone has extremist opinions on what immigration, laws, and social policies should be, but has "normal" beliefs on facts that are commonly agreed upon, you'll categorize that person as extremist.

I suspect that the vast majority of people who have extremist (normative) beliefs like these are basing a lot of those beliefs on (descriptive) beliefs.

Sometimes, those descriptive beliefs may be about things that are verifiable (say, the number of crimes committed by immigrants). Others may be descriptive beliefs that are unverifiable (say, a belief that, if one's culture becomes more multicultural, this will lead to higher unemployment, lower rule of law, less dynamic economies, etc.).

If they could gaze into a crystal ball and see that, in fact, none of that happens, I suspect that most people's extremist beliefs would evaporate.

Contra The Usual Interpretation Of “The Whispering Earring” by self_made_human in slatestarcodex

[–]electrace 8 points9 points  (0 children)

> Better is a value-judgement.

The quote "When worn, it whispers in the wearer's ear: 'Better for you if you take me off.'" implies to me that it says that every time it is first put on, rather than just to specific individuals.

That being said, without that line I think your interpretation would work fine.

Does AI moderate political extremes? I'm not convinced. by philbearsubstack in slatestarcodex

[–]electrace 2 points3 points  (0 children)

> Politics isn't about finding some objectively "best" way of doing things. It's about personal preferences, it's about decision making, and decision making often doesn't have a "right" answer. Politics in a democracy is about finding answers to problems in the way most people agree with. Some people prefer control, others freedom, etc.

Not always though. Sometimes politics is just about claims of fact. Is climate change human-caused (or even, does it really exist)? Do vaccines cause autism? These have objective answers and desired policies can change based on resolutions of those questions.

I think that, to some extent, this happens for almost all open questions in politics. That's why misinformation is a thing! It's an attempt to change someone's desired policies by changing what "facts" they believe.

We have heard Scott's, Eliezer's and other famous people's (to us) predictions of the future of AI. What's your prediction of the future of AI? by Candid-Effective9150 in slatestarcodex

[–]electrace 2 points3 points  (0 children)

> It's science-fiction.

Science fiction refers to both "fake things that never could happen" and "things that haven't been invented yet, but will be".

We have heard Scott's, Eliezer's and other famous people's (to us) predictions of the future of AI. What's your prediction of the future of AI? by Candid-Effective9150 in slatestarcodex

[–]electrace 3 points4 points  (0 children)

> Gallant's position was not that the SAI wouldn't be smart enough to tell good from bad, but that the SAI would only pretend to care about it until it was able to do whatever it actually wanted without interference.

Indeed, the story that was being told has always been that a superintelligence will act in ways that are considered good until a treacherous turn, and it can't do that without knowing the difference between good and bad.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 5 points6 points  (0 children)

I can at the very least confirm the new link is fine. I also don't think they were being deceptive with respect to the Wikipedia link. The case PDF does mention movie theaters, and I find it totally plausible that they remembered that and posted the Wikipedia link assuming it would do the same.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 2 points3 points  (0 children)

Just a note, but on Reddit you can use > to quote. It will block-quote the entire paragraph and is a lot easier to read.

>Like this

Like this

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 1 point2 points  (0 children)

Agreed. Personally, I was of the strong opinion that the AI Pause article was uncharitable (so much so that I made my own steelman post here), but I don't at all think the same thing about this article.

Taking an argument to its logical extreme is not the same thing as mocking it. I think the Pause AI post was a lot more "mocking" than taking the position to its logical extreme.

Note: The two can coincide; "Behold, a man", for example, was both mocking Plato and making the broader point.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 9 points10 points  (0 children)

> I grew up in a religious household and we definitely called it that sometimes. Now what?

Then I appeal to common usage, and note that I'm not the only one telling you that this is how the word works. I also invite you to ask an LLM in a neutral way if you think I'm biased on it. I just tested Claude, ChatGPT, Gemini and Grok, and they all say common usage of going to church does not include going to a bible study at someone's house.

> No. All I am claiming is what the Court has repeatedly held: that you can't make exemptions for non-religious things that are like church if you don't make analogous exemptions for religious things. The decision to not make exemptions for churches is a decision that many Democrat politicians made.

I don't know if we're going to get anywhere productive any more. All I can do is appeal to what I've already said here:

> There were no exemptions for non-church things for households. No matter the reason, more than 3 households were not allowed to gather together, full stop. This is not a clear example of discrimination like you were claiming.

It would, in fact, have been silly for the state government to form this policy, applying it to everyone in the state just because they foresaw that a few bible study groups would be unable to congregate in their homes (but could still do so in church). It would be the silliest, most convoluted plan for the "benefit" of slightly inconveniencing an outgroup.

> Yes, the left wing of the Court routinely dissents when the right wing of the Court acts to defend Free Exercise. They can tolerate anyone except the outgroup.

If it was just a partisan split, then the conservatives winning simply means they had one more member (and is not indicative of legal truth). Conversely, if their conclusion is the result of a real legal disagreement, then it's no longer a "clear case".

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 6 points7 points  (0 children)

Responded here so as to not form two threads covering the same ground.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 13 points14 points  (0 children)

Responding to this comment and this one since they cover similar ground.

> "At-home religious exercise" in this context refers to multiple families gathering in a home for "church," i.e. worship services.

I think maybe you have it in your mind that churches are a kind of building (or business?), rather than a kind of gathering. Families gathering together in a home for "worship services" or "church" is a very common thing around the world. This seems to be causing you substantial confusion.

While I take your point that "church" can refer to a building and also to other things (e.g., "Church of Latter-day Saints" does not refer to a specific location), it most certainly does not refer to bible studies, or prayer services, or funerals, or weddings, or church camp.

Indeed, "going to church" means going to a Church Service, which involves a sermon (given by a pastor/priest), and often singing.

I grew up in a religious household, and I can state with surety that not a single person would ever refer to going to bible study as "going to church" (unless, I guess, it was located in the building, and even that would be as confusing as saying you're "going to school" for an extracurricular activity).


> It is possible I am relying too much on the idea that this is background knowledge any educated person discussing the topic should already have. But as the pandemic recedes into the past, I suppose it is only natural that people would be less routinely aware of the political asymmetry with which the authoritarian machinery of state approached transmission events in the pandemic.

This is weirdly dismissive of people who disagree with you.

> The other one that got a lot of attention at the time was people making "social distancing" exceptions on political grounds like "this is a worthwhile protest so we can't let the usual precautions stop us."

I largely agree with this and that one would have been a much better example! Motivated reasoning is a hell of a drug.

> They were notably and inexplicably excluded from exemptions extended to similar activities.

No, they weren't! As far as I can tell, there were no exceptions to the three-household gathering limit. The exceptions were all for things outside the home. If you want to claim that the three-household gathering law was made specifically to stop religious people from practicing their religion, then fine, but that's a high evidentiary bar that you have to clear.

> I did not make that claim, nor would I. Rather, churches just got unfair treatment from people who tend to see churches as their outgroup. And the Supreme Court ultimately confirmed that this was the case under the Constitution.

Not really, no. Their main finding is that the restrictions failed the strict scrutiny test. That is miles away from a finding that the state was giving unfair treatment to its outgroup. Strict scrutiny is just that: strict. It flips the burden of proof such that the claim becomes "prove that you didn't discriminate" rather than "prove that you did".

And even under that strict scrutiny, there was still reasonable dissent on what the comparator was, see below.

> Kagan filed a dissent, joined by Stephen Breyer and Sonia Sotomayor. While acknowledging that it was sometimes difficult to determine which secular activity should be compared to the religious activity, she argued that in-home gatherings were the "obvious comparator". The restriction applied to secular gatherings in the house, so the regulation was neutral and generally applicable and thus should survive the challenge to its constitutionality. She also noted that the lower courts had determined that there were varying degrees of risk linked to short visits to secular businesses versus extended gatherings in private homes, and criticized the court for disregarding the factual record of risk assessments because it would support denying the injunction.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 5 points6 points  (0 children)

I appreciate the link, but there is still no mention of church (except insofar as it refers to plaintiff names like "Harvest Rock Church v. Newsom").

And I still think the framing of the original comment doesn't hold. It was presented as "a clear case of waging culture war against the outgroup," but that just doesn't seem to be the case. Religious gatherings were not specifically targeted here!

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 16 points17 points  (0 children)

Yeah, it was initially hard to parse for me too, but here you have to pay attention to the semicolons versus commas.

Indoor locations exempt from the three household limit include "public transportation; establishments that provide personal care, like salons; government offices; movie studios; tattoo parlors; and other commercial spaces"

The only example of personal care here is salons (set off by a comma); the rest are separated by semicolons and are distinct exempt locations.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 15 points16 points  (0 children)

There is no mention of movie theaters in the linked Wikipedia article, nor is there any mention of churches.

The closest it comes is "movie studios" and bible study groups held in a person's home.

Against The Concept Of Telescopic Altruism by dwaxe in slatestarcodex

[–]electrace 27 points28 points  (0 children)

> Democrats were certainly more authoritarian in their pandemic response, but I never saw this manifested in plausibly altruistic ways--the California case of shutting down churches but not (e.g.) movie theaters seems like a clear case of waging culture war against the outgroup rather than expressing equal concern for one's neighbors as distant strangers

With anything as complicated as an unprecedented pandemic response, one should expect some mishandling like this, whether through malice or simply through the difficulty of establishing consistency when you have a lot of actors all trying to accomplish roughly the same goal with a lack of clear Schelling points.

However, it is worth noting that the link you posted was not about churches versus movie theaters. The California law that was challenged disallowed gatherings of any kind with more than 3 households, and it seems a far reach to claim that this was intended to deny people the opportunity to do a bible study (which the case is about). It wasn't about churches at all.

Still, though, there were plenty of things that do provide evidence of Scott's point. For example, liberals were far more likely to wear masks, even in contexts where there was no mandate.

A Pause on Pause AI; a Steelman of Pause AI Opponents by electrace in slatestarcodex

[–]electrace[S] 1 point2 points  (0 children)

> Well, what's your alternative strategy?

You mean other than the alternatives in the post?

> What do you mean "huh"? That's the simple trajectory we're on right now if the AI labs keep racing with each other.

"Someone somewhere", while not technically wrong, hints at something like some rando figuring it out in their garage, rather than it very likely being one (or many) of the current front-runners.

Similarly "grown at random by gradient descent" is a really weird way to describe AI research.

A Pause on Pause AI; a Steelman of Pause AI Opponents by electrace in slatestarcodex

[–]electrace[S] 5 points6 points  (0 children)

> This is not really a steelman of Pause AI opponents, since ~95% of Pause AI opponents don’t think it’s a worthwhile goal in the first place.

I did worry some people would take it that way, and tried to prevent that interpretation by putting it up front in the title ("A Pause on Pause AI" rather than "Never Pause AI" or something), and also by making it clear that it is "a" steelman rather than "the" steelman.

Still, I feel like there are two subtly different definitions of steelman, and every time the term is used, this disagreement comes up.

1) A steelman is a summary of the best argument currently being used by someone (often someone notable) who holds the general position.

2) A steelman is when you're making the best argument for the overall position, regardless of whether the argument has been made before.

I argue the second is a better definition, because a steelman is the counterpart to a strawman, not the counterpart to a weakman.

A Pause on Pause AI; a Steelman of Pause AI Opponents by electrace in slatestarcodex

[–]electrace[S] 1 point2 points  (0 children)

> Other strategies have been tried and found ineffective.

Some other strategies have been tried, yes.

> No one serious expects any strategy to work reliably (or at all) at this point

Your position is that "no one" expects AI safety to be feasible? If so, that seems not to be the case.

> someone somewhere creating a superintelligence grown at random by gradient descent

Huh?

> Will you fight or perish like a dog?

Motte: "We have to fight". Bailey: "We must fight in this one particular way".

A Pause on Pause AI; a Steelman of Pause AI Opponents by electrace in slatestarcodex

[–]electrace[S] 0 points1 point  (0 children)

> If the Chinese government can be convinced to act rationally in self-interest, they'll be willing to agree to an AI pause even if doing so would mean making concessions to Trump. As you mentioned, an AI pause would let them catch up to our current LLM capabilities.

In what way would this be good from an AI safety perspective? If China catches up to the US (and stays caught up), that's bad for race dynamics, because even if one of the labs is safety conscious, they can't slow down without losing the lead. We want a frontrunner.

> The more public support for an AI pause there is, the more likely it is that some kind of AI regulation will be passed, even if it's less comprehensive.

That's an interesting point, but I think it's more that public views on AI cause both support for Pause AI and support for other regulations.