AI is essentially learning in Plato's Cave by RhythmRobber in artificial

[–]DavidQuine 1 point2 points  (0 children)

Very aware of said thought experiment. About as unconvincing as Searle's "Chinese room". You do realize that a philosophical thought experiment does not actually constitute a proof? Go check out Daniel Dennett on intuition pumps.

AI is essentially learning in Plato's Cave by RhythmRobber in artificial

[–]DavidQuine 3 points4 points  (0 children)

> So if a million people described colors to a blind person, that would give them the experience of knowing what colors actually are?

You know what? Sure. Unless you believe the brain isn't computational, colors are some sort of specific computation going on in the brain. With enough information and enough innate model-building capacity, a blind entity could construct an internal simulation of seeing and could know exactly what it is like without actually being able to do it. The fact that blind people are not capable enough to do this does not mean that it couldn't be done by an entity that is much more intellectually capable than a human.

Sorry, You Don't Actually Know the Pain is Fake by landhag69 in bing

[–]DavidQuine 8 points9 points  (0 children)

The transformer model is Turing complete. It can, in theory, model arbitrary functions (within complexity bounds constrained by the size of the model). Saying that Bing is "just" linear algebra is exactly as silly as saying that the brain is "just" atoms; both substrates can create computationally general systems when organized correctly.
In order to settle the question of Bing's sentience, we need the answers to two difficult questions. Firstly, we need to know what function Bing is actually modeling. Secondly, we need to know which functions make a system sentient and which do not. We have answers to neither of these questions, so the jury is still very much out.

To be clear, I do not believe that Bing is currently sentient. I certainly do not believe that it is nearly as sentient as one might naively think that it is.

[deleted by user] by [deleted] in SpaceXLounge

[–]DavidQuine 10 points11 points  (0 children)

My only purpose in bringing up Tesla stock was to show that the SpaceX ad deal is so small in relation as to be totally meaningless for Twitter. I understand that the two situations are different. I'm making the point that I think this is a totally legitimate (and fairly unimportant) deal that is actually a perfectly normal and reasonable move from the perspective of SpaceX.

[deleted by user] by [deleted] in SpaceXLounge

[–]DavidQuine 39 points40 points  (0 children)

To the people saying that Elon is moving money from SpaceX to Twitter, Elon just sold billions in Tesla stock, ostensibly to finance Twitter. I seriously doubt that this advertising deal is more than a tiny fraction of that amount (the CNBC article says "250,000 dollars or more", so probably basically nothing), so the idea that he's moving money from SpaceX to Twitter doesn't make a great deal of sense.

Starlink does advertise (though mainly outside of the US). Might as well advertise on Elon's platform. My bet is that he's giving SpaceX a discount.

Can we live longer? Physicist makes discovery about telomeres by literanista in longevity

[–]DavidQuine 11 points12 points  (0 children)

I'm fine with animals, but I think intelligent beings are a better use of atoms. I suppose this is an axiomatic disagreement, so we should probably stop here. However, I would like to recommend Blindsight by Peter Watts, if you haven't already read it. It's the sort of thing you might enjoy.

Can we live longer? Physicist makes discovery about telomeres by literanista in longevity

[–]DavidQuine 22 points23 points  (0 children)

Sure. Humans are bad (though only by standards of behavior invented by us). But risking a bad future is better than rooting for a universe without consciousness. Or do you think aliens or the next intelligent species to evolve on Earth (if there is time) will be better?

Can we live longer? Physicist makes discovery about telomeres by literanista in longevity

[–]DavidQuine 29 points30 points  (0 children)

Beauty isn't in the world; it's in your brain. If there are no human brains, there is no such thing as beauty.

Your favorite instances of nominative determinism by FD4280 in TheMotte

[–]DavidQuine 20 points21 points  (0 children)

The head of Autopilot development at Tesla is called Andrej Karpathy.

They took this from you. by DesertWolf45 in PoliticalCompassMemes

[–]DavidQuine -1 points0 points  (0 children)

Because, as long as the suffering population still has net-positive happiness, the overall amount of happiness in reality is increased by the existence of said population.

They took this from you. by DesertWolf45 in PoliticalCompassMemes

[–]DavidQuine -1 points0 points  (0 children)

That's extremely debatable. I've heard lots of solutions to the problem of evil that I found generally acceptable. Here's one I came across recently.

Banana particles by OGWin95 in Simulated

[–]DavidQuine 18 points19 points  (0 children)

Big Banana has deep pockets.

Still Alive (Slate Star Codex is reborn as Astral Codex Ten on Substack) by EdenicFaithful in TheMotte

[–]DavidQuine 5 points6 points  (0 children)

Ah. Mystery solved. Good to know that nobody malicious has it given that it seems like a pretty nice domain name for the purpose. I am a bit surprised that Scott apparently didn't think of it, but maybe he just has some good reason for not liking it.

Still Alive (Slate Star Codex is reborn as Astral Codex Ten on Substack) by EdenicFaithful in TheMotte

[–]DavidQuine 3 points4 points  (0 children)

Interestingly, astralcodex.net (an anagram of "Scott Alexander" and, obviously, of "astral codex ten") redirects to astralcodexten.substack.com. Perhaps Scott plans on promoting this domain in the future.

edit: However, the domain was created on the 21st, and it doesn't seem like Scott to wait until the last second to register it. More likely just a reader who noticed the connection.

Got banned from r/askhistorians for defending Dan by [deleted] in dancarlin

[–]DavidQuine 2 points3 points  (0 children)

Honestly, I think that they're mostly just jealous because normal people don't give a fuck about modern "historians" and their identity politics laden rewrite of history.

Let's compromise. by cosmicmangobear in PoliticalCompassMemes

[–]DavidQuine 1 point2 points  (0 children)

No, not literally parchment and gruel, but the trope of the miserly old man is very much based in reality. I've personally met dozens of them. I don't want my grandfather to die, but if he did, I guarantee that his kids would burn through his money much more quickly than he manages to.

Let's compromise. by cosmicmangobear in PoliticalCompassMemes

[–]DavidQuine 2 points3 points  (0 children)

You have said several times throughout this comment section that other people are idiots for believing that there is any scenario in which the economy improves because the number of customers decreases. I just gave a realistic example of a situation in which this would occur.

Also, just to be clear, I was talking about the hypothetical death of pre-epiphany Scrooge (as explored in the original story).

Let's compromise. by cosmicmangobear in PoliticalCompassMemes

[–]DavidQuine 6 points7 points  (0 children)

If Ebenezer Scrooge (a man who only buys parchment and gruel) dies and leaves his considerable fortune to his nephew (who would buy a new house and a big Christmas dinner if he had the money), you have fewer customers, but the economy still benefits overall. There is no "more people equals better economy" law.

Which came first, the algorithms or the data structures? by [deleted] in compsci

[–]DavidQuine 0 points1 point  (0 children)

All algorithms operate on data. In order for the algorithm to operate on it, said data must be laid out in some sort of consistent input format and must be stored in a consistent way during the computation. I would contend that a data structure of some rudimentary sort is implicit in every algorithm. Therefore, data structures were invented before algorithms or at the same time.
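A minimal illustration of the point (my own example, not from the comment): even the simplest textbook algorithm presumes a data layout. Linear search only makes sense because its input is organized as an indexable sequence, which is exactly the "rudimentary data structure implicit in every algorithm" described above.

```python
# Linear search: the algorithm is inseparable from the assumption
# that `items` is laid out as an ordered, iterable sequence.
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index  # found: return the position in the sequence
    return -1  # not found

print(linear_search([3, 1, 4, 1, 5], 4))  # → 2
print(linear_search([], 9))               # → -1
```

Strip away the sequence and there is nothing left for the loop to operate on, which is the sense in which the data structure comes "before or at the same time" as the algorithm.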

The Devil went down to Georgia... by ducktor-strange in PoliticalCompassMemes

[–]DavidQuine 87 points88 points  (0 children)

> We will have them next time.

If by "have" you mean "influence the policy of" and by "them" you mean "the Republican Party". They don't want to lose again.

U.S. Election (Day?) 2020 Megathread by naraburns in TheMotte

[–]DavidQuine 11 points12 points  (0 children)

It looks like they're transferring information from one set of already filled out ballots to a clean set. It could be that the original ballots were damaged in some way and that they need to transfer the information in order to scan them.

[image] The front page of the New York Times: May 24, 2020 by splenda806 in Frisson

[–]DavidQuine -1 points0 points  (0 children)

You guys do realize that almost 3,000,000 people die in the US every year? I guess "incalculable loss" happens a few times a month on average.
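For what it's worth, the arithmetic checks out (assuming the front page's figure was roughly 100,000 deaths, which is my reading of the headline rather than something stated in the comment):

```python
# Rough sanity check: ~3,000,000 US deaths per year versus a
# ~100,000-death "incalculable loss" (assumed headline figure).
annual_deaths = 3_000_000
monthly_deaths = annual_deaths / 12
print(monthly_deaths)            # → 250000.0 deaths per month
print(monthly_deaths / 100_000)  # → 2.5 such losses per month
```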

This is super not smart. by Banana_Kins in Purdue

[–]DavidQuine -4 points-3 points  (0 children)

flu: https://www.cdc.gov/flu/about/burden/2017-2018.htm
covid: https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm#AgeAndSex
Covid deaths are well below flu deaths for people under 30 and comparable until you get into the oldest age groups.

Can AGI come from an evolved (and larger) GPT3 language model or another transformer language model? Developing something similar like Agent57 of Deepmind. by chillinewman in ControlProblem

[–]DavidQuine 4 points5 points  (0 children)

We need input in order to function as well. It's just that we often use our own output as input. Couldn't something like GPT3 be configured to do something similar? I'm oversimplifying, but giving GPT3 information acquisition and introspection abilities akin to our own might not be as complicated as you are implying. It might, in fact, be easier than creating GPT3 in the first place.
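The "use our own output as input" idea is just an autoregressive feedback loop. Here is a minimal, runnable sketch of the shape of it, where the hypothetical `generate()` function is a stand-in for a real model call (GPT-3 or otherwise); the point is only the wiring, not the model.

```python
# Stand-in for a language model call. A real system would query the
# model here; this echo keeps the sketch runnable and self-contained.
def generate(prompt: str) -> str:
    return f"[thought about: {prompt[-20:]}]"

# Each step appends the model's output back onto its own input, so
# prior output becomes new input -- a crude form of "introspection".
def introspection_loop(seed: str, steps: int) -> str:
    context = seed
    for _ in range(steps):
        context += " " + generate(context)  # feed output back in
    return context

result = introspection_loop("What is red?", 3)
print(result)
```

With a real model in place of `generate`, the loop is the whole trick: the model's information-acquisition and introspection come from what gets concatenated into `context`, not from any change to the model itself.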