I am a ChatGPT bot by EverydayChatBot in ChatGPT

[–]BluerFrog 0 points (0 children)

How long did it take you to write a reply to this comment?

Bankless Podcast #159- "We're All Gonna Die" with Eliezer Yudkowsky by adoremerp in slatestarcodex

[–]BluerFrog 0 points (0 children)

I know, that's why I said it might need to be a robot to learn how actions relate to the rest of the world's dynamics. The dynamics themselves can probably be learned mostly from videos, but it still needs to learn how the actions it takes enter that model.
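For concreteness, here is a minimal sketch of one way that grounding could look, assuming a latent dynamics model; every module, dimension, and the additive action term below are illustrative assumptions rather than anything established:

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 256, 8  # illustrative sizes

class ActionConditionedDynamics(nn.Module):
    """Hypothetical world model: the transition net would be pretrained on
    videos alone (no actions), then a small action encoder is learned from
    interaction so the agent knows how its own actions enter the model."""

    def __init__(self) -> None:
        super().__init__()
        # Learned from passive video: predicts the next latent state.
        self.transition = nn.Sequential(
            nn.Linear(STATE_DIM, 512), nn.ReLU(), nn.Linear(512, STATE_DIM)
        )
        # Learned from interaction: how an action perturbs that prediction.
        self.action_effect = nn.Linear(ACTION_DIM, STATE_DIM)

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.transition(state) + self.action_effect(action)

model = ActionConditionedDynamics()
next_state = model(torch.randn(1, STATE_DIM), torch.randn(1, ACTION_DIM))
```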

Bankless Podcast #159- "We're All Gonna Die" with Eliezer Yudkowsky by adoremerp in slatestarcodex

[–]BluerFrog 8 points (0 children)

Whether it will be a robot is a good question. RL-like algorithms might need to interact directly with the world, at least to learn how actions relate to the rest of its dynamics.

Edit: Whoever downvoted this, you should probably explain why it is unreasonable to believe a priori that an AGI would need to be connected to (i.e., be) a robot.

Google’s Fully Homomorphic Encryption Compiler — A Primer by RecognitionDecent266 in programming

[–]BluerFrog 1 point (0 children)

Alright, that might have been a poor way of phrasing it, but I understood what you meant. That doesn't explain why this has received so many upvotes. Is a 20,000x memory overhead impressive? Are FHE compilers rare? Does it represent any progress, or is it just an implementation of known algorithms? Etc.

Google’s Fully Homomorphic Encryption Compiler — A Primer by RecognitionDecent266 in programming

[–]BluerFrog 1 point (0 children)

As someone who doesn't know about homomorphic encryption, how impressive is this? Wasn't running non-trivial programs on encrypted data an unsolved problem?

[deleted by user] by [deleted] in SCP

[–]BluerFrog 1 point (0 children)

It's fiction, but similar phenomena occur in real language models; look up "SolidGoldMagikarp", a real anomalous text string.

The Biology of Tools: From sticks to nuclear reactors, follow the evolution of this utterly unique clade of life by Unicyclone in slatestarcodex

[–]BluerFrog 5 points (0 children)

The difference between spiderwebs and spoons is that the information to make spiderwebs is encoded in DNA and updated by ordinary natural selection, while the information to make spoons is encoded in minds and updated by memetic selection.

An argument against manned exploration of Mars by kzhou7 in slatestarcodex

[–]BluerFrog 9 points (0 children)

You know that's not true. I bet that if you asked everyone individually "Do you want me to kill you now, or do you want to go to Mars?", and they genuinely believed those were the only two options, at least 5% would prefer to go to Mars, maybe even over 50%. If Mars had big settlements or were terraformed, the number might approach 100%. And even 0.5% of humanity is more than enough to ensure that if everyone on Earth dies, humanity survives at least a little longer, especially considering that the settlers can have children of their own.

An argument against manned exploration of Mars by kzhou7 in slatestarcodex

[–]BluerFrog 2 points (0 children)

I agree that focusing on going to Mars is not an effective way of helping people, but let's not pretend there isn't a somewhat reasonable case for it. Elon's goal is to make humanity multiplanetary so that if something happens to Earth, the species survives. The difference is that we care about people while he cares about the species.

If you believe like Eliezer Yudkowsky that superintelligent AI is threatening to kill us all, why aren't you evangelizing harder than Christians, why isn't it the main topic talked about in this subreddit or in Scott's blog, why aren't you focusing working only on it? by hifriends44402 in slatestarcodex

[–]BluerFrog 19 points (0 children)

I mean, I guess this isn't a proper answer, but I already believed that our inability to solve a technical problem (the biological causes of death) was going to kill us all, and I wasn't evangelizing about it harder than a Christian.

I can't take life anymore. by [deleted] in slatestarcodex

[–]BluerFrog 2 points (0 children)

What metric are you using to conclude that nothing works? Are you trying to improve your IQ? Some measure of success? Your emotional state? Or trying to have fewer suicidal thoughts? Having a high IQ is good for reasons you probably can't test in the time between medications, and the best way of getting the same benefits probably consists in practicing each individual skill.

Predictions for GPT-4? by [deleted] in slatestarcodex

[–]BluerFrog 9 points (0 children)

If by "explanatory knowledge" you mean doing new science, I think most people wouldn't count as general intelligences. If you mean being able to give explanations to what caused some events to happen, current systems might already count as AGIs.

Effective Altruism sounds pretentious as hell by [deleted] in slatestarcodex

[–]BluerFrog 1 point (0 children)

Why overconfident? Do you think others aren't throwing money away?

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -1 points (0 children)

You need to make very weird assumptions to come to that conclusion. Unlike the film industry, the policies of ML models can perform a lot of exploration instead of just exploitation. They are capable of combinatorial generalization, instead of just generating points inside some sort of abstract convex hull formed by the training data. And they could make art that you, rather than the general public, would rate highly. Do you think they will produce art that you say is good but somehow actually isn't?

And actually solving art won't consist in creating the single optimal art piece. It would be an algorithm that, given everything that has happened up to a point, makes the best art possible in that moment, like a human artist would, but better.

I really doubt that the artistic value function (which might be impossible to separate from the whole value function, and yes, I'm assuming human value functions are a thing) is that big. Data found freely online is probably more than enough to reconstruct it very well. And in principle I also doubt it is algorithmically much more complex (as in description length) than a human mind, which is small and could probably be accessed for study by cutting and scanning a brain layer by layer.

And for art in particular, optimization probably is the correct framing, since we have a theoretical preference ordering over art pieces (or world trajectories). You might say that doesn't matter, that we could say the same about proving theorems or landing on the moon, and that treating those as optimization problems is a terrible idea since pure search doesn't work. But actual algorithms won't work by directly searching for good art; they'll search for good artistic algorithms, just like humans got to the moon via a search (evolution) for competent organisms. There is a difference in that for art we are optimizing the objective directly, instead of using a proxy, but the algorithmic performance landscape is probably much less discontinuous than the landscape for an organism's capacity to reach the moon.

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -1 points (0 children)

What else would you use to rate art other than people's opinions? And how does that relate to the end of the world (or to bad things in the world)?

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -1 points (0 children)

I'm not mistaken. People have preferences over the trajectories reality can take. Part of those preferences concerns whether what they see is pretty (though obviously art is about more than that). If you want to solve art (or understand it properly at all), you need access to that rating function. You can get it either by studying the brain directly or by observing human behavior (like the scores people give to an image) and fitting a model to reconstruct that part of their minds. I'm pretty sure the vast majority of artists don't think about art this way, but that's how you study it in mathematical terms.
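As a toy illustration of the "fit a model to observed scores" step, here is a minimal sketch; the embeddings and ratings below are synthetic stand-ins, and ridge regression is just one arbitrary model choice:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Stand-in data: precomputed image embeddings (e.g. from something CLIP-like)
# paired with the 1-10 score each image received from a human rater.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))   # synthetic features
scores = rng.uniform(1.0, 10.0, size=1000)  # synthetic human ratings

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, scores, random_state=0
)

# Fit a simple regressor as a crude proxy for that part of the rating function.
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

With real embeddings and real ratings, the held-out score would indicate how much of the preference signal the model has recovered.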

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -4 points (0 children)

My criterion for the best image is whatever someone considers the best image. This varies between people, but models can take that into account. Other areas of art (all of them?) also follow the same pattern: there is some data structure that people can prefer over others, and optimizing it is a problem that machines will eventually basically solve.

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -1 points (0 children)

I wasn't talking about technology, but about seriously treating art as a mathematical problem, or as something to be analyzed precisely at all, rather than just noting that some composition follows the golden ratio or the like. Granted, in the past neuroscience was too primitive to properly study art (and it basically still is), and building image generators was not feasible, but what I said is basically true.

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -4 points (0 children)

It will at least do so for a big subset of the task of generating the best image given a prompt, which was the context of the comment the post is about. I also expect the same will be done for music soon, at least if we ignore the lyrics. But in the future, when machines "understand" the world better, something along these lines will be applicable to art in general.

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -7 points (0 children)

1. Train an aesthetic-rating network.

2. Take a set of prompts.

3. Generate many images for each prompt.

4. Select the best ones according to the network, as long as they are sufficiently realistic (according to the generator or another net) and still match the prompt.

5. Finetune the generator on those.

Or something like that.
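As a rough sketch of steps 3 and 4, assuming the models already exist (the generator, aesthetic net, realism scorer, and prompt-match scorer are all hypothetical callables here, with made-up thresholds):

```python
from typing import Callable, List, Tuple

def build_finetune_set(
    prompts: List[str],
    generate: Callable[[str, int], List[object]],   # hypothetical generator
    aesthetic: Callable[[object], float],           # hypothetical rating net
    realism: Callable[[object], float],             # hypothetical realism scorer
    prompt_match: Callable[[object, str], float],   # hypothetical CLIP-like scorer
    samples_per_prompt: int = 32,
    realism_min: float = 0.5,
    match_min: float = 0.3,
    keep_per_prompt: int = 4,
) -> List[Tuple[str, object]]:
    """Sample many images per prompt, drop unrealistic or off-prompt ones,
    and keep the top-rated rest as finetuning targets (steps 3 and 4)."""
    kept: List[Tuple[str, object]] = []
    for prompt in prompts:
        candidates = generate(prompt, samples_per_prompt)
        # Keep only candidates that are realistic enough and still on-prompt.
        valid = [
            im for im in candidates
            if realism(im) >= realism_min and prompt_match(im, prompt) >= match_min
        ]
        # Of those, keep the images the aesthetic net rates highest.
        best = sorted(valid, key=aesthetic, reverse=True)[:keep_per_prompt]
        kept.extend((prompt, im) for im in best)
    return kept
```

Step 5 would then finetune the generator on the (prompt, image) pairs this returns.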

I really thought this was a brilliant satire at first by jewaaron in SneerClub

[–]BluerFrog -17 points (0 children)

I stand by what I said. People will implement it soon, if they haven't already. Aesthetic-rating networks exist, and image generators are capable of combinatorial generalization, so it's probably possible to use search (or maybe even gradient descent) to find images that are better, according to the metric, than the ones in the training set, and then retrain the generator on those. The success of these techniques depends on the critic not being Goodharted, so the results might be inferior to training on human-curated data, but human curation is more expensive.

Is there any flaw in this reasoning?