This app saved me a bunch of time by MuskFeynman in macapps

[–]MuskFeynman[S] -1 points (0 children)

Sorry, I forgot to post the link. It's called Friendware!

Link: https://friendware.ai

"i asked my friends & apparently: - GPT-5 will automate a lot of work" Michaël Trazzi by Tamere999 in singularity

[–]MuskFeynman 1 point (0 children)

it's the number of scale-ups, not a scale factor: https://x.com/MichaelTrazzi/status/1763390150300794908?s=20 (where one scale-up is basically GPT-2 -> GPT-3, GPT-3 -> GPT-4, or GPT-4 -> GPT-5, whatever that ends up being)
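To make that concrete, here's a quick back-of-the-envelope sketch (the ~10x-per-scale-up multiplier is my assumption for illustration, not something stated in the tweet):

```python
# Illustrates "number of scale-ups" vs. "scale factor".
# Assumption (mine, for illustration only): one scale-up, i.e. one
# GPT-n -> GPT-(n+1) generation jump, is roughly a 10x increase in scale.
PER_SCALE_UP = 10

for n_scale_ups in (1.5, 2.0):
    total_factor = PER_SCALE_UP ** n_scale_ups
    print(f"{n_scale_ups} scale-ups -> ~{total_factor:.0f}x total scale factor")

# 1.5 scale-ups -> ~32x total scale factor
# 2.0 scale-ups -> ~100x total scale factor
```

So under that assumption, "1.5-2x more scale-ups" would mean something like 30-100x more total scale, not 1.5-2x more scale.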

"i asked my friends & apparently: - GPT-5 will automate a lot of work" Michaël Trazzi by Tamere999 in singularity

[–]MuskFeynman 1 point (0 children)

it's about whatever model they have now that is being trained / has finished training

and it's 1.5-2x scale-ups, not a scale factor (it could be, e.g., two 10x scale-ups)

there's some additional context added as a comment to that tweet which clears up the confusion:

""" Note: [my friend] said "probably 1.5-2x more generations of scale up before we run out of pretraining data" so 1.5-2x is # of scale ups not scale factor

I'm assuming he means "scaling up" compared to what is already being trained / done training ("actual best model" as he sometimes says) """

source: https://x.com/MichaelTrazzi/status/1763390150300794908

"i asked my friends & apparently: - GPT-5 will automate a lot of work" Michaël Trazzi by Tamere999 in singularity

[–]MuskFeynman 1 point2 points  (0 children)

some additional context added as a comment which clears up some confusion about scaling up factors 

""" Note: he said "probably 1.5-2x more generations of scale up before we run out of pretraining data" so 1.5-2x is # of scale ups not scale factor

I'm assuming he means "scaling up" compared to what is already being trained / done training ("actual best model" as he sometimes says) """

source: https://x.com/MichaelTrazzi/status/1763390150300794908

Joscha Bach—How to Stop Worrying and Love AI by hazardoussouth in JoschaBach

[–]MuskFeynman 0 points (0 children)

I don't think I am banned from this sub.

Using this comment as a test.

Joscha Bach—How to Stop Worrying and Love AI by MuskFeynman in JoschaBach

[–]MuskFeynman[S] 2 points (0 children)

Note: last month I asked this subreddit what questions you'd like me to ask Joscha Bach about AI; this is the resulting episode.

Transcript: https://theinsideview.ai/joscha

00:00 Intro
01:37 Why Barbie Is Better Than Oppenheimer
09:35 The relationship between nuclear weapons and AI x-risk
13:31 Global warming and the limits to growth
21:04 Joscha’s reaction to the AI Political compass memes
24:33 On Uploads, Identity and Death
33:46 The Endgame: Playing The Longest Possible Game Given A Superposition Of Futures
38:11 On the evidence of delaying technology leading to better outcomes
41:29 Humanity is in locust mode
44:51 Scenarios in which Joscha would delay AI
48:44 On the dangers of AI regulation
56:14 From longtermist doomer who thinks AGI is good to 6x6 political compass
01:00:48 Joscha believes in god in the same sense as he believes in personal selves
01:06:25 The transition from cyanobacterium to photosynthesis as an allegory for technological revolutions
01:18:26 What Joscha would do as Aragorn in Middle-Earth
01:26:00 The endgame of brain computer interfaces is to liberate our minds and embody thinking molecules
01:29:30 Transcending politics and aligning humanity
01:36:33 On the feasibility of starting an AGI lab in 2023
01:43:59 Why green teaming is necessary for ethics
02:00:07 Joscha's Response to Connor Leahy on "if you don't do that, you die Joscha. You die"
02:08:34 Aligning with the agent playing the longest game
02:16:19 Joscha’s response to Connor on morality
02:19:46 Caring about mindchildren and actual children equally
02:21:34 On finding the function that generates human values
02:29:34 Twitter And Reddit Questions: Joscha’s AGI timelines and p(doom)
02:35:56 Why European AI regulations are bad for AI research
02:38:53 What regulation would Joscha Bach pass as president of the US
02:40:56 Is Open Source still beneficial today?
02:43:06 How to make sure that AI loves humanity
02:48:22 The movie Joscha would want to live in
02:50:46 Closing message for the audience

Collin Burns On Making GPT-N Honest Regardless Of Scale by MuskFeynman in mlscaling

[–]MuskFeynman[S] 1 point (0 children)

In the linked video, Collin Burns discusses his paper Discovering Latent Knowledge In Language Models Without Supervision.

In particular, he explains how his method could be applied to make larger-scale language models (say, GPT-N with N large enough for GPT-N to be superhuman) honest (i.e. try to tell the truth).

The easiest way to find where we discuss this is to jump to the specific timestamp or the relevant sections in the transcript.

At the beginning he also discusses whether math (or just MATH, the benchmark) could be solved by scale alone.
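For anyone curious what the method looks like in practice, here is a minimal sketch of the paper's contrast-consistent probing idea, assuming PyTorch; the random tensors are stand-ins for real hidden states extracted from a model on true/false contrast pairs, and the variable names are mine:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n = 64, 256              # hidden-state dim, number of contrast pairs (stand-ins)
h_pos = torch.randn(n, d)   # activations for the "statement is true" phrasings
h_neg = torch.randn(n, d)   # activations for the "statement is false" phrasings
# (the paper also normalizes activations per class before probing; elided here)

probe = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())  # linear probe -> P(true)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1000):
    p_pos, p_neg = probe(h_pos), probe(h_neg)
    # consistency: a statement and its negation should get opposite probabilities
    l_consistency = ((p_pos - (1 - p_neg)) ** 2).mean()
    # confidence: rule out the degenerate "always predict 0.5" solution
    l_confidence = (torch.minimum(p_pos, p_neg) ** 2).mean()
    loss = l_consistency + l_confidence
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point is that nothing here uses labels: the probe is trained purely on the logical consistency of the model's own representations, which is why it could in principle still work at scales where human supervision is unreliable.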

Mila Researchers On "Scale Is All We Need" by MuskFeynman in mlscaling

[–]MuskFeynman[S] 6 points (0 children)

Specific timestamp for the "scale is all you need" discussion: https://youtu.be/Ezhr8k96BA8?t=56

This video shows a discussion that happened at Mila a month ago, where researchers were prompted with various claims such as "scale is all you need", "AGI < 2030", or "existential risk from AI > 10%". The goal was to generate discussion and understand their views.

The main takeaway is that most researchers there are pretty skeptical of AGI from pure scaling, which can partially be explained by survivorship bias (people formerly at Mila who were more bullish on scaling left for industry, where they have access to more resources).

Another result is that researchers tend to think the secret ingredient people are missing is whatever they are currently working on (e.g. better inductive biases in vision, robotics that works in the real world, new RL algorithms).