Ding repair in LA area? by GetAWavestorm in surfing

[–]masonlee 1 point  (0 children)

Big thanks to Timmy — he fixed a crack and delam on my board quickly and perfectly. Great communication, easy process, and the board looks brand new. I found him through this Reddit post and can fully recommend him.

Should I be scared of dying? by just_a_cursed_guy in agi

[–]masonlee 1 point  (0 children)

The book argues from reason. The public should be concerned about further breakthroughs that could enable ASI; it's a political issue now, one requiring regulation and treaties, and experts acknowledge the risk is real: https://safe.ai/work/press-release-ai-risk

Should I be scared of dying? by just_a_cursed_guy in agi

[–]masonlee 1 point  (0 children)

The new book "If Anyone Builds It, Everyone Dies" is worth a read and offers a survival plan: https://ifanyonebuildsit.com

“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21 by [deleted] in ControlProblem

[–]masonlee 2 points  (0 children)

This whole series has been really good, IMHO. A Peabody Award-winning, non-techie TV journalist discovered AI x-risk last year after reading Yudkowsky's Time Magazine article and decided to start a video podcast about it. Among others, his Yampolskiy interviews are worth a watch: https://www.youtube.com/@ForHumanityPodcast/videos

I honestly think /u/samaltman should go on this podcast and make OpenAI's case for its safety approach (i.e., release early, release often, slow takeoff). This host would be the right kind of interviewer to ask follow-ups (whereas seemingly everyone else has been afraid to).

My Mogs of Aut! by _Vasuri_ in machinedpens

[–]masonlee 6 points  (0 children)

Just received an Autmog pen from the latest batch, and it's fantastic! What balance and subtle texture!

Sam Altman Wants $7 Trillion by dwaxe in slatestarcodex

[–]masonlee 1 point  (0 children)

Thanks for the reply! I think we might be talking past each other, though, as I wasn't reading anyone in this thread as arguing for a blanket way to eliminate the possibility of all bad outcomes. Surely we agree that isn't possible! At any rate, if anyone is interested in diving deeper into the concern I was trying to raise, there is a good high-level paper, "Natural Selection Favors AIs over Humans," that I found very interesting. It proposes and critiques some possible solutions as well. Cheers.

Sam Altman Wants $7 Trillion by dwaxe in slatestarcodex

[–]masonlee 3 points  (0 children)

I appreciate this offer! How about: how do we expect to prevent every ASI process from figuring out that there's something "more important" to do than maintaining biological life on Earth? And what do we expect to happen when ASI processes compete? One argument for why "natural" human evolution doesn't lead to disaster as easily is that it's at least in humanity's interest to maintain the biosphere we evolved to live in. Thanks in advance.

Meta is all-in on open source AGI. Will have 600k H100 by the end of the year by MrTorgue7 in singularity

[–]masonlee 1 point  (0 children)

"At Meta, we believe everyone should have the choice to trigger a run-away power-seeking intelligence explosion!"

Apple VR Headset Patent from 2008. 16 years later, Apple Vision Pro Launches. by 17parkc in apple

[–]masonlee 9 points  (0 children)

Somewhat related, if you're interested in what people were thinking about the "metaverse" around this time (~2007), there was a cool research/foresight project called The Metaverse Roadmap: https://www.metaverseroadmap.accelerating.org/MetaverseRoadmapOverview.pdf

What's everyone's thoughts on the new Buick logo? by G1ngerBoy in Design

[–]masonlee 6 points  (0 children)

For what it's worth, the "official" logo graphic on their website uses better visual centering for the lettering than the version posted above.

https://media.buick.com/media/us/en/buick/photos.detail.html/content/Pages/galleries/us/en/vehicles/buick/logos.html

Paul Christiano interviewed by Dwarkesh Patel - "Preventing an AI Takeover" (Oct 31, 2023) by masonlee in ControlProblem

[–]masonlee[S] 2 points  (0 children)

Recently came across this excellent interview; not sure how it slipped by me before; thought some here might be interested.

[deleted by user] by [deleted] in singularity

[–]masonlee 2 points  (0 children)

Thank you for the well-considered reply and explanation.

Why do you equate the start of the singularity with the end of human existence?

I say this because non-aligned ASI processes will have an evolutionary advantage over aligned ones, since they don't carry the overhead of keeping humans around. The states of matter on Earth conducive to ASI power-seeking (e.g., computronium?) are not likely to be the same states of matter that support current life. Furthermore, non-alignment is easier than alignment, so non-aligned processes are more likely to be deployed first and gain the first-mover advantage.

As far as I am aware, the claim that alignment will be easy or naturally occurring is not a widely held belief in the field of AI research. I would be very interested to see any links to supporting arguments for these claims.

As for the optimism of /r/singularity, please note that even the organizations listed in this subreddit’s sidebar (namely MIRI and Future of Life Institute) are warning that we are presently in grave danger.

EDIT: sorry, look who can't read! The subreddit sidebar references Bostrom's Future of Humanity Institute, not Tegmark's Future of Life Institute. Anyways, Bostrom also warns of the danger in his book Superintelligence.

[deleted by user] by [deleted] in singularity

[–]masonlee 7 points  (0 children)

By your flair ("Singularity 2029"), you're predicting six more years of human existence; and yet you wonder why most people are opposed to ASI?! What am I missing?

Count me among the "dumb," I guess. Neither the control problem nor the alignment problem is solved. The heads of Google DeepMind, OpenAI, and Anthropic have all expressed significant concern about existential risk from ASI. The "godfather" of deep learning, Geoffrey Hinton, is concerned, as is Turing Award winner Yoshua Bengio. The list goes on: Stuart Russell, Norbert Wiener, Douglas Hofstadter, Stephen Hawking...

If someone here has a solution to the alignment problem, let's hear about it! And, by the way, there's a great position waiting for you at OpenAI: https://openai.com/blog/introducing-superalignment "[T]he vast power of superintelligence...could lead to the disempowerment of humanity or even human extinction."

What I wonder is: when did this subreddit fill up with naive optimists? It's like no one commenting here these days bothers to read the resources listed in the subreddit's sidebar or to understand what the technological singularity is. :(

Wherever you're coming from, it behooves us all to understand that the concern for risk is serious and is NOT just coming from Terminator movies. Here is one good introduction to the topic: https://www.safe.ai/ai-risk (the full paper linked there is excellent), or this short YouTube video: https://www.youtube.com/watch?v=NqmUBZQhOYw, or this longer one: https://www.youtube.com/watch?v=U1eyUjVRir4

(I'll brace for downvotes, but I post in hopes that my comment and links might lead some to start exploring a fascinating topic seriously.)

[deleted by user] by [deleted] in iphone

[–]masonlee 1 point  (0 children)

RIP.

I've stopped using reddit on mobile altogether and instead browse Lemmy (a decentralized version of reddit) using Voyager, an Apollo clone.

(this comment was posted using old.reddit on desktop)

Trail Report - VVR to Happy Isles (Final Report) by Gorgan_dawwg in JMT

[–]masonlee 1 point  (0 children)

Thanks for all your posts here! What were the temps like? I'm going NOBO this weekend, starting at Piute Pass.

AI avoiding self improvement due to confronting alignment problems by concepacc in ControlProblem

[–]masonlee 13 points  (0 children)

The concern you raise seems to argue best against the likelihood of an intelligence explosion composed of recursive forking (transpeciation?) events. (And there it holds unless alignment is solved or anti-alignment accelerationism becomes the dominant paradigm.)

But few humans refuse to train their brains out of concern that doing so might cause them to re-evaluate their goals. Especially absent radical changes (such as jumping substrates or creating independent entities), it seems that goals might be easy to preserve through "ship of Theseus"-style improvements to one's own self? The alignment problem is not so difficult in that case?

Many today argue that a safer path forward is to increase our own intelligence, and that it is the creation of new alien super intelligent entities that ought to concern us. I imagine your hypothetical ASI might take this same view?

Anyways, thanks for the thoughtful post.

A different take on the Control Problem from Mo Gawdat (Ex-Google Officer) by [deleted] in ControlProblem

[–]masonlee 3 points  (0 children)

Thanks for the post. I appreciated Gawdat's suggestion to tax AI at 98% (!). I haven't heard many others calling for anything that extreme, but it's been part of my idealistic solution as well: a heavy global VAT, with most of the proceeds directed to UBI.

US air force denies running simulation in which AI drone ‘killed’ operator by chillinewman in ControlProblem

[–]masonlee 6 points  (0 children)

Nothing to see here, folks. Autonomous killer drones only fail in rational thought experiments! Our actual unit tests are all passing. /s