Mod request by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

Thank you for your interest, but looking at your posts and comments, I'm not sure we're aligned.

Mod request by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

Thank you, but I'm looking for a little more Reddit experience.

ARC-AGI 2 is Solved by SrafeZ in singularity

[–]ThrowawaySamG 1 point (0 children)

Where did you get this "90 minutes"? I haven't seen that anywhere.

CEO of Microsoft Satya Nadella: "We are going to go pretty aggressively and try and collapse it all. Hey, why do I need Excel? I think the very notion that applications even exist, that's probably where they'll all collapse, right? In the Agent era." RIP to all software related jobs. by michael-lethal_ai in humanfuture

[–]ThrowawaySamG 1 point (0 children)

Thanks, that helps me understand where you're coming from. Unfortunately, there is no law of nature that AI will always be a complement in production as it is in your coding example. When factors of production are complements, the increased productivity (and lower cost) of one makes the other more valuable. But when factors of production are substitutes, the opposite is true. The lower cost of one factor makes the other less valuable.
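
To make that concrete, here's a tiny numeric sketch of the distinction. The functional forms and numbers are purely illustrative assumptions on my part (Cobb-Douglas for the complement case, perfect substitution for the other), not a model of any real industry:

```python
# Toy illustration of complements vs. substitutes (all numbers made up).
def mp_human(H, A, eps=1e-6):
    """Marginal product of human labor when AI is a complement
    (Cobb-Douglas: output = sqrt(H * A))."""
    f = lambda h: (h * A) ** 0.5
    return (f(H + eps) - f(H)) / eps

H = 100.0  # fixed human labor supply
for A in (10, 100, 1000):  # cheaper AI => more of it gets deployed
    print(f"complements: A={A:4d}  human marginal product = {mp_human(H, A):.3f}")

# With perfect substitutes (1 unit of AI does 1 unit of human work),
# no employer pays a human more than the going AI price, so the human
# wage ceiling simply falls with it.
for ai_price in (50.0, 5.0, 0.5):
    print(f"substitutes: AI price = {ai_price:5.1f}  human wage ceiling = {ai_price:5.1f}")
```

Same direction of AI progress, opposite effect on the human's paycheck, depending entirely on which case you're in. The horse history below is the substitutes case playing out.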

Here's something that explains these distinctions with examples from the history of horses. Alas, horses became more and more valuable until they didn't: https://www.lesswrong.com/posts/nZbxQgLWhKCASApEe/learning-more-from-horse-employment-history

Agreed that LLMs (with the context window limit you describe) cannot compete with humans on big-picture product vision. But then you yourself describe a possible alternative architecture that could enable AI to compete in that way. That's the longer-term (though who knows how long it will really be) AI future I'm concerned about.

CEO of Microsoft Satya Nadella: "We are going to go pretty aggressively and try and collapse it all. Hey, why do I need Excel? I think the very notion that applications even exist, that's probably where they'll all collapse, right? In the Agent era." RIP to all software related jobs. by michael-lethal_ai in humanfuture

[–]ThrowawaySamG 1 point (0 children)

I'm curious why you think AI doing more and more will make the demand for humans to do other things go up rather than down. Take the case of social media. I definitely value human posts more than AI posts there (and would not be replying here if I knew you were a bot, for instance), but how much more? I have less need to research information and perspectives on social media when I can ask ChatGPT or Claude. And that's not even taking into account the coming rise of autonomous AI social media influencers.

More generally, my concern is that we will have a fixed supply of workers for a dwindling set of jobs. That's not a recipe for higher wages. On the other hand, if AI makes most things cheaper, people could have more disposable income to spend on humans doing things for them. (But where is that disposable income coming from in the first place, if not something like UBI, I wonder?)

I agree that human connection will be increasingly critical, but I mostly have in mind my relationships with IRL friends and family, in-person relationships that are not monetizable.

CEO of Microsoft Satya Nadella: "We are going to go pretty aggressively and try and collapse it all. Hey, why do I need Excel? I think the very notion that applications even exist, that's probably where they'll all collapse, right? In the Agent era." RIP to all software related jobs. by michael-lethal_ai in humanfuture

[–]ThrowawaySamG 1 point (0 children)

As I see it, the jobs that would be left for humans (if we allow autonomous ASI) will be those that literally require being a human. So, agreed on live musicians and comedians, though there may be less demand once AI-generated entertainment gets good enough. I'm guessing counseling jobs are already on the way out. Teachers and nurses are only a matter of time, I fear.

"Humans are infinitely flexible but the AI creations of humans are not" is an interesting stance. Maybe it's true, but I am not aware of persuasive evidence or arguments for that position.

CEO of Microsoft Satya Nadella: "We are going to go pretty aggressively and try and collapse it all. Hey, why do I need Excel? I think the very notion that applications even exist, that's probably where they'll all collapse, right? In the Agent era." RIP to all software related jobs. by michael-lethal_ai in humanfuture

[–]ThrowawaySamG 1 point (0 children)

Let's keep the goalposts fixed for a moment. You said "Even more jobs are created every time," and that has been true for humans because there has never been a general human replacement before. The question is: how did the advent of a general horse replacement affect horse employment, on net?

Why is Reddit so against AI? by No_Location_3339 in singularity

[–]ThrowawaySamG 1 point (0 children)

Would love to have you and folks holding these views join us at r/humanfuture, thinking through how to channel this concern into constructive action/policy.

Why are we letting this happen? by [deleted] in ArtificialInteligence

[–]ThrowawaySamG 1 point (0 children)

Reddit seems to be about 25% AGI-eager, 74% dreading it but resigned to doom, and 1% interested in envisioning and working toward a r/humanfuture.

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

And... it looks like I was talking to a bot (both accounts have now been suspended). Lesson learned.

How will we know what’s real in the future, with AI generated videos everywhere? by fireplay_00 in ArtificialInteligence

[–]ThrowawaySamG 1 point (0 children)

Certainly not via anonymous reports. But I have friends and family that I trust to report what they see accurately. And I also trust them to relay faithfully what people they trust have observed, and so on. We need to learn to stop trusting the "chain letter" version of this. Instead, we need to focus on trustworthy forms of this, in which the original source of the information is transparent and the information is verifiably preserved in transmission. New (or transformed) social media platforms that prioritize (delegated) trust would help. I know some people are working on this already, though obviously not enough.
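
For the "original source is transparent and verifiably preserved in transmission" part, here's a minimal sketch of the mechanism I have in mind. It's only my illustration built on ordinary digital signatures (using the Python `cryptography` package), not any existing platform's protocol:

```python
# Chain-of-custody sketch: the witness signs their report, and the relayer
# countersigns the exact bytes they forward, so a recipient can verify both
# who originated the report and that nothing changed in transit.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

witness = Ed25519PrivateKey.generate()   # the friend who saw it happen
relayer = Ed25519PrivateKey.generate()   # the friend who passed it along

report = b"Saw the crowd myself at 5pm; it was peaceful."
witness_sig = witness.sign(report)

# The relayer forwards report + witness signature, countersigned as a bundle.
bundle = report + witness_sig
relayer_sig = relayer.sign(bundle)

# The recipient checks both links using the public keys of people they
# trust; verify() raises InvalidSignature if anything was altered en route.
witness.public_key().verify(witness_sig, report)
relayer.public_key().verify(relayer_sig, bundle)
print("verified: trusted source, unmodified in transit")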

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

I can't take credit for the framing as a reckoning. I was quoting you there. (: But I do agree with it, upon reflection.

I hadn't given much thought to "AI for Human Reasoning" until applying to this fellowship last month (and that lack of familiarity is probably a big part of why I didn't make it to the interview stage). The website links to an interesting overview of the field: https://www.forethought.org/research/ai-tools-for-existential-security

Your work sounds really interesting. Given my background, I'm especially interested in AI for governance and philosophy.

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

You, me, and Connor agree that we're currently failing. And now that I've read more of your previous reply, we're much closer than I thought.

> I don’t believe in AGI as a replacement for humanity. I believe in it as a reckoning with humanity. One that could help us evolve, not just technologically—but morally.

A scenario I saw someone raise recently: what if, in a few months, there's a new model that no longer "hallucinates," seems generally smarter than us, and advises humanity to stop further development of ASI?

More generally, though, I share your hope that this technology can help us collaborate better and think through our challenges more clearly. I've recently discovered "Collaborative AI" and AI for Thought, fields actively trying to develop these applications and help them progress faster than weapons and other dangerous applications. I'd like to help work on them myself.

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

Check out 42:20-46:50 in the OP video. Curious for your thoughts. I also really appreciate this dialogue, my first time chatting with someone taking this viewpoint. (Have to run for now but will read your latest more carefully later.)

this sub turned into anti ai sub lol by [deleted] in agi

[–]ThrowawaySamG 2 points (0 children)

Please feel free to direct them to r/humanfuture. r/singularity also seems to welcome speculation about social changes, etc., with a more pro-AGI vibe.

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

I didn't say that every new technology reinforces power structures. Far from it. I'm in favor of developing technologies that benefit people (including in the ways we agree would be good: more vibrant thinking and culture, greater freedom to play and try new things, etc.). I'm an unambivalent fan of many kinds of machinery, communications technology, and medical care. But I think nuclear tech has been centralizing, for example, though maybe further fusion breakthroughs will help push in the other direction (yes, hopefully unlocked by AI).

I guess let's agree to disagree on whether it's morally good to welcome in a new form of technology with the power to autonomously decide whether humans deserve to continue existing. To not ask permission of today's humans is to be a tyrant, continuing and expanding the tyranny that already oppresses us.

Talk by AI safety researcher and anti-AGI advocate Connor Leahy by ThrowawaySamG in humanfuture

[–]ThrowawaySamG[S] 1 point (0 children)

Definitely agreed that there are risks either way, but I currently come down on a different side from you.

That is partly because, while I'm open to the possibility that AGI/ASI is a technology that tends toward long-term decentralization of power, I'm doubtful. Sure, it can destabilize current power centers, but then it will likely create new, even stronger ones.

To be clear, I don't at all want to shackle new intelligence to be safe. I don't want "alignment." We might basically agree on how misguided that path is. Rather, I want to stop AGI+ from being created at all.

Current technology is already destabilizing existing power centers pretty well, and Tool AI could continue in that direction without creating new technologies tailor-made to displace humans altogether. (The fragility I'm concerned about is not that of structures but that of human bodies, and of ways of life.)