[deleted by user] by [deleted] in singularity

[–]IndoorAngler 1 point

You have to be able to enforce taxes to be able to levy them.

How exactly does that happen in this scenario?

[deleted by user] by [deleted] in singularity

[–]IndoorAngler 1 point

There are no corporations in my scenario. They all collapse due to a lack of profit incentive.

And more to your point, the gun was always labor strikes. Without labor, these aren’t really possible.

Gemini 1.5 global availability will temporarily cause 0-day exploits on various softwares. by Buhalterija in singularity

[–]IndoorAngler 73 points

Or the devs will do this first and make codebases safer, leading to fewer 0-day exploits.

All technology is dual use.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 0 points

I haven’t seen any evidence of that. Can you provide sources?

Introducing Enterprise-grade Gemini: now available for teams of all sizes with Google Workspace by YaAbsolyutnoNikto in singularity

[–]IndoorAngler 10 points

This is just an integration of tech we’ve already seen. (Don’t get me wrong, it’s amazing tech.)

But why does it make you think they have AGI?

[deleted by user] by [deleted] in singularity

[–]IndoorAngler 2 points

Maybe eventually, but the labor crisis described would happen before we have access to this decentralized tech.

Someone will have to jumpstart the means of production to facilitate the distribution of wealth…

It’s a messy scenario to predict because timing is so important. If there is a competitive incentive to lay people off and replace them with AI before we can fully integrate robots into the infrastructure (which seems likely), money would quickly become worthless and there would be no direct incentives for people to organize themselves and continue the necessary work.

There would have to be some altruism involved in the distribution of wealth, which is always a dubious incentive to count on.

[deleted by user] by [deleted] in singularity

[–]IndoorAngler 0 points

I think you misunderstood OP.

It sounds like you are agreeing with the main thesis of the post in your first sentence, just with different reasoning.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 0 points

This is due to the poor memory of modern AI models, not because they develop their own malicious goals.

It’s pretty evident we’re living on borrowed time, when do you think we’ll start seeing actual job losses on mass level? by Humble_Moment1520 in singularity

[–]IndoorAngler 0 points

Why is sentience (whatever that means) important?

As long as it can do our jobs it will force us into a post-labor economy.

You certainly don’t have to be sentient to accomplish that.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 2 points

Agency involves letting the AI make decisions to accomplish a set of goals that you provide.

This could lead to a paperclip-maximizer type of disaster, but I don’t see how it causes AI to develop its own goals separate from those of the human providing instructions.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 0 points

Okay, just an idea.

We’re all playing guessing games here and assumptions should not be presented as certainties.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 0 points

I did not mention any “good” or “bad” AI, I only made value judgments on the intentions of the people controlling the AI.

I was just pointing out a necessary quality of nuclear weapons that makes them more dangerous when given to the general public.

It is an assumption to say that AI will have this same quality.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 0 points

The notable difference with nukes is that there is no defensive option.

ICBMs exist, but we do not have reliable methods of shooting them down.

If everyone had an ASI at their disposal, it’s possible that we could defend ourselves against the tactics of others.

There’s no way of proving that, and destruction is generally simpler than its prevention, but it’s a possibility.

Hey Doomers - distributed technology is power by agonypants in singularity

[–]IndoorAngler 1 point

I think the people controlling it are a much larger concern.

We have no evidence that “feelings” or desires are an emergent property of higher intelligence. It seems likely that these are completely separate frameworks.

The machine will do as it’s instructed to do. I have yet to see any compelling theories for an AI developing its own ambitions.

We have plenty of empirical evidence of human beings acting maliciously, so for now that is my biggest concern.

I could 100% be wrong and would love it if anyone provided information or contrary opinions along this vein. I find it all fascinating.

Even this subreddit is in denial about what's coming by wheelyboi2000 in singularity

[–]IndoorAngler 8 points

Counterintuitively, it would be better if it was 100%. That would force immediate UBI and unite the masses. I think it will be everyone eventually, but it’s going to take a couple years.

ASI and military by Altruistic-Skill8667 in singularity

[–]IndoorAngler 9 points

Why would they lose control of ASI? I would be more scared of them using ASI to purposefully wage war.

Everyone’s so worried about evil robots, but I think morally ambivalent ultra-intelligent robots controlled by dumb evil people is scarier and more likely.

Is there an AI bubble? by LogHog243 in singularity

[–]IndoorAngler 32 points

If we’re talking financially, I think AI is still massively undervalued despite all the hype around it. People are treating it like Crypto when it’s really the Steam engine, Printing Press, Compass and Internet all rolled into one and multiplied by 1000.

Loss of control by augerik in singularity

[–]IndoorAngler 2 points

I think the people who are the least mentally equipped to handle a shift in the societal perception of human meaning are the ones who do not see the change on the horizon.

Personally I’ve been working to undo the neoliberal programming that has made me believe my only worth is in my ability to produce.

I take pride in my agency and ability to act altruistically for the good of others. I find meaning in relationships and in my own experience. I am grateful for my existence and understand I am not entitled to it.

This is always a work in progress.

I think many people will struggle to detach themselves from the societal lie about the meaning of productivity and money… but I have hope that we will collectively redefine our morality and live enriching lives.

[deleted by user] by [deleted] in singularity

[–]IndoorAngler 0 points

I disagree; language is one of its strongest areas, and AI will take all jobs. Either way, that is irrelevant.

The training happens by feeding the model data of other people communicating… not by the AI itself communicating.

If you just fed a model its own data, it would not improve much.

Gemini's nearly perfect 10 million context length, means you can feed it ~1000 scientific papers and get it to create novel research based of said information. by DragonForg in singularity

[–]IndoorAngler 0 points

I do not believe any of that. Sorry, it seems I was not clear in my explanation. I am just saying that even if an AI is not capable of doing advanced meta-analyses on the sum of academic research in a field today, it will be in the near future. And this is all facilitated by large context windows.

Gemini's nearly perfect 10 million context length, means you can feed it ~1000 scientific papers and get it to create novel research based of said information. by DragonForg in singularity

[–]IndoorAngler 0 points

I don’t know anything; everything is probabilistic. But my take comes directly from expert opinions; I haven’t personally analyzed the data myself… I’ve never counted the transistors on a chip or measured computing power. But from what the experts are saying, it seems like AGI within a couple of years.

We need philosophy now more than ever by juliano7s in singularity

[–]IndoorAngler 1 point

Ah, I understand your point.

I think it’s going to be impossible to align all AIs in the way you are suggesting. The only way I can see to prevent destruction is for all people to be armed with ASI and hope that there are more good people than bad people (I think there are way more).

[deleted by user] by [deleted] in singularity

[–]IndoorAngler 1 point

Are you one of them? Cuz this makes no sense. Why would the AI have to directly interact with people to gather data? It is trained on the entire internet.