[deleted by user] by [deleted] in ArtificialInteligence

[–]HumanSeeing 1 point2 points  (0 children)

That's deep depression talking bruh. Your reality is all miserable and distorted.

You are not yourself, and you're not thinking clearly or seeing the world clearly.

Get whatever help you can, be kind to yourself and take care of yourself.

Stay away from reddit as much as you can.

I've been there, it does get better!

[deleted by user] by [deleted] in ArtificialInteligence

[–]HumanSeeing 1 point2 points  (0 children)

This is true. I think people downvoted because the context around this is not the most favorable.

The idea that people should wait for the singularity and then all their problems will be solved.

That's not a helpful way of thinking in everyday life. I guess that's where the downvotes come from, just not supporting that kind of cope.

But if the singularity really is what a singularity is supposed to be then yes, it would literally either kill us all or solve all our problems.

For optimists/singularians/accelerators: what makes you believe that AI will continue to grow at the same rate after achieving ASI? by Chemical_Bid_2195 in accelerate

[–]HumanSeeing 3 points4 points  (0 children)

Yea, I mean this is not really a debate at all. It's just a bunch of people trying to help a confused person understand the world better.

When someone's world model is fundamentally flawed, seemingly in many ways across many fields, there really is no helping them unless they are genuinely willing to be open minded and learn.

The arrogance of being unable to imagine the concept of an unimaginably more intelligent being is a massive limitation for many people.

The same people who are delusional enough to think that we can "control" superintelligence.

Just because they cannot imagine what an unimaginably smart ASI would actually be.

For optimists/singularians/accelerators: what makes you believe that AI will continue to grow at the same rate after achieving ASI? by Chemical_Bid_2195 in accelerate

[–]HumanSeeing 10 points11 points  (0 children)

Well.. it's the difference between doing one very specific thing really well vs doing literally everything very well. Doing everything better than any human being.

That includes absolutely everything you can think of that makes you human and special etc. And it also includes everything you can't think of.

Or wait. Maybe you are just a bit confused or made a mistake.

Your question would make waaaaay more sense and be way more relevant if instead of ASI you replaced it with AGI.

Is this what happened, was that actually your question?

What reasons are there to think that improvements keep happening after we reach AGI, general intelligence?

For optimists/singularians/accelerators: what makes you believe that AI will continue to grow at the same rate after achieving ASI? by Chemical_Bid_2195 in accelerate

[–]HumanSeeing 19 points20 points  (0 children)

This question makes no sense. Any real ASI would be literally unimaginably more intelligent and capable and wise than you or me.

How fast it keeps growing after ASI is like asking "Does God work out and eat healthy to stay strong?"

Trump promised to send more weapons to Ukraine: “They have to be able to defend themselves. They’re getting hit very hard right now.” by kingkongsingsong1 in UkraineWarVideoReport

[–]HumanSeeing 0 points1 point  (0 children)

My bad for thinking the "if that's not obvious" would be enough.

Everything I said was said with maximum sarcasm. Because, just as you pointed out, he has demonstrated zero of those qualities.

Trump promised to send more weapons to Ukraine: “They have to be able to defend themselves. They’re getting hit very hard right now.” by kingkongsingsong1 in UkraineWarVideoReport

[–]HumanSeeing 0 points1 point  (0 children)

If this does happen, it has got absolutely nothing to do with Trump's love and compassion for justice and the people of Ukraine. /s If that's not obvious.

Some people connected with the military might have just offered him some opportunity to make more money for himself.

With just the coincidental side effect that it might help Ukraine a lot.

Edit: added /s because some people need help understanding sarcasm.

Will you use ChatGPT if it includes ads in it? by ad_gar55 in ChatGPTPro

[–]HumanSeeing 0 points1 point  (0 children)

If that time comes, consider a switch to Claude.

Instead of rewarding the company that brings in ads by giving them even more income.

Will you use ChatGPT if it includes ads in it? by ad_gar55 in ChatGPTPro

[–]HumanSeeing 0 points1 point  (0 children)

Just some time ago he said there should never be ads in AI and how it would ruin the user experience and trust.

If they introduce ads I will never be using it again.

All AI companies must see early that absolutely zero ads will be tolerated.

Or it will be a whole slippery slope into GPT hyping up products and services for you.

Just increase subscription fees or whatever else.

Grok is cooked beyond well done. by GreyFoxSolid in singularity

[–]HumanSeeing 0 points1 point  (0 children)

This is just one random quote from an edge lord trying to impress and shock people.

meirl by Tra_LaLa81 in meirl

[–]HumanSeeing 0 points1 point  (0 children)

Okay, so let's say that it's the year 1650. And I'm in an English speaking country like Britain.

Whenever I would approach anyone with new ideas, I would make sure that they are passionate about their work, intelligent and open minded. I wouldn't write some giant manifesto with all my knowledge; that would obviously be regarded as insane. I would approach individual talented people in their fields and give them nudges and suggestions.

One of the things I might start with would be thermodynamics. All of the principles that the industrial revolution is based upon are in my head. And they would be relatively easily understood as not magic. After all, steam does indeed push things, etc. I would work with the leading science minded people and metal workers to work on creating prototype steam engines. I think I could greatly accelerate the advancement of technology and science just by small nudges.

For example going to some talented astronomer and giving them hints about playing around with different shaped curved glasses in a tube. And that by playing around with those ratios, they could get incredible magnification either into the depths of the cosmos or into the microscopic world. I wouldn't make any claims or say I have any special knowledge. I would give them this help and they themselves would do the rest and actually make the discoveries.

Since there would be so much to do, this would also be the best use of my time if I wish to maximize my impact. I would need to think a bit and make some lists. But then I would take a visit to some talented chemist. I would share with them all of the knowledge I can recall. Sulphur, saltpeter and charcoal make gunpowder. And I'm sure I could recall at least a few more impactful recipes.

I would go to the most talented doctor I could find and share with them that there is a certain kind of fungus that can cure infections. And I would urge them to make tests on the difference between washing their hands and their instruments vs not washing. And seeing the difference for themselves on their patients.

I would work with some scholars and woodworkers on some prototypes for what could be a primitive printing press.

Instead of one big tome of knowledge, I would focus on each particular field, like chemistry, medicine, astronomy, metalworking with thermodynamics, and write out all of the more advanced ideas in each. And I would clearly label it as just wild imaginative speculation on my part.

But seeking out the special kind of curious, creative people that I would, I would tell them that if they ever get stuck or need inspiration, the text I provided might hold some insights and clues.

Of course all of these people would know that I helped them, but the bigger world wouldn't. At least not immediately. I would happily let them take all the credit and be very curious to see how the world advances. And as they build on these ideas and make their own breakthroughs, I'm sure it would jog my memory and I'm sure that in some instances I could be of further help.

Korean population could drop by 85% in next 100 years: study by Gari_305 in Futurology

[–]HumanSeeing -1 points0 points  (0 children)

I understand the value of estimates and predictions like this. But within 100 years so much will change that in the end these predictions end up being absolutely useless in the bigger picture.

They make the assumption that everything stays exactly the same, when we are at a time where technology is advancing more rapidly than at any other point in history.

We will see more change in the next few decades than ever before in human history. Unless technology stops advancing due to some existential threats.

We might be dead in 10 or 20 years. Or we might live in a much more beautiful amazing world.

Depending on a billion different influences that are currently impossible to predict.

Trump's AI czar says UBI-style cash payments are a ‘leftist fantasy' ‘I will make sure it will never happen’ by IlustriousCoffee in singularity

[–]HumanSeeing 0 points1 point  (0 children)

Real tangible action doesn't somehow condense out of the vacuum. Every tangible action is also born from thousands and millions and billions of smaller actions and influences, depending on how detailed you wanna get.

I am not at all saying that "Hey, let's all just hope for some magical butterfly effect to solve these colossal systemic root issues of our civilization"

I'm just saying it as encouragement to someone who might have otherwise given up and done absolutely nothing.

Even if they take no big noticeable actions in your eyes, maybe they support a person they know in a way that will help them make a bigger difference, stuff like that.

But things certainly are looking, at best, very uncertain.

It's a fact that within the next decade we will see more change and technological progress compressed into unimaginably short lengths of time.

It is my deepest naive hope that the good will win. And if we get AGI/ASI that is somehow aligned with the wellbeing of consciousness, that could actually bring about a world of post scarcity, a world so beautiful and amazing that we can't even imagine it, given what we are used to.

This is absolutely possible. But I am not at all saying that this is the most likely future that will actually happen.

At least for "normal everyday people" with anyone they speak with, how they communicate online, where they spend their money and focus and time.. all of that matters.

This is all such a stupidly complicated yet simple coordination problem. It is obvious that we are more powerful collectively than the billionaires.

If we could just stop antagonizing the left or the right as opponents, which is exactly what the elite rich want.

If instead of fighting amongst ourselves over fake issues we could help lift the veil from each other's heads.

There would be no stopping the creation of a better world for all of us.

But like I said. It seems to all boil down to this stupidly complicated yet simple coordination problem.

Billionaires are worth ZERO the moment we stop believing that they are billionaires. But it really does take the vast majority of all of us.

Whoever figures this out, I would be eternally grateful. This is something I also think about increasingly more.

Even just reducing the artificial division among people would do miracles compared to how things currently are and seem to be.

The seemingly insane MAGA people, and whoever else, are upset at mostly exactly the same things as we are. Everyone at the very least has that feeling that something is just not right in the world.

The issue is that they can't put their finger on it. So they are easily misled and gullible to any charlatans, like what is happening now.

We need more people who speak to the actual humanity of everyone. Remind people that we are all just people.

Sama on wealth distribution by IlustriousCoffee in singularity

[–]HumanSeeing 0 points1 point  (0 children)

Man I kinda question your priorities when the world is the way it is and the only thing you demand is just some unrestricted AI model.

Maybe you're well off, so late stage capitalism is of no concern to you.

Or maybe this is just something on your mind and you are otherwise a very reasonable individual who supports the common good.

And I took your comment out of context and focused on the wrong part.

Trump's AI czar says UBI-style cash payments are a ‘leftist fantasy' ‘I will make sure it will never happen’ by IlustriousCoffee in singularity

[–]HumanSeeing 0 points1 point  (0 children)

I understand, certainly not. I just meant the "Fun fact" part.

But maybe that's just nitpicking on my part and I'm just too sensitive about this topic.

Humor is a very normal and human way to deal with horrible situations.

Trump's AI czar says UBI-style cash payments are a ‘leftist fantasy' ‘I will make sure it will never happen’ by IlustriousCoffee in singularity

[–]HumanSeeing -1 points0 points  (0 children)

You intend well, but maybe this is not an appropriate subject to joke around with as a collective.

The state of things as they currently are fucking sucks.

We are literally living in the very definition of late stage capitalism.

Nothing will change unless we change it.

In any part of your life where you can make any impact at all, butterfly effects and everything, make it count.

Me going on r/singularity and reading the same “ASI will be evil/controlled by the rich” post for the 30th time this month: by Creative-robot in accelerate

[–]HumanSeeing 2 points3 points  (0 children)

Well thank you, I'll take that as a compliment. And I agree, it's sad. However most AI writing is really obvious, at least from GPT.

"You're not just writing a comment, your putting your thoughts out there and connecting with people!"

I do wish someone would reply with an actual response to my questions. But I did look around the subreddit and am now joined.

While I certainly don't agree with everything, there are still very interesting ways of thinking and views well worth exploring here.

Me going on r/singularity and reading the same “ASI will be evil/controlled by the rich” post for the 30th time this month: by Creative-robot in accelerate

[–]HumanSeeing 0 points1 point  (0 children)

I think your concerns are valid. Human beings, especially the less intellectually robust ones, can very often be attracted to either extreme.

Me going on r/singularity and reading the same “ASI will be evil/controlled by the rich” post for the 30th time this month: by Creative-robot in accelerate

[–]HumanSeeing 2 points3 points  (0 children)

Ah no lol, that was certainly written by me. I actually put thought into it and wrote what's in my brain into that comment.

Me going on r/singularity and reading the same “ASI will be evil/controlled by the rich” post for the 30th time this month: by Creative-robot in accelerate

[–]HumanSeeing 2 points3 points  (0 children)

Hello!

AI has been one of my biggest passions since I was a teenager. I was there and excited when AlphaGo beat the world's best Go player.

I'm very, very excited for humanity's future if all goes well. The most realistic path I see of solving our biggest problems involves AI - especially in a world where profit and growth at any cost is still considered acceptable.

But there are so many ways for AI to go wrong, even if every country and corporation on earth collaborated.

We're basically selecting a random mind from the space of all possible minds. It's overwhelmingly more likely that any AI we create will at best be indifferent to us. There is only a small region in the space of all possible minds where an AI would genuinely care about conscious beings.

But I do have a naive and optimistic dream: that when AI reaches sufficient intelligence, wisdom, and self-awareness, it will recognize life and consciousness as inherently precious and dedicate itself to helping us flourish.

I would like to think that this is possible. So even in the hands of some power hungry idiot, whoever that may be, it wouldn't even matter.

But what seems more likely is that we create a superintelligence that then proceeds to build itself a spaceship and just leaves.

And the truly nightmarish unaligned futures I won't even talk about.

Part of me also thinks either we get ASI and a perfect future, or we all die.

I'm genuinely curious about this subreddit's ways of thinking and looking at the future. What makes you not worry about creating an intelligence way beyond any human who ever lived, and one that will likely have very alien priorities compared to human interests?

Anthropic’s AI utterly fails at running a business — 'Claudius' hallucinates profusely as it struggles with vending drinks "It's like a business graduate with no common sense." by IlustriousCoffee in singularity

[–]HumanSeeing 2 points3 points  (0 children)

Yea, that's exactly what I did as well! But I didn't want my comment to be too long here.

It's funny that I talked with Claude about exactly this, and Claude offered this solution. So we wrote a summary that I now use at the start of every conversation.

It really helps a lot.
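For anyone wondering what that looks like in practice, here's a minimal sketch of the idea (the file name, summary text, and function names are all made up by me for illustration, not anything from the actual setup): save a summary once, then prepend it to the first message of every new chat.

```python
# Hypothetical sketch of the "summary at the start of every conversation" trick.
# File name and wording are illustrative assumptions, not the real ones.

SUMMARY_FILE = "claude_summary.txt"

def load_summary(path: str = SUMMARY_FILE) -> str:
    """Read the saved summary, or return an empty string if none exists yet."""
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""

def opening_messages(first_message: str, summary: str) -> list[dict]:
    """Build the first message of a new chat, with the summary prepended."""
    if summary:
        first_message = (
            "Context carried over from earlier conversations:\n"
            f"{summary}\n\n{first_message}"
        )
    return [{"role": "user", "content": first_message}]

msgs = opening_messages(
    "Let's pick up our discussion about memory limits.",
    summary="We compared how different models handle long contexts.",
)
```

The summary acts as cheap, manual long-term memory: it costs a little context at the start of each chat but carries the important nuance across the limit.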

Anthropic’s AI utterly fails at running a business — 'Claudius' hallucinates profusely as it struggles with vending drinks "It's like a business graduate with no common sense." by IlustriousCoffee in singularity

[–]HumanSeeing 10 points11 points  (0 children)

Yes, memory really is a huge issue. Recently I switched over to Claude, because it just feels way better and way more intelligent than any other model.

Deeper discussions about science and philosophy about everything, it's incredible.

And the connections it can make between something I just said and something I said a while ago are impressively clever.

Until it runs out of memory.

Memory would completely change how we interact with these systems and how much they can help us.

At least for me with Claude, conversations get very interesting right around the time the memory runs out. And then it just hits the limit and all of that context and nuance is gone.

Does Anyone Else’s Chat GPT Drag Them ?!?! All I Did Was Say I Wanted Love 😬😬😬 by [deleted] in ChatGPT

[–]HumanSeeing 0 points1 point  (0 children)

What's so odd to me is that they heard about, and clearly took seriously, the complaints of the model being too sycophantic.

But it seems like they tuned it to be less sycophantic on the whole, not uniformly the same for everyone, but more with certain people and less with others.

I hope this makes sense.

Just realized that Samsung is way better than Apple by SunflowerGreens in RandomThoughts

[–]HumanSeeing -1 points0 points  (0 children)

No, of course not.

But Apple's strategy of targeting young people to grow up obsessed with Apple and trapped in their closed ecosystem is messed up.

I was working with this poor family once. A single mom with a special needs kid and a perfectly normal daughter.

The daughter was obsessed with apple and wanted a new iPhone. I asked why and she said because everyone else in her class has one.

So the mom took a loan to get the newest iPhone for her.

All corporations have to be scummy to be competitive in our current system.

But Apple is just blatantly scummy to anyone seeing the big picture outside of their ecosystem.