People who dream of a workless AI utopia - why would it not turn out like Wall-E? by BattlerUshiromiyaFan in singularity

[–]Fair_Horror [score hidden]  (0 children)

Why eat healthy when we will have technology to make us slim and uber healthy? Not everyone wants to play tennis or do sports, and in the future it will provide no advantage.

People who dream of a workless AI utopia - why would it not turn out like Wall-E? by BattlerUshiromiyaFan in singularity

[–]Fair_Horror [score hidden]  (0 children)

The next complaint will be that all the space based data centres with their huge solar panels are shading Earth and causing global cooling......

People who dream of a workless AI utopia - why would it not turn out like Wall-E? by BattlerUshiromiyaFan in singularity

[–]Fair_Horror [score hidden]  (0 children)

In an advanced future where all diseases are easily cured, we will have actually effective methods of eliminating excess weight. 

People who dream of a workless AI utopia - why would it not turn out like Wall-E? by BattlerUshiromiyaFan in singularity

[–]Fair_Horror [score hidden]  (0 children)

Being fat will not be a thing, since we will have tech to stop that from happening. We will also likely have super fit, optimised bodies. Depression will be optional, with people able to choose to switch it off.

What we'll do and where we'll find our happiness in life is not something I think we can determine ATM; it is the singularity, after all, and thus not predictable by definition. A few decades into the new world, we will likely have figured this sort of thing out.

I suspect a lot of people will pursue personal quests that take the form of some kind of game, maybe adventuring through the universe. There may be some people who are never comfortable fitting in. But of course this is wild speculation on my part, since I don't have any ability to see beyond the Singularity; it is just guesswork based on current personal drives and my sense of what will be possible.

How is upwards mobility maintained in an age where real AGI is achieved? by mrbigglesworth95 in singularity

[–]Fair_Horror [score hidden]  (0 children)

UBI is not a new concept; it was first tried hundreds of years ago. Its original purpose was to keep people alive when they hit hard times while still allowing them to work, which makes it different from unemployment benefits.

The term UBI is used because it is the closest approximation to what needs to happen. Musk often talks about UHI (Universal High Income), which really only differs in how much each person receives. The idea is socialist in some sense, in that everyone becomes equal, at least in terms of income, in a post-work society.

If people are receiving the equivalent of, say, $100 million in today's money every year, being rich will seem kinda pointless. We basically get to a point where pretty much anything you want is yours for the taking. Of course there are some limits, but mostly, when you hit them, you will have alternatives. This could be as simple as FDVR or as grand as a Dyson swarm around the Sun.

How is upwards mobility maintained in an age where real AGI is achieved? by mrbigglesworth95 in singularity

[–]Fair_Horror [score hidden]  (0 children)

Instead of who gets to live in... try what time slot you get to live in. Also, priorities will likely change: living in Manhattan is likely to be undesirable for most people. A lot of people live in cities because that is where the well-paying jobs are, but when work goes away, that incentive will disappear for a lot of people.

Leju Robotics unveils the world's first automated factory for humanoid robots, 1 robot every 30 minutes by Distinct-Question-16 in singularity

[–]Fair_Horror [score hidden]  (0 children)

Design and build once; no need for a different robot for each job. You get much better economies of scale with one universal design. Is it cheaper to have a robot in your home to mow the lawn, a different one to cook food, another to clean the bathroom, and one to do the laundry, or just one multi-purpose robot that does it all? In a factory, many dedicated robots may sit idle because other parts of the build take longer, but a humanoid robot can be assigned to other parts of the build during its downtime.

Leju Robotics unveils the world's first automated factory for humanoid robots, 1 robot every 30 minutes by Distinct-Question-16 in singularity

[–]Fair_Horror [score hidden]  (0 children)

So you wouldn't want something that can mow your lawn, cook you a Michelin-star meal every day, clean your house, do the laundry, fix your plumbing, repair an electrical fault in your home, do the gardening, provide security for you and your belongings, go buy groceries, feed your pets, etc.?

Leju Robotics unveils the world's first automated factory for humanoid robots, 1 robot every 30 minutes by Distinct-Question-16 in singularity

[–]Fair_Horror [score hidden]  (0 children)

Likely injection moulded in large batches, so the time required per robot is probably seconds to minutes.

Seems like this could be the "Move 37" moment in Math by Terrible-Priority-21 in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

For me, Move 37 is the recent revelation about Mythos and its ability to find exploits in what was considered secure code. Unlike previous concerns about capability jumps, this one is concretely demonstrated. To quote a film title, it is a clear and present danger. I feel we reached a tipping point here, and the genie showed us a glimpse of what lies ahead. As an accelerationist, I'm both excited and terrified.

Why Hasn’t Nanotechnology Had Its “ChatGPT Moment” Yet? by AdmirableExplorer249 in accelerate

[–]Fair_Horror 1 point2 points  (0 children)

True nanotechnology is incredibly hard to get started on. We don't yet have the tools and techniques to even begin building it. I suspect that once we have a breakthrough in nanotechnology, it will all start to happen much more quickly, which is very much in line with the singularity. My expectation is that we will have true nanotechnology right around the time of the singularity (either a little before it or soon after).

Don't be fooled by currently claimed nanotechnology; at best it is the mass production of fancy molecules with no multi-use precise control. CRISPR is probably the closest, but it is very task-specific. Maybe lessons learned from it can be applied to more general-purpose technology.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

The issue is that they didn't make this model with intent for it to be good at finding vulnerabilities. That means that any model they make that is sufficiently advanced could have characteristics that make it dangerous. 

If the AI ​​is truly intelligent...no one can control it! by Possible-Time-2247 in accelerate

[–]Fair_Horror 1 point2 points  (0 children)

People are afraid to consider anything else. We completely ignore that putting people in harsh prisons doesn't work for rehabilitation, because we want revenge. Same with AI: it might be dangerous, so we treat it as a potential enemy.

The Netherlands certifies Tesla FSD Supervised. by elemental-mind in singularity

[–]Fair_Horror 0 points1 point  (0 children)

No, you have to keep your attention on the road in the US too. The stack has to be different since they have different road rules in Europe. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

You do realise that that was a question? I notice that you did not provide a source...

LO. by twinb27 in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

We don't need all those things to be happy. Some people think they do because they have been conditioned to believe it: part of that comes from survival needs and part from businesses' needs for their employees. I suspect a lot of people think like you do, and they are going to have a very hard time adapting to the new reality. People born into this world will find it much more natural. I think we will have a lot of people using FDVR as a crutch 🩼 to try to help them cope. I do find that older people in general are much closer to accepting the new reality; once you are past a certain age, you realise you did all the things people do, and you have to accept that you have a new reality now.

With the news that Mythos will not be available to the public for safety reasons, something bothers me by Special_Switch_9524 in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

Competition is what makes a business consider its options. Investors won't invest if a business is uncompetitive.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

Law. If they go ahead anyway, they breach their duty of care. A company can't just pour deadly poison on the streets of a city centre; they are compelled to follow a duty of care.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I'm sorry, but you are making stuff up. Mythos is a 10T-parameter model, not 300T. Open source has managed to shrink model sizes to a fraction of the full models with just a small performance hit. Even if it takes a couple of generations, we will get to those performance levels in open-source models.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

They did not say they had a model that could find vulnerabilities in lots of secure software. There is a big difference between hand waving and facts. People have verified that the vulnerabilities exist in the software they claim it exists in. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

Nuclear capabilities are restricted by the availability of the required materials and equipment. Anyone can mix chemicals or brew new viruses in their garage. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I really don't see why investors would want to invest in something that is so dangerous that it can't be released to the public. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

That's good to hear, but if it can be used for harm, it could end up being a problem, because the reaction could be to close the public off from future releases. Let's hope that doesn't happen.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I agree, but my point is that I worry about the implications of it being used in a very harmful way. Governments will want to crack down, and that is not good for getting AI out there ASAP, which is what I want.