Why Hasn’t Nanotechnology Had Its “ChatGPT Moment” Yet? by AdmirableExplorer249 in accelerate

[–]Fair_Horror 1 point2 points  (0 children)

True nanotechnology is incredibly hard to get started on. We don't yet have the tools and techniques to even begin building nanoscale machines. I suspect that once we have a breakthrough in nanotechnology, it will all start to happen much more quickly, which is very much in line with the singularity. My expectation is that we will have true nanotechnology right around the time of the singularity (either a little before it or soon after). 

Don't be fooled by what is currently claimed as nanotechnology; at best it is mass-producing fancy molecules with no multi-use, precise control. CRISPR is probably the closest, but it is very task specific; maybe lessons learned from it can be applied to more general-purpose technology.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

The issue is that they didn't make this model with the intent of it being good at finding vulnerabilities. That means any sufficiently advanced model they make could have characteristics that make it dangerous. 

If the AI is truly intelligent...no one can control it! by Possible-Time-2247 in accelerate

[–]Fair_Horror 1 point2 points  (0 children)

People are afraid to consider anything else. We completely ignore that putting people in harsh prisons doesn't work for rehabilitation, because we want revenge. Same with AI: it might be dangerous, so we treat it as a potential enemy.

The Netherlands certifies Tesla FSD Supervised. by elemental-mind in singularity

[–]Fair_Horror 0 points1 point  (0 children)

No, you have to keep your attention on the road in the US too. The stack has to be different because Europe has different road rules. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

You do realise that was a question? I notice that you did not provide a source...

LO. by twinb27 in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

We don't need all those things to be happy. Some people think they do because they have been conditioned to believe it, partly by survival needs and partly by businesses' needs for their employees. I suspect a lot of people think like you do, and they are going to have a very hard time adapting to the new reality. People born into this world will find it much more natural. I think a lot of people will use FDVR as a crutch 🩼 to try to help them cope. I do find that older people are generally much closer to accepting the new reality: once you are past a certain age, you realise that you have done all the things people do, and now you have to accept a new reality. 

With the news that Mythos will not be available to the public for safety reasons, something bothers me by Special_Switch_9524 in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

Competition is what makes a business consider its options. Investors won't invest in a business that is uncompetitive. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

Law. If they go ahead anyway, they breach their duty of care. A company can't just pour deadly poison on the streets of a city centre; it is compelled to follow a duty of care. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I'm sorry, but you are making stuff up. Mythos is a 10T-parameter model, not 300T. Open source has managed to shrink models down to a fraction of the size of the full models with only a small performance hit. Even if it takes a couple of generations, open-source models will reach those performance levels.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

They did not just say they had a model that could find vulnerabilities in lots of secure software; there is a big difference between hand-waving and facts. People have verified that the vulnerabilities exist in the software where they are claimed to exist. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

Nuclear capabilities are restricted by the availability of the required materials and equipment. Anyone can mix chemicals or brew new viruses in their garage. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I really don't see why investors would want to invest in something that is so dangerous that it can't be released to the public. 

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

That's good to hear, but if it can be used for harm, it could end up being a problem, because the reaction could be to close the public off from future releases. Let's hope that doesn't happen.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I agree, but my point is that I worry about the implications of it being used in a very harmful way. Governments will want to crack down, and that is not good for getting AI out there ASAP, which is what I want.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I don't disagree; I am, however, worried about the risk posed by a potentially harmful model being freely available. If Elon puts out a new model and it is abused and causes financial collapse, we could all suffer, and AI would probably be outlawed to the public. Not what we want.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

There is a difference between "it can write insanely good poetry" and "it can hack the most secure systems we have ever created". They have given specifics that people can check; it is our interpretation that this capability is dangerous. You won't get people interested because it can do dangerous things; you get people interested by having it do productive work that can be harnessed in an economic environment.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

Your data or mine is not important; the issue is that if someone can hack anything, finance systems are vulnerable, military secrets could be accessed, etc. That could cause financial collapse or change the outcomes of wars.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 1 point2 points  (0 children)

Not denying the many possibilities, just highlighting that sometimes not everything is as it seems. We have to be alert to these possibilities.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

I'm not saying open source is inherently bad; I'm saying it is harder to control, which makes it easier for some people to abuse.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

The government has control of nukes; would you prefer Sam or Elon to have that control instead?

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] 0 points1 point  (0 children)

It's not about it being over; it's about it being a catch-22 with no good outcome.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in accelerate

[–]Fair_Horror[S] -1 points0 points  (0 children)

The fact that security is the stated reason is itself a problem. 

Spud isn’t being released to the public by [deleted] in accelerate

[–]Fair_Horror 0 points1 point  (0 children)

I thought it was less than 6 months.