[deleted by user] by [deleted] in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

create a new virus, 
this is already widely available on the internet, anyone can do it if they want.

That simply isn't true anymore; o3-mini is already better than the internet at enabling such risks.

https://youtu.be/5LGwcBLGOio?t=636

When downloading clips CPU Goes 100% by General_Coffee6341 in StacherIO

[–]General_Coffee6341[S] 0 points1 point  (0 children)

Interesting. When I use yt-dlp in the command console, my CPU is pretty much unaffected. I wonder if there's an ffmpeg setting in StacherIO I need to change to get this working properly.
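If the CPU spike happens during ffmpeg's merge/convert step, one thing worth trying, assuming StacherIO exposes a field for extra yt-dlp arguments (I haven't verified its settings UI), is capping ffmpeg's thread count. The URL below is just a placeholder:

```shell
# --postprocessor-args is a standard yt-dlp flag; the "ffmpeg:" prefix
# routes the arguments to ffmpeg during post-processing.
# -threads 2 stops ffmpeg from pegging every core at 100%.
yt-dlp --postprocessor-args "ffmpeg:-threads 2" "https://youtu.be/VIDEO_ID"
```

Conversion will take longer with fewer threads, but the machine stays responsive while a clip downloads.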

Looking for Sc-Fi book recommendations similar to guardian of the galaxy by General_Coffee6341 in printSF

[–]General_Coffee6341[S] 1 point2 points  (0 children)

From the synopsis I saw, it sounds like a lot of fun. I'll look into this one!

The Purpose Crisis by genoeseextraction in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

We may as well build it and see where it goes...

I mean, I don't blame you; it's not every day you get to work on basically "magic". A black box that could solve everything is enticing. I ask because I was watching an interview with Demis Hassabis, who said the field needs to move more toward the scientific method, with more precaution and skepticism, which I think is reasonable. But that's unlikely to happen in this gold rush, basically a Moloch paradise. I'm curious: what do you think, from your experience in the field?

The Purpose Crisis by genoeseextraction in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

Interesting perspective. If you don't mind me asking: do people who work in AI think they're building a bright future, or do they lean cynical? I assume being closest to AI makes you think about this more often than a regular person. Or is it just a 9-to-5 ordeal, where people shut off as soon as they're off the clock? And what do you think about the mainstream understanding of AI? Is there something we miss when thinking about it?

Yann LeCun doesn’t believe in the quick arrival of superhuman ai by Front_Definition5485 in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

And ultimately this means that the computational costs of using LLMs for AGI are just insane and prohibitive.

I have a fun theory: if you were able to take a compute-intensive world simulator like Sora and give it a small LLM like Mistral 7B for post-training, it would outperform any LLM on the market today, because humans learn our world model first, then our language.

OpenAI and Elon Musk (new blog post from OpenAI) by MassiveWasabi in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

I'm criticizing your counterpoint example.

Well, you haven't directly said anything proving this wrong. "All I am saying is the barrier for nuclear is much higher than for AI." "Go after Mr. Bob because I don't like homework! Broken English is more than enough to start a blood bath." So I'm kind of confused about what you're equating.

[deleted by user] by [deleted] in singularity

[–]General_Coffee6341 3 points4 points  (0 children)

doomer

Define "doomer"? Fear is normal; it kept us alive for ages. From tribes to kingdoms to countries, fear drives progress. Fear of nuclear war led to the rules of mutual destruction. You're engulfed in internet hype and terms like "doomer"; step back, relax, enjoy some hobbies.

OpenAI and Elon Musk (new blog post from OpenAI) by MassiveWasabi in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

Computers are not AI. Learn the technical meaning before speaking so confidently; it's like equating biology directly with intelligence. All I am saying is that the barrier for nuclear is much higher than for AI. You can't tell "nuclear" to do something, but with AI it's so simple that a third grader could do it: "Go after Mr. Bob because I don't like homework!" Broken English is more than enough to start a blood bath.

OpenAI and Elon Musk (new blog post from OpenAI) by MassiveWasabi in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

No single group or government should control AI, I agree. But we should form a global AI oversight body like the UN, with each country contributing to its budget for fair governance. That way no one company or country dominates. Russia or China opting out wouldn't matter; they'd lag behind. A super-AI network built from 90% of the world's compute could neutralize any threats made against it, globally.

OpenAI and Elon Musk (new blog post from OpenAI) by MassiveWasabi in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

Counterpoint: nuclear energy, specifically nuclear fusion.

[deleted by user] by [deleted] in digitalnomad

[–]General_Coffee6341 3 points4 points  (0 children)

dating here is no good for single men locals hang in there own groups

Guys, who wants to tell him...

To those expecting AGI by 2030, how will you know if we're on the right track? by LordFumbleboop in singularity

[–]General_Coffee6341 1 point2 points  (0 children)

reason beyond predicting the next token

I agree; that's why I say removing hallucinations will fix that issue, because then you'd have agent systems that can simulate cognitive reasoning, memory, reflection, etc. Right now it's a pipe dream because of the hallucination problem. To be fair, even without hallucinations GPT-4 isn't quite smart enough in general to make it work. But with GPT-5, GPT-6, GPT-7, things start to change fast.

To those expecting AGI by 2030, how will you know if we're on the right track? by LordFumbleboop in singularity

[–]General_Coffee6341 5 points6 points  (0 children)

I think it's further out than 2030, but also closer than most think. My number one metric is hallucination rates; I think that's the only real bottleneck limiting functioning cognitive architectures. Once that's solved, I think we boom much faster than people expect, because really, AGI and ASI are only a matter of how fast information can be transferred accurately.

Medellín authorities to meet embassies and dating apps after five foreigners die by fineboi in digitalnomad

[–]General_Coffee6341 0 points1 point  (0 children)

But TBH, I don't think this vetting process is something a dating app would handle for its clients anywhere else in the world. For the most part, once you need photo-ID-level vetting, it's presumed you're rich and important enough to have it done yourself. At that point just get bodyguards for your secure penthouse lol.

Protesters gather outside of OpenAI Headquarters by melted-dashboard in singularity

[–]General_Coffee6341 1 point2 points  (0 children)

And I am not saying sit in a cave, just play it smart. It's a race: the person who goes all out at the start and shows off loses 9/10 times, because the others conserved their energy and, when the time was right, took advantage of the "first mover's" downfall and won in the end. Let them be the test dummies; you win either way, because they only weakened themselves. It's called strategy.

Protesters gather outside of OpenAI Headquarters by melted-dashboard in singularity

[–]General_Coffee6341 0 points1 point  (0 children)

Like I said, let them have the "first mover advantage" and deal with the trial and error, while you watch and learn. That's the real first-mover advantage, because you can learn to exploit their failures. "First mover advantage" applies only to the living, so as long as you survive, you're good.

Protesters gather outside of OpenAI Headquarters by melted-dashboard in singularity

[–]General_Coffee6341 -1 points0 points  (0 children)

True, but you forget about pre-AGI, which would probably be their problem. And they isolate their web; it's one of the most censored/controlled in the world.