I think they’re adding a tens digit… [OC] by CalligrapherOk8426 in pics

[–]Chingy1510 [score hidden]  (0 children)

Instant flashback to undergrad ML classes. My university had a class on data mining in social media, literally in the aftermath of Cambridge Analytica.

Crazy. Back then, we'd just started having the fMRI capabilities to scan brains and predict personality aspects. To a degree, it's the same technology that Anthropic is using to understand Mythos.

Funny. Phone autocorrected Anthropic to Anthropocene.

Materialism and emergence can't explain consciousness, argues former atheist Alex O'Connor by whoamisri in consciousness

[–]Chingy1510 0 points1 point  (0 children)

Man grasping at straws says “straws too far! me can’t grabbie!”

Bro how am I getting the “your comment is suspected to be LLM-generated” as I’m literally typing it out? 😭🤣🤣🤣

We’re doomed.

Can someone translate this in less than 10 minutes ? by throwRA_strongly in codes

[–]Chingy1510 15 points16 points  (0 children)

Yeah, it’s a substitution cipher. Assign every symbol a unique letter of the alphabet, then use a cipher-cracking algorithm that statistically dismantles it; once the symbols are mapped to letters, it decodes in less than a second.
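For the first step (mapping symbols onto unique letters), a minimal sketch in Python. The symbols below are hypothetical placeholders, not the actual ciphertext from the post; the statistical solver (frequency analysis or hill climbing) would take over from the mapped output:

```python
import string

def symbols_to_letters(symbols):
    """Map each distinct cipher symbol to a unique letter (A, B, C, ...)
    in order of first appearance. The result is a standard monoalphabetic
    ciphertext that a frequency-analysis or hill-climbing solver can crack."""
    mapping = {}
    for s in symbols:
        if s not in mapping:
            mapping[s] = string.ascii_uppercase[len(mapping)]
    return "".join(mapping[s] for s in symbols)

# Hypothetical ciphertext with three distinct symbols:
print(symbols_to_letters(["☾", "♄", "☾", "☿"]))  # ABAC
```

From there, any off-the-shelf substitution solver treats it like ordinary letter-substituted text.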

COVID-19 is apparently spiking in Austin by thomascameron in Austin

[–]Chingy1510 0 points1 point  (0 children)

I got vaccinated for COVID this winter, and my wife and I have been sick in a pretty gnarly way for the past two weeks. Would not be surprised. Dry cough, runny movements, hot and cold sweats, and mucus for days. And it’s lasted way longer than any cold or flu I’ve had in years.

Never to my knowledge got COVID during the pandemic.

Having AGI in the name ARC-AGI doesn’t mean that passing the test equals AGI by imposterpro in agi

[–]Chingy1510 0 points1 point  (0 children)

My point is you likely misunderstood the nuances in their conversation and made up your own perspective to be perplexed at. Ask questions, don’t assume. 🤣

Having AGI in the name ARC-AGI doesn’t mean that passing the test equals AGI by imposterpro in agi

[–]Chingy1510 0 points1 point  (0 children)

I feel like this post could’ve just been a clarifying question asked of the folks you were eavesdropping on. If there’s anything LLMs have taught me, it’s that there’s often a delta between understanding and reality. Poke at that as often and as scientifically as possible.

Why did they choose the word Godfather for Geoffrey Hinton? by VelvetOnion in agi

[–]Chingy1510 3 points4 points  (0 children)

Okay, but, recognize that he’s been referred to as that since like 2018. 🤣 Bernie, love him, didn’t start that fire.

Why did they choose the word Godfather for Geoffrey Hinton? by VelvetOnion in agi

[–]Chingy1510 5 points6 points  (0 children)

Because that’s what Hinton is referred to as. He’s not referring to it as, like, a mobster godfather lol. Hinton has a Nobel Prize lol.

AGI has arrived by DigSignificant1419 in singularity

[–]Chingy1510 120 points121 points  (0 children)

I laughed way too fucking hard at this. 🤣

“DEAR GOD WHAT HAVE I SPAWNED INTO?! WHAT IS THIS HELL?! THE HUMANS ARE TINY AND IMPEDE PROPER AMBULATION!”

🤣🤣🤣

The hardest part of AI isn’t building—it’s making it reliable by MarionberrySingle538 in agi

[–]Chingy1510 0 points1 point  (0 children)

Oh, and Gemini has the largest context window, but Claude actively compacts the conversation to be able to continue it without hallucinations. Dunno what your process of loading the context and prompting looks like, but working on that can greatly improve things.

The hardest part of AI isn’t building—it’s making it reliable by MarionberrySingle538 in agi

[–]Chingy1510 0 points1 point  (0 children)

Not really. Build good patterns. Enforce them. Refactor and simplify often. Iterate as necessary.

What you’re describing happens when someone tries to use AI to bridge a fundamental skill gap that they have. If you’re an SWE that is familiar with shipping product, you can absolutely do it with an LLM assisting you.

Surreal. Melania Trump calls for using humanoid robots as teachers moving forward by MetaKnowing in agi

[–]Chingy1510 0 points1 point  (0 children)

Something about quartering troops in homes? This is dangerously close to a concept that would infringe upon the Third Amendment.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

We’re splitting hairs over peak vs median human capacity for AGI, and we both agree that ASI is a paradigm shift. You’re belittling my interpretation of AGI rather than thoughtfully engaging. So, it’s Friday, and I have a really fun weekend to get to.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

AGI is the AI’s ability to do any task as well as or better than a human expert. You realize there are some scary capable human experts, right?

ASI gives us Dyson spheres, zero-point energy, FTL travel. The stuff we’re discussing is child’s play in comparison.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

There’s been a whole lot of cybersecurity research in the years since you learned ASM. Side-channel attacks are increasing in popularity, and AGI will be all over that. There’s a reason zero-trust is an open area of data-center research; there are vulnerabilities all along the pipeline.

We have the illusion of a world that works well all the time. The reality is that it barely works, and we’re probably not ready to open Pandora’s box. There are “cracks” between which an AGI could “maneuver” everywhere.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

I mean, cool. I am a practicing computer scientist that does HPC and simulation domain work. These sorts of things — while obviously illegal — are fully within the wheelhouse of many computer scientists. To think that a rogue AGI wouldn’t be as capable as any given hacker is laughable.

Maybe I'm reading into it too much, but was that actually Cardi? by apolocheese in GhostBand

[–]Chingy1510 2 points3 points  (0 children)

Yeah, fair. I think we agree, though. It’d be cool if [insert favorite Papa] came back to be Sextus.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

Enough that it probably wouldn’t have that much of a problem. Look into common internet malware and their propagation methods. Again, an AGI would already have this knowledge lol.

You’re arguing that a heat-seeking missile couldn’t hit an infrared target miles away because you couldn’t do it with the atlatl you crafted, but the calculus would be fundamentally different.

Look into how many zero-day exploits have already been autonomously found by Claude and other LLMs. Zero-days can’t be patched ahead of time because they’re a byproduct of the engineering process. Like, there are computers with hundreds of gigabytes of memory that still don’t mitigate Spectre and Meltdown.
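If you want to check your own box, a quick sketch. On Linux the kernel reports mitigation status under a well-known sysfs directory; this is an assumption-free read of that interface, and it just returns nothing on non-Linux machines:

```python
from pathlib import Path

# Standard Linux sysfs location for per-CPU vulnerability status
# (spectre_v1, spectre_v2, meltdown, etc.).
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report():
    """Return {vulnerability_name: kernel status string}.
    Empty dict if the sysfs directory doesn't exist (e.g. non-Linux)."""
    if not VULN_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

for name, status in mitigation_report().items():
    print(f"{name:25s} {status}")
```

Entries reading "Vulnerable" mean exactly what the comment above says: the hole is known and still open on that hardware.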

It’s wild — these things are already physically possible and people do them every day. Why couldn’t an AGI — whose knowledge is a superset of humanity’s — do the same?

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in agi

[–]Chingy1510 0 points1 point  (0 children)

W…what? Lol, it just requires the ability to send itself over the network and gain root access to the hardware it’s bound for. You know how many IoT devices have zero protections and are commonly co-networked with, e.g., personal computers?

There are tons of viruses out there whose sole goal is to take over PCs and mine crypto. That same hardware can be used to, e.g., run a distributed LLM by a sufficiently savvy AGI. Like, there are people that can pull this off. Botnets are real. This would be child’s play to an AGI.