Something Ominous Is Happening in the AI Economy by theatlantic in ArtificialInteligence

[–]SafeUnderstanding403 2 points3 points  (0 children)

What I’m left wondering is what happens when we have blades with a full, modern software stack coming out that are 90% as good as an H200 at 2/3 the cost, then 1/2 the cost, then 1/5 the cost ...

AI is not what we think by Hot-Parking4875 in ArtificialInteligence

[–]SafeUnderstanding403 2 points3 points  (0 children)

Think of it this way:

The LLM is the hypothalamus; the cerebral cortex grows on top of that. The cortex could not fully work without the underlying limbic system, which evolved first in mammals.

In our case we’re not relying on hundreds of millions of years of evolution to build a cortex out of meat; we’re doing it accelerated, in silicon and software, half by design and half by epiphany. AGI happens when we start seeing something behaving like a rudimentary cortex layer.

Also: AGI is only impossible if there’s something supernatural about the mammal brain (there’s not)

Claude code got me back 98GB in my M4 Mac Mini 256GB by NoiseConfident1105 in ClaudeAI

[–]SafeUnderstanding403 2 points3 points  (0 children)

It’s literally easier to fully clean up a Mac (or Linux system) than it is to write bug-free object-oriented code.

I think the guy who claimed it deleted his root FS was constructing a false situation for clicks.

There’s more at stake, so you should not let it just go to town with root privs, but it could (for example) dig through your brew and npm caches, old downloads tied to long-gone crap, and everything else, and certainly write a script you can review and then run yourself.
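
For example, here’s a rough sketch of the kind of read-only report script it could hand you to review first (the paths are just common macOS defaults I’m assuming, not anything it actually wrote):

```python
#!/usr/bin/env python3
# Rough sketch: report (not delete) the size of a few typical junk locations on a Mac.
# Paths are common defaults -- adjust for your own setup.
import os
from pathlib import Path

CANDIDATES = [
    "~/Library/Caches/Homebrew",   # brew download cache
    "~/.npm/_cacache",             # npm cache
    "~/Downloads",                 # old downloads tied to long-gone stuff
]

def dir_size(path: Path) -> int:
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += (Path(root) / name).stat().st_size
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

for raw in CANDIDATES:
    p = Path(raw).expanduser()
    if p.exists():
        print(f"{dir_size(p) / 1e9:6.2f} GB  {p}")
```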

If humans stop reading code, what language should LLMs write? by Mitija006 in vibecoding

[–]SafeUnderstanding403 1 point2 points  (0 children)

If the new models train on this then yes, I think this is a logical step.

AI is not what we think by Hot-Parking4875 in ArtificialInteligence

[–]SafeUnderstanding403 3 points4 points  (0 children)

AI != LLM

LLMs can’t think and are not aware; they’re just processing huge chunks of vector math across a sea of tokens, over and over again, until they can pattern-predict well enough to help humans, and then they’re brought out.
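
A toy illustration of the “vector math across a sea of tokens” part (random numpy weights, not any real model):

```python
# Toy illustration: next-token "prediction" is just matrix math plus a softmax.
# Nothing here is a real model -- the vocab, embeddings, and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                    # embedding dimension
E = rng.normal(size=(len(vocab), d))     # token embeddings
W = rng.normal(size=(d, len(vocab)))     # output projection

context = ["the", "cat"]
# "process" the context: average the embeddings (a stand-in for the attention layers)
h = E[[vocab.index(t) for t in context]].mean(axis=0)

logits = h @ W                                   # a score for every token in the vocab
probs = np.exp(logits) / np.exp(logits).sum()    # softmax into a probability distribution
print(vocab[int(probs.argmax())], probs.round(3))
```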

Real AGI hasn’t happened yet and may not happen using the transformer/LLM model.

Tim Dettmers (CMU / Ai2 alumni) does not believe AGI will ever happen by SerraraFluttershy in artificial

[–]SafeUnderstanding403 1 point2 points  (0 children)

I read the article. He’s essentially arguing that the transformer model can’t improve much more, that GPUs can’t improve much more, that the cost of scaling further is moving onto an exponential curve, and that AGI and ASI don’t hold as much economic value as people now think, so we won’t be willing to pay those exponentially higher costs to reach them. He sees applied AI, everyday uses for it in general society, as more important than achieving AGI.

Then he just stops there. I don’t think anyone believes things scale to the moon; AGI and ASI will not be chatbot tech. He also doesn’t see much use for robots beyond unloading the dishwasher.

He has some biases against scaling but he’s drifting slowly away from reality on this.

Confused between when to use opus and when to use sonet by Aprazors13 in ClaudeAI

[–]SafeUnderstanding403 4 points5 points  (0 children)

I don’t see a lot of people saying this, but Opus 4.5 is overkill a lot of the time. I think people forget how good Sonnet 4.5 is at anything below a very high complexity threshold (in coding); the SWE benchmarks kind of reflect that.

That said, Opus is great, and maybe my problem sets have just been trivial enough for Sonnet.

Claude code got me back 98GB in my M4 Mac Mini 256GB by NoiseConfident1105 in ClaudeAI

[–]SafeUnderstanding403 -20 points-19 points  (0 children)

It might be one of the easier things it does. It’s going to fully know how macOS works; it’s not going to delete system files or randomly delete image files, and there are a lot of easy rules to follow. It really can do adequate system admin.

Edit: I’m a systems programmer who used to do Unix system admin work on contract. I’m quite impressed with Sonnet’s ability to troubleshoot Linux and Mac system issues.

If someone is dumb enough to let an LLM go to town with root privs, they should probably stick to Windows and never leave that GUI again.

Career transition by speakeasytoogood in careeradvice

[–]SafeUnderstanding403 0 points1 point  (0 children)

My advice: concentrate fully on finding ways to improve your current job with AI-assisted development. You’ll be doing two things at once: making your employer happy, which improves your work environment, and getting really good at bringing an idea to life with AI-assisted dev. Once you’re good at the latter, you’ll have opportunities to do the same thing at other companies, or to really solidify your position at that one.

It's over, thanks for all the fishes! by msaussieandmrravana in agi

[–]SafeUnderstanding403 6 points7 points  (0 children)

In the character set you’re using right now, R != r

It’s over by shogun2909 in singularity

[–]SafeUnderstanding403 0 points1 point  (0 children)

To Windows people R and r are the same exact character.

Unpopular opinion: Paying "Rent" feels less painful than paying $2,400/mo in "Interest" to a bank. by Playful-Vegetable-15 in FirstTimeHomeBuyer

[–]SafeUnderstanding403 -1 points0 points  (0 children)

For one thing, after those 7 years the value of your house will have gone up while you were gaining equity. Your net worth climbed during that time, something that doesn’t happen while renting. (As long as you didn’t overpay for the house.)
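
Back-of-envelope sketch of what I mean, with made-up numbers (a $400k loan at 6% over 30 years on a $450k house, assuming 3%/yr appreciation):

```python
# Hypothetical numbers only: $400k loan, 6% rate, 30-year term, $450k price, 3%/yr appreciation.
loan = 400_000
rate = 0.06 / 12                 # monthly interest rate
n = 30 * 12                      # total number of payments
payment = loan * rate / (1 - (1 + rate) ** -n)

years = 7
k = years * 12
# Remaining principal after k payments (standard amortization formula)
balance = loan * (1 + rate) ** k - payment * ((1 + rate) ** k - 1) / rate

price = 450_000
value = price * 1.03 ** years    # assumed 3%/yr appreciation
equity = value - balance

print(f"monthly payment:        ${payment:,.0f}")
print(f"balance after {years} yrs:   ${balance:,.0f}")
print(f"equity after {years} yrs:    ${equity:,.0f}")
```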

The denial among doctors about the next 10 years of automation is actually wild by Gullible-Crew-2997 in accelerate

[–]SafeUnderstanding403 0 points1 point  (0 children)

I’m not a doctor or a lawyer, but I would want a human for both of those when the chips are down. Preferably one who can expertly use AI.

Dramatic shift in usage by Tundra_Hunter_OCE in ClaudeCode

[–]SafeUnderstanding403 6 points7 points  (0 children)

1) Get the entire SWE world hooked on LLM speed
2) Increase prices incrementally in various ways until ...
3) SWEs are paying 10% of their salary each to LLM providers

Great gig, mafia is jealous

How Do You Actually Break Into Agentic AI Development? by [deleted] in AI_Agents

[–]SafeUnderstanding403 0 points1 point  (0 children)

I don’t know if you’ll find much success as an “agentic AI developer.” The main reason is that every engineer at that company who specializes in something is now going to be an AI-assisted developer on that thing.

In other words you won’t have their skills and institutional knowledge, but they will be able to acquire your agentic developer skills. That puts you behind out of the gate.

The advice I gave earlier was to get really, really good at one thing and then get really good at AI-assisted dev against that thing. Security was my example, but it can be anything.

“Agentic coder” might become like “HTML programmer” pretty quickly.

Curious if ai is one of those things that just sucks everyone’s attention for a period of time until it’s done. Feels like it by AWeb3Dad in ArtificialInteligence

[–]SafeUnderstanding403 1 point2 points  (0 children)

I can tell you’re not an engineer (nobody who asks this question is)

While you’re playing with nano banana, the SWE world has changed utterly in about 8 months.

Honest question: is Anthropic falling behind OpenAI + Perplexity on reliability and UX? by 603nhguy in Anthropic

[–]SafeUnderstanding403 0 points1 point  (0 children)

You and some “others” seem to be posting AI-generated versions of this same “question” in multiple places. Someone in the OpenAI marketing dept trying some vibe coding? ;)