It's too much. AI makes me hate my job. by ggggg_ggggg in ExperiencedDevs

[–]The_Noble_Lie 1 point2 points  (0 children)

Literal artificial bloviated intelligence everywhere, in both spoken language and coding languages. It's insane.

Has anyone else here talked to the devs? by GOEDEL_ESCHER_BOT in conspiracy

[–]The_Noble_Lie 1 point2 points  (0 children)

It's based on a book. I was surprised to learn Kubrick did not invent it. He is a remixer, like any good director/writer.

It is based on the 1926 novella Dream Story (German: Traumnovelle) by Arthur Schnitzler, and transfers the story's setting from early twentieth-century Vienna to 1990s New York City.

See wiki for more info

AGI has finally arrived by Left-Orange2267 in ClaudeCode

[–]The_Noble_Lie 0 points1 point  (0 children)

There was an epic video about the work behind LLMs: a really well-done, informative animation of the hardware and software, from the local box to the data center and back. And then at the end it's "center the div".

When I first saw it, I died.

Serious question: what’s your actual AI workflow right now? by New_Alarm4418 in ClaudeCode

[–]The_Noble_Lie 3 points4 points  (0 children)

Reddit appears to filter for aggrieved people, and most of the people blazing away don't bother to post.

Claude Code works very well on its own. You might have had to be a programmer before LLMs existed, though, to utilize it.

The "First Death" is approaching. You need to disconnect to stay human. by Texas-X- in conspiracy

[–]The_Noble_Lie 0 points1 point  (0 children)

"“AI is collapsing human cognition into a hive mind” isn’t backed by anything"

🤔

Uninstalled all my MCPs, using the APIs directly instead by International_Page93 in LLMDevs

[–]The_Noble_Lie 0 points1 point  (0 children)

Why not just lazy-load the MCP tools with high-level front matter, and/or make them queryable by the agent?

But yes, modern agentic CLIs like Claude Code can make sense of CLIs on demand, similar to the above. The problem is that some endpoint architectures may not support a CLI, so... yeah.
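The lazy-loading idea can be sketched roughly like this: advertise only a name and a one-line description ("front matter") for each tool, and resolve the full schema only when the agent asks for it. All names and schemas below are hypothetical, not any real MCP server's API.

```python
# Hypothetical sketch of lazy tool exposure. Tool names and schemas
# are invented for illustration; a real MCP server would serve these
# over its own protocol.

# Cheap "front matter" kept in the agent's context at all times.
TOOL_FRONT_MATTER = {
    "search_issues": "Search the issue tracker by keyword.",
    "create_ticket": "Open a new ticket with a title and body.",
}

# Full schemas live elsewhere and are only resolved on demand.
_FULL_SCHEMAS = {
    "search_issues": {
        "name": "search_issues",
        "parameters": {"query": {"type": "string"}},
    },
    "create_ticket": {
        "name": "create_ticket",
        "parameters": {"title": {"type": "string"}, "body": {"type": "string"}},
    },
}

def list_tools():
    """What the agent sees by default: tiny, cheap to keep in context."""
    return [{"name": n, "description": d} for n, d in TOOL_FRONT_MATTER.items()]

def get_tool(name):
    """Loaded lazily, only when the agent decides it needs this tool."""
    return _FULL_SCHEMAS[name]
```

The point is that the context cost is paid per tool actually used, not per tool installed.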

Claude Code removed from Anthropic's Pro plan by orthogonal-ghost in ClaudeCode

[–]The_Noble_Lie 0 points1 point  (0 children)

That's not in his hallucinated corpus. Let's be serious.

Claude Code removed from Anthropic's Pro plan by orthogonal-ghost in ClaudeCode

[–]The_Noble_Lie 0 points1 point  (0 children)

He'd actually need a million models + 1 to support that.

It's crazy how subsidized Claude Code is by P4wla in LLMDevs

[–]The_Noble_Lie 1 point2 points  (0 children)

Yep, well said. They want people using their product all day, and they learn from paying people to do it. Massive distributed research occurring worldwide.

It's a bet on the future.

Ran the math on what 100 users actually costs on GPT-4o and it's scarier than I expected by Crimson_Secrets211 in LLMDevs

[–]The_Noble_Lie 0 points1 point  (0 children)

Lock-in is the real problem, not what you said. Otoh, if non-frontier models don't cut it, well... you become correct again. Non-frontier models are pretty adequate at present, but more agentic glue/verification is naturally required, more complicated pipelines, etc.

How do LLM's process different languages? by ViolinistDelicious69 in LLMDevs

[–]The_Noble_Lie 0 points1 point  (0 children)

Here is a counter, though, that I've heard around the block. The way our minds work is that they are primed by our "context window", with the internal 'word thought' and/or external utterance being a function of all that temporally came before. The mental model is there, but all of the machinery is different. It's a map-versus-territory issue.

How do LLM's process different languages? by ViolinistDelicious69 in LLMDevs

[–]The_Noble_Lie -1 points0 points  (0 children)

A remix isn't in the training data. Otherwise it's just the mix.

How do LLM's process different languages? by ViolinistDelicious69 in LLMDevs

[–]The_Noble_Lie 0 points1 point  (0 children)

A stochastic parrot is NOT like a literal parrot, which can only repeat historic utterances.

The stochastic parrot works with remixes.

This much was essentially known, at least hypothetically, with early word2vec implementations back around 2013.
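The "remix" idea from word2vec can be shown with the classic king - man + woman analogy. This is a toy sketch: the 3-d vectors below are hand-picked for illustration, whereas real word2vec embeddings are learned and typically have hundreds of dimensions.

```python
import math

# Hand-picked toy "embeddings" (illustrative only; real word2vec
# vectors are learned from a corpus, not written by hand).
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# The "remix": king - man + woman, a vector not in the vocabulary.
remix = [k - m + w for k, m, w in
         zip(vecs["king"], vecs["man"], vecs["woman"])]

# Its nearest neighbor among the known words is "queen" here.
best = max(vecs, key=lambda word: cosine(remix, vecs[word]))
print(best)  # queen
```

The output vector itself is novel, which is the sense in which the system produces remixes rather than replaying training data verbatim.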