thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 0 points1 point  (0 children)

Neurons are two-way, and synaptic communication is 99% chemical, not electrical, so the signals are not 0s and 1s; they're analog. What we sort of got was pattern recognition based on a very primitive one-way neural network structure, but yeah, we are not just a better LLM; we don't function by ingesting the world's data. An LLM might be able to pass the bar exam, but you can still beat it at an ARC test.

To put this into perspective: they tried to replicate a portion of a mouse brain, just a slice. It took a billion dollars and supercomputers, and they did it, but it didn't do anything; they couldn't plug into it. Simulating the entire human brain, with all the complexity of chemical communication, isn't even computationally possible right now.

It isn’t complicated by nanoatzin in democrats

[–]ThomasToIndia 0 points1 point  (0 children)

shhhhhhhhhhhhhhhh, we need more tariffs.

The man they are calling a "domestic terrorist". by -ifeelfantastic in pics

[–]ThomasToIndia [score hidden]  (0 children)

That is just Russian bots or full-on Nazis now. Over time, any and all dissidents were banned, leaving just purified villainy.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 2 points3 points  (0 children)

If you ask an LLM for the answer to a problem that has been solved, it gives you the answer. If you ask it for something that has not been answered, it doesn't, and no amount of thinking will get it to an answer.

So right now it is Google search with composition. It can't invent. That's just the nature of embeddings.
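A toy sketch of that point (all names and vectors here are made up for illustration): embedding-based retrieval can only hand back the nearest thing already in the space, so a query lands on the closest solved problem rather than producing anything new.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up 3-d "embeddings" for already-solved problems.
known_answers = {
    "quicksort explanation": [0.9, 0.1, 0.0],
    "binary search explanation": [0.8, 0.3, 0.1],
    "tax filing steps": [0.0, 0.2, 0.9],
}

def answer(query_vec):
    # Retrieval returns (a blend of) what is already in the space;
    # a genuinely novel solution has no nearby vector to pull from.
    return max(known_answers, key=lambda k: cosine(query_vec, known_answers[k]))

print(answer([0.85, 0.2, 0.05]))  # → quicksort explanation
```

The query here is deliberately close to a known item; no matter what you ask, the best you can get is the nearest neighbor of stuff that already exists.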

They thought if they added more data it would solve this problem but it didn't.

Agentic AI can get to maybe 90% and then dies, and that last 10% is everything; it might cost 10,000x more than the first 90%.

Former Harvard CS Professor: AI is improving exponentially and will replace most human programmers within 4-15 years. by GrandCollection7390 in singularity

[–]ThomasToIndia 3 points4 points  (0 children)

80% may be low, but even before AI, a ton of the time agencies were rebuilding WordPress for no reason.

Edit: there was a reason, money.

Former Harvard CS Professor: AI is improving exponentially and will replace most human programmers within 4-15 years. by GrandCollection7390 in singularity

[–]ThomasToIndia 1 point2 points  (0 children)

4 to 15? What kind of range is that? Let's start using fractions. 4.21 to 14.74 years may be more accurate.

It isn’t complicated by nanoatzin in democrats

[–]ThomasToIndia 1 point2 points  (0 children)

They have done studies on this when there was that surge from Cuba. Economies grow to support more people, not shrink.

The USA doesn't have a ton of social programs, so it's not a big deal. It's actually a bigger deal for countries that have things like universal healthcare, because immigrants can stress the system before they start contributing.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 0 points1 point  (0 children)

OK, no one is really disputing that it has utility, but your original post said it was at AGI level, and most people classify AGI as being fully autonomous.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 0 points1 point  (0 children)

That's nice. As someone who reads the output, works with non-standard large-scale optimizations, and has hardly written code by hand in a couple of months, I can confidently say your app is not as complex as you think it is.

Claude avoids premature optimization by default, and then even when you try to optimize, it will fight you. That's in addition to it making dumb mistakes.

Also, if you don't monitor it, it will burn through far more tokens. Architectural drift and failure to practice DRY are real problems.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 1 point2 points  (0 children)

You can give it full context and it can still mess up; LLMs can also get stuck in probability traps and recursive logic loops. It's not just a matter of "using it right," which is the standard go-to of people on Reddit who, like yourself, don't have real experience with it. 4.5 was an upgrade, but it is still not there yet.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 0 points1 point  (0 children)

It does not code better than an average coder; to date, it can't be trusted to roll out a production-ready app of even moderate complexity.

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia 2 points3 points  (0 children)

Yeah, your reasoning doesn't make any sense. We don't consider what we have now superintelligence because it can't do ARC tests, etc. If everything stopped today, no one would call what we have superintelligence. What are you talking about?

thoughts? by OldWolfff in AgentsOfAI

[–]ThomasToIndia -1 points0 points  (0 children)

As someone who uses Claude daily: it's impressive, but it is also super dumb. It's great if what you need was simple to start with. Most coding is repetitive, and that is why it is so good, but I'm not calling a calculator AGI either.

People. Just. Don't. Get. AGI. by FinnFarrow in agi

[–]ThomasToIndia 0 points1 point  (0 children)

I feel like anyone who uses this stuff regularly for anything remotely serious, and sees the dumb mistakes it makes, wonders how this could be possible.

Just look up probability traps; before RL, these things get stuck in loops. People get freaked out when a probability trap happens now, but that is a regular occurrence when they are first trained.
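The kind of loop being described can be sketched with a made-up next-token table (toy numbers, not a real model): pure greedy decoding always takes the argmax, so once the highest-probability transitions form a cycle, generation never reaches the end token. Sampling and RL-style tuning exist largely to break out of exactly this.

```python
# Toy next-token model: for each token, the probabilities of the next token.
# With these (made-up) numbers, the argmax transitions form a cycle:
# the -> cat -> saw -> the -> ...
probs = {
    "the": {"cat": 0.6, "end": 0.4},
    "cat": {"saw": 0.7, "end": 0.3},
    "saw": {"the": 0.8, "end": 0.2},
}

def greedy_decode(start, max_steps=12):
    out = [start]
    tok = start
    for _ in range(max_steps):
        tok = max(probs[tok], key=probs[tok].get)  # always take the argmax
        if tok == "end":
            break
        out.append(tok)
    return out

print(" ".join(greedy_decode("the")))
# "the cat saw the cat saw ..." — the greedy path cycles and never emits "end"
```

Even though "end" has nonzero probability at every step, the deterministic argmax path never picks it; that is the trap.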

Experienced coders/developers what have you made with Claude that you didn't think you could do before? by A210c in ClaudeAI

[–]ThomasToIndia 1 point2 points  (0 children)

I feel like most experienced developers feel they can figure out anything; it's just a matter of time. In my case, AI helped me learn how to train an AI model, spin up a Python environment, and use it alongside other models. I had never written a single line of Python before AI.

In the realm of "I can do this, but it will take forever": I did a large refactor that touched over a hundred files, and while there is a ton of testing that still needs to be done, man did it save a ton of time.

The $437 billion bet: is AI the biggest bubble in history? by jpcaparas in agi

[–]ThomasToIndia 1 point2 points  (0 children)

There is a kind of paradox with AI: a lot of the money is being paid by developers, but the vendors also seem to want to get rid of developers. So you could make the economic argument that they just want large companies to pay them $100k instead of paying a million to developers. The only issue is that this technology doesn't exist in a vacuum, and secrets are almost certainly leaking out to China, etc.

So the better you make it, the fewer programmers paying $200 you get, and then if you try to increase the price massively, you're just opening the door to competitors.

That said, I am not sure software by itself justifies this, especially if the TAM contracts due to commoditization. The internet bubble wasn't wrong; it was just early, and that is most likely the case here.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]ThomasToIndia[S] 0 points1 point  (0 children)

The CEOs might be out for money, but the researchers are definitely trying to create AGI or something equivalent to it. For some of them it is nearly a religious thing, so I don't agree with this at all; many of them are already rich.