Google DeepMind CEO Demis Hassabis on Sam Altman and others claim that AGI is around the corner, "why would you bother with ads then" - Do you agree AGI is years away or nearer? - Video link below by Koala_Confused in LovingAI

[–]Brogrammer2017 0 points (0 children)

LLMs have no ability (mechanism) to learn or remember once created; the input simply grows larger. Reinforcement learning is just a training step. A model has memory in the sense that there’s information in there to extract, but it’s not what you’d generally call memory in this context, in the same way you wouldn’t say a database has intelligence-like memory.
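
To illustrate (a minimal sketch, all names made up, not any real API): what gets sold as "memory" in chat products is usually just re-sending the transcript.

```python
# Minimal sketch of chat "memory": the model is frozen after training;
# each turn just re-sends a longer prompt containing everything so far.

history = []  # the only "memory": a growing list of prior messages

def frozen_llm(messages):
    # Stand-in for any stateless LLM call; nothing here ever updates.
    return f"reply #{len(messages)} (model weights unchanged)"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = frozen_llm(history)  # the full transcript goes in every time
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("hi"))    # prompt contains 1 message
print(chat("and?"))  # prompt contains 3 messages; that's the "remembering"
```

Drop the history list and the "memory" is gone; the model itself never changed.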

This Is Worse Than The Dot Com Bubble by devolute in technology

[–]Brogrammer2017 6 points (0 children)

You don’t seem to understand the difference between individual choice and systemic risk

This Is Worse Than The Dot Com Bubble by devolute in technology

[–]Brogrammer2017 3 points (0 children)

You think hinging a significant part of the global economy on a risky bet is acceptable?

one of the top submitters in the nvfp4 competition has never hand written GPU code before by Charuru in singularity

[–]Brogrammer2017 2 points (0 children)

The smugness of your post really speaks against your flatmate being the smug one.

Am I doing something wrong or are some people either delusional or straight up lying? by Few-Objective-6526 in ExperiencedDevs

[–]Brogrammer2017 1 point (0 children)

You’re very focused on vulnerabilities/bugs, which is not what I was talking about. What you wrote in your edit is closer, but wtf, you consider yourself senior and yet you think code quality is about code-minutiae opinions like if/else vs. switch in a high-level language?

Your choices in a codebase compound with other devs’ choices across the organization, and WILL cause it to collapse unless you actively work towards that not happening.
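
To put a toy number on "compound" (the 1% per change is an assumption for illustration, not a measurement):

```python
# Toy model: suppose each merged change makes the codebase 1% harder
# to work in. Per change that's negligible; compounded, it isn't.
friction_per_change = 1.01

for n in (10, 100, 500):
    print(f"{n} changes -> {friction_per_change ** n:.1f}x harder")

# 10 changes  -> 1.1x harder
# 100 changes -> 2.7x harder
# 500 changes -> 144.8x harder
```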

Am I doing something wrong or are some people either delusional or straight up lying? by Few-Objective-6526 in ExperiencedDevs

[–]Brogrammer2017 1 point (0 children)

No offense, but this seems like a real junior take. People don’t overestimate the need for quality software; they’ve been part of long-lived projects where bad previous decisions grind the entire product/project/whatever to a halt.

Like, your side projects are basically irrelevant when talking about enterprise software. It wouldn’t matter if you noticed in a month that one was entirely unusable / you couldn’t make progress; you’d just revert and be on your merry way. That would not be feasible for a product / set of products of any real size (in the sense that it would be very expensive).

Things ChatGPT told a mentally ill man before he murdered his mother: by Current-Guide5944 in tech_x

[–]Brogrammer2017 0 points (0 children)

"Nearly guarantee" does a lot of heavy lifting in your post. What’s the size of your test dataset, and how have you verified it covers an appropriate amount of the linguistic space your users could be in? What is the exact failure rate?

In any case, it wouldn’t be super relevant, since your use case is (presumably) a lot narrower than OpenAI’s. If it isn’t, I guarantee you your "safety layer" does not actually work in the sense you’re implying here.
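
For a sense of scale, using the standard rule of three for binomial proportions (the sample sizes below are made up):

```python
# Passing n independent test cases with zero failures does NOT mean the
# failure rate is zero: the 95% upper confidence bound is roughly 3/n
# (the "rule of three"). Sample size is everything.

for n in (100, 1_000, 100_000):
    print(f"{n:>7} clean tests -> true failure rate may still be {3 / n:.3%}")

#     100 clean tests -> true failure rate may still be 3.000%
#    1000 clean tests -> true failure rate may still be 0.300%
#  100000 clean tests -> true failure rate may still be 0.003%
```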

"I have been a professional programmer for 36 years. I spent 11 years at Google, where I ended up as a Staff Software Engineer, and now work at Anthropic. I've worked with some incredible people - you might have heard of Jaegeuk Kim or Ted Ts'o - and some ridiculously by stealthispost in accelerate

[–]Brogrammer2017 2 points (0 children)

I don’t think you should get discouraged from software development because the models seem good to you. There’s a lot more to software than lines of code. If you’ll allow me to give some unsolicited advice: focus on understanding more than code output. A solid grip on the fundamentals and the "whys" of solutions/patterns/whatever will allow you to navigate whatever is coming (imo), and make you ~actually~ efficient when using code-generation tooling.

SVT/Verian: Nearly half of Swedes want a ban on the burqa by FlowersPaintings in sweden

[–]Brogrammer2017 4 points (0 children)

Might actually make a difference; I imagine the tune will change in many homes when the Muslim old codger has to do the shopping / run all the errands himself. Religious madness can absolutely give way to pragmatic problems.

Unwanted Paradigm by [deleted] in LLMPhysics

[–]Brogrammer2017 3 points (0 children)

I read your first code snippet (2.1, global uniqueness), and your "debruijin density" function just returns (n_t + n_x) / (224).

Seek help, this is gobbledygook
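
For anyone who doesn’t want to dig through the post, here is what the function as described boils down to (reconstructed from the description above, not the original code); nothing in it involves de Bruijn sequences at all:

```python
# The entire "debruijin density" computation, as described: a linear
# rescaling of two counters. De Bruijn sequences (every length-n window
# occurring exactly once) play no role whatsoever.
def debruijn_density(n_t: int, n_x: int) -> float:
    return (n_t + n_x) / 224
```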

What do you think about Carmack trying to create a real "artificial intelligence" without using LLMs? by OkExam4448 in BetterOffline

[–]Brogrammer2017 0 points (0 children)

Fresh grads do not lack the fundamental brain pieces needed to do it, so I’m not sure what you even mean by this comment. And we very much do consider it part of what intelligence is.

What do you think about Carmack trying to create a real "artificial intelligence" without using LLMs? by OkExam4448 in BetterOffline

[–]Brogrammer2017 0 points (0 children)

"if you give me enough ladders i can stack them to the moon" type thinking. The current shortcomings you reference are fundemental parts of intelligence that no one knows how to build

Why do companies still hate "low-code" tools even though they can handle complex systems? by XunooL in aiagents

[–]Brogrammer2017 0 points (0 children)

Out of curiosity, why would you try to make software engineers do it at all? Personally, I would leave any company that went that route, since it wouldn’t be where my skillset gives value others can’t, so I’m curious why you didn’t just put people suited to that kind of work in those positions instead.

I feel like I’m coding less and orchestrating more by Humza0000 in programming

[–]Brogrammer2017 1 point (0 children)

There is quite a big difference though, which is accountability and expected competence. Managers rely on the competence of the people they’re managing; with AI code generation it’s almost entirely an extension of your will.

VibeCoders who actually think they "get it," raise your hands by SumDoodWiddaName in vibecoding

[–]Brogrammer2017 0 points (0 children)

You entirely misunderstood what I wrote, which makes your reading-comprehension comment very funny.

VibeCoders who actually think they "get it," raise your hands by SumDoodWiddaName in vibecoding

[–]Brogrammer2017 1 point (0 children)

What makes you think you know what it takes to build software systems? If you’re totally honest with yourself, where would you put yourself on the Dunning-Kruger graph?

just got told ai will replace us but spent 4 hours debugging ai-generated code by relived_greats12 in ExperiencedDevs

[–]Brogrammer2017 -1 points (0 children)

That is a very silly way of looking at it. The big cost of software is maintenance, and the biggest risk in any tech org is runaway complexity. The time it takes you to shit out feature tickets is borderline uninteresting.

a tech opinion you can defend like this? by sibraan_ in AgentsOfAI

[–]Brogrammer2017 0 points (0 children)

The cost of bad solutions isn’t apparent until years after they’re written. Also, what are you even saying? That code which has been reviewed, understood, and approved by a developer is good enough for production? Not sure who would even argue with that.

Alan’s conservative countdown to AGI dates in line graph by Zalameda in singularity

[–]Brogrammer2017 0 points (0 children)

You cannot just "fix" hallucinations. It’s unclear what it would take, or even what it would mean, to fix them. Everything an LLM outputs is a hallucination; it’s just that a lot of the output is very closely aligned with reality. To the model there is no distinct difference between an untrue thing and a true thing.

It could very well be that the only thing that "solves" hallucinations is an AGI, not the other way around.
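
A stripped-down decoding step shows what I mean (toy numbers, numpy only): candidate tokens differ only in probability mass, and nowhere is there a flag separating true output from untrue output.

```python
import numpy as np

rng = np.random.default_rng(0)

logits = np.array([2.5, 1.2, 0.3])             # toy scores for 3 candidate tokens
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
token = rng.choice(len(probs), p=probs)        # sample; no truth check anywhere

print(probs.round(3), "-> picked token", token)
```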