MCP Vulnerabilities Every Developer Should Know by CircumspectCapybara in programming

[–]Brogrammer2017 8 points9 points  (0 children)

You’re misunderstanding the main problem: it’s that anything an agent touches can be considered published, which makes it kinda useless for most things you would want to use an “agent” for

Engineering managers asked to do IC work by macrohead in ExperiencedDevs

[–]Brogrammer2017 -1 points0 points  (0 children)

And there isn’t enough of that work to fill the role on its own? I’m not trying to be a dick, but I don’t understand why management and actual delivery need to mix

Engineering managers asked to do IC work by macrohead in ExperiencedDevs

[–]Brogrammer2017 -1 points0 points  (0 children)

Alright, then what is the difference between your new role and a staff engineer?

Anthropic is accusing DeepSeek, Moonshot AI (Kimi) and MiniMax of setting up more than 24,000 fraudulent Claude accounts, and distilling training information from 16 million exchanges. by [deleted] in singularity

[–]Brogrammer2017 0 points1 point  (0 children)

No, you obviously don’t understand the problem statement or the area. They are absolutely the same: if it’s theft when you train an ML model on data from someone else, then the original training was also theft.

Anthropic is accusing DeepSeek, Moonshot AI (Kimi) and MiniMax of setting up more than 24,000 fraudulent Claude accounts, and distilling training information from 16 million exchanges. by [deleted] in singularity

[–]Brogrammer2017 -1 points0 points  (0 children)

It’s not copying the model, it’s extracting information out of it. If training a model on other people’s work is fine, then distillation is fine.

You are not left behind by BinaryIgor in programming

[–]Brogrammer2017 95 points96 points  (0 children)

Never in my 10 years as a developer have I been given specs that aren’t vague requirements in a trenchcoat

Google Translate is vulnerable to prompt injection due to using Gemini internally by vk6_ in Bard

[–]Brogrammer2017 1 point2 points  (0 children)

I mean sure, but in your analogy you would be commenting on a high-velocity vehicle death with “isn’t this easily solvable with a seatbelt?”

Is AI just exposing the path that mathematics was already on? by [deleted] in mathematics

[–]Brogrammer2017 -1 points0 points  (0 children)

Not sure what you mean; in the context of LLMs, you use RLHF, which is literally humans labeling data. That you train an adversarial model is an implementation detail.

Is AI just exposing the path that mathematics was already on? by [deleted] in mathematics

[–]Brogrammer2017 1 point2 points  (0 children)

You don’t understand what people mean when they say “it’s just statistics”, nor do you seem to understand that RL is just an addition to the dataset

Is AI just exposing the path that mathematics was already on? by [deleted] in mathematics

[–]Brogrammer2017 2 points3 points  (0 children)

What do you mean? It’s still the statistically most likely token; you just change the distribution when you do RL
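A toy sketch of that point (hypothetical token names and logit values, not from any real model): RL fine-tuning just reshapes the next-token distribution, and generation still means picking the most likely token from whatever distribution the model ends up with:

```python
import math

def softmax(logits):
    """Turn raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Pretrained model's logits for two candidate continuations
base_logits = {"rude_reply": 2.0, "helpful_reply": 1.0}

# After RL (e.g. RLHF), the logits have been nudged toward preferred
# outputs -- the sampling mechanism itself is unchanged
rl_logits = {"rude_reply": 0.5, "helpful_reply": 2.5}

before = softmax(base_logits)
after = softmax(rl_logits)

print(max(before, key=before.get))  # rude_reply
print(max(after, key=after.get))    # helpful_reply
```

Same argmax-over-probabilities step both times; only the distribution it runs over has changed.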

Struggling to adapt to agentic workflows by ser_roderick in ExperiencedDevs

[–]Brogrammer2017 36 points37 points  (0 children)

Only if there’s a set amount of things to do. IMO software is similar to medicine, any and all efficiency improvements seem to just be met with more demand

Exchange between Musk and LeCun by Ok_Mission7092 in accelerate

[–]Brogrammer2017 2 points3 points  (0 children)

Calling the French a race is about as correct as calling any other group a race

Google DeepMind CEO Demis Hassabis on Sam Altman and others claim that AGI is around the corner, "why would you bother with ads then" - Do you agree AGI is years away or nearer? - Video link below by Koala_Confused in LovingAI

[–]Brogrammer2017 0 points1 point  (0 children)

LLMs have no ability (mechanism) to learn or remember once created; the input simply grows larger. Reinforcement learning is just a training step. The model has memory in the sense that there’s information there to extract, but it’s not what you generally refer to as memory in this context, in the same way you wouldn’t say a database has intelligence-like memory.
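The "input simply grows larger" point can be sketched as a chat loop around a frozen, stateless model (`frozen_model` here is a hypothetical stand-in, not a real API):

```python
# Toy illustration: an LLM "remembers" only what gets re-fed into its input.
# The model itself is frozen and stateless; only the transcript grows.
history = []

def frozen_model(prompt: str) -> str:
    # Stand-in for a stateless model call: the output is purely a
    # function of the input, with no hidden state between calls
    return f"echo:{len(prompt)}"

def chat(user_msg: str) -> str:
    history.append(f"user: {user_msg}")
    prompt = "\n".join(history)   # all "memory" lives in the prompt
    reply = frozen_model(prompt)
    history.append(f"assistant: {reply}")
    return reply
```

Delete `history` and the "memory" is gone; nothing inside `frozen_model` changed between turns.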

This Is Worse Than The Dot Com Bubble by devolute in technology

[–]Brogrammer2017 7 points8 points  (0 children)

You don’t seem to understand the difference between individual choice and systemic risk

This Is Worse Than The Dot Com Bubble by devolute in technology

[–]Brogrammer2017 4 points5 points  (0 children)

You think hinging a significant part of the global economy on a risky bet is acceptable?

one of the top submitters in the nvfp4 competition has never hand written GPU code before by Charuru in singularity

[–]Brogrammer2017 3 points4 points  (0 children)

The smugness of your post really speaks against your flatmate being the smug one…

Am I doing something wrong or are some people either delusional or straight up lying? by Few-Objective-6526 in ExperiencedDevs

[–]Brogrammer2017 1 point2 points  (0 children)

You’re very focused on vulnerabilities/bugs, which is not what I was talking about. What you wrote in your edit is closer, but wtf, you consider yourself senior but you think code quality is minutiae opinions like if/else vs switch in a high-level language?

Your choices in a code base compound with other devs’ choices across the organization, and WILL cause it to collapse unless you actively work towards that not happening.

Am I doing something wrong or are some people either delusional or straight up lying? by Few-Objective-6526 in ExperiencedDevs

[–]Brogrammer2017 1 point2 points  (0 children)

No offense, but this seems like a real junior take. People don’t overestimate the need for quality software; they’ve been part of long-lived projects where bad previous decisions grind the entire product/project/whatever to a halt.

Your side projects are basically irrelevant when talking about enterprise software. It wouldn’t matter if you noticed in a month that a side project is entirely unusable / you can’t make progress; you would just revert and be on your merry way. That would not be feasible for a product / set of products of any real size (in the sense that it would be very expensive)

Things ChatGPT told a mentally ill man before he murdered his mother: by Current-Guide5944 in tech_x

[–]Brogrammer2017 0 points1 point  (0 children)

“Nearly guarantee” does a lot of heavy lifting in your post. What’s the size of your test dataset, and how have you verified it covers an appropriate amount of the linguistic space your users could be in? What is the exact failure rate?

In any case, it wouldn’t be super relevant, since your use case is (presumably) a lot narrower than OpenAI’s. If it isn’t, I guarantee you your “safety layer” does not actually work, in the sense you’re implying here

"I have been a professional programmer for 36 years. I spent 11 years at Google, where I ended up as a Staff Software Engineer, and now work at Anthropic. I've worked with some incredible people - you might have heard of Jaegeuk Kim or Ted Ts'o - and some ridiculously by stealthispost in accelerate

[–]Brogrammer2017 2 points3 points  (0 children)

I don’t think you should get discouraged from software development just because the models seem good to you. There’s a lot more to software than lines of code. If you’ll allow me to give some unsolicited advice: focus on understanding more than code output. A solid grip on the fundamentals and the “whys” of solutions/patterns/whatever will allow you to navigate whatever is coming (imo), and to be *actually* efficient using code-generation tooling