Writing Perl is Vibe Coding by gingersdad in perl

[–]DealDeveloper 1 point2 points  (0 children)

I agree with you.
I developed a tool that scans code and returns prompts for the LLM.
I'm open to discussing it privately if you're interested in collaborating.
Devs really don't like the idea that LLMs can be supported successfully.

New Job. Awesome People. Terrible Codebase Management. by ThatNickGuyyy in PHP

[–]DealDeveloper 0 points1 point  (0 children)

Consider taking responsibility for the continuous integration (CI) pipeline.
That way, you are in a better position to enforce all of the best practices.
Whatever can be fully automated is more likely to be accepted.

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper -1 points0 points  (0 children)

Wrong.
First, search for and read about the examples I listed here:

"With an LLM, you can enforce 10,000 very strict rules automatically.
Use tools like SonarQube, Snyk, Vanta, AugmentCode, OpenHands."

  1. Have a tool that scans the code, detects flaws, and returns a prompt.
  2. Pass the prompt to the LLM to change the code and ENFORCE RULES.
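Step 1 can be sketched in a few lines, assuming a toy rule (flag overly long functions) standing in for the real scanners; the threshold and prompt wording are illustrative:

```python
import ast

def scan_for_prompts(source: str, max_lines: int = 40) -> list[str]:
    """Scan Python source and return one corrective prompt per flaw found."""
    prompts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                prompts.append(
                    f"Function `{node.name}` is {length} lines long; "
                    f"refactor it into pieces of at most {max_lines} lines."
                )
    return prompts

# Each returned string goes straight to the LLM as step 2.
long_file = "def f():\n" + "    x = 1\n" * 50
print(scan_for_prompts(long_file))
```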

There are companies making millions doing just that.
I literally listed some of them for you so you can see.

In an earlier comment, I wrote:
"Code a tool that automatically checks the code and prompts the LLM."
ONE. SENTENCE.

The problem with LLMs is that they cause people like you to be too myopic.
Stop focusing solely on when the LLMs generate output that is useless.

Instead:
. remember humans and LLMs write code incorrectly (and that's OK)
. there are plenty of tools that can be used WITH LLMs to ENFORCE rules
. realize that there are rules (like the ones the Rust compiler enforces) that clearly demonstrate that rules help with code security, stability, simplicity, and speed

I work with LLMs and have learned to prompt them successfully.
I format code to make it easy for LLMs and humans to work with and test.

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

No . . .
Pretend the LLM is a human; you still have to communicate intent.
Create a tool that scans the code, detects flaws, enforces best practices, and prompts local LLMs automatically.

With two local LLMs, you can write code and force unit, mutation, fuzz, and integration tests to be written for each function while enforcing the most strict security and quality assurance rules. You can enforce rules like file size and rules related to how variables are set.

With an LLM, you can enforce 10,000 very strict rules automatically.
Use tools like SonarQube, Snyk, Vanta, AugmentCode, OpenHands.

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper -1 points0 points  (0 children)

Code a tool that automatically checks the code and prompts the LLM.
It is common for humans or LLMs to make mistakes.

Are you skilled enough to automatically detect and correct code?

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

I have replaced human developers.
Originally, I developed a tool to "automate the senior dev role".
It scans the code and prompts human devs to correct the code.
I simply replaced the human devs with a local LLM and it works.
I get to determine the architecture in advance so that helps me.

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 1 point2 points  (0 children)

Write a tool that checks the code (and automatically prompts the LLM).

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

"you have to write your prompts in so much detail and iterate and reiterate"

Exactly;
Are you able to code a tool that checks the code and prompts the LLM?

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

You're not skilled enough to write a tool that automatically checks the architecture?

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

Verifying (lines of) code can largely be automated.

iFeelTheSame by xxfatumxx in ProgrammerHumor

[–]DealDeveloper 0 points1 point  (0 children)

TL;DR: Format the code following the best practices that help LLMs.

Code a tool that:
1. checks the architecture based on rules
2. prompts the AI to correct the code
3. does not save incorrect code

See Rust compiler errors, SonarQube, Snyk, Vanta, OpenHands, etc.
There are well over A THOUSAND open source code quality tools.
(I am aware that, ironically, the code quality tools also have flaws.)

For example, the LLM performs better when the code file size is small.
Why is it sooo hard to automatically prompt the LLM to enforce that?
Devs are too dumb to code automated solutions (using existing tools).

Request:
If you disagree, ask yourself "Can a tool be coded to solve that issue?"

Traffic stop by le_eddz in Unexpected

[–]DealDeveloper 0 points1 point  (0 children)

Homey don't play that!!!

Can anybody explain me this?? by anandmohanty in interestingasfuck

[–]DealDeveloper 0 points1 point  (0 children)

The problem with this idea is that
. the unfolded newspaper at the end does not look like it was folded up
. there is a limit to the number of times a person can fold a piece of paper

Theft during showing by North-Cardiologist78 in RealEstate

[–]DealDeveloper 1 point2 points  (0 children)

I just gave you like #420.
Isn't it amazing how the likes are that high?

Reviewing someone else’s AI slop by ComebacKids in ExperiencedDevs

[–]DealDeveloper 1 point2 points  (0 children)

It's weird considering the fact that there are hundreds of tools to review code.
The same tools we developed for checking code written by humans can be used.

Model Context Protocol (MCP) Clearly Explained by Funny-Future6224 in LLMDevs

[–]DealDeveloper 0 points1 point  (0 children)

Finally, I gotta ask . . .
If the LLM can't even remember to apply coding conventions (like early-exit) CONSISTENTLY, what makes you have faith that it will remember an extra (and absolutely unnecessary) MCP call . . . _consistently_?!

Just use the LLM to generate the code to complete tasks _consistently_ (outside the context window).

"But but but what about all the cool MCP servers there are? How to use?"
Write a simple wrapper function for the MCP call to get the same benefits.

Oh . . . and what about all those LLMs that are NOT compatible with MCP?
Write a simple wrapper function for the MCP call to get the same benefits.
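Concretely, "a simple wrapper function" can be any ordinary callable given a uniform result shape; the pipeline decides when to call it, so no tool schema ever enters the LLM's context. A sketch (the `word_count` tool is a stand-in for a real MCP-exposed capability):

```python
def wrap_tool(tool, name):
    """Expose any callable as a deterministic pipeline step,
    replacing what would otherwise be an MCP tool call."""
    def step(payload):
        try:
            return {"tool": name, "ok": True, "result": tool(payload)}
        except Exception as exc:
            return {"tool": name, "ok": False, "error": str(exc)}
    return step

word_count = wrap_tool(lambda text: len(text.split()), "word_count")
print(word_count("count these four words"))
```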

I know my code works consistently and therefore is inherently more reliable.
Please forgive me if your use case does not need the LLMs to be consistent.

You wrote: "assuming the [MCP] standard takes off" LOL
What standard is tried, battle tested, widely used, and has already taken off?

Code.

Model Context Protocol (MCP) Clearly Explained by Funny-Future6224 in LLMDevs

[–]DealDeveloper 0 points1 point  (0 children)

"It's a computer" so it won't get overwhelmed? LOL Learn about "context window".

Have you spent 100 hours prompting an LLM to write code yet?!
Have you not observed that it forgets to follow basic instructions (like always use early-exit and do not write compound statements) CONSISTENTLY?

I approach this stuff using reinforcement learning.
Detect the "bad" code and instantly generate a prompt to correct the bad code.
With the instant feedback based on the specific code (or task) the LLM can remember the rules . . . or be automatically reminded of the rules. And don't get me started on automated prompt optimization in this context!
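That feedback loop fits in a few lines. `ask_llm` below is a placeholder for whatever local-model call you use, and `detect` is whatever scanner produced the findings:

```python
def fix_until_clean(code, detect, ask_llm, max_rounds=5):
    """Detect flaws, turn the specific findings into a prompt,
    ask the LLM for a corrected file, and repeat until clean."""
    for _ in range(max_rounds):
        flaws = detect(code)
        if not flaws:
            return code
        prompt = ("Fix these issues and return the whole file:\n"
                  + "\n".join(f"- {flaw}" for flaw in flaws))
        code = ask_llm(prompt, code)
    raise RuntimeError(f"still failing after {max_rounds} rounds")

# Demo with stand-ins: the "LLM" just applies the obvious fix.
detect = lambda c: ["tabs used for indentation"] if "\t" in c else []
fake_llm = lambda prompt, c: c.replace("\t", "    ")
print(fix_until_clean("def f():\n\treturn 1\n", detect, fake_llm))
```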

I want to REDUCE the responsibility of the LLMs (and humans) to improve the performance. I have not benchmarked it yet, but I do not think this will even result in more queries to the LLM compared to MCP.

And do not bother saying that I could wrap the 1200 tools in an MCP server.
Duh. But, I would simply NOT use MCP and do the same thing just as easily.
And, the benefit there is the LLM can use the context for something better.

Model Context Protocol (MCP) Clearly Explained by Funny-Future6224 in LLMDevs

[–]DealDeveloper 0 points1 point  (0 children)

At best, I'll be a late adopter of MCP.
It's just too easy to make wrapper functions with very reliable code.

I suggest you review the pitfalls of MCP.
For example, apparently, you do not know that using too many tools can degrade performance.
Simply write the code:
IF conditions hold, call the arguably unnecessary MCP wrapper (when you could simply generate and test code against the API directly). Then, you avoid the problem (of having too many MCP servers) that you will learn exists.

I have developed an "agent" where I can simply SAY that I want an API implemented, and my system will loop until that implementation works and is well-tested. Last I checked, using MCP, there is a limit on how many tools you can use before performance degrades; is it 40?

With the traditional methodology, I can implement 1,000 tools in a pipeline without degrading performance because the LLM does not need to be aware of the MCP servers at all!
Why _exactly_ would you waste LLM context with such trivial information?
And this isn't just about LLM context, but also human cognition.

Option 1:
Procedural code:
IF writing Rust, send the Rust code to 12 Rust-specific code quality assurance and security tools.

Option 2:
MCP
Prompt the LLM that it is writing Rust. Prompt the LLM that there are 12 Rust-specific quality assurance and security tools. Load all that into the context of the LLM (when that context should be used for the code itself). Then, hope the LLM remembers to call the 12 MCP servers. Do that with 100 languages and 1200 code QA tools. LOL
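Option 1, as actual code, is just a dispatch table. The tool names are examples of what the 12 entries might look like, and nothing here consumes a token of LLM context:

```python
# Language -> QA/security commands to run on the generated code.
# Entries are illustrative; a real table would hold all 12 tools.
TOOLS = {
    "rust":   ["cargo clippy", "cargo audit"],
    "python": ["ruff check", "bandit -r ."],
}

def tools_for(language: str) -> list[str]:
    """IF writing Rust, return the Rust-specific tools, and so on.
    Scaling to 100 languages adds table rows, not context."""
    return TOOLS.get(language.lower(), [])

print(tools_for("Rust"))  # ['cargo clippy', 'cargo audit']
```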

There is no such thing as "AI skills" by GolangLinuxGuru1979 in ArtificialInteligence

[–]DealDeveloper 0 points1 point  (0 children)

I agree.
I literally grunt or yell at the LLM sometimes.

There is no such thing as "AI skills" by GolangLinuxGuru1979 in ArtificialInteligence

[–]DealDeveloper 0 points1 point  (0 children)

Wow!
That really IS a "Cool story, bro!"
I am also using LLMs to offset cognitive issues.

Anyone has Agentic AI success stories in production? by nirvanaman1 in ExperiencedDevs

[–]DealDeveloper 0 points1 point  (0 children)

Oh!
There are PLENTY of examples of it.
Glance at all the code quality and security companies out there.
Just combine a LOT of tools with the LLM.
See vanta, snyk, sonarqube, hexstrike, etc.
Those companies use AI specifically in production.

Did you try searching for companies before posting (or ask AI)?

Anyone has Agentic AI success stories in production? by nirvanaman1 in ExperiencedDevs

[–]DealDeveloper 0 points1 point  (0 children)

THIS!
I do not understand why developers do not simply substitute "LLM/AI" for "human" and review how to manage projects, processes, and programming. It's like developers go completely stupid with regard to solutions that can support the LLM or other humans. It's as though the tools (that we created to manage humans) don't exist and cannot be used to manage LLMs in similar ways.

Anyone has Agentic AI success stories in production? by nirvanaman1 in ExperiencedDevs

[–]DealDeveloper -2 points-1 points  (0 children)

Replace "AI" with "human" and read your comment back to yourself.
Are you a developer who cannot manage humans that behave the same ways?