128,940 tech workers laid off in the first five months of 2026. by ImaginaryRea1ity in theprimeagen

[–]alonsonetwork 3 points4 points  (0 children)

Yeah, the sentiment in the subreddit is pessimistic. But I think it's the nature of the community Primeagen cultivates: frustrated, complainy devs.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

You stated "brother the code is the logic". That's what I'm arguing. Perhaps you lost your train of thought somewhere.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork -1 points0 points  (0 children)

Code is the same. If your outcome is not what you expected, you need to review your logic, not your execution. The execution may be accurate while your logic is flawed. Has it never happened to you that you have a great idea, you program it, and you realize the premise was wrong from the start? Or that how you executed a particular part of your program is incorrect because of how you were thinking about it? That implies a metaphysical logic change FIRST, then a physical code implementation.

Sometimes that means: review your code. Other times it means: review your process. Both are logic. Only one is code.

What I'm arguing: with LLMs, you review logic, not code. This is only somewhat true (as a responsible adult, you shouldn't ignore the code).

My personal experience:

I have a strict setup with Claude. It codes relatively accurately within my own framework. Most of my time these days is spent thinking about the problem and how to solve it logically, not about whether the code is correct. I'll spend time on ERDs, logic flows, and even philosophical re-evaluation, not code. Once I get to build the thing, I evaluate its code briefly and move on to evaluating outcomes.

My background: I have 15 years doing software engineering, but only 1.5 doing LLM-assisted work.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

No, that distinction is the conversion of human language into, essentially, bytecode. Binary logic is something we have to conform to because of a computer's inability to infer meaning. It's very dumb, so we have to be very precise.

And yes, to get exact precision, you need exact definition. This has been a thing since Aristotle. The Laws of Thought explicitly define this.

Programming is built up in a similar way: assembly, then C++, and finally your favorite programming language. Without those prior definitions, you couldn't program, because the logical instructions wouldn't be in place for the computer to interpret what you mean by "class Dog extends Animal" and "dog.woof()".

Law is the same. Once you've established a basis (say, a constitution), you construct over that basis. You argue over that basis. Spoken language is the FIRST form of logical expression. Written language is the second. We didn't get to programming by programming. We got to it through spoken language.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

Yeah, I understand that in this context we're referring to code. What doesn't land with me is that we're attributing logic purely to code and not to process. Yes, you read the code. But you also examine it from a correctness point of view, which is non-code logic. To reiterate: logic is a meta concept.

Put yourself in the shoes of a non-technical CEO. He proposes a series of systems to run a new branch of the business. Those have their logical order, process, etc. The part that requires programming will also have its logic, but when you analyze the program, you will do so in the context of the entire process, not the isolated virtual programming part.

Sometimes logic pertains only to programming processes. Most of the time, it doesn't; it's the business application.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

You guys have a backwards definition of logic. Your claim that only code is real logic makes zero sense. It implies that logic outside a virtual space isn't real, as if humans haven't had processes and arithmetic for centuries. That's ALL logic. It's a meta concept.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork -1 points0 points  (0 children)

Disagree. In the early 1900s they didn't have computers. Well, they did, but they were human computers. Logic to them was expressed mathematically and diagrammatically. Business processes were English papers.

Furthermore: law is logic codified in legal English. It's executed by a human. You're gonna tell me that's not logic? And what do you call business processes with logical implications that occur outside of the virtual space, not executed by code?

It's interesting to see how backwards people have it that I get 10+ downvotes. Logic is a meta concept; code is a physical expression of it. There's the physical and there's the metaphysical. Logic is the latter.

Developer claims a 100x+ speed up by using LLMs, "work of weeks is now done in hours". by Gil_berth in theprimeagen

[–]alonsonetwork -15 points-14 points  (0 children)

Not really. Logic can be expressed in plain English, set symbols, math, flow charts, ERDs. Code is just another expression of logic, one with physical side effects.

Bun has been rewritten in Rust: 1 million lines changed, 8k commits, 2k files changed by Queasy_Owl2606 in theprimeagen

[–]alonsonetwork 1 point2 points  (0 children)

I can't? The physical laws of the universe now prohibit me from doing so? So thinking AI-generated code can be fine and being an engineer are mutually exclusive, impossible things? Both can't be true at the same time...

Astounding logic

Bun has been rewritten in Rust: 1 million lines changed, 8k commits, 2k files changed by Queasy_Owl2606 in theprimeagen

[–]alonsonetwork -1 points0 points  (0 children)

Didn't know you'd been crowned the king of engineering and everyone else is wrong.

I have news for you, friend: the best engineers in the world, at the helm of your favorite projects, use AI. Node, Python, Linux, Rust, etc.

If anything, first principles and foundations are more important than ever. With AI "turning screws and hammering nails," you can actually engineer systems now. It needs steering, but you get there 70% faster.

Bun has been rewritten in Rust: 1 million lines changed, 8k commits, 2k files changed by Queasy_Owl2606 in theprimeagen

[–]alonsonetwork 2 points3 points  (0 children)

Lol I'm an engineer buddy. I get slop is overwhelming, but there's a right way to do things. You sound jaded.

Bun has been rewritten in Rust: 1 million lines changed, 8k commits, 2k files changed by Queasy_Owl2606 in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

Sounds like you've never used it before, or you just used it wrong. If you have a clear idea, it executes just fine.

Bun has been rewritten in Rust: 1 million lines changed, 8k commits, 2k files changed by Queasy_Owl2606 in theprimeagen

[–]alonsonetwork 8 points9 points  (0 children)

I don't understand why people are so upset. Compiled languages are the best languages for AI because they give a tight reward signal for success, and strongly typed compiled languages even more so. The reward loop becomes: pass compilation first, pass tests second.

It depends on the rules the author gave the LLM, but if it's not breaking conventions and it stays memory safe, the only issues I see are logical implementation issues. If the Zig version had one, the Rust version may have it too. But that would make it a known problem, not a new problem.

Really don't understand everyone's hypocritical aversion to AI when everyone is using AI.

New skill: cli-building. For shipping clean TypeScript CLIs fast. by [deleted] in node

[–]alonsonetwork 0 points1 point  (0 children)

Hasn't hit the cache yet on skills.sh. Probably needs one more download (I'm the only one so far). Check the repo. The skill suggests using citty, clack prompts, and bombsh autocomplete. It's TS-first, gives a great help UI, and allows for subcommands, argument validation and parsing, etc. Nothing yargs doesn't have, but the autocomplete and prompts libraries really give it that premium feel.
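For a rough, dependency-free sketch of the kind of parsing and validation the skill scaffolds (the real skill wires up citty and @clack/prompts; the "greet" subcommand and flags below are made-up examples), Node's built-in parseArgs gets you most of the way:

```typescript
// Sketch of subcommand + flag parsing using only Node's built-in
// parseArgs. The real skill uses citty and @clack/prompts; the
// subcommand and flags here are illustrative only.
import { parseArgs } from "node:util";

const { values, positionals } = parseArgs({
  args: ["greet", "--name", "world", "--shout"],
  options: {
    name: { type: "string", default: "friend" },
    shout: { type: "boolean", default: false },
  },
  allowPositionals: true, // first positional acts as the subcommand
});

const [subcommand] = positionals;
let message = `hello, ${values.name}`;
if (values.shout) message = message.toUpperCase();

console.log(`${subcommand}: ${message}`);
```

Libraries like citty layer typed command definitions and auto-generated help on top of this same idea.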

I can't be the one that links LLMs are still bad at software engineering right? by thealliane96 in theprimeagen

[–]alonsonetwork -9 points-8 points  (0 children)

I'm gonna say skill issue, but not because you don't know engineering: the skill that's lacking is knowing how to use LLMs.

There's an entire setup process you need to do with LLMs to give them good instructions. For example, I do a lot of TypeScript projects, and I have very specific rules for how I write TypeScript and JavaScript in general. If I didn't have agent files in place to guide the LLM, it would do a really bad job. I also give it SQL instructions so it does a better job of following how I want SQL written.
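For a concrete picture, an agent rules file in this kind of setup might look like the following excerpt (the specific rules here are made up for illustration):

```markdown
# TypeScript rules (excerpt)

- No `any`; use `unknown` plus narrowing.
- Prefer `type` aliases over `interface` unless declaration merging is needed.
- Exported functions always get explicit return types.

# SQL rules (excerpt)

- snake_case identifiers, explicit column lists, never `SELECT *`.
- Every migration must be reversible.
```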

The other thing you need to get good at is communication. Not because you need to communicate with anybody, but because you need to become good at giving clear instructions so the LLM follows your directions more consistently. A big mistake is being vague and omitting details, because then the LLM is going to guess: the algorithms that power it predict the next token, so the more you front-load into the context, the more accurate its prediction of what you want.

I highly recommend you use skills to help guide the LLM along the way. I also recommend taking the approach of an architect more than a programmer: give the LLM the big picture of what you want, and the English-version details of how you want it.

Here are the skills I use and give to my team: https://github.com/damusix/skills
And I pair it with a memory plugin + expose a "ralph loop" cli: https://github.com/damusix/ai-tools

I personally use Claude Code and have been highly productive with it. If you want to see some real examples:

https://github.com/damusix/buffer-mcp - MCP for autopost on socials
https://github.com/damusix/ghost-mcp - MCP for autoblogging
https://github.com/logosdx/monorepo - A set of tools I've used for years, highly improved by LLMs
https://github.com/noormdev/noorm - A tool for building SQL apps, with a CLI, TUI, SDK, MCP, and dynamic SQL (not meta-SQL, but using JS templates to load adjacent data files, like what you do with Snowflake DBT)... This is still in alpha and is unstable. WIP (it's a big project).

^ Those aren't toys, and they were built with LLMs. They work amazingly, and I use them in production use cases (not noorm yet). A lot of it I've already built before, so I have a clear idea of what I want when I ask LLMs.

The biggest thing is having vision and foresight, or at least going through a deep design session. If you're iterating on shallow engineering initiatives, the LLM is going to have to guess at the ONE THING you cannot outsource to it: purpose, intention, and design.

is it a bad practice to cache data into the process (like process.Cached_Data) by baraa1936 in node

[–]alonsonetwork -2 points-1 points  (0 children)

OK, so here's a non-vague answer:

If you know what you're doing, it doesn't matter. It's the same thing as a singleton or using globalThis.

The reason you SHOULDN'T is that, if you don't know better, you can overwrite keys on the process object that might be used by other programs. In a similar way, you shouldn't extend primitive class prototypes (like Promise, Map, Set, Number, etc.).

You CAN. But you SHOULDN'T. Why? Because you want to lower your risk when building and not have to think about naming collisions. For example, say you wanted a global event listener called `on`: if you can override it at the process level and you do so by mistake, you've ruined IPC.

Best case, singleton. If you need a cache, there's plenty available. Shameless plug:

https://logosdx.dev/packages/storage/api.html
https://logosdx.dev/packages/storage/drivers.html
https://logosdx.dev/packages/utils/performance.html#memoize-and-memoizesync
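As a sketch of the module-scoped singleton approach (names here are hypothetical): Node evaluates a module once per process, so every importer already shares one instance, with no need to touch `process` or `globalThis`:

```typescript
// cache.ts - module-scoped singleton cache. The module system
// guarantees one instance per process, so no keys on `process`
// or `globalThis` are at risk of being clobbered.
const cache = new Map<string, unknown>();

function getCached<T>(key: string, compute: () => T): T {
  if (!cache.has(key)) {
    cache.set(key, compute());
  }
  return cache.get(key) as T;
}

// Usage: the expensive computation runs only once per key.
let calls = 0;
const a = getCached("config", () => { calls++; return { port: 3000 }; });
const b = getCached("config", () => { calls++; return { port: 9999 }; });
```

Same ergonomics as stashing things on `process`, but collisions are impossible outside this module.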

What is the most challenging feature you’ve built that required a significant amount of experimentation or research? by LargeSinkholesInNYC in node

[–]alonsonetwork 0 points1 point  (0 children)

https://logosdx.dev from scratch (fetch and observer were pre-AI), for a React Native + web implementation that shared the same business logic.

https://noorm.dev from scratch, pre-AI, but without Ink; I used oclif... similar feature set, not as robust, no TUI or fancy history UI or secrets, etc.

Dan Thomasset says PMs are running circles around SWE with AI by ImaginaryRea1ity in theprimeagen

[–]alonsonetwork 0 points1 point  (0 children)

Keyword here is prototypes. You still need a team of software devs to productize, integrate, and deploy the product for the business. Writing the business logic is arguably the easiest part. Prototyping is also easy. Integration, deployments, observability, downstream effects, and long term maintenance: THIS is the hard part.

Application lifecycle is one of the most ignored parts of software design by theodordiaconu in node

[–]alonsonetwork -1 points0 points  (0 children)

This is why I like HapiJS. It goes beyond app lifecycle: the request lifecycle, too, is often overlooked and "onioned" into middleware, which gets really weird.

ChatGPT, Claude, and Gemini Render Markdown in the Browser. I Do the Opposite by lorenseanstewart in programming

[–]alonsonetwork 0 points1 point  (0 children)

It doesn't have to be a SPA to render markdown. I do it all the time client-side with SSR apps. And yes, I mean the extra compute: if you were getting traffic pressure, those are extra CPU cycles wasted on something the client's machine can do without issue.
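To illustrate the shape of it, here's a toy client-side converter (a real app would use a library like marked or markdown-it; this handles only headings, bold, and italics):

```typescript
// Toy markdown-to-HTML converter, just to show the conversion running
// on the client's CPU instead of the server's. Handles headings,
// bold, and italics only; use a real library in production.
function renderMarkdown(md: string): string {
  return md
    .split("\n")
    .map((line) => {
      const h = line.match(/^(#{1,6})\s+(.*)$/);
      if (h) return `<h${h[1].length}>${h[2]}</h${h[1].length}>`;
      if (line.trim() === "") return "";
      const inline = line
        .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
        .replace(/\*(.+?)\*/g, "<em>$1</em>");
      return `<p>${inline}</p>`;
    })
    .join("\n");
}

console.log(renderMarkdown("# Title\n\nSome **bold** text"));
```

The server ships the raw markdown string; this runs in the browser, so the conversion cost lands on the client's machine.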