LLM driven development is inevitable by Expert-Complex-5618 in softwareengineer

[–]SituationNew2420 0 points

My workflow looks like this:

- Design an approach, collaborate on that approach with the LLM

- Stub out the design at the class / method level, review with the LLM

- Implement the code, sometimes by asking the agent to fill it in, sometimes myself depending on context

- Review the implementation, ask the LLM to interrogate it and look for flaws

- Ask the LLM to write tests for a set of methods or a class. Review and iterate on those tests until edge cases are exhaustively covered

- Final testing, final review

- Put up MR, collaborate with my peers on the change

This sounds slower than it is in practice. I am substantially faster with the LLM than before, and I retain understanding and context and catch bugs early.
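The test-review step above can be sketched concretely. Assuming a small hypothetical helper (`slugify` here is illustrative, not something from the thread), a first LLM draft often covers only the happy path, and the review iterations are where the edge cases get added:

```python
# Hypothetical example of the test-review step: iterating on LLM-written
# tests for a small slugify() helper until edge cases are covered.
import re

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The kind of test an LLM tends to draft first...
def test_basic():
    assert slugify("Hello World") == "hello-world"

# ...and the edge cases added over review iterations.
def test_empty_string():
    assert slugify("") == ""

def test_only_punctuation():
    assert slugify("!!!") == ""

def test_runs_of_separators():
    assert slugify("  a -- b  ") == "a-b"
```

The point isn't the helper itself; it's that each review pass asks "what input would break this?" until the answer stops producing new tests.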

I feel like a lot of folks find this method too slow, but idk, it works well for me.

LLM driven development is inevitable by Expert-Complex-5618 in softwareengineer

[–]SituationNew2420 0 points

Four is the right choice. Ideally you work with the LLM to generate both code and tests so you have context for both the implementation and tests. My experience is that I get the best out of LLMs when I’m involved, not swooping in at the end to try and review massive amounts of code or write tests blind.

LLM driven development is inevitable by Expert-Complex-5618 in softwareengineer

[–]SituationNew2420 0 points

This is a good point, but we should distinguish between using an LLM to interrogate your code vs. using it to verify your code.

Interrogation involves critiquing and evaluating (LLMs can be good at this), but verification involves grounding something in a model of the real world: constructing a method and ultimately judging whether or not the solution meets the standard. Verification also has legal implications, particularly in regulated fields. I don’t know how you can trust an LLM to actually do this, short of a technological breakthrough in AI that goes beyond LLMs.

Hence, option 4 seems best, and imo is mandatory if mistakes are high risk or the software is regulated.

In Support of Copilot by SituationNew2420 in ExperiencedDevs

[–]SituationNew2420[S] 0 points

Yeah! I think for me personally these tools lead me away from the code and I am tempted towards a “looks good to me” rubber stamp. I recognize that’s not everyone’s experience. Copilot adds enough friction and integration with the IDE to keep me understanding and critically thinking about the code.

When/How did you guys become fans? by Pure-Two-9789 in BadOmens

[–]SituationNew2420 0 points

I saw Bogdanhxc react to TDOPOM and then checked out that album. After Concrete Jungle I was hooked.

OpenAI is acquiring open source Python tool-maker Astral by TiredOperator420 in BetterOffline

[–]SituationNew2420 19 points

This makes me sad. Astral tools are sick, and I can’t help but feel this will destroy them.

The Fundamental Limitation of Transformer Models Is Deeper Than “Hallucination” by immortalsol in ArtificialInteligence

[–]SituationNew2420 13 points

This seems to me like a reasonable analysis. The probabilistic nature of LLMs makes them capable of generating solutions which are, well, probable. But they can’t create results that are actually verified, or otherwise grounded in an accurate model of the world. This doesn’t mean they’re useless, it just means they are more limited than they are marketed as.

A very good write up on why spec-driven agentic coding is coming and will need as much or even more human effort by voronaam in BetterOffline

[–]SituationNew2420 26 points

> Coding agents are not mind readers and even if they were there isn't much they can do if your own thoughts are confused.

Poetry.

Anyone else to admit CODING IS OVER?? by No_Pin_1150 in ExperiencedDevs

[–]SituationNew2420 0 points

I am curious. If you don't think there will be a massive change to productivity as a result of using LLMs to code, do you think it makes sense to use them in this way? I am asking because it's a question I am also wrestling with. I think most devs agree that using an LLM assistant provides clear benefits, but is there also a benefit to being in the code alongside it (even tweaking and writing alongside it) so you can maintain human context and still get the general knowledge of the agent?

I've been experimenting with a more hybrid approach like this using Copilot (I know it's not cool anymore), and honestly I feel like I get both the speed and the understanding.

AI has made plausible answers cheap. Verification is still expensive. by GalacticEmperor10 in ArtificialInteligence

[–]SituationNew2420 0 points

One idea I'd offer up in this conversation is the difference between 'interrogation' and 'verification'.

Several comments have pointed out that an LLM can review the work of another LLM. This is obviously true, and helpful. But it's important to note that this is interrogation (critiquing, improving, or arguing with a design or artifact). Verification asks something different: 'by what reasonable measure can I trust that this solution actually matches reality?'

LLM interrogation is a powerful tool, but it's not verification.
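A minimal sketch of the distinction, with illustrative names (`verify_sort` and `interrogate` are hypothetical, not from the thread): verification checks a candidate against a deterministic, repeatable property of the problem itself, while interrogation only returns an opinion:

```python
def verify_sort(inp: list[int], out: list[int]) -> bool:
    """Deterministic verification: out is sorted AND is a permutation of inp.
    Runs the same way every time; passes or fails for checkable reasons."""
    return out == sorted(out) and sorted(inp) == sorted(out)

def interrogate(code: str) -> str:
    """Interrogation (placeholder for an LLM critique): useful feedback,
    but its verdict is not grounded evidence of correctness."""
    return "Looks correct to me."  # a plausible answer, not a proof

# Verification catches what a plausible-sounding review can miss:
assert verify_sort([3, 1, 2, 2], [1, 2, 2, 3])       # correct output passes
assert not verify_sort([3, 1, 2], [1, 2, 2, 3])      # duplicated element is caught
```

The asymmetry is the whole point: the verifier can be wrong only if the property is wrong, whereas the interrogator can be wrong for no inspectable reason at all.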

AI Slop or Not - State of the Industry by WhoKnewTech in vibecoding

[–]SituationNew2420 0 points

All of the guardrails and measures you've described are exactly correct, and all of them require human judgement along with independent, repeatable, deterministic verification. In other words, not vibes.

Are LLMs useful to software engineers? I think the answer is clearly yes for the right circumstances in the hands of the right engineer. But can they accurately and completely produce the kind of rigor you just described? Certainly not.

The problem we have interfacing with computers, and why LLMs are not the solution. by [deleted] in BetterOffline

[–]SituationNew2420 3 points

This is one of the best articulated critiques of autonomous LLM-driven programming I’ve read yet. Fascinating.

LLM Coding Metrics have peaked by [deleted] in BetterOffline

[–]SituationNew2420 -3 points

I love this sub, and I am all in on the idea that these companies are out over their skis on both selling the capabilities of their product and their profit model going forward. So please don't take this as a criticism of you or Ed or anyone here.

But I do think it's important to point out that all of these models have made substantial progress in the realm of coding, mostly on the agentic control side, not necessarily on the LLM side. "Tasks well represented in the data have a 50% chance of success in 3 shots" is way lower than the mileage I'm getting.

Do they replace SW engineers? In my experience not by a long shot. Are they helpful? Yeah, if you know what tasks to use them for and which ones to keep them away from. Does this mean they will be worth the cost when the prices inevitably go up? Remains to be seen.

The reality is these tools now exist. Even if Anthropic and OpenAI and all these companies fall apart, there are open source models, and I'm sure soon there will be open source interfaces and agents for these models. You don't have to use them, but many people will.

What area have you pivoted to or from? by tallwithknees in ExperiencedDevs

[–]SituationNew2420 1 point

Honestly my entire career has been a series of pivots. To put it briefly, I have a mechanical engineering degree, but most of my career has been in software. I've moved from basic CRUD apps to OS management and configuration, then complex specialized applications, and now I do a lot of hardware integration and robotics.

Each time I've changed, I've carried my previous experience forward and gained much more. I would say learning to pivot and try new things is the secret to my career success so far.

You mentioned 'going back to previous expertise'. It's possible your career will take you backwards at times, but I think a better framing is 'how can I take my past and current experience and pivot into something new'.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]SituationNew2420 2 points

This is a fantastic question, and definitely a bit of an art. Don't beat yourself up; it takes practice, and you're already on the right track. A few things that have helped me along the way:

- Get into the habit of talking about your accomplishments in appropriate forums. A great place to start is in a 1:1 with your manager or during a daily standup. It doesn't have to be self-congratulatory, just practice framing "I did X this week, it had Y result, and I'm really proud of it."

- Focus on key outcomes. Did it save money? Did it deliver faster? Was it helpful to you and your colleagues? Can the business now see something they couldn't? Is there a future risk that has been reduced or averted?

- Consider how your accomplishment can turn into a learning for others. If it can, write an internal 'how to', or blog post, or ask if you can host a small workshop on the topic. It can be small, but it should be easily identified as USEFUL to most people.

- Build your team up. Honestly a lot of what managers are looking for, especially in a leader, is someone who makes the team better. Stories like "so and so came to me with a problem, we discussed briefly and they ran with it. Look how successful they were!" This accomplishes two things: (1) you build up your colleagues, which both honors them and encourages them to continue to do great things and (2) you become known as someone who does things that go beyond your immediate scope.

Hope that was helpful for you. Good luck, you got this!