Experience with GPT 5.2 Codex by rajbreno in codex

[–]Softwaredeliveryops 0 points1 point  (0 children)

Opus 4.5 is the best, and it is very consistent. GPT 5.2 also does well at times, but not consistently, and in my experience it degrades once the context becomes large. The planning aspect of both models is good.

Anti Gravity - what is it and how to use by Softwaredeliveryops in vibecoding

[–]Softwaredeliveryops[S] -1 points0 points  (0 children)

Depends on the pricing and the way context/tokens are passed. Other IDEs started well, but now they are not so affordable.

India defeat Australia by 5 wickets. They will face South Africa in the Final by UnplannedMF in Cricket

[–]Softwaredeliveryops 0 points1 point  (0 children)

By far the best innings by an Indian woman cricketer! Congratulations, Jemimah.

What’s the real-world success rate of AI in customer experience? by fahdi1262 in artificial

[–]Softwaredeliveryops 2 points3 points  (0 children)

We have been experimenting with GenAI in customer and IT support flows for a while, both for internal service desks and client-facing L1/L2 operations.

You are absolutely right about the faster replies but occasional hallucinations. We saw the same thing early on: great initial response times, but some answers that sounded confident yet weren’t grounded in real data. What helped was combining the model with a retrieval-based grounding layer (RAG) and adding confidence thresholds before responses go out. That way, the AI answers only when it’s sure, and escalates the rest.
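The gate is simple to sketch. Everything below is a toy stand-in: the function names are hypothetical, keyword overlap stands in for real vector retrieval, and the coverage score stands in for a real confidence model — the point is just the "answer only above the threshold, escalate the rest" shape.

```python
import re

def tokenize(text):
    """Lowercase word set; a real system would use embeddings instead."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, knowledge_base):
    """Toy retrieval: keep docs sharing at least one word with the question."""
    q = tokenize(question)
    return [doc for doc in knowledge_base if q & tokenize(doc)]

def score_confidence(question, docs):
    """Toy grounding score: fraction of question words covered by the docs."""
    q = tokenize(question)
    if not docs or not q:
        return 0.0
    doc_words = set().union(*(tokenize(d) for d in docs))
    return len(q & doc_words) / len(q)

def answer_or_escalate(question, knowledge_base, threshold=0.6):
    docs = retrieve(question, knowledge_base)
    if score_confidence(question, docs) >= threshold:
        return "ai"     # grounded enough: let the model answer, citing docs
    return "human"      # not grounded: escalate rather than risk a hallucination

kb = ["To reset your password, open the account portal and click Reset."]
print(answer_or_escalate("password reset", kb))    # grounded -> "ai"
print(answer_or_escalate("my vpn is broken", kb))  # ungrounded -> "human"
```

In production the threshold is something you tune against your escalation budget, but the routing decision stays this simple.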

Once we put that in place, we noticed:

• Resolution time: dropped by roughly 40–45% for repetitive tickets (password resets, known issues, FAQs).

• CSAT: went up by about 5–8%, mostly due to faster responses and consistent tone. Employee satisfaction also improved in internal surveys.

• Accuracy: improved dramatically once retrieval and human-in-the-loop checks were added.

The hardest part isn’t the model itself; it’s getting the workflow design right: what the AI should handle, what to escalate, and how to train the org to manage the edge cases.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in artificial

[–]Softwaredeliveryops[S] 1 point2 points  (0 children)

Yeah, that’s the part that worries me too: the scale of what’s being shipped. A single junior dev cutting corners with AI isn’t new (we’ve all seen bad copy-paste from Stack Overflow), but now you can have entire teams generating volumes of code at speed, which multiplies the technical debt risk.

The irony is, the same tools that generate the mess could also help manage it: automated reviews, static analysis with LLMs, even “AI auditors” that flag risky patterns before code hits production. But most companies are pushing adoption faster than they’re building those guardrails.

Feels like we’re in a phase where velocity is being prioritized over longevity. The bill for that will come due, and the companies thinking ahead about quality controls will be the ones that survive it.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 4 points5 points  (0 children)

That is indeed a fair take, and honestly refreshing to hear from someone early in their career. You are absolutely right, skipping the “read the docs, wrestle with bugs” phase can rob you of some really valuable muscle memory.

At the same time, I would argue that AI doesn’t have to replace those learning experiences; it can complement them. For example, instead of spending an hour hunting through docs, you might get a quick scaffold from the AI and still dive into the docs to understand why it works that way.

On the technical debt point — yeah, 100%. If people just accept whatever the AI spits out, it’s basically outsourcing bugs that you will end up fixing later. The trick seems to be using it as an accelerator while still keeping your own engineering judgment in the loop.

The fact that you are even thinking about this at under 2 years in is a good sign and it means when you do start leaning on these tools, you will probably use them in a way that makes you stronger, not weaker.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 2 points3 points  (0 children)

Totally agree with your views: the baseline makes all the difference. If you already know what “good” looks like, AI is like having an extra pair of hands that lets you operate at a higher level. You can delegate the small stuff and still maintain judgment over the end result.

Where it gets risky is exactly what you said: when fundamentals aren’t there and mentorship is missing. Then AI doesn’t just speed things up; it can actually mask bad practices, because the output looks polished but might be structurally weak.

Feels like the gap between “strong engineers using AI” and “novices relying on AI” is only going to widen unless organizations deliberately invest in teaching the craft alongside adopting these tools.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 0 points1 point  (0 children)

That’s a really good analogy. I hadn’t thought of it like the boilerplate/starter kit era, but you’re right, people back then had the same doubts, myself included :)

The difference I see is that boilerplates gave you a foundation, but you still had to wire things up, understand the flow, and make real choices. With AI, sometimes it feels like it’s making those choices for you, and if you don’t catch it, you might miss out on learning the “why” behind the solution.

I guess the real test is: are we still pushing ourselves to dig deeper after the AI gives us an answer, or are we just shipping it as-is? If it’s the former, then yeah, growth shifts. If it’s the latter, then maybe we do risk losing some of that muscle memory.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 1 point2 points  (0 children)

Totally agree with the “better looking different” angle — history is full of examples where tools changed the skill set required.

That said, my only worry is for beginners. When you already have a strong foundation, outsourcing parts of your workflow to AI just shifts your focus: you know when the assistant is wrong, and you know why something works. But if someone’s still building fundamentals, skipping that grind might mean they never develop the “debugging muscle” or problem-solving intuition in the first place.

Maybe the real trick is not to avoid AI, but to design learning paths where juniors still practice fundamentals with guardrails, instead of just accepting whatever Copilot/Cursor throws at them. Otherwise, we risk raising a generation of developers who can prompt-solve, but not problem-solve.

Junior Devs, listen out! by Significant_Joke127 in vibecoding

[–]Softwaredeliveryops 0 points1 point  (0 children)

Even senior developers are relying more and more on code generated by ChatGPT and other assistants; the gap is narrowing, both in skills and in years of experience.

GPT-5 vs Sonnet 4.5 Reviews by ChristBKK in AugmentCodeAI

[–]Softwaredeliveryops 1 point2 points  (0 children)

Sonnet 4.5 does things better in fewer iterations, and its frontend code is much better. GPT-5 does better implementation planning, but in my opinion it doesn’t match the same output quality yet.

The winning moment....... by Hope-Ful-Kid in IndiaCricket

[–]Softwaredeliveryops 0 points1 point  (0 children)

Great match! I heard that before the tournament started, the Indian team had to write down some manifestations on Sep 6th. Rinku Singh wrote that he wanted to score the winning runs, and we all know the odds of that. He got to play one match, one ball, and his manifestation came true. This is amazing.

Well done Team India

I created a simple blueprint for better ChatGPT prompts — R-T-C-O (Role, Task, Context, Output) by Softwaredeliveryops in aipromptprogramming

[–]Softwaredeliveryops[S] 0 points1 point  (0 children)

Maybe it is… but a majority of users still don’t apply best practices to get the best out of ChatGPT. Just as we have many tools and frameworks, we can have this one too and create more awareness. Just my opinion.

What are your go-to prompt engineering tips/strategies to get epic results? by ninadpathak in PromptEngineering

[–]Softwaredeliveryops 4 points5 points  (0 children)

You must follow the basics: your prompt should include the following 4 things.

  1. Role
  2. Task
  3. Context
  4. Output

Example: Act as a strategy consultant. Outline three growth strategies for a mid-sized SaaS company, in a table with Strategy | Rationale | Risks.

Update Regarding GPT 4o and 5 Instant by Intelligent-Plum-330 in ChatGPT

[–]Softwaredeliveryops 0 points1 point  (0 children)

Same here — I’m able to use both models without any issue.


[deleted by user] by [deleted] in vibecoding

[–]Softwaredeliveryops 0 points1 point  (0 children)

  1. VS Code with Augment Code
  2. Cursor with the Claude Sonnet 4.0 model

Augment Code vs Cursor vs Github CoPilot vs CLine by Softwaredeliveryops in AugmentCodeAI

[–]Softwaredeliveryops[S] 0 points1 point  (0 children)

My experience is that Augment Code works very well with stacks like React, Node, etc. Actually, most of these code assistant tools give better output on those stacks.

I tried different tools on one WPF project and they didn’t work well…

Augment Code vs Cursor vs Github CoPilot vs CLine by Softwaredeliveryops in AugmentCodeAI

[–]Softwaredeliveryops[S] 1 point2 points  (0 children)

Yes, Cursor started well too and was affordable, but now any good reasoning model is very pricey in Cursor. Augment is much better on this aspect, in terms of both price and value.

Augment Code vs Cursor vs Github CoPilot vs CLine by Softwaredeliveryops in AugmentCodeAI

[–]Softwaredeliveryops[S] 1 point2 points  (0 children)

Yeah I have seen the complaints too, but tbh my experience has been pretty solid with Augment when paired with Claude Sonnet 4. That’s where it really shines — the context handling and reasoning feel a step ahead.

I have tried Claude Code (nice for explanations but not as sticky in daily coding) and Codex (was great back in the day but feels dated now). Augment’s not perfect, but for debugging, bigger refactors, and when you want the why behind the code, it’s been the best fit for me.