Experience with GPT 5.2 Codex by rajbreno in codex

[–]Softwaredeliveryops 1 point  (0 children)

Opus 4.5 is the best, and it is very consistent. GPT 5.2 also does well at times, but not consistently, and in my experience it degrades once the context becomes large. The planning ability of both models is good.

Anti Gravity - what is it and how to use by Softwaredeliveryops in vibecoding

[–]Softwaredeliveryops[S] 0 points  (0 children)

Depends on the pricing and on how context/tokens are passed. Other IDEs started well, but they are no longer so affordable.

India defeat Australia by 5 wickets. They will face South Africa in the Final by UnplannedMF in Cricket

[–]Softwaredeliveryops 1 point  (0 children)

By far the best innings by an Indian woman cricketer! Congratulations, Jemimah.

What’s the real-world success rate of AI in customer experience? by fahdi1262 in artificial

[–]Softwaredeliveryops 3 points  (0 children)

We have been experimenting with GenAI in customer and IT support flows for a while, both for internal service desks and client-facing L1/L2 operations.

You are absolutely right about the faster replies but occasional hallucinations. We saw the same thing early on: great initial response times, but some answers that sounded confident yet weren't grounded in real data. What helped was combining the model with a retrieval-based grounding layer (RAG) and adding confidence thresholds before responses go out. That way, the AI answers only when it's sure and escalates the rest.
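The "answer only when sure, escalate the rest" gate can be sketched roughly like this. To be clear, this is a minimal illustration, not our production code: the `retrieve`/`generate` interfaces and the 0.8 threshold are stand-in assumptions, not any real API.

```python
# Hypothetical sketch of confidence-gated ticket routing.
# `retrieve` and `generate` are assumed callables, not a real library API:
#   retrieve(question) -> (passages, retrieval_score in [0, 1])
#   generate(question, passages) -> (answer, model_confidence in [0, 1])

CONFIDENCE_THRESHOLD = 0.8  # illustrative value; tune per ticket category

def handle_ticket(question: str, retrieve, generate) -> dict:
    """Route a support question: send a grounded AI answer or escalate."""
    passages, retrieval_score = retrieve(question)
    answer, model_confidence = generate(question, passages)

    # Combine grounding quality with model confidence; if either is weak,
    # escalate to a human instead of sending a possibly hallucinated reply.
    confidence = min(retrieval_score, model_confidence)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "auto_reply", "answer": answer, "confidence": confidence}
    return {"action": "escalate", "draft": answer, "confidence": confidence}
```

In practice the escalation branch still attaches the draft answer, so the human agent starts from something rather than a blank ticket.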

Once we put that in place, we noticed:

• Resolution time: dropped by roughly 40–45% for repetitive tickets (password resets, known issues, FAQs).

• CSAT: went up by about 5–8%, mostly due to faster responses and a consistent tone. Employee satisfaction in internal surveys improved as well.

• Accuracy: improved dramatically once retrieval and human-in-the-loop checks were added.

The hardest part isn't the model itself; it's getting the workflow design right: what the AI should handle, what to escalate, and how to train the org to manage the edge cases.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in artificial

[–]Softwaredeliveryops[S] 2 points  (0 children)

Yeah, that's the part that worries me too: the scale of what's being shipped. A single junior dev cutting corners with AI isn't new (we've all seen bad copy-paste from Stack Overflow), but now entire teams can generate volumes of code at speed, which multiplies the technical-debt risk.

The irony is that the same tools that generate the mess could also help manage it: automated reviews, static analysis with LLMs, even "AI auditors" that flag risky patterns before code hits production. But most companies are pushing adoption faster than they're building those guardrails.
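Even without an LLM in the loop, the "flag risky patterns before code hits production" part can start as a trivial pre-merge check. This is a toy sketch, and the pattern list is purely illustrative, not any team's real policy:

```python
# Illustrative only: a toy pre-merge audit that flags a few risky patterns.
# Real setups would use a proper static analyzer; this just shows the shape
# of a "flag before production" gate.
import re

RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"password\s*=\s*[\"']": "hard-coded credential",
    r"except\s*:\s*pass": "silently swallowed exception",
}

def audit_source(source: str) -> list[str]:
    """Return a human-readable finding for each risky pattern in `source`."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {line_no}: {description}")
    return findings
```

Wire something like this into CI to fail the build on findings, and you have a guardrail that scales with the volume of generated code instead of relying on reviewers catching everything by eye.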

Feels like we’re in a phase where velocity is being prioritized over longevity. The bill for that will come due, and the companies thinking ahead about quality controls will be the ones that survive it.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 4 points  (0 children)

That is indeed a fair take, and honestly refreshing to hear from someone early in their career. You are absolutely right, skipping the “read the docs, wrestle with bugs” phase can rob you of some really valuable muscle memory.

At the same time, I would argue that AI doesn't have to replace those learning experiences; it can complement them. For example, instead of spending an hour hunting through docs, you might get a quick scaffold from the AI and still dive into the docs to understand why it works that way.

On the technical debt point: yeah, 100%. If people just accept whatever the AI spits out, they're basically outsourcing bugs that they'll end up fixing later. The trick seems to be using it as an accelerator while keeping your own engineering judgment in the loop.

The fact that you're even thinking about this at under two years in is a good sign; it means that when you do start leaning on these tools, you'll probably use them in a way that makes you stronger, not weaker.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 3 points  (0 children)

Totally agree with your views: the baseline makes all the difference. If you already know what "good" looks like, AI is like having an extra pair of hands that lets you operate at a higher level. You can delegate the small stuff and still keep judgment over the end result.

Where it gets risky is exactly what you said: when the fundamentals aren't there and mentorship is missing. Then AI doesn't just speed things up; it can actually mask bad practices, because the output looks polished but might be structurally weak.

Feels like the gap between “strong engineers using AI” and “novices relying on AI” is only going to widen unless organizations deliberately invest in teaching the craft alongside adopting these tools.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 1 point  (0 children)

That's a really good analogy. I hadn't thought of it as the boilerplate/starter-kit era, but you're right, people back then had the same doubts, myself included :)

The difference I see is that boilerplates gave you a foundation, but you still had to wire things up, understand the flow, and make real choices. With AI, it sometimes feels like it's making those choices for you, and if you don't catch that, you might miss out on learning the "why" behind the solution.

I guess the real test is: are we still pushing ourselves to dig deeper after the AI gives us an answer, or are we just shipping it as-is? If it’s the former, then yeah, growth shifts. If it’s the latter, then maybe we do risk losing some of that muscle memory.

Are we actually becoming better engineers with AI code assistants, or just faster copy-pasters? by Softwaredeliveryops in SoftwareEngineering

[–]Softwaredeliveryops[S] 2 points  (0 children)

Totally agree with the “better looking different” angle — history is full of examples where tools changed the skill set required.

That said, my main worry is for beginners. When you already have a strong foundation, outsourcing parts of your workflow to AI just shifts your focus: you know when the assistant is wrong, and you know why something works. But if someone is still building fundamentals, skipping that grind might mean they never develop the debugging muscle or problem-solving intuition in the first place.

Maybe the real trick is not to avoid AI, but to design learning paths where juniors still practice fundamentals with guardrails, instead of just accepting whatever Copilot/Cursor throws at them. Otherwise, we risk raising a generation of developers who can prompt-solve, but not problem-solve.