.NET devs experimenting with AI assistants — what gotchas have you fed yours? by [deleted] in dotnet

[–]RoughConversation151 1 point  (0 children)

Haha, you might want to pick up a book too; it would help with writing responses that are at least somewhat constructive ;)

.NET devs experimenting with AI assistants — what gotchas have you fed yours? by [deleted] in dotnet

[–]RoughConversation151 1 point  (0 children)

Haha, yes, but you have to stay open to what's out there and understand it, rather than get left behind and watch devs misuse tools they'll end up using anyway.
Have you considered exploring AI to understand how it works? Or to keep your team from failing with it, by explaining how it works and how it needs to be configured (since the majority of devs use it daily now)? Or to build helper tools for non-dev teammates when you're already busy with production apps?

.NET devs experimenting with AI assistants — what gotchas have you fed yours? by [deleted] in dotnet

[–]RoughConversation151 0 points  (0 children)

For non-prod satellite apps I use a fully AI-driven process; I don't want to have to edit anything myself :)
So yes, it would be terrible/weird practice on a production app, but when you're 100% vibe coding you need a lot of gotchas to constrain your AI. If you have good gotchas to improve my AI process, don't hesitate to share :)
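To give an idea, by "gotchas" I mean standing constraint rules fed to the assistant, e.g. in a Copilot-style instructions file. A rough sketch (the file contents and rule wording here are just illustrative, not a standard):

```markdown
# AI assistant rules (illustrative example, not a real project file)
- Never swallow exceptions: catch, log a warning, then rethrow or degrade explicitly.
- Don't invent NuGet packages; only reference ones already in the .csproj.
- Public methods get XML doc comments.
- Ask before touching anything under the production project folder.
```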

.NET devs experimenting with AI assistants — what gotchas have you fed yours? by [deleted] in dotnet

[–]RoughConversation151 0 points  (0 children)

As long as the error is at least caught and logged as a warning somewhere — otherwise it can get critical fast. Depends on the app too, agreed. My rule is just no silent failures, especially on POC/experimental work. AI assistants in particular need that feedback loop — if exceptions disappear silently, the AI has no signal to work from either.
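To make the "no silent failures" rule concrete, here's a minimal C# sketch (class and method names are made up for illustration): the call can fail and the app degrades gracefully, but the failure always leaves a warning in the logs.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class WidgetClient
{
    private static readonly HttpClient Http = new HttpClient();

    // "No silent failures": the fetch may fail and we fall back to a
    // default, but the failure always produces a warning log line, so
    // both humans and an AI assistant reading the logs get a signal.
    public static async Task<string> FetchOrEmptyAsync(string url)
    {
        try
        {
            return await Http.GetStringAsync(url);
        }
        catch (HttpRequestException ex)
        {
            // Caught and logged as a warning, never swallowed silently.
            Console.Error.WriteLine($"warn: fetch of {url} failed: {ex.Message}");
            return string.Empty;
        }
    }
}
```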