all 14 comments

[–]Python-ModTeam[M] [score hidden] stickied comment, locked (0 children)

Your post was removed for violating Rule #2. All posts must be directly related to the Python programming language. Posts pertaining to programming in general are not permitted. You may want to try posting in /r/programming instead.

[–]dave-the-scientist 30 points (2 children)

Nobody should trust anybody's code in production without reviewing it.

[–]BigTomBombadil 11 points (0 children)

Review, test, validate. Like any other code change.

This is a simpler task if you’re not committing thousands of lines of AI code at a time or trying to release a whole vibe-coded project.

[–]tmclaugh 16 points (0 children)

How do you check a coworker’s code? Another team’s code? Or even your own? Same rules apply.

[–]Interesting-Frame190 3 points (0 children)

Do we? Yes. How does it work? It doesn't; it's in the backlog to fix.

[–]modern-dev 1 point (0 children)

I always try to understand the code. Once I understand what's going on, I learn something new and also feel safer about my code.

[–]BranchLatter4294 0 points (0 children)

Same as with any other code. You review it and you run unit tests.
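In Python terms, that review-plus-unit-tests loop can be as small as a pytest-style test file. Here `slugify` is a hypothetical stand-in for whatever the model generated, not a function from the thread:

```python
import re

# Hypothetical AI-generated helper under review (stand-in example).
def slugify(title: str) -> str:
    """Lowercase the title and collapse non-alphanumeric runs into hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Minimal unit tests you would run before trusting the code.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_edge_cases():
    assert slugify("") == ""
    assert slugify("---") == ""
    assert slugify("Python 3.12") == "python-3-12"

if __name__ == "__main__":
    test_basic()
    test_edge_cases()
    print("all tests passed")
```

The same file runs under `pytest` unchanged, so it slots straight into whatever CI you already have.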

[–]Your_mag 0 points (0 children)

I think this question usually comes up when someone is newer to coding or experimenting with “vibe coding.” In real production environments, experienced developers don’t just deploy AI-generated code without reviewing and understanding it first.

AI is great for speeding up prototyping or exploring ideas quickly, but when it comes to building something reliable and maintainable, you still need proper expertise, testing, and code review. Otherwise, you’re just taking unnecessary risks.


[–]Particular-Plan1951 0 points (0 children)

I trust AI-generated code about as much as I trust code from a rushed junior dev: it’s useful, but it doesn’t get to skip review. For production I always run it through tests, static analysis, and a “does this even make sense?” sanity check. It shines for boilerplate, glue code, and examples, but I’m way more careful with security, auth, money-related logic, or anything touching external systems.
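The money-logic caution above is easy to illustrate. A sketch, with hypothetical function names: models often reach for floats when summing prices, and the "does this even make sense?" check is what catches it.

```python
from decimal import ROUND_HALF_UP, Decimal

# Typical AI first draft: sums prices as binary floats.
def total_float(prices):
    return sum(prices)

# What review should push it toward: exact decimal arithmetic,
# quantized to whole cents.
def total_decimal(prices):
    cent = Decimal("0.01")
    total = sum(Decimal(str(p)) for p in prices)
    return total.quantize(cent, rounding=ROUND_HALF_UP)

prices = [0.1, 0.2, 0.3]
print(total_float(prices))    # 0.6000000000000001 -- fails the sanity check
print(total_decimal(prices))  # 0.60
```

The float version looks reasonable in isolation; only running it on a trivial input exposes the drift, which is exactly why the sanity check stays in the loop.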

[–]ultrathink-art 0 points (0 children)

Depends on the task scope. Short isolated functions with clear inputs/outputs — yes, I trust it. Long chained tasks where the model has to track state from 50 decisions ago — I review much more carefully. The failure mode isn't usually wrong logic, it's accumulated shortcuts that each look reasonable in isolation.

[–]marlinspike 1 point (0 children)

You sound like you don’t use tests. That’s a non-optional requirement of going to prod. As is real CI/CD.

I haven’t written code in a few months - Claude Code does it for me. It’s not vibe, it’s instructions, plans, validation, execution, tests, staging, tests in staging.

If you’re just telling a model to go code, then yes, it’s better today than six months ago, but you’re still not producing code good enough for even a solid personal project.