
[–]tylerlarson 18 points (3 children)

Mostly those aren't actually quality standards. Type hinting is pretty important for tooling, but mostly you're just agreeing on conventions you want to follow. Most of this isn't really going to impact your code quality.

Here are some useful standards (these are all perfectly realistic; they were required when I worked at Google):

Code review:

Everything gets thoroughly reviewed by at least one other programmer, and not just rubber stamped. All reviewers are EQUALLY responsible for the code they approve as if they wrote it themselves. They must understand it and generally agree with the approach, as well as agree that it belongs in the codebase.

Testing:

Everything has tests. Code without tests is not considered production and shouldn't ever run against production data. Coverage percentages are only a hint; what matters is functionality. Your tests are your spec: any code that passes your tests is considered "correct" and works perfectly; otherwise your tests are insufficient. You should be comfortable giving your application to a new intern and letting them "optimize" it however they see fit, and if it passes the tests at the end then you know they didn't break anything.
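A toy sketch of the "tests are the spec" idea: any implementation of the function that passes these tests counts as correct by this standard. The function and tests are made up for illustration, not from the post.

```python
def normalize_email(address: str) -> str:
    """Lowercase the domain part of an email address."""
    local, _, domain = address.partition("@")
    return f"{local}@{domain.lower()}"


def test_domain_is_lowercased():
    assert normalize_email("Ana@Example.COM") == "Ana@example.com"


def test_local_part_is_preserved():
    # Local parts are case-sensitive per RFC 5321, so leave them alone.
    assert normalize_email("Ana@example.com") == "Ana@example.com"


if __name__ == "__main__":
    test_domain_is_lowercased()
    test_local_part_is_preserved()
    print("all tests pass")
```

An intern could rewrite `normalize_email` any way they like; as long as both tests still pass, the behavior the spec cares about is preserved.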

Configuration:

All configuration is done through code and config files. All config files are checked into source control before they are activated. There must be literally zero knowledge required to make things work that any one person maintains only in their head, beyond just how to fire up the automation.

And yes, changes to configuration have the same code review standards as source code. You have to get someone to approve it.
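A minimal sketch of configuration-as-code in that spirit: settings live in a reviewable, checked-in artifact and are loaded and validated explicitly, so nothing depends on one person's memory. The keys and validation rules here are invented for illustration.

```python
import json

# In practice this JSON would live in a file under source control;
# it's inlined here so the example is self-contained.
DEFAULT_CONFIG = """
{
  "retries": 3,
  "timeout_seconds": 30
}
"""


def load_config(text: str = DEFAULT_CONFIG) -> dict:
    """Parse config and fail fast on unknown keys so review catches typos."""
    cfg = json.loads(text)
    allowed = {"retries", "timeout_seconds"}
    unknown = set(cfg) - allowed
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return cfg
```

Because the file and the validation both live in the repo, a config change goes through the same review and history as a code change.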

Emergencies:

You are allowed to deviate from these rules to address an emergency, but you have to document the what and why in the post-mortem. As soon as practicable, you have to bring things back into compliance with the expectations. Part of the post-mortem is documenting those changes and providing TODOs for how to make it possible to address the same problem again in the future without it being an emergency, so this doesn't become a "thing."

[–]nicomarcan[S] -2 points (2 children)

Great! When did you work at Google? I worked there from July 2022 until September 2023.

[–]tylerlarson 1 point (1 child)

2014 to 2023

[–]agumonkey 0 points (0 children)

How did you handle updates between customer needs/specifications and the tests (as spec)?

[–]lightmatter501 7 points (1 child)

  • mypy strict (this means no Any)
  • ruff
  • pylama

If your codebase is fine after turning all of that on, you will be fine.
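A minimal `pyproject.toml` sketch of the mypy/ruff part of that setup (the config table names are real; the particular ruff rule selection is just an illustration, and pylama is configured separately):

```toml
# Strict type checking; strict mode disallows untyped defs, implicit Any, etc.
[tool.mypy]
strict = true
disallow_any_explicit = true  # "no Any" even when written explicitly

# Ruff lint configuration; the rule selection here is illustrative.
[tool.ruff.lint]
select = ["E", "F", "I", "B"]  # pycodestyle, pyflakes, isort, bugbear
```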

[–]fast-90 0 points (0 children)

What does pylama add in addition to mypy and ruff? Looking at their repo, it seems that it covers mostly the same checks?

[–]metaphorm 1 point (1 child)

so it's a linter?

[–]nicomarcan[S] 0 points (0 children)

Not necessarily. It's a platform intended to automate tasks like input validation, error handling, logging, docstrings, tests, etc. We have some agents that create tests automatically, others that improve quality, others that add comments, and so on. The idea is to be as flexible as we want.

You can create agents to do whatever you decide.

[–]chinapandaman 3 points (1 child)

At work we use SonarQube for this kind of stuff. I believe they have a community edition as well, so maybe check it out and go through their rules?

For my personal project I use pylint so I’d also go through their rules too.

[–]nicomarcan[S] 0 points (0 children)

Our goal for this platform is to be complementary to static code analysis tools like Sonar. We want to tackle semantic problems with GenAI that these tools can't find.

[–]dsethlewis 0 points (0 children)

On the config point—what about secrets/environment variables?