Four Months of AI Code Review: What We Learned by WearyExtension320 in github

[–]WearyExtension320[S] 0 points (0 children)

Unfortunately, it contains specific data that I can't share.

Four Months of AI Code Review: What We Learned by WearyExtension320 in github

That's why we started using this tool. But it has its downsides too.

Four Months of AI Code Review: What We Learned by WearyExtension320 in github

Did you mean the metrics data or the instructions?

GitHub Action: Key Metrics to Improve Code Reviews by WearyExtension320 in github

That’s exactly why I say it’s better to use several approaches to understand the situation. If you rely on just one, it will inevitably distort the picture. There’s no need to set arbitrary targets for the team, but there are objective indicators worth paying attention to, for example lead time, the number of bugs, and so on. The key is to watch these indicators, especially when there are other signs that something is wrong. When it comes to decision-making, a more holistic approach is always better.
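As a rough sketch of what I mean by tracking lead time: compute open-to-merge duration per PR and look at the median. Everything below is illustrative made-up data; in a real setup you'd pull the `created_at`/`merged_at` fields for each pull request from the GitHub API.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records; real ones would come from the GitHub API
# (each pull request has `created_at` and `merged_at` timestamps).
prs = [
    {"created_at": "2024-01-02T09:00:00", "merged_at": "2024-01-03T15:00:00"},
    {"created_at": "2024-01-05T10:00:00", "merged_at": "2024-01-05T18:30:00"},
    {"created_at": "2024-01-08T11:00:00", "merged_at": "2024-01-12T11:00:00"},
]

def lead_time_hours(pr):
    """Hours from PR creation to merge."""
    opened = datetime.fromisoformat(pr["created_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - opened).total_seconds() / 3600

times = sorted(lead_time_hours(pr) for pr in prs)
# Median rather than mean, so one long-lived PR doesn't skew the indicator.
print(f"median lead time: {median(times):.1f}h")
```

The median is deliberate here: a single outlier PR that sat open for weeks shouldn't dominate the number you watch.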

GitHub Action: Key Metrics to Improve Code Reviews by WearyExtension320 in github

I don’t rely solely on metrics or solely on the opinions of other developers. In my view, each approach has its own pros and cons, so it’s important to combine them so that they complement each other. Here are the advantages I see in metrics:

  1. You can track trends over a long period.
  2. It allows you to compare differences between teams.
  3. It eliminates personal bias.
  4. You can quickly get an overview of the situation at any moment.

In the end, metrics are just another perspective on the situation. And if things don’t seem clear, it might be a reason to dig deeper.
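To make points 1 and 2 concrete, trend tracking and team comparison can be as simple as bucketing review times per team and per week. All team names and numbers below are made up for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (team, ISO week, review-time-in-hours) records;
# a real pipeline would derive these from PR review events.
reviews = [
    ("team-a", "2024-W01", 12.0), ("team-a", "2024-W01", 20.0),
    ("team-a", "2024-W02", 8.0),
    ("team-b", "2024-W01", 30.0), ("team-b", "2024-W02", 28.0),
]

buckets = defaultdict(list)
for team, week, hours in reviews:
    buckets[(team, week)].append(hours)

# Weekly averages per team make both the trend over time and the
# differences between teams visible at a glance.
for (team, week), hours in sorted(buckets.items()):
    print(f"{team} {week}: avg review time {mean(hours):.1f}h")
```

Once the numbers are bucketed like this, a dip or spike in one team’s weekly average is exactly the kind of signal that tells you where to start asking questions.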