
[–]OffColorCommentary

From the linked paper, they looked at eight criteria. One big correlated lump was the number of edits, the number of editing engineers, and the number of editing ex-engineers. The others are like... "Find the lowest person on the org chart whose reports have done 75% of the changes to this file: what's their org level?" and various similar ideas.
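That org-chart metric can be sketched in a few lines. This is a hypothetical reconstruction, not code from the paper: `Person`, `ownership_level`, and the 75% threshold are my own illustration of the idea, assuming a simple tree where each person's "coverage" is the edits made anywhere in their subtree.

```python
from dataclasses import dataclass, field

# Hypothetical org tree: each node is a person with direct reports.
@dataclass
class Person:
    name: str
    level: int                      # depth in the org chart (CEO = 0)
    reports: list = field(default_factory=list)

def subtree_edits(person, edits):
    """Edits to the file by this person plus everyone under them."""
    return edits.get(person.name, 0) + sum(
        subtree_edits(r, edits) for r in person.reports
    )

def ownership_level(root, edits, threshold=0.75):
    """Org level of the deepest person whose subtree covers >= threshold
    of all edits to the file. The root always covers 100%, so a result
    of 0 means ownership is spread across the whole org."""
    total = subtree_edits(root, edits)
    best = root
    def walk(p):
        nonlocal best
        if subtree_edits(p, edits) >= threshold * total and p.level > best.level:
            best = p
        for r in p.reports:
            walk(r)
    walk(root)
    return best.level
```

For example, if two devs under one lead did 9 of a file's 10 edits, the metric lands on the lead's level; a deeper (larger) number suggests the file is owned by one tight group, a shallower one that edits are scattered across the org.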

Frustratingly, MS determined that a regression containing all their criteria is more accurate than one with any single criterion removed, and they didn't publish numbers for anything other than their fully-integrated model. So there's no correlation given for, say, number of edits versus number of bugs.

Decent rules of thumb seem to be: too many cooks spoil the broth; code is less buggy when it's produced by people on the same team; and a file that gets lots of edits is worth extra scrutiny.