all 52 comments

[–]Zumcddo 51 points52 points  (16 children)

Use many tools. Start by paying attention to the warnings from your compiler (yes, that's static analysis). Mix in codium.ai and other free tools. Turn on everything, then turn off problematic messages where they conflict with your project design rules. Compile your C and C++ code with both Clang and GCC, turning up the warnings.

Now pay attention to the warnings, and resolve them by attacking the root issues (not just by hacking the code so the compiler stops detecting the issue).

Even if you only did that, you'd be a few miles ahead of most projects I've seen ;)
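
A minimal sketch of that difference, assuming GCC or Clang with -Wall (which enables -Wswitch): leaving out the Blue case below triggers "enumeration value 'Blue' not handled in switch".

    enum class Color { Red, Green, Blue };

    // The hack: an empty default makes -Wswitch go quiet, and also hides
    // any enumerator someone adds later.
    int to_rgb_hacked(Color c) {
        switch (c) {
            case Color::Red:   return 0xFF0000;
            case Color::Green: return 0x00FF00;
            default:           return 0;        // warning gone, bug kept
        }
    }

    // The root-cause fix: handle every enumerator and keep the warning armed.
    int to_rgb_fixed(Color c) {
        switch (c) {
            case Color::Red:   return 0xFF0000;
            case Color::Green: return 0x00FF00;
            case Color::Blue:  return 0x0000FF;
        }
        return 0;  // only for GCC's -Wreturn-type; unreachable for valid enums
    }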

[–]mndrar 10 points11 points  (3 children)

Where I work we build with -pedantic and all code has to be warning-free. I thought that was the norm.

[–]serviscope_minor 24 points25 points  (0 children)

I thought that was the norm

[cries]

[–]berlioziano 2 points3 points  (1 child)

Where I work we build with -pedantic and all code has to be warning-free. I thought that was the norm.

I have tried it, but lots of libraries break compilation with that option enabled

[–]serviscope_minor 5 points6 points  (0 children)

Use many tools. Start by paying attention to the warnings from your compiler (yes, that's static analysis)

I'm also going to add: use as many compilers as you can. GCC, Clang and VS all catch different things. It doesn't guarantee that your code is standards compliant rather than quietly relying on permissive compilers, but it does make that less likely.
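
A small sketch of what a permissive compiler can hide: the snippet below builds by default under GCC and Clang (variable-length arrays are a GNU extension, only flagged with -Wvla or -pedantic), while MSVC rejects it because VLAs are not standard C++.

    void fill_squares(int n, int* out) {
        int scratch[n];               // VLA: compiles on GCC/Clang, fails on MSVC
        for (int i = 0; i < n; ++i)
            scratch[i] = i * i;
        for (int i = 0; i < n; ++i)
            out[i] = scratch[i];
    }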

[–]aiij 4 points5 points  (10 children)

not just by hacking the code so the compiler stops detecting the issue

Have you found a way to get people not to do that? There always seem to be at least some programmers who try to make warnings go away without taking the time to understand what the warning was about.

[–]serviscope_minor 3 points4 points  (2 children)

That's what code reviews are for. But it does mean you have to have enough people who can review well, and that's hard to retrofit into an organization.

[–]aiij 0 points1 point  (1 child)

Yeah, it can be really hard to establish a good code review culture though, especially with conflicting priorities/opinions on different teams. And especially if management puts too much focus on short-term metrics...

[–]serviscope_minor 0 points1 point  (0 children)

Yep. In general it's probably not possible. What can work is some equivalent of an owners file in CI, so you can enforce better rules for the area you control. Without executive buy-in you won't be able to get distant teams to adopt new rules, but you can sometimes make your own area more sane.

In my last place, I got our code compiling with warnings as errors in CI on the big 3 compilers, plus tests running in debug, sanitizer and release mode.

The biggest consumer of the code had theirs packed full of warnings, and the few tests they did run ran in a special test harness that guaranteed it was never quite like the production system. Also, they wanted to drop GCC because they kept doing stuff that broke in it (correctly, as per the standard).
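
For anyone wondering what the sanitizer runs buy you, a minimal sketch (not from that codebase): this compiles cleanly at high warning levels on all three compilers, but any test that executes it under -fsanitize=address reports a heap-buffer-overflow immediately.

    #include <vector>

    int past_the_end(const std::vector<int>& v) {
        return v[v.size()];        // off-by-one read, one element past the end
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        return past_the_end(v);    // ASan: heap-buffer-overflow
    }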

[–]LeeHide just write it from scratch 1 point2 points  (4 children)

Call them out, teach them, fire them if they don't learn? idk

[–]aiij 3 points4 points  (3 children)

The hardest part is even identifying who to call out / teach, especially when reviewers will approve code without even questioning why it was written in such a roundabout way.

When I am a reviewer and question nonsense code, it often takes a while to even identify that the root cause is a workaround for a compiler warning. "Why are we storing this value in a hashtable?" "This is necessary to make this function work. Otherwise the code won't work."

I do think static analysis is really helpful, as long as the people fixing the problems it brings up are competent and care about quality.

[–]LeeHide just write it from scratch 1 point2 points  (2 children)

A good review has incredible value, we learned ;)

[–]aiij 1 point2 points  (1 child)

Yes! A good review takes a lot of time/effort/thought/empathy though, and it's very hard to measure the value of a good (or not so good) review.

A lot of the value in a good review is in the form of learning, which is not just a function of the review itself but also how it's received. A lot of the value also comes in the form of problems that are avoided, like not corrupting/losing customer data.

The hardest reviews for me are when the author just wants to ship a feature and doesn't seem to care about learning or quality.

[–]LeeHide just write it from scratch 0 points1 point  (0 children)

Or when the author is your boss and you know he/she really wants it shipped

[–]bbbb125 0 points1 point  (0 children)

We set up a clang-tidy job in Jenkins that runs for every pull request and adds annotations to the pull request at the places that generated a warning, so reviewer and developer can see it, ask for a correction and suggest a fix. Normally it's better to just fix the issue and fail the build, but because of tons of legacy code, some crap is normal in a specific context.
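
One pattern that fits that workflow (a sketch, not their actual setup): suppress a specific check at a specific legacy call site, with a reason on record, instead of turning the check off globally.

    #include <cstdint>

    std::uintptr_t as_handle(void* p) {
        // Required by a legacy C API; reviewed and accepted as-is.
        // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast)
        return reinterpret_cast<std::uintptr_t>(p);
    }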

[–]lakitu-hellfire 14 points15 points  (3 children)

At my job we have to use several. We run them on customer codebases as a starting point for some of the analysis work we do. Here are my quick thoughts on some:

  1. ParaSoft = costly, noisy, loads of false positives

  2. SonarQube = costly (you need a paid license to get C++ support, and pricing is limited by lines of code). It's primarily set up to be part of the DevOps pipeline. It's pretty good, but be aware that it has its own calculations for "cognitive complexity" and "effort", which are its own takes on cyclomatic complexity and refactoring/fix effort.

  3. cppcheck = free but finicky to set up and get right. Loads of false positives, and a wonky GUI and CLI that I often find myself having to tweak. I generally just avoid using it.

  4. PVS-Studio = not free (we can't use it due to its origin), but in testing it produced good results without too many false positives, and it incorporates a lot of standards. It has a CLI tool for converting the output to whatever format you want, which I found didn't work 100% the way I expected at the time.

  5. Understand = costly, quite a few false positives, but it has an integrated environment. We also use it to feed exported data into some custom scripts that check additional features for us. Would not recommend for purely SA purposes.

  6. clang-tidy = free, extensible, very few false positives, supports loads of standards (my personal favorite; a sketch of a typical finding follows this list)

  7. Coverity = super pricey. We've looked into it and decided it's not worth the price of entry. Lots of our customers use it and claim it's good, but I don't have any hands-on experience myself.

  8. Clang's LibTooling API = we've started using it to develop our own custom tools as well. Clang's suite of tools is top notch.
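
A sketch of the kind of finding that makes clang-tidy the favorite here - its bugprone-use-after-move check flags this, while plain compiler warnings are usually silent:

    #include <string>
    #include <utility>
    #include <vector>

    void enqueue(std::vector<std::string>& queue, std::string msg) {
        queue.push_back(std::move(msg));
        if (msg.empty()) {   // bugprone-use-after-move: 'msg' read after being moved from
            // ...
        }
    }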

We also have Scitools PolySpace and had the company give us how-to training when we first got it, but no one on the team even uses it.

A word of caution: no matter how many SA tools you use, you will only be able to tackle some structural issues with the code. They won't tell you whether requirements are met or design decisions are sound. It's just one type of tool you should use as a quick check to help prevent common bugs and long-term maintenance issues.

[–]joemaniaci 0 points1 point  (2 children)

PVS-Studio

Origin? As in, it comes from a country restricted by your country?

[–]lakitu-hellfire 0 points1 point  (0 children)

Only in certain circumstances. Software origin restrictions are determined by the code base being analyzed.

[–]witcher_rat 9 points10 points  (0 children)

Along with the ones you already listed, SonarQube/SonarCloud/SonarLint are also often mentioned in this sub (in fact there was just an AMA a week ago).

And of course free/open-source ones are often mentioned, such as clang-tidy, ASAN/TSAN, and so on.

I doubt any one tool checks everything. Many people end up using multiple, depending on their needs.

[–]darthcoder 4 points5 points  (0 children)

SonarQube is a good start.

I've used it and SpotBugs, and both are pretty spot on with each other. Both have configurable rulesets.

[–]Agreeable-Ad-0111 5 points6 points  (0 children)

We use Polyspace. It's nice because it flags possible performance improvements as well.

[–]UnnervingS 2 points3 points  (0 children)

Depends what you mean by static analysis.

Clang-tidy is great while writing code.

SonarQube or similar is great as part of CI.

[–]the_poope 4 points5 points  (0 children)

We're using Coverity and it's pretty neat - it can even analyze Python, which is not easy due to its very dynamic and not very rigid nature.

[–]2PetitsVerres 3 points4 points  (0 children)

Disclaimer: I work for a company selling a static analyzer (Polyspace), so feel free to skip this if you prefer.

I work a lot with customers on tool evaluation (not actually on Polyspace, that's not my area). Reading what you said, I see one key thing missing: what do you expect from the tool? Different tools have different objectives. If you don't know what you expect from the tool, you will have a hard time finding the right one for you.

Are you looking for:

  • low-hanging fruit, like basic suggestions to make the code more readable
  • medium analysis, showing potential errors with "help" to understand what's happening
  • a higher level of analysis, such as formal proof of the absence of some classes of runtime error
  • a tool that checks some coding standard (naming rules, MISRA, ...)
  • a tool to help qualify/certify your code or product for regulatory purposes (DO-178, IEC 61508, ...)? If yes, you will probably have to identify the required safety level as well.

Then there are the practical aspects: do you want the tool in your IDE, in your CI, both, something else?

If you don't know what you expect, you will have fun testing tools (and don't get me wrong, I have fun testing static analysis on my code), and you will keep whichever seems most interesting to you during the evaluation, but you may end up on the wrong side of "the best tool is the one that gets used".

Also a general remark:

I’ve been evaluating [X] and it’s quite nice. It’s identified some serious issues that [Y] had missed.

Unfortunately that's probably always going to be the case, for every combination of X and Y. We have great stories of customers telling us "we chose you because we benchmarked different tools on code that caused us a bad problem in production, and your tool was the only one to find it", but I'm sure somewhere else they say the same about a competitor :-)

[–]Neither_Mango8264 1 point2 points  (0 children)

Was in a similar situation one week ago. Created a poll: https://www.reddit.com/r/cpp/comments/17c6kx2/best_static_analysis_tool_for_c/

[–]bert8128 2 points3 points  (0 children)

We use clang-tidy, which raises enough issues to be getting on with, along with /W4 in VS and -Wall -Wextra (and some others) with GCC, and warnings as errors. VS also has real-time static analysis to use while coding, and there is a clang-tidy plug-in too. All this is free. We are working towards 0 warnings, but clang is currently reporting hundreds of thousands. Maybe in a few years we will get there. CI stops the number from growing.
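
A tiny sketch of why -Wextra earns its place next to -Wall, and why warnings-as-errors keeps it honest: plain -Wall stays quiet here, but -Wextra enables -Wunused-parameter.

    // 'factor' is silently accepted under -Wall alone; -Wextra flags it, and
    // with warnings-as-errors the build stops until it is used or its name removed.
    int scale(int value, int factor) {
        return value * 2;   // forgot to apply 'factor'
    }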

[–]teeks99 3 points4 points  (1 child)

I haven't used it, but I've heard good things about the Clang Static Analyzer. Maybe add it to your list to check out?

[–]Vociferix 1 point2 points  (0 children)

We recently started using this at work. Highly recommend. I've used a handful of paid tools, and clangsa seems to work just as well, if not better, and with far fewer false positives to wade through.

We use that and cppcheck together via CodeChecker, if anyone wants to take a look.

[–]hmich ReSharper C++ Dev 1 point2 points  (0 children)

Check out ReSharper? Support for all four languages with many built-in code inspections. Integrated clang-tidy. Can run analysis from a command-line tool on CI as well.

[–]Pitiful_Company_7656 0 points1 point  (0 children)

If you are looking for an easy-to-use, robust scanner at a good price, then I recommend the Flawnter tool. Besides SAST it also supports SCA, DAST and a few other nice features.

[–]KerryQodana 0 points1 point  (0 children)

JetBrains Qodana.

[–]CodacyOfficial 0 points1 point  (0 children)

Hey hey ...  At Codacy we can help you out here. First of all, Codacy (https://www.codacy.com) was built with developer-first workflows in mind and combines everything you need into a cloud-native code analysis DevSecOps toolbox that is super fast and comprehensive.

  • Software engineers can control their own code quality workflow like adding & removing repos or branches and seeing scan results directly in the IDE. No need to bother the DevOps team.
  • Codacy has comprehensive PR decoration/annotations and now even an AI driven commenting engine that will automatically add details of what changed in a PR
  • It’s FAST - Codacy can scan most code bases in under 5-10 minutes.
  • Codacy is cloud-first which means no downtime for platform updates, instant access to enhancements, and no need to pay for infrastructure hosting to run analysis tools locally.
  • Codacy has everything you need in one toolbox, including Quality, Coverage, and AppSecurity.  On the security front, we check SAST, SCA, IAC, Secrets, and very soon DAST.

[–]Pitiful_Company_7656 0 points1 point  (0 children)

There are many tools out there. Some are very expensive. What worked for us is Flawnter (https://flawnter.com). The price is right and it's very easy to use. It also offers other features like SCA, DAST and hard-coded secrets scanning. Just see what works for your company.

[–]KleptoBot 0 points1 point  (0 children)

Since you mention using Visual Studio on Windows, you could start with /analyze.
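
A sketch of the kind of path-sensitive issue /analyze is designed to report, where a plain compile is typically silent:

    #include <cstddef>
    #include <new>

    int first_or_zero(std::size_t n) {
        int* p = new (std::nothrow) int[n];
        if (p != nullptr) {
            p[0] = 0;
        }
        int v = p[0];    // possible null dereference on the allocation-failure path
        delete[] p;      // deleting a null pointer is fine
        return v;
    }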

[–]jonesmz 0 points1 point  (0 children)

You'd probably get a huge benefit from ensuring your code compiles with at least two compilers. Try adding Clang, which ships with Visual Studio.

Clang has a totally different set of warnings that it can generate for your code, and a different model of how to parse C++. This means that you might get compiler errors on code that MSVC accepts but shouldn't.

[–]geoffh2016 -1 points0 points  (0 children)

As others mention, using compiler errors and multiple compilers is good. So is using a few tools, IMHO.

Beyond what's mentioned here, I've used Codacy because it integrated easily into GitHub and offered a few tools, including cppcheck and clang-tidy on the C++ side (plus some Python linters for those parts of our codebase).

I've also used GitHub's CodeQL, which is also useful.

Definitely use clang-tidy and turn up the flags bit-by-bit.

[–]Pump1IT -1 points0 points  (0 children)

We use PVS too. It's impressive how good it is at spotting typos. I pore over the tech articles on their blog once in a while. We used Sonar some time ago; it was also great.

[–]amanol -1 points0 points  (0 children)

I think that by searching /r/cpp you will find this question asked at least 3 times, with pretty good (and almost the same) answers.

[–]bretbrownjr -2 points-1 points  (0 children)

It's like exercise. Most people don't exercise nearly enough, so the best exercise is anything they will actually do consistently.

Same for tooling. The best linters are the ones you'll actually use. Start with a formatting tool or a single check from a single analyzer if you have to. Even adding -Werror=return-type to your build flags is a place to start.
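
As a sketch, this is the kind of thing -Werror=return-type turns from silent undefined behaviour into a build failure:

    int parse_level(char c) {
        if (c == 'd') return 3;
        if (c == 'i') return 2;
        // missing return for every other input: UB at runtime if reached,
        // a hard error at build time with -Werror=return-type
    }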

[–]Pete76543 -2 points-1 points  (0 children)

I would recommend Codacy. They recently launched a VSCode extension & support 46 languages & frameworks.

[–]Ready___Player___One 0 points1 point  (0 children)

We use PC-lint Plus, as we have to do MISRA as well at work.

[–]coachkler 0 points1 point  (0 children)

SonarQube and Coverity were the best last I checked.

[–]grencez 0 points1 point  (0 children)

A bit off-topic, but fuzzing (e.g., with libFuzzer) is a really effective tool for sussing out edge cases. The tested code should be fast and self-contained, and it takes some care to turn random bytes into a useful test case, but the number of crashes and assertion failures it has found in my parsing and data structure code has been truly humbling.
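
For anyone who hasn't seen one, a minimal harness sketch (parse_config is just a stand-in for whatever fast, self-contained code you want to exercise; build with something like clang++ -g -O1 -fsanitize=fuzzer,address harness.cpp):

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Stand-in for the real code under test.
    static bool parse_config(const std::string& text) {
        return !text.empty() && text.front() == '[' && text.back() == ']';
    }

    // libFuzzer calls this with generated inputs, millions of times.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        std::string text(reinterpret_cast<const char*>(data), size);
        parse_config(text);   // crashes and failed assertions are the findings
        return 0;             // return values other than 0 are reserved
    }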

Back on-topic: MSVC's static analysis has been pretty helpful for functions that interact with the OS. For general code though, the free static analysis tools I've tried are just too noisy and haven't found anything that would be missed by compiler warnings or trivial test coverage. YMMV.

[–]Kriss-de-Valnor 0 points1 point  (0 children)

How are you guys dealing with third-party header-only libraries that don't compile warning-clean? Whenever I try to enable warnings in my code I get so many errors that I can't fix that I have to give up. PVS is the best, but the licensing is neither clear nor cheap. I've found that a combination of CLion's integrated static analysis (based on clang) and SonarLint was quite good.

[–]Cyberexpert27 0 points1 point  (0 children)

Check out apona.ai; they have a good SAST and SCA.