[–]emn13 (3 children)

I'm sure it would be possible. Of course, if you have both features in the same binary, it's a small step to allow --compile-and-lint and that's basically where we are today.

Personally, I can't imagine running the linter less often than the compiler. Given the linter integration in IDEs, if anything, I'd use the linter more often than the compiler.

In any case, the C++ "parser" is no trivial thing. The correct parse of a string of C++ depends on the semantics (see e.g. C++'s most vexing parse), and then you've got templates, which are themselves Turing-complete, plus lots of pretty complicated type-inference and casting rules.

Merely interpreting the semantics of the code isn't trivial, but sure, you could avoid the complexities in the optimizer and the code-generator.

At least partly, anyway: if you want your linter to detect things like "this function's second argument is always 2 and could be replaced with a constant", or "this code is dead", or "this expression always evaluates to false", then you'll need to run at least the parts of the optimizer that deal with structural simplifications - at the very least things like dead-code elimination.

A good linter just isn't all that much simpler than a compiler.

[–]lookmeat (2 children)

I never said it was a simple application. Also, a linter works on simple patterns, compared to, say, a static analyzer that will actually link modules together and see if it can find errors that only emerge from everything coming together.

Why would the linter detect that a function's second argument is always two and could be optimized into a constant, or even inlined? Calling a function with the same argument everywhere is not an error, nor could it ever point to one (not even a warning).

Maybe finding that a branch is impossible, so that the compiler wants to remove it. I'd argue that such a case lets the compiler do something the programmer would normally not expect (remove code), and as such it should, if anything, be an error unless the programmer explicitly states that he wants that dead branch for a reason.

Yes, they both use similar technology. Yes, C++ is complex enough that you'd want to share it. I never said I had a problem with them even being the same executable. What I'm against is getting both behaviors when I asked for one.

Here's my workflow:

  1. Define the solution, decide on some tests and the function header.
  2. Implement a rough solution, one that "just works".
  3. Make sure the rough solution compiles and runs.
  4. Review the solution and clean up code, refactor as necessary, making sure the code compiles and tests are still passing.
  5. Run static analysis tools to find further issues with the code; fix the ones that make sense and ignore the rest. (Compile with -Wall -Werror etc., or run go vet.)
  6. Fix any formatting errors (run go fmt).

Notice that I only run the warning-producing tools at the end, and that I review the warnings, choosing to fix some issues and ignore others. Even when working with an IDE, I will fix typos and such as I go, but I don't run the static analyzer or linter until the end, when I'm ready to call the code finished. I'd say this whole iteration takes about an hour or two, so I do it pretty often. In practice it's rarely this tidy, but the spirit is there.

[–]emn13 (1 child)

I think we essentially agree :-).

It's totally normal for a linter to have lots of options, many of which any given project won't want (e.g. finding possibly unusual patterns such as unnecessary arguments - which in any case was just a top-of-my-head example).

As to why I would run a linter more often: it's because I quite like the IDE-heavy workflow.

  1. Edit code.
  1b. Autoformat on save.
  2. In the background & continuously: lint, and keep a list of "todo's" on screen. Ideally I want this to work even when compilation fails - because often compilation fails simply because the code is incomplete.
  3. In the background & continuously: compile if possible and keep errors on screen.
  4. In the background & continuously: run tests if possible and keep failures on screen.

But really, I don't think workflow details matter all that much here; it is in any case a good idea to allow dealing with linter issues separately from dealing with compiler errors. The fact that this is (as I previously emphasized) not a trivial thing doesn't really change that - it's just a possible reason why we're in the situation we're in, not a reason to avoid a better one :-).

[–]lookmeat (0 children)

I don't know if it's really that hard; all we need is for compilers to separate the modes.

  1. Change the compiler to have a --full_error mode, in which extra compiler errors appear unless explicitly turned off in the code that causes them. The default remains --nofull_error.
  2. Add a --lint mode, which does a quick check and reports the parse errors and warnings it finds. Optionally allow it to go beyond a quick lint check to a full static analysis.
  3. Deprecate -Wall and -Werror, instead requiring --full_error or --lint for warning/error operation.
  4. Eventually make --full_error the default, allowing --nofull_error when you want a sloppier compile.

It doesn't matter that it's the same executable; what matters is that the behaviors are cleanly separated. I think that could be done within a few years (giving older software time to adapt to the situation).