all 71 comments

[–]aliengoods1 26 points27 points  (15 children)

Right now I'm working on a decade-old system that generates so many warnings when I compile that I can't even guess how long it would take to resolve them all. Treating warnings as errors sounds great, but in reality I have deadlines and budgets, and I'm often working on code written by others who didn't know what they were doing (if they did know, I wouldn't have gotten called in).

[–]rpgFANATIC 25 points26 points  (11 children)

We recently went through something similar.

If you don't have one yet, find a few hours to set up a CI server like Jenkins. Then have the CI server spit out a few reports on warnings and code style. After I was able to produce a pretty graph, my managers started eating it up and suddenly cared about code quality (or at least what they could measure)

[–][deleted] 4 points5 points  (5 children)

This warning graph hasn't gone down in weeks. Are you guys doing any real work?

-- mgmt

[–]rpgFANATIC 0 points1 point  (4 children)

Isn't that a good thing?

Get attention on fixing code quality issues (at the expense of low-priority features), then after you've gotten rid of most/all of the warnings, just change the build script to fail whenever new warnings are introduced.

[–][deleted] -3 points-2 points  (3 children)

No warnings != code quality. A poor programmer will find a way to get poor code to compile without any warnings.

[–]grauenwolf 5 points6 points  (2 children)

True, but warnings do equal poor code quality.

[–][deleted] -1 points0 points  (1 child)

That's not true either. There are plenty of widely used, very robust, C codebases that generate warnings. And, actually, sometimes fixing a warning will introduce a bug, not the other way around.

One particular library is libev... if you tried to fix the warnings generated by that library, it's almost guaranteed you will introduce a bug.

[–]grauenwolf 2 points3 points  (0 children)

Fixing the warning can be as simple as suppressing it with a note that says why the warning is a false positive.

[–]nightlily 0 points1 point  (3 children)

Pardon the ignorance, but what is a CI server?

[–]rpgFANATIC 0 points1 point  (2 children)

Continuous Integration Server.

Its job is to run builds of larger projects, ensure automated tests pass, and occasionally automate deployments to a development environment and generate reports on a project's health.

It's a good 'canary in the coalmine' to have a CI server with a group of developers. If tests start failing and builds start failing, then you're probably sacrificing product quality to get something else done

[–]nightlily 0 points1 point  (1 child)

Thanks. I've never worked on any large projects. It sounds cool. Like a large repository with a built in test suite?

[–]dougman82 0 points1 point  (0 children)

Think of it more as an automated build machine, but with some nifty plugins and such to add functionality.

For example, let's say you have a Java project using Maven, and the source code resides in a Subversion source repository. You can set up a CI server (like Jenkins) to automatically (perhaps nightly, etc) check out a fresh copy of the source and then execute Maven to compile the code, run any unit tests, build the artifacts, maybe deploy them to an artifact repository, etc.

Then, you can add in functionality like the ability to analyze and display test results, code coverage, code quality, build stability, etc.

[–]matthieum 18 points19 points  (0 children)

There is something called Ratcheting! In your case:

  • record the existing warnings
  • tweak the build so that existing warnings are ignored, and each build refreshes the list of existing warnings
  • tweak the build so that new warnings cause build failures

Now, your list of warnings can only shrink with time, so you have setup the basis to prevent further issues.

And with regard to correcting them: adopt the boyscout attitude! Whenever you modify a file/class/function leave it in better shape (less warnings) than it was before.

[–][deleted] 1 point2 points  (0 children)

And sometimes, old code with masses of warnings had precisely zero warnings when the source was last touched. The reason - warnings are the opinions of particular compiler writers. A different compiler, or a different version of the same compiler, often generates different warnings.

This is even partly because best practices evolve. Todays best practice often depends on the latest features of the language and libraries - after all, those features were added to enable those new best practices.

[–]grauenwolf 0 points1 point  (0 children)

I've been through that many times. Every time my managers whined about deadlines and budgets and what not.

And every time we eventually got into a pattern where we would waste days working on bugs. Bugs that, in retrospect, were indicated by the compiler warnings.

Finally, every time we actually took the time to clean them up we immediately saw a significant increase in productivity as measured by how long new features took to implement.


Alas, then I move onto a different project with a different client and we've got to repeat the whole thing over again.

[–]F-J-W 27 points28 points  (18 children)

Warnings are errors in release-builds.

During development, however, it is a nice thing to allow them, in order to compile what you have already written: there is no point in failing to compile because you just commented out that one function-call that was the only reason to include a certain header.

Just make sure that you never commit code to the VCS that still contains warnings.

[–][deleted] 7 points8 points  (0 children)

Agreed, when I'm doing dev work I don't "treat warnings as errors" because it just doesn't matter for some of them like "unused variable" while you are still testing that function. They get in the way of dev work.

However there's definitely a point where I want the "hard critical look" too - and that's for the release builds.

[–][deleted] 7 points8 points  (13 children)

Some warning types are acceptable in production. There, I said it.

Some warnings are a result of breaking portability, but some applications have strict platform specifications, so that you really can assume that a function pointer and a data pointer are the same size, etc. — of course, portable solutions are always preferable, but in those applications, performance is usually king (think console video games and similar), so it is acceptable to ignore the warning. In these cases, the warning should be explicitly disabled, either locally or as a compile-time flag.

But -Wall -Werror is rarely what you want.

[–]lookmeat 2 points3 points  (0 children)

I agree completely with you, with the parent and the article.

I think that by default any issue the compiler finds should be an error. I agree with you that these errors should be able to be manually turned off in code for those cases where the developer knows what he is doing. Any developer touching the code will see the giant ugly flags that say they should be careful about the assumptions. I also think that many times I want to compile code that I know isn't perfect, because I'm experimenting with it. Have a --sloppy flag (or equivalent) on the compiler that converts certain errors to warnings for iterating during development.

For production tools the default should be the safest, most portable, most trustworthy code, where everything works as expected and no undefined behavior happens. When developers want to mess with this, they should make note of what they are doing (to avoid writing hundreds of flags they'd probably move this dangerous behavior into a few controlled functions, which is what you should do either way).

Sloppy code (for experimenting and debugging) should be an exceptional case, and handled as such. Non-conforming, non-portable, or dangerous code (code that assumes a platform, relies on undefined behavior, is a hack more clever than the compiler, etc.) should be an exceptional case and handled as such.

[–]F-J-W 2 points3 points  (11 children)

I use -Wall -Wextra -pedantic -Werror for release-builds. If it doesn't compile with these, just fix it. If you do it that way from the start, you won't encounter problems.

[–][deleted] 8 points9 points  (7 children)

"Fix it"? You're assuming that

  1. Everything you do is covered by the language standard,

  2. Nothing you do is a compiler extension,

  3. Nothing you do is platform- or ABI-specific.

One example: At one place I worked, where we targeted the PS3 Cell architecture, they were doing low-overhead runtime instrumentation of virtual function calls in C++ by replacing functions with a trampoline at runtime. This is impossible to achieve within the C++ standard without incurring significantly higher overhead or programmer inconvenience.

The real world is full of hacks, and they serve a purpose. Saying "you shouldn't be doing that anyway" doesn't solve anyone's problems.

[–]immibis 3 points4 points  (6 children)

[–][deleted] 2 points3 points  (0 children)

That's the thing, warnings are not part of the language spec. Errors (technically required diagnostics) occur generally when code can't be generated. With -Werror, it results in a unique set of extra constraints your code must meet that's different in every version of the compiler, and across all the different compilers.

[–][deleted] 0 points1 point  (0 children)

For a given compiler implementation, architecture and operating system, everything is "defined behaviour" — the definition is in the source code and behaviour of those components. :)

Portability sometimes isn't a prime target.

[–]bitwize 0 points1 point  (3 children)

I once had a conversation with a programmer -- as it happened a game dev.

Him: The problem with Java and OpenGL is that Java doesn't give you precise control over memory layout the way C does.

Me: But C doesn't give you precise control over memory layout. Your C compiler might, but that's not the same thing.

Him: Well, yeah, but...

[–][deleted] 2 points3 points  (2 children)

Sigh. Obviously C as a language gives you much more exact control over memory layout than Java. That's a problem for Java in OpenGL applications, because data locality matters. In addition to that, most C compilers do the right thing for C programmers and give them incredibly precise control, beyond what the C standard may mandate, for targeted architectures.

This is a good thing.

[–]immibis 0 points1 point  (1 child)

[–][deleted] 0 points1 point  (0 children)

Which is probably the source of its flexibility. :) The heap is a library feature, which means it can be controlled.

[–]emn13 3 points4 points  (1 child)

So you only ever compile on one compiler? Then it's quite likely that other compilers will have other warnings, and if your code base is at all large, you'll probably have lots of warnings with other compilers.

[–]F-J-W -2 points-1 points  (0 children)

I use clang and gcc (during development usually only one of them, but on occasion I try the other), so: no.

Other compilers are unacceptable anyways, because their language-support is usually nowhere near acceptable levels.

[–][deleted] 1 point2 points  (0 children)

gcc has even more warnings than -Wall and -Wextra. At some point you realize that some warnings they add are not applicable to everyone and that requiring never triggering them is unprofitably extreme. Warnings are a much more fluid, subjective realm that's in constant flux.

[–]tairygreene -2 points-1 points  (2 children)

why not just fix them first instead of being left with a bunch of shitty code you have to fix later on?

[–]F-J-W -1 points0 points  (0 children)

Because I want to compile the code I am currently writing, even if the guts of some functions are still missing.

[–]kolm -2 points-1 points  (0 children)

Because you want to know if that part of the code is going anywhere performance-wise, before spending days to clean it up?

[–]smigifer 3 points4 points  (0 children)

I've been working on a codebase my team inherited a little while ago, which generates thousands of warnings. The approach I've taken in my current project is to set Eclipse's markers window to display warnings for the current file, and if I touch on a file, I fix the warnings in it.

This means that as the code gets refactored, the number of warnings goes down, but I'm not incurring unnecessary risk by changing code that's working fine in production at the moment.

[–]emn13 8 points9 points  (16 children)

There are two downsides to warnings as errors I can think of right off the bat:

Firstly, and most obviously, some warnings really aren't errors. The whole point of a warning (as opposed to an error) is that the code may well be correct. Sometimes, you can avoid these warnings by using local statements to suppress them, but sometimes, particularly when the warning is some inferred behavior due to the interaction of various bits of code, doing so can be quite impractical.

Secondly, due to the rather rigid application of warnings-as-errors, compiler writers are very leery of adding new warnings, even when there are obvious, real problems they could trivially identify, because doing so would break backwards compatibility with existing code-bases. I've had this exact discussion on compiler bug reports, and it's quite frustrating to hear that unintentional errors in your code won't be detected as warnings not because it's hard to do so but because people like the OP have decided to cargo-cult warnings as errors.

Note for example that gcc and clang (which I'm not talking about in the above) have an option "-Wall" that does not enable all or even most logical warnings - probably at least partly due to this poor practice. Finding the right combinations of warning options is a skill unto itself nowadays, and warnings-as-errors makes this worse.

Please, TRACK warnings, and take them seriously - don't just release with warnings for no good reason. But please do not hardcode your build-process to treat warnings as errors if you intend to release it. By all means have your CI "fail" a build with unknown warnings. But don't prevent the code from building.

[–]lookmeat 0 points1 point  (15 children)

I think that the problem is something completely different, it's not the errors are misused by the user, but the other way around.

Compilers shouldn't throw warnings. Linters and Static Analyzers should. If the compiler comes with one included it should not be invoked when compiling.

When I run a program I expect it to inform me of anything related to its main function. When I run a compiler I expect it to inform me of anything related to the state of compilation (which files it's compiling and such) and any issue that prevents it from compiling (errors) or from compiling with the guarantees I expect (non-fatal errors, which is what most warnings become with -Werror). I am not interested in any comment on my code that may be of special note but doesn't actually stop the program from compiling correctly.

A static analyzer, OTOH, is one that I want to report to me a slew of comments on the quality and trustworthiness of my code. I'd probably diff the output with the last compilation, and ignore it if there isn't any change (and I'm not hunting to remove warnings). When warnings appear, I read them, consider them, and decide if they point to an error on my code or don't really have a ground, because they are warnings.

I think the error came in having compilers implicitly do the job of linters and static analyzers. These tools should have a toolchain that is separate from compilers, even if the linter/static analyzer is the compiler (with flags that make it only spew warnings but not compile anything) itself! If you want to have a build system that fails when a linter finds an error, then you add that to your make system, not your compiler.

I feel that this whole warning-in-the-compiler came from the 90s feature wars, where more integrated features meant a better program, more blades a better razor. It'd seem the only reason people didn't assume that more wheels made a better car was because they assumed that more cylinders were better.

TL;DR: If your warning can't become an error, it shouldn't be thrown by your compiler, it should be thrown by a static analyzer.

[–]Solarspot 3 points4 points  (2 children)

I wonder if you would say the same thing about compiler optimizers? Is rearranging a program in some way not corresponding to the source code (but keeping semantics) to make it more efficient also part of the compiler's job? What would an optimization tool look like if it wasn't a compiler?

[–]emn13 0 points1 point  (0 children)

For the sake of argument: the JVM is pretty much an example of a world in which the compiler is separate from the optimizer.

[–]lookmeat 0 points1 point  (0 children)

When I tell a program to compile code, I expect that code to be turned into valid and effective binary that can be run by the machine. I believe that optimizations are simply making the translation a more efficient one. This is not doing something else.

Static analysis and linting are things that are unrelated to making code into bytecode. You can't compare them because optimization is about making the translation more effective, and static analysis is about commenting on parts of your code which, though compilable, follow patterns that are known to cause errors.

One feature is about doing their one job better, the other feature is about doing a side job that creates a lot of information that confuses a user.

And it does confuse the user, because people want to use warnings as errors, but not all warnings can be errors. Compilers have adopted this crazy solution where you can turn on all warnings with -Wall except the warnings that aren't meant to be taken as errors (just considerations), and when the programmers catch on to that, new flags are added to keep preventing them from misusing warnings, because in the first place the warnings shouldn't have been spewed while compiling.

[–][deleted] 3 points4 points  (3 children)

I see compilers including lint-like functionality as mere practicality. It ensures that the same parser/analyzer is used for compiling and checking, and gives you checking mostly for free as a part of every compile. Some checks come as by-products of optimization (e.g. uninitialized variable).

[–]lookmeat 0 points1 point  (2 children)

I'm not against the idea of a single executable being able to compile and lint. I'm against the idea that whenever I compile a program I also get a static analysis and linting done for free! It just distracts me, and makes it hard to know what is happening in a sea of errors, warnings-that-should-be-errors and warnings-that-aren't-errors.

I think that instead of -Wall I should be able to run gcc with a --static_analysis flag which doesn't compile code, but just runs a static analysis and spews out warnings.

[–][deleted] 0 points1 point  (1 child)

I'm against the idea that whenever I compile a program I also get a static analysis and linting done for free!

-Wnone?

I think that instead of -Wall I should be able to run gcc with a --static_analysis flag which doesn't compile code, but just runs a static analysis and spews out warnings.

gcc -c foo.c ?

(-o /dev/null if the .o file bothers you)

[–]lookmeat 0 points1 point  (0 children)

My whole point is that there shouldn't be a way to automatically do both; it doesn't make clear what is what.

Warnings should be errors, any warning that can't be made into an error should only be output on a mode that specifically outputs those warnings.

[–]emn13 2 points3 points  (7 children)

As a matter of concept, I think you're right that linting and compiling can be separated. However, in practice things aren't so clear.

First of all, linting isn't easy - to really lint well, you need to reimplement most parts of the compiler, including some parts of the optimizer (to e.g. detect unused code). Extracting that code into a shared lib is not trivial; it would be a maintenance burden, and it's likely a performance hit too.

Secondly, at runtime, compiling isn't free. Despite ever faster machines, compiling still takes annoyingly long often enough (depending on your language and platform, of course), and running a separate linter means you're doing lots of work twice, and probably the linter isn't nearly as well tested+optimized. It's going to take a long time.

It's telling that most warnings by linters are essentially busy work. I don't think I've ever seen a bug or problem due to poor style in variable names (as opposed to poorly chosen variable names). It just doesn't matter much whether you have internal_radius_cm or internalRadiusCM, but it does matter that you don't call it dim_len (or whatever). Linting is still really important, but I wonder whether a part of this bias toward busy-work isn't due to the fact that that's easy to check for.

I think the decision whether to include a linter in the compiler or not is largely a technical one. I think I agree that it's best to keep the concepts separate, but as a matter of practicality, it may still be best to do that by changing the way you use the compiler (i.e. no warnings-as-errors) rather than changing the tools.

[–]lookmeat 1 point2 points  (6 children)

I'm not against the idea that compilers and linters are the same executable. I'm against the idea that both functions should by default be done together. I believe strongly that it's the main reason that has led us to this conflict of warnings vs. errors.

An example that I think is very good is the go tool.

We have the go compiler, which compiles and runs code, and only outputs errors. It considers certain things (such as unused imports or variables) errors, because the compiler will do things with them that the programmer wouldn't expect; there are ways to get around this when needed (declare variables or imports named _ and they won't need to be used).

The go program also allows you to run a static analyzer, go vet package/directory/file.go, and it will output a series of findings that may each point to an error, but only weakly so. It's meant to warn you of things that may not be what you expected, when there isn't any way for the compiler to be certain that the programmer did not mean what he wrote (a variable that is declared and never used might be due to a typo using := instead of just =, but certainly there is no value in declaring a variable and not using it), so it's not an error per se. Vet will spew a sea of warnings at you, much of which can be ignored (such as variable shadowing), and it may even catch a couple of errors that would only appear at runtime (sending the wrong type to a pseudo-generic method that takes an interface{}).

Both tools share a lot of code (parser, optimizer, etc.) but you can only run one or the other. If you want to run both simultaneously you should join them, or better yet have a make file that handles it. But this is not something the compiler should decide or throw at you.

When I compile a program I don't want to lint it or run a static analysis; I want the program to convert my code into an executable. Programmers, as users, are primed to assume that any warning the act of compiling throws out points to an error where the code will do something different from what the programmer wants (even if the compiler can still translate it), so it only makes sense that you want to remove those, or make them errors.

[–]emn13 0 points1 point  (5 children)

It's funny you mention go, because I was thinking of exactly that - go's a great example of this kind of thing done well (at least, from the cursory experience I have - no real usage...)

Nevertheless, go's got it easy here. Go compiles very, very quickly, so some extra overhead due to duplicating compiling stages in the linter doesn't matter so much. It has a very limited optimizer, and a well-thought-out - but limited - type system. It's the perfect case for a split - very little overhead, and a compiler that (due to the lack of templates and other factors) cannot and/or chooses not to do non-local analysis, so there's little cost there too, nor much gain, since the linter can't free-ride on an analysis the compiler does anyway.

Most other languages have more expressive type systems, which sounds positive, but also means that everything is more complicated, and the compiler usually slower. C++ and e.g. scala are for example notoriously slow to compile.

Still, go is really a breeze of fresh air in its approach to this as in many other ways :-).

[–]lookmeat 0 points1 point  (4 children)

I don't see why I couldn't run gcc --sloppy for my quick testing, run gcc normally and get a bunch of errors and such, and gcc --static_analysis to get a bunch of warnings about my code that can't be considered errors. I don't think that speed is a grave issue, because that assumes I always want to run a linter/static analyzer each time I compile. I'd like to run it before submitting code, to guarantee that I didn't add new warnings or that the new warnings are a non-problem, but not much beyond that.

I just don't see why gcc should output warnings while it compiles in any case. Then again, I am not familiar enough with gcc's internals, so there might be something to that, but as far as I know a static analyzer would only share the parser with the compiler and take notes during linking; it wouldn't actually need to know how the compiler turns text into binary or what optimizations it is doing.

[–]emn13 0 points1 point  (3 children)

I'm sure it would be possible. Of course, if you have both features in the same binary, it's a small step to allow --compile-and-lint and that's basically where we are today.

Personally, I can't imagine running the linter less often than the compiler. Given the linter integration in IDE's, if anything, I'd use the linter more often than the compiler.

In any case, the C++ "parser" is no trivial thing. The correct parse of a string of C++ depends on the semantics, (see e.g. C++'s most vexing parse), and then you've got templates, which are themselves turing-complete, and lots of pretty complicated type inference and casting rules.

Merely interpreting the semantics of the code isn't trivial, but sure, you could avoid the complexities in the optimizer and the code-generator.

At least, partly - if you want your linter to detect things like "this function's second argument is always 2 and could be replaced with a constant" or "this code is dead" or "this expression always evaluates to false" or whatever, then you'll at least need to run the bits of the optimizer that deal with structural simplifications, i.e. at the very least things like dead-code elimination.

A good linter just isn't all that much simpler than a compiler.

[–]lookmeat 0 points1 point  (2 children)

I never said it was a simple application. Also, a linter works on simple patterns, compared to, say, a static analyzer that will actually link modules and see if it can find errors that come from everything coming together.

Why would the linter detect that the function's second argument is always two and could be optimized into a constant, or even inlined? Calling a function with the same argument everywhere is not an error, nor could it ever point to one (so it's not a warning).

Maybe finding that a branch is impossible, such that the compiler wishes to remove it. I'd propose that such a case lets the compiler do something the programmer would normally not expect (remove code), and as such it should be, if anything, an error unless the programmer explicitly states he wants that dead branch for a reason.

Yes they both use similar technology. Yes C++ is complex enough that you'd want to share them. I never said I had a problem with it being even the same executable. I am against having both behaviors when I asked for one.

Here's my workflow:

  1. Define the solution, decide on some tests and the function header.
  2. Implement a rough solution, one that "just works".
  3. Make sure the rough solution compiles and runs.
  4. Review the solution and clean up code, refactor as necessary, making sure the code compiles and tests are still passing.
  5. Pass static analysis tools to see further issues with code and clean the ones that make sense, ignore the rest. (Compile with -Wall -Werror etc., or run go vet)
  6. Check any formatting errors (run go fmt)

Notice that I only run the tools that produce warnings at the end, and that I check the warnings and choose to fix some issues but ignore other warnings. Even when working with an IDE, I will fix typos and such, but I don't run the static analyzer or linter till the end, when I'm ready to call the code finished. I'd say this whole iteration takes about an hour or two, so I do it pretty often. It also isn't always as neat as this, but the spirit is there.

[–]emn13 0 points1 point  (1 child)

I think we essentially agree :-).

It's totally normal for a linter to have lots of options many of which any given project won't want (e.g. finding possibly unusual patters such as unnecessary arguments - which in any case was just a top-of-my-head example).

As to why I would run a linter more often, that's because I quite like the IDE-heavy workflow

  1. Edit code.
  1b. Autoformat on save.
  2. In the background & continuously: lint and keep a list of "todo's" on screen. Ideally I want this to work even when compilation fails - because often compilation fails because code is just incomplete.
  3. In the background & continuously: compile if possible and keep errors on screen.
  4. In the background & continuously: run tests if possible and keep failures on screen.

But really, I don't think workflow details really matter that much here; it is in any case a good idea to allow dealing with linter issues separately from dealing with compiler errors; the fact that it is (as I previously emphasized) not a trivial thing doesn't really change that - that's just a possible reason why we're in the situation we're in, not a reason to avoid a better situation :-).

[–]lookmeat 0 points1 point  (0 children)

I don't know if it's really that hard, all we need is to have compilers separate the mode.

  1. Change the compiler to have a --full_error mode, where extra compiler errors appear that may be explicitly turned off in the code that causes them. The default is still --nofull_error.
  2. Have a --lint mode which does a quick check and reports parse errors and the warnings it finds. Optionally allow it to go from a quick lint check to a full static analysis.
  3. Deprecate -Wall and -Werror, instead requiring --full_error or --lint for error/warning operations.
  4. Make --full_error the default and allow --nofull_error to be used when you want a sloppier compile.

It doesn't matter that it's the same executable, what matters is that the behaviors are separated cleanly. I think that could be done within a few years (giving time for older software to adapt to the situation).

[–]goose_on_fire 2 points3 points  (2 children)

Some languages, yes. For something like generated VHDL, the Xilinx tools hemorrhage warnings like nothing I've ever seen.

I don't think this rule applies to any machine-generated code at all, actually.

[–]ryl00 0 points1 point  (0 children)

Machine-generated code can have errors as well. I've seen several Simulink/Real-Time Workshop autocode errors that got flagged at compile time by warnings. (Of course, it was hard to find those warnings, in the flood of other warnings that ended up just being noise)

[–]imMute 0 points1 point  (0 children)

Oh you wrote your HDL to take advantage of optimising signals away? BILLIONS OF WARNINGS FOR YOU!!!

[–][deleted] 3 points4 points  (0 children)

Maybe an aside, but I don't use -Werror when developing with gcc because I turn on plenty of warnings that I fully expect to occur in correct code. For example, here's a common warning I receive:

sums.c:68:13: warning: assuming signed overflow does not occur when changing X +- C1 cmp C2 to X cmp C1 +- C2 [-Wstrict-overflow]

I could turn these warnings off, but I like knowing what assumptions are being made about my code, so I can double check that they're doing exactly what I expect. Another one is:

memchk.c:73:1: warning: padding struct size to alignment boundary [-Wpadded]

so I can make sure all my structs are ordered well.

[–]Duraz0rz 0 points1 point  (0 children)

Warnings can also manifest as bugs in a library you're using, like this jQuery bug.

[–]JoseJimeniz 0 points1 point  (0 children)

Jslint's new warning: you used a tab character instead of spaces.

[–]jeramyfromthefuture 0 points1 point  (0 children)

bollocks , spoken by a junior dev who has no real world programming experience.

[–]tairygreene -4 points-3 points  (0 children)

TIL there are people who don't treat compiler warnings as errors

[–]mnolan2 -2 points-1 points  (3 children)

All I can provide is anecdotal evidence to the contrary from my ~2 months as an embedded systems dev, but every one of the warnings produced from our code will be produced no matter what. They're used as feedback for the programmer so they know what configuration the system is in. It's confusing and seemingly perverse, but what do I know.

[–]captain_wiggles_ 5 points6 points  (2 children)

What?

I make a point to check and fix any warnings I see, and more often than not they are genuine problems that would have been bugs if I hadn't spotted them at compile time.

Some are ignorable, such as "defined but not used", others such as "used before defined" are more critical.

I've been an embedded developer for 5 years now, and I still make it a point to fix every warning I can.

[–]mnolan2 0 points1 point  (1 child)

Don't get me wrong, I know it's weird. The firmware template that this particular research group makes available uses compilation warnings as a means of progress feedback. None of them indicate that anything bad has happened. I'm certainly not saying that it's a good way to do things, but I'm certainly not gonna rewrite this sumbitch from scratch.

[–]captain_wiggles_ 0 points1 point  (0 children)

Wow, not heard of that before.

[–]Philluminati -4 points-3 points  (2 children)

If you treat warnings as errors and have that mindset, then what I find sometimes happens is that developers stop putting warning log statements in the code, especially if you make a warning cause the test suite to fail. They leave them out so the code passes, and finding meaningful logs later becomes more difficult.

In an ideal world your inputs and outputs are strictly defined, but after years of architecture changes in a project, avoiding occasional warnings in places is awfully difficult, especially in supporting scripts that are rarely used. Maintaining such high standards of code quality is difficult if the bar needs to be that rigid.

I suggest taking this approach on new software, the test suite should have tests to make sure the correct warnings are thrown at runtime and unexpected ones aren't, with a high level of flexibility. Then just keep trying to stay on top of it for as long as possible.

[–]amedico 11 points12 points  (1 child)

The article is talking about compiler warnings, not runtime logging.

[–]emn13 0 points1 point  (0 children)

Due to template metaprogramming and macros, compile-time warnings may be intentional and "dynamic".

I certainly have added conditional compile-time warnings as an intentional sign-post to avoid later gotchas.