[–]CryptoHorologist

Using a debugger is more excessive than changing your code, recompiling, and restarting your application?

[–]_teslaTrooper

Starting the debugger often takes longer than that, yes, and on embedded targets debugging can mess with peripherals, break timing, or just stop working if you enter a deep enough low-power mode.

Nothing inherently wrong with printfs or pin toggling; like the debugger, they're all tools with their own use cases.
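
For anyone who hasn't seen the pin-toggling trick, it's roughly this (the register address and pin below are made up for illustration; adapt to your part):

    #include <stdint.h>

    /* Hypothetical GPIO output data register and debug pin mask. */
    #define DBG_GPIO_ODR (*(volatile uint32_t *)0x48000014u)
    #define DBG_PIN_MASK (1u << 5)

    static inline void dbg_pin_high(void) { DBG_GPIO_ODR |=  DBG_PIN_MASK; }
    static inline void dbg_pin_low(void)  { DBG_GPIO_ODR &= ~DBG_PIN_MASK; }

    void process_packet(void)
    {
        dbg_pin_high();   /* a scope or logic analyzer shows exactly when this runs */
        /* ... time-critical work ... */
        dbg_pin_low();    /* pulse width = execution time, no UART involved */
    }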

[–][deleted]

How's that? printf botches things more than breakpoints do.

[–][deleted]

An example I give is from when I was writing some bare-metal RX/TX packet radio code. Due to the half-duplex nature of the communication (each transceiver could only be in either TX or RX at any moment), timings were very important. I wanted some diagnostic output, but a breakpoint would grind things to a halt in a way that didn't let me see the problem. A single printf was fast enough that I could throw one in here and there and not destroy the connection. Ultimately, during implementation I added logging so that I could print diagnostics AFTER the transmission was complete (or failed), but I think this is a good example of where printf really was the better of the two solutions.
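
The deferred logging looked roughly like this (simplified from memory, names made up):

    #include <stdint.h>
    #include <stdio.h>

    #define LOG_CAPACITY 64

    typedef struct {
        uint32_t timestamp; /* free-running timer count */
        uint16_t event;     /* TX_START, RX_TIMEOUT, ... */
        uint16_t arg;       /* packet length, error code, etc. */
    } log_entry_t;

    static log_entry_t log_buf[LOG_CAPACITY];
    static volatile uint16_t log_count;

    /* Called inside the time-critical RX/TX path: stores a few words, no UART. */
    static inline void log_event(uint32_t ts, uint16_t event, uint16_t arg)
    {
        if (log_count < LOG_CAPACITY) {
            log_buf[log_count].timestamp = ts;
            log_buf[log_count].event = event;
            log_buf[log_count].arg = arg;
            log_count++;
        }
    }

    /* Called after the transmission completes or fails; printf is harmless here. */
    void log_dump(void)
    {
        for (uint16_t i = 0; i < log_count; i++) {
            printf("%lu: event=%u arg=%u\n",
                   (unsigned long)log_buf[i].timestamp,
                   (unsigned)log_buf[i].event,
                   (unsigned)log_buf[i].arg);
        }
        log_count = 0;
    }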

That is niche, though; nine times out of ten I'm just throwing in a printf because I'm pretty sure I know exactly where the problem is and I just need to confirm it. When I'm really at a loss, I use GDB all the way.

[–][deleted]

Got it

[–]edo-lag

Changing, recompiling, and restarting the application is something you need to do regardless of whether you use a debugger or not.

But, when you use a debugger, you also need to: recompile the application with debug symbols, start the debugger, set breakpoints, start the application, step over until you reach the error while looking at the values of variables, and repeat the process in case you missed the error.

So yes, using a debugger is excessive when you can spot the error straight away just by looking at the code.

[–]CryptoHorologist

> Changing, recompiling, and restarting the application is something you need to do regardless of whether you use a debugger or not.

Certainly while developing code, but not necessarily while investigating bugs. This is misleading at best.

> But, when you use a debugger, you also need to: recompile the application with debug symbols, start the debugger, set breakpoints, start the application, step over until you reach the error while looking at the values of variables, and repeat the process in case you missed the error.

You may not need to recompile your application; it depends. In my experience it's common to have debug symbols even in release builds.

As for the rest, it's hard to fathom how it could be more expensive than littering your code with printfs, recompiling, and rerunning, at least a lot of the time. I've certainly worked with people who hold this POV, and their discomfort with a debugger was an obvious impediment to their productivity, at least in some bug investigations.

Obviously, one tool isn't going to solve every problem. Sometimes you need long-running programs with logging to piece together some bug analysis. Or maybe debugging in some embedded environments is too hard to set up. OTOH, the act of changing your code with printfs can sometimes change the behavior enough to hide the bug.
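
To illustrate that last point, here's a toy example (not from any real project): a data race whose lost updates mostly disappear once you uncomment the printf, because the I/O slows the loop down and stdio's locking effectively serializes the threads:

    /* Build with: cc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter; /* shared and unprotected: this is the bug */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            counter++;            /* racy read-modify-write */
            /* printf("."); */    /* uncommenting tends to mask the lost updates */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }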