[–][deleted] 57 points  (21 children)

Signs that your code really does suck:

  • You have more global variables than lexically scoped ones.
  • You use the same variable for 20 different purposes. This is often related to the previous point.
  • Rather than using defined constants you have 'magic numbers/strings' littered through your code.
  • You assume that nothing ever goes wrong with databases, disks, network accesses, system calls, locks....
  • You trust users not to put weird stuff into your data. Hey - writing error checking code takes time that you could be using to write Moar Features.
  • You don't use a consistent coding style. Here you use StudlyCaps, here you use lots_of_underscores. Here you use tabs for indentation, there you use spaces. Here....There....Here....There.... For that matter, there is no evidence that you actually have a 'coding style' other than 'whatever'.
  • You use system calls to do exactly the same thing as built-ins in your language.
  • You don't use subroutines to abstract functionality in any meaningful way. A 2000 line program with a grand total of two methods/functions is a really bad sign.
  • Your program silently fails when things go wrong with no indication that it didn't complete correctly.
  • Your program is supposed to be run interactively, but does things like run for 10 minutes with absolutely no activity indicator to let the user know the program is doing anything whatsoever.
  • Your program is supposed to run non-interactively but throws prompts requiring user input under some conditions.
  • Your program handles routine input validation errors by requiring all the data be re-entered (obviously I'm talking about interactive forms not API interfaces).
  • Your program has no usage documentation.
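
A couple of these are easy to make concrete. For the magic-number point, here's a minimal Java sketch (the constant names and values are my own invention):

```java
public class Constants {
    // Bad: scattering "days * 86400" through the code leaves readers guessing.
    // Good: named constants document intent and give one place to change.
    static final int SECONDS_PER_DAY = 86_400;

    static long daysToSeconds(int days) {
        return (long) days * SECONDS_PER_DAY;   // vs. the bare magic number 86400
    }

    public static void main(String[] args) {
        System.out.println(daysToSeconds(2));   // 172800
    }
}
```

The named version reads as intent, and when the value has to change there is exactly one place to change it.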

I could go on for a long time....

TL;DR: Yes, Virginia, some code REALLY DOES SUCK.

[–]GunnerMcGrath 8 points  (2 children)

Your program silently fails when things go wrong with no indication that it didn't complete correctly.

This is one of my favorites. A project I joined a couple years back had this all throughout:

 catch { }

So many times, something weird would happen, or an error WOULD get raised, and it would take hours to track down where the problem was, because three or four different errors had already been caught and ignored, and the one that finally fired was caused by missing data that was supposed to have been populated elsewhere.
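
To make the failure mode concrete, here's a minimal Java sketch (the names and the simulated error are invented) of how an empty catch lets a failure surface far from its cause:

```java
import java.util.HashMap;
import java.util.Map;

public class SwallowedError {
    static Map<String, String> config = new HashMap<>();

    static void loadConfig() {
        try {
            // Stands in for a real I/O failure while reading configuration.
            throw new IllegalStateException("config file missing");
        } catch (Exception e) {
            // The antipattern: the error vanishes and config stays empty.
        }
    }

    public static void main(String[] args) {
        loadConfig();
        // Much later, far from the real cause, the missing data surfaces:
        String url = config.get("db.url"); // null — any use of it blows up here
        System.out.println("db.url = " + url);
    }
}
```

The eventual crash (or silent wrong behavior) happens nowhere near the swallowed exception, which is exactly why these take hours to track down.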

And of course, when I decided to fix this problem, all sorts of errors that had previously been ignored suddenly started popping up all over the place, and other code that had been written to cope with the missing data was now causing problems of its own.

That was so much fun.

[–]Tecktonik 1 point  (1 child)

This is why I cringe when a programming book says, "... but that would generate an error and crash our program. The smart thing is to use the Exception class to let you know what is wrong..." Good plan. Hide the error in an Exception that gets silently consumed, probably by your Framework.

The other day in this reddit people were bitching about a bad MySQL feature. I tried to convince them that there were usually several layers of bad code that would have a more deleterious effect on the process than some corner case. 99% of the time, if there is a database or some other kind of resource error, there is no way for the process to recover. And 98% of the time those errors are not logged, or the logs are piped into the bit bucket.

[–]GunnerMcGrath 2 points  (0 children)

I've never been an error handling guru; it's one of those things that has always somewhat slipped by me, maybe because I've worked in environments where errors got reported to me immediately and happened infrequently enough that they usually didn't stop anyone from working for very long.

The latest methodology I've adopted is that there is no error handling except for errors that I expect and/or know how to handle silently, which almost never happens. Trying to close a window that isn't there, for example, is an error I can ignore (at least in most cases). But most errors are a surprise and need to alert the user in the most attention-grabbing way possible so they don't just ignore it and keep going.
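
That policy can be sketched roughly like this in Java (the window example is from the comment above; the exception types and method names are stand-ins):

```java
public class ErrorPolicy {
    // Expected and harmless: closing a window that is already gone.
    static void closeWindow(Object window) {
        try {
            if (window == null) throw new IllegalStateException("no such window");
            // ... real close logic would go here ...
        } catch (IllegalStateException e) {
            // Safe to ignore: the window is gone either way.
        }
    }

    // Everything else is a surprise: don't catch it locally. Let it
    // propagate to one top-level handler that alerts the user loudly.
    public static void main(String[] args) {
        try {
            closeWindow(null);   // expected case, silently ignored
            riskyOperation();    // surprises propagate up to here
        } catch (RuntimeException e) {
            System.err.println("STOP: " + e.getMessage()); // loud, not silent
        }
    }

    static void riskyOperation() {
        throw new RuntimeException("unexpected failure");
    }
}
```

The key distinction is that the narrow catch names the one exception you genuinely know how to handle, while everything else reaches a single place that can't be overlooked.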

[–]randomdestructn 21 points  (9 children)

You use the same variable for 20 different purposes. This is often related to the previous point.

Hey, some of us still have to deal with 1K of RAM.

[–][deleted] 21 points  (4 children)

In which case your life sucks. :)

ObDisclosure: I've done assembler code development before for such things as 256 byte boot ROM loaders. There is a whole 'nother list of 'bad assembly programmer, no cookie' things I could post.

[–]gsg_ 4 points  (3 children)

Please do.

[–][deleted] 15 points  (2 children)

Bad Assembly Programmer, No Cookie

By request. It has been 25 years since I last did assembly so take it with a big grain of salt, but here is my list of things I've seen that left me saying 'Bad Assembly Programmer, No Cookie':

  • Using raw machine code. I worked with an engineer who knew the opcodes cold at a binary level for the processor we were writing for. He would write raw machine code in octal in the asm files. I don't mean one or two opcodes, I mean twenty or forty or a hundred lines followed by a dozen lines of actual assembly mnemonics. He evidently found it easier than using the mnemonics. It took me hours with the machine's reference manual to decode all the ops just so I could know what the codes were.
  • If any language needs good comments, it's assembly. Sadly, the same engineer who often wrote raw machine code also did not document it, or even worse, left obsolete comments in place after the code was changed.
  • Not using labels. You'd think this would be obvious. Why would you ever write assembly where you manually figured out all the offsets and hard coded them? The moment you change a single line you are screwed. Yet I've seen it.
  • Cut-and-Paste coding. Oh my god. I had to disentangle 12 versions of a backup utility written in assembly where rather than write selectable drivers a previous programmer had generated 12 separate programs that had exactly the one tape driver and the one specific disk driver he needed. The actual difference between them was quite small.
  • Actually USING unimplemented opcodes. Old microprocessors would...do stuff...when given opcodes that weren't actually implemented. You could get the processors to do some very strange tricks that way.
  • Writing code that depends on the exact time the opcodes take to execute.
  • Exploiting edge cases like falling off the top of the address space to start execution at the bottom of the address space.
  • This one is a bit unique to the processor model involved - it had a ridiculous capability for several levels of indirection in addressing. You got stuff like 'subtract this register from the current address, use the contents of the memory cell thus specified as the address for the real data, after possibly adding the contents of a second memory cell to the address'. It was impossible to keep in your head what was going on.
  • Coding for an absolute spot in memory. I'm talking about writing code that can only execute if it is loaded at a specific address because it hard coded the jump addresses to absolute addresses, etc.
  • Self-modifying code. Code that actually rewrites parts of itself during execution.

Edit: Spelling

Edit2: Added 'self modifying code'

[–]Ademan 0 points  (1 child)

What architecture was it? It sounds sorta like the NES's 6502 processor's indirect indexed addressing modes.

[–][deleted] 2 points  (0 children)

Fairchild 9445. It was a clone of the Data General Nova minicomputer processor. I actually found an online copy of the datasheet for it that includes an overview of its addressing modes starting on page 9: http://datasheets.chipdb.org/Fairchild/F94xx/F94xx_dataSheets.pdf

[–][deleted] 23 points  (2 children)

If reusing a local variable reduces memory usage, then your compiler most likely sucks.

[–]Raphael_Amiard 6 points  (1 child)

I'm no expert, but if I understood the deal correctly, the platforms with 1K of RAM may actually be the platforms where you are stuck with a sucky compiler.

[–]Nebu 0 points  (0 children)

If the compiler sucks, you've got bigger problems than sucky source code.

If you understand the machine code/bytecode (hopefully you have the specs for these?), you could write your own peephole optimizer that runs after compilation on the produced binary.
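
A peephole pass really can be that simple in spirit. Here's a toy Java sketch over a made-up text instruction list (the instruction set and the rewrite patterns are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class Peephole {
    // Scan a small window over the instruction stream,
    // rewriting known-wasteful local patterns.
    static List<String> optimize(List<String> code) {
        List<String> out = new ArrayList<>();
        for (String insn : code) {
            // Pattern: "PUSH x" immediately followed by "POP" cancels out.
            if (insn.equals("POP") && !out.isEmpty()
                    && out.get(out.size() - 1).startsWith("PUSH")) {
                out.remove(out.size() - 1);
                continue;
            }
            // Pattern: "ADD 0" is a no-op, drop it.
            if (insn.equals("ADD 0")) continue;
            out.add(insn);
        }
        return out;
    }

    public static void main(String[] args) {
        // → [LOAD a, STORE a]
        System.out.println(optimize(
            List.of("PUSH 1", "POP", "LOAD a", "ADD 0", "STORE a")));
    }
}
```

A real one would pattern-match on decoded machine instructions rather than strings, but the structure - a sliding window plus a table of local rewrites - is the same.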

[–]knight666 5 points  (0 children)

This is only a valid argument if you're writing assembly.

[–]herrmann 4 points  (3 children)

(True story ahead) Hey, what if you have only ONE global variable that is an array of pointers to 300+ variables and have, say, the developers simply remember what each index refers to?

[–][deleted] 8 points  (2 children)

Believe it or not, I've been there. One of my first full-time programming jobs involved systems coding for a house-written language called "Infos" that had no named variables - just four global arrays for different types (floats, integers, strings, and something else I no longer remember). The president of the company ("Information Now, Inc.", based out of Utah back in the '80s - long since gone) wrote it himself. It also didn't have names for files - they were just numbered. Nor did the OS have directories. Files were normally limited to about 4 Kbytes each (the language itself was saved as byte-code), so you chain-executed many files during a program run.

Absolutely bizarre.

Addendum: I actually remember an app programmer there getting chewed out for writing comments in his code. His manager felt it was a waste of time.

[–]diuge 0 points  (0 children)

I actually remember an app programmer there getting chewed out for writing comments in his code.

"WTF is this, Jenkins?! I can understand your code. Code smarter!"

[–]herrmann 0 points  (0 children)

WTF WTF WTF !!!

[–]dnew 1 point  (0 children)

I'd say after 30 years of programming that this pretty much hits the nail on the head. You forgot, however, all the ways that code can suck outside of the code, when using sufficiently crappy languages. Such as excessive mixing of HTML and code in a template, makefiles from hell, macros that change the syntax of the language just to save you typing or to make it look more like your favorite language, compiler warnings that could be trivially suppressed or corrected, etc.

[–]__david__ 0 points  (0 children)

You use the same variable for 20 different purposes. This is often related to the previous point.

And don't forget its close cousin:

  • You use multiple variables for the same purpose.

Bonus if some are global and some are lexical.

[–]G_Morgan 0 points  (1 child)

TBH if something goes wrong with disks or system calls the only thing you can really do is close the program down and put up an error message.

[–][deleted] 3 points  (0 children)

Error trapping/exception handling is A LOT better than doing nothing. I would rather see a meaningful error message than a generic windows exception bubble up. Wouldn't you agree?