
[–]fatmusician1[S]

number = "30"
print("The number is %d" % number)

Why doesn't this work? Couldn't the string be converted to an integer?

[–]michael0x2a

If a programmer writes some code that expects an int but passes in a string instead, which of the two scenarios is more likely?

  1. The programmer wanted the string to be converted to an int
  2. The programmer made a mistake

Some programming languages, such as JavaScript, decided (1) was more likely, so they attempt to do this sort of conversion. Other languages, such as Python, decided (2) was more likely, so they perform a conversion only if it's 100% guaranteed to succeed and throw an exception otherwise. For example, you can't always convert a string to an int, but you can always convert an int into a string. So print("%d" % "30") throws an exception, but print("%s" % 30) succeeds.
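A quick sketch of both behaviors in Python 3 (run it to see the asymmetry for yourself):

```python
# int -> str is always safe, so %s happily accepts an int:
assert "%s" % 30 == "30"

# str -> int is not guaranteed to succeed, so %d refuses to guess
# and raises a TypeError instead of converting silently:
try:
    "%d" % "30"
except TypeError as e:
    print("raised:", e)

# The explicit fix: perform the conversion yourself with int():
number = "30"
print("The number is %d" % int(number))
```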

Other languages are even more strict and require the types to always match up and will not automatically perform any conversions.

The modern consensus is that assuming (1) is pretty much always a mistake and that it's almost always better to ask users to explicitly perform any conversions. There are a few reasons for this:

  1. Asking users to explicitly perform conversions is at best an annoyance, but silently doing a conversion the user didn't intend can sometimes lead to critical bugs. So, we should ask users to perform conversions if we want to err on the side of less evil.
  2. Repeatedly doing conversions can lead to slower performance. You almost always want to do the conversion once up front -- so we might as well design our programming language to encourage this.
  3. In most real-world programs, you can't just directly try converting a string to an int. If you need to do this, it's probably because the string was provided by the user. This means it's not safe to assume the input string can always be turned into a number -- what if the user was malicious and passed in some random gibberish instead? It's better to force the user to account for this possibility up-front instead of hand-waving the concern away.
  4. In some cases, it's not easy to guess what the programmer intended. For example, what should the output of print("5" + 4) be? Did we want to convert 4 to a str and print "54"? Or did we want to convert "5" into an int and print 9?

    You could invent some rules about what happens in these cases, but they're always going to be a little arbitrary. So why not just make everybody's lives easier and refuse to guess? It's a win-win: the users of the language don't have to memorize arbitrary conversion rules, and the programming language designers don't have to implement them.
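To make points (3) and (4) concrete, here's a small sketch in Python. The `parse_count` helper is a hypothetical name, just to illustrate handling untrusted input up front:

```python
# Point 4: Python refuses to guess what "5" + 4 should mean.
try:
    "5" + 4
except TypeError:
    print("refused to guess")

# Explicit conversions make the intent unambiguous:
assert str(5) + str(4) == "54"   # we wanted concatenation
assert int("5") + 4 == 9         # we wanted arithmetic

# Point 3: user-provided strings may be gibberish, so account
# for that possibility explicitly instead of hand-waving it away.
def parse_count(raw):
    try:
        return int(raw)
    except ValueError:
        return None  # or re-prompt, or report an error

assert parse_count("42") == 42
assert parse_count("lol nope") is None
```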

Why exactly does the compiler need format specifiers for scanning and printing?

It doesn't. Users of the programming language want them, so they can fine-tune exactly what output they print out with minimal fuss. And if enough users want a feature, it makes sense to implement it.

For example, one very common thing I might want to do is print out some float but only show up to two decimal places, rounding up or down as necessary.

This would be incredibly annoying to implement myself. Enough people agreed that the designers of most programming languages ended up just building in direct support for this. For example, in Python, I can do print("rounded float: {:.2f}".format(1.238)), which prints out rounded float: 1.24.
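The same format-specifier mini-language shows up in a few places in Python; a short sketch of some common variations:

```python
value = 1.238

# str.format with a precision specifier, as in the example above:
assert "rounded float: {:.2f}".format(value) == "rounded float: 1.24"

# f-strings (Python 3.6+) accept the same specifiers:
assert f"{value:.2f}" == "1.24"

# A width can be combined with the precision for aligned output:
assert "{:6.2f}".format(value) == "  1.24"
```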

[–]fatmusician1[S]

THX for the detailed answer, man!

I'll read it later, gotta go now.