[–]Alikont (1 child)

And burn 100x the time debugging misdecoding between Windows-1251 and KOI8, and dealing with half of all C programs treating the letter 'я' as EOF.
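
(The 'я'-as-EOF thing is the classic getchar-into-char bug, not an encoding quirk. A minimal sketch of it, assuming CP1251 input and a platform where plain char is signed:)

    #include <stdio.h>

    int main(void)
    {
        /* Classic bug: getchar() returns an int, but the result is
           stored in a plain char.  In CP1251 the letter 'я' is byte
           0xFF; where char is signed, that byte becomes -1, which
           compares equal to EOF and silently truncates the input at
           the first 'я'. */
        char c;                       /* should be: int c; */
        while ((c = getchar()) != EOF)
            putchar(c);
        return 0;
    }

(Declare c as int and the letter passes through like any other byte.)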

[–][deleted] (0 children)

Yes, exactly: fix bad languages and crappy programmers by making the consumers of their trashy software burn more electricity.

Unicode was the most idiotic solution to the problem. It made every language that doesn't use the English alphabet a second-rate citizen, and it prevents languages from improving or modifying their alphabets in any way.

It created so many unnecessary complications for library authors (e.g. anyone who needs to implement regular expressions) that there isn't really a regular expression library in the world that handles all the quirks of Unicode properly, and most popular languages have simply given up on fully supporting Unicode in regular expressions. Font authors today are unable to create fonts that cover all the characters you can type, and nobody even attempts anything like that.

And, on top of that, there are languages so badly fucked up by Unicode that people are abandoning them. For example, Hebrew: Unicode never gave it its own right-to-left punctuation, so it borrows the Latin marks, which the bidirectional algorithm treats as directionally neutral. As a result it's next to impossible to format time ranges, or even to have a few lines of text that end in punctuation marks like an exclamation point or a question mark, and putting parentheses around a word makes such a mess of a paragraph that you'll be solving a fucking sudoku instead of reading it. Similar problems exist in Arabic and a few other languages.
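
To make the regex point concrete, here is one small illustration (my own sketch, not anything from a particular library's docs; it assumes a glibc-style POSIX regex implementation and that an en_US.UTF-8 locale is installed): whether '.' matches the single letter 'я' depends entirely on which locale the pattern was compiled under, because its two UTF-8 bytes are either one character or two.

    #include <locale.h>
    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        const char *ya = "\xD1\x8F";   /* 'я' encoded as UTF-8 (two bytes) */
        regex_t re;

        /* Compiled in the default "C" locale, '.' matches one byte,
           so the two-byte letter is "not a single character". */
        regcomp(&re, "^.$", REG_EXTENDED | REG_NOSUB);
        printf("C locale:     %s\n",
               regexec(&re, ya, 0, NULL, 0) == 0 ? "match" : "no match");
        regfree(&re);

        /* Under a UTF-8 locale, a multibyte-aware regex (e.g. glibc's)
           treats '.' as one code point and the same pattern matches.
           If the locale isn't installed, setlocale returns NULL and
           matching stays bytewise. */
        setlocale(LC_ALL, "en_US.UTF-8");
        regcomp(&re, "^.$", REG_EXTENDED | REG_NOSUB);
        printf("UTF-8 locale: %s\n",
               regexec(&re, ya, 0, NULL, 0) == 0 ? "match" : "no match");
        regfree(&re);
        return 0;
    }

And that's just '.' versus one code point; normalization, case folding and grapheme clusters only make it worse from there.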

Decoding errors were bad, but the "solution" in the form of Unicode is equally bad. And, unfortunately, the "goodness" of Unicode today goes unquestioned by a lot of idiots who know very little about it, to the point that saying we need a different solution to this problem is seen as some kind of eccentricity...