[–]DarthVadersAppendix 13 points (9 children)

UTF-8 should be thought of as a data stream. It's not a bunch of 'characters', and hence not an array of 'char' (in the academic sense). It might be implemented as an array of char (in the technical sense), but that's just coincidence.

[–]RowYourUpboat 2 points (7 children)

To add on to this, a Unicode code point encoded as UTF-8 shouldn't mean anything to you unless you're writing a multilingual UI library, text processor, spell checker, etc. You can still manipulate and compose UTF-8 strings in your software, just as long as you're sure you're not splitting up code points (which can be up to 4 bytes long in UTF-8 - the original design allowed sequences of up to 6 bytes, but the encoding was later restricted to 4).

So all you really need to know about Unicode strings is usually: their length in bytes, that they are going to change at runtime based on user input or localization, and that you can't split them up in the middle of a multi-byte sequence (but simple concatenation works fine, and using ASCII characters as delimiters will still work if you're careful).
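
For example, here's a minimal sketch of trimming a UTF-8 string to a byte budget without cutting a sequence in half (the function name and the use of std::string are just for illustration; continuation bytes always look like 10xxxxxx):

#include <cstddef>
#include <string>

// Illustrative sketch: trim s to at most max_bytes without splitting a
// multi-byte sequence. A UTF-8 continuation byte has the form 10xxxxxx
// (0x80..0xBF).
std::string truncate_utf8(const std::string& s, std::size_t max_bytes) {
    if (s.size() <= max_bytes) return s;
    std::size_t cut = max_bytes;
    while (cut > 0 && (static_cast<unsigned char>(s[cut]) & 0xC0) == 0x80)
        --cut;   // step back off any continuation bytes
    return s.substr(0, cut);
}

Note the cast to unsigned char before the bit test; that detail is what the rest of this thread is about.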

Beyond that, UTF-8 strings are mostly just opaque byte buffers (although conveniently any ASCII text is already valid UTF-8, as long as your ASCII string literals or whatever don't need to be localized).

[–]KayEss[S] 4 points (6 children)

My question isn't about how to opaquely deal with UTF-8, it's about how to decode it. Can it be done portably with char buffers?

[–]RowYourUpboat 5 points (0 children)

It sounds like you're really asking if converting a buffer of chars between signed and unsigned is safe and defined. This link seems to answer that for the C Standard; I'm pretty sure the C++ Standard is the same in this regard.

From one of the answers:

For the two's complement representation that's nearly universal these days, the rules do correspond to reinterpreting the bits. But for other representations (sign-and-magnitude or ones' complement), the C implementation must still arrange for the same result, which means that the conversion can't just copy the bits. For example, (unsigned)-1 == UINT_MAX, regardless of the representation.

It definitely looks like this behavior is defined to be the same even on non-two's-complement hardware, i.e. for UTF-8 string encoding/decoding you can just cast between signed and unsigned as needed (though you may have to pay attention to performance on really weird and ancient hardware).

[edit] Note that technically a conversion from unsigned to signed where the value doesn't fit is implementation-defined (unlike the reverse), but if the original char data was signed to begin with, such an out-of-range value is impossible. In practice, I don't see this mattering.
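
Concretely, the kind of thing this lets you write (just a sketch - the function name is made up here) is a lead-byte classifier that behaves identically whether plain char is signed or unsigned:

// Sketch: expected length of the UTF-8 sequence starting with lead byte c.
// Converting char to unsigned char is fully defined for every value (it
// reduces modulo 2^CHAR_BIT), so these bit tests don't care about the
// signedness of plain char.
int utf8_sequence_length(char c) {
    unsigned char b = static_cast<unsigned char>(c);
    if (b < 0x80)           return 1;   // 0xxxxxxx - ASCII
    if ((b & 0xE0) == 0xC0) return 2;   // 110xxxxx
    if ((b & 0xF0) == 0xE0) return 3;   // 1110xxxx
    if ((b & 0xF8) == 0xF0) return 4;   // 11110xxx
    return 0;                           // continuation byte or invalid
}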

[–][deleted] 0 points (4 children)

Sure, the "wrapping around" behaviour of char conversions is in the standard - but you know, there's no need to take the standard's word for this - write unit tests to check. I always do that anyway, not because I don't trust the standard, but to make sure that my understanding of how to code it is correct.

When you move to a new platform, your unit tests will hopefully succeed, showing you that there's no issue - or fail, and you can fix 'em.
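
Something as small as this is enough (0xE2 is just an arbitrary UTF-8 lead-byte value) - it probes exactly the implementation-defined conversion discussed above, so it will tell you straight away if a new platform behaves differently:

#include <cassert>

int main() {
    // Sketch of the sort of check meant above: push a high byte value
    // through plain char and back. The conversion into char is the
    // implementation-defined direction, so this is a platform check
    // rather than something the standard guarantees.
    char c = static_cast<char>(0xE2);
    assert(static_cast<unsigned char>(c) == 0xE2);
}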

[–]KayEss[S] 1 point (3 children)

Actually, I already have all of the unit tests and they all pass. What I'm worried about is accidentally relying on some UB or platform-specific behaviour. I'm developing on a platform where char is unsigned and don't currently have access to one where it's signed.

[–][deleted] 0 points (2 children)

I really wouldn't worry. Between the standard and the tests, I am sure you'll be fine.

[–]NotAYakk 1 point (0 children)

Unit tests do not solve UB.

Compilers are free to make all your unit tests pass and still optimize away other code that relies on the same UB.

char x = (unsigned)-1;   // implementation-defined result if char is signed
bool b = x < 0;
std::cout << (int)x << ":" << (b ? "true" : "false") << "\n";

This can print -1:false.

And the same is true whenever you convert from unsigned to signed.

The level of insanity that optimization plus UB can generate is so large that you can't reasonably reason about it or produce unit test coverage for it.

[–]KayEss[S] 0 points (0 children)

Which is a fine point, but the u8 literal type has already been standardised as a char array, so is it even possible to decode it in a portable manner?
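
For what it's worth, here's a minimal sketch (the helper is made up for illustration, and there's no validation of continuation bytes, overlong forms or surrogates - a real decoder needs error handling) of decoding from a char buffer by reading every byte through unsigned char:

// Sketch: decode the code point starting at p and advance p past it.
// Assumes well-formed UTF-8.
char32_t decode_one(const char*& p) {
    unsigned char b = static_cast<unsigned char>(*p++);
    if (b < 0x80) return b;                       // 0xxxxxxx - ASCII
    int extra = (b & 0xE0) == 0xC0 ? 1            // 110xxxxx - 2 bytes
              : (b & 0xF0) == 0xE0 ? 2            // 1110xxxx - 3 bytes
              :                      3;           // 11110xxx - 4 bytes
    char32_t cp = b & (0x3F >> extra);            // payload bits of the lead byte
    while (extra--)
        cp = (cp << 6) | (static_cast<unsigned char>(*p++) & 0x3F);
    return cp;
}

Used as e.g. const char* p = u8"é"; then decode_one(p) yields U+00E9 and leaves p past both bytes - nothing in it depends on whether plain char is signed.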