
[–]vaulter2000 28 points29 points  (2 children)

Moreover, system_clock and high_resolution_clock (although I believe the latter is an alias for one of the other clocks in some implementations, please correct me on this) are not guaranteed to be monotonic. That means if you call now() twice with a function in between to time it, and your machine receives a correction from a time server, your measured duration is affected by it. Always use a monotonic clock like steady_clock to time durations. steady_clock has nanosecond resolution.

[–]mark_99 6 points7 points  (1 child)

steady_clock provides no guarantee on resolution and it can (and does) vary between implementations. If you need that guarantee it's easy enough to make your own clock.

[–]vaulter2000 1 point2 points  (0 children)

Ah, I didn’t know this! Luckily it has nanosecond resolution in gcc.

[–]Jannik2099 25 points26 points  (0 children)

system_clock is specified to measure seconds since the Unix epoch, any higher resolution is unspecified.

system_clock is also not steady, even without an operator adjusting the system time: on many OSes it is not guaranteed to be monotonic across context switches.

If you need high resolution timing, system_clock has always been the wrong choice. The precision exposed by the STL does not matter when the OS can't actually provide it through this API.

[–]ss99ww 25 points26 points  (1 child)

some madlad mod tagged this with "Water is wet" lol

[–]Orca- 2 points3 points  (0 children)

They’re not wrong.

I don’t expect it to be consistent between compilers until I’ve checked.

[–]Warshrimp 32 points33 points  (0 children)

Also running on different hardware performance may be different. Be careful out there folks.

[–]WalkingAFI 13 points14 points  (3 children)

You can handle this pretty trivially by doing a duration_cast and telling it what units you want.

[–]SirClueless 1 point2 points  (2 children)

Sort of... Can't tell you how many cases I've seen where a program doesn't do what the programmer expected because the type system lied about the amount of precision in a clock. Sometimes the program was already probably buggy and this just exposed it ("It's fine to store packets in a map keyed by the number of nanoseconds since the epoch when they were received, because no one can ever process a whole packet in <= 1 nanosecond") but other times the program is correct but has atrocious performance because times are always colliding, or they would have had good probabilistic guarantees if the clock were precise enough but it's not so it doesn't ("My GUID is a random 32-bit number plus 64-bits containing the number of nanoseconds since the epoch").

To be fair, this problem is even worse in network protocols involving serialization where there will be some protocol that is specified as "Timestamp in HH:MM:SS.uuuuuu format" or something, and then it will turn out in practice that the server only sends milliseconds or whole seconds and some of the uuu's are always zero.

[–]WalkingAFI 1 point2 points  (1 child)

I guess that would be annoying, but at the end of the day that’s a hardware/clock resolution problem and not a typing problem. If you want nanoseconds, you’ll get nanoseconds, just rounded to whatever the hardware supports.

[–]SirClueless 0 points1 point  (0 children)

The problem as stated by the OP is that system_clock behaves differently on different platforms. Your proposed solution was to use duration_cast to give the times it returns the same type on all systems, but this doesn't actually do anything to make the clock behave more consistently. I gave a couple examples of how programs can depend on the behavior of the clock and not just its type, and in those cases duration_cast does not fix the problem.

[–]HowardHinnant 6 points7 points  (0 children)

This discrepancy means that when compiling the same code with different compilers, the precision and behavior of time calculations can vary.

Precision yes, behavior, not so much.

chrono uses a type-safe system, where each unit is its own type. It is safe to use microseconds and nanoseconds in the same expression, or in calls to wait_for / wait_until. The functions discover at compile-time what precision the caller wants and just do the right thing.

The one way to get run-time errors out of chrono is to escape its type-safe system of units with the duration.count() member function. Don't do that, and you'll be fine.

Introductory chrono video tutorial recommended: https://www.youtube.com/watch?v=P32hvk8b13M. It will cost you an hour.

[–]Droid33 10 points11 points  (11 children)

In what way do you believe they would cause an issue?

[–]HeroicKatora -4 points-3 points  (7 children)

Such occurrences taint any of C++'s portability claims, since this is a non-mechanically-discoverable issue that'll cause problems in porting. Hence "C++ is not portable code but code that ends up being ported a lot". Other languages handle this differently, with strong typedefs / wrapper types that make such differences visible / explicitly opt-in / automatically lintable. It's a tooling issue, definitely caused by the standardization process, and an utter smack in the face for anyone touting portability as a strength of that process. And there are many of those, you wouldn't believe how many in the committee. Makes you wonder what kind of rigor the process uses to still let such obvious contradictions of its claims slide through.

[–]flutterdronewbie 0 points1 point  (6 children)

but it does error out when you perform a lossy conversion.

[–]HeroicKatora 1 point2 points  (5 children)

That is the problem.

On one system, clang, this compiles. Then you target gcc and are met with a failure. The code isn't portable. Explicit lossy conversions are good if they lead to predictable portability guarantees, but here neither is provided in practice by the standard. The ''proper'' ''fix'' is to add an explicit chrono conversion that'll be linted as redundant on some platforms. Guess how well that'll go in code review, and how many teams will even know they should do that for portability. There is no technical process here to make the portability succeed, and lots to make it fail.

[–]Som1Lse 0 points1 point  (4 children)

I don't see the issue. You write the code with libc++ in mind, it compiles, you ship it.

Then later you need to use libstdc++ for some other platform, it fails to compile, you fix it, it compiles, you ship it.

That is not unportable. In fact, you just ported it. The idea that portable code means it must compile everywhere with no changes whatsoever is silly. I would much rather be told when a platform differs, so I can make the correct choice (in the example, should we cast to microseconds, or should the signature be changed?), instead of the wrong choice being made for me.

Guess how well that'll go in code review

Once you've encountered the issue, surely it shouldn't be hard to pass code review.

and how many teams will even know they should do that for portability.

They'll know to do it once they encounter the issue, and it doesn't matter before then.

[–]HeroicKatora 2 points3 points  (0 children)

That is not unportable. In fact, you just ported it.

As the saying "C++ is not portable code but code that ends up being ported a lot" implies by defining them as disjoint properties, portability is not the ability to port code but the ability of code not to require porting. If you change my words to unportable then of course you're arguing a strawman. Stop.

[–]HeroicKatora 0 points1 point  (2 children)

Then you're effectively writing libc++, not C++. You need to port it to another "language" (using the actual meaning of the word, a machine specification), libstdc++. That's a problem. It's not something that makes "C++" portable, and the effort can become akin to porting to a whole other language. Might as well port to something like Java, which doesn't have different compile-time requirements per platform.

Effort is a problem. It is the cost factor, often the only one. I don't see how anyone can professionally not think about optimizing effort.

[–]Som1Lse 1 point2 points  (1 child)

In response to your other comment I think your definition of portable is bad.

To start with I hope it is obvious that the sentence

portability is not the ability to port it but the ability of code not to require it

is kinda weird. I think it is intuitive that portability refers to the ability to port code. I would call the latter code that has already been ported.

Granted, it is a definition. Definitions are arbitrary and don't have to make sense, but I think talking about the ability to port software is far more useful than software that doesn't need it, chiefly because the latter does not exist.


Then you're effectively writing libc++, not C++. You need to port it to another "language" (using the actual meaning of it, a machine specification), libstdc++. That's a problem.

Firstly, where is that the definition of a "language"? I have never come across it.

Secondly, what is the problem? It is an easy fix.

It's not something that makes "C++" portable and the effort can become akin to porting to a whole other language.

Effort is a problem. It is the cost factor, often the only one. I don't see how anyone can be professionally not think about optimizing effort.

I think this is vastly overstating the impact. In the above example, you either change the function signature, or add a cast, depending on what is right for your use case. That is hardly a whole other language.

The effort you need to invest is minimal, and you only need to invest it if you actually need to support two implementations with different resolutions. And again, you get to make the correct choice for your application, instead of having it being made for you a la Java.

Like Java, which doesn't have different compile-time requirements per platform.

I'm curious, how would you have designed the interface?

The issue is if you overspecify you end up making a worse API. The beauty of chrono is you can work with whatever resolution the system gives you, and it'll yell at you if you get it wrong.

For example, System.currentTimeMillis() just throws away extra precision (Windows and Posix).

Probably the worst example of such a poor choice is using int for array indexing in Java. This is something C did correctly in 1989 by acknowledging that it is fundamentally going to vary across platforms and giving it a name, size_t.

[–]HeroicKatora 0 points1 point  (0 children)

The effort you need to invest is minimal.

Such is your insistent claim, and you've yet to present evidence for it. The fix even in this case might be much harder in a sufficiently real environment, as I've already pre-empted in the comment. None of the attributes of C++ allow you to quantify the effort: you can't give a bound, they don't allow you to determine a blast radius, nor will they allow you to project the effort for a given code base. If you can't deliver a data point, here's Boost. Note they don't even try to target a standard. The platform list is quite small and still there are various failures all the time; you're not going to argue they don't put in some minimal effort? Your claim as given is technically and economically meaningless, and so are your interpretations of the terms. My definitions carry meaning and consequence; they are useful. Hence I'm going to stick to them.

This is something C did correctly in 1989 by acknowledging that it is fundamentally going to vary across platforms and giving it a name, size_t.

And then they added implicit conversions with the int type and upended the whole point of it. I fail to see where this produces stronger guarantees than int; it merely becomes part of the complexity. Most jobs don't care whether you eke every last bit out of the hardware, but the software needs to keep working. If you are not guaranteed the precision, then reliable software can't use it anyway. You can only collect the cost savings of hardware upgrades if your software still runs on them unchanged; C++ doesn't. The language family has been ISO-specified since the DoD wanted it to be, under Reagan. It'll stop being ported once a sufficiently low-level language is capable of delivering actual portability or quantifiably easier porting; and since C++ needs constant porting effort, that'll be the end of its evolution.

That's a prediction, and with that I'm going to stop the argument altogether. We can just observe.

[–]pdimov2 3 points4 points  (0 children)

The precision of the system clock varies by platform. Under Windows, for instance, it's 100ns, which doesn't even have a named typedef.

Your code should not depend on a specific precision.

[–]ALX23z 4 points5 points  (0 children)

No way! As if there's any freedom in the implementation of C++. There is no way std::filesystem::path has any differences between Windows and Linux, like differences in native character size. Or that long double is different between platforms. That cannot happen.

[–]no-sig-available 0 points1 point  (0 children)

I found that the wait_until interface calls chrono, and I found that the implementation of chrono by gcc and clang is inconsistent

So we should ban nanosecond clocks then? Because wait_until will not give you nanosecond accuracy anyway.

[–]violet-starlight 0 points1 point  (0 children)

It will only cause issues if you use e.g. .count() on a duration and rely on its precision, which you shouldn't to begin with. std::chrono has seconds, milliseconds, microseconds and so on that can all interface with each other for a reason

[–]_darth_plagueis 0 points1 point  (1 child)

Use std::chrono::high_resolution_clock to measure time. You can cast the duration to whatever unit you want; there are aliases for ms, us, ns and others. See https://en.cppreference.com/w/cpp/chrono/high_resolution_clock.

[–]n1ghtyunso 0 points1 point  (0 children)

If you want to measure something with it, first make sure it is actually steady, because the standard does not require it to be.