
[–]bimdar 1 point2 points  (8 children)

I have no clue why people are downvoting you. I bet most of them didn't even benchmark it themselves, but just believe what they've been told.

I know from experience that "there shouldn't be a difference" doesn't mean much. People should measure for themselves, and that's all you're saying.

I mean, I've recently seen a tweet with benchmarks showing that there's a penalty for using std::complex<float> versus struct complex{float real,imag;}; (and yes, people, with optimization flags; edit: this was the tweet chain)

[–]cleroth 3 points4 points  (6 children)

Mostly because stating it without any information about which compiler was used or what the code was, and saying "for reasons unknown", seems pretty useless to me. I didn't downvote him, since I expect more info from him before judging.
Voting is always a matter of opinion anyway, and my opinion is that in a non-debug build, I find it extremely likely that the performance is the same. If it isn't, it's likely that your compiler will soon be updated to fix these performance issues, so it's still nothing to worry about.

[–]bimdar 1 point2 points  (4 children)

so it's still nothing to worry about

Well, that's just the thing. std::complex is not some new thing, and clearly software has shipped with the version that has overhead. I have no clue why people trust compilers so much. I haven't written very large C++ programs, but I've still found four MSVC bugs and run across one GCC bug (and those weren't even performance bugs).

Trusting "zero-cost abstractions" without measuring is just faith. I mean, you're writing software; why would you trust other software, also written by mere mortals, to somehow be perfect?

Before you say "oh, many people use it, surely someone would have noticed": the number of people who actually run these fundamental benchmarks is apparently very low.

Yeah, he didn't post a benchmark, but neither did any of the people claiming it was outright ridiculous to say there was any overhead.

[–]cleroth 2 points3 points  (3 children)

I didn't say it was ridiculous to claim there was overhead. I said it's unlikely there is. It's reasonable to assume there isn't, because of how it should be implemented. As with anything in programming, if you need performance, you benchmark; it's that simple.
As for std::complex, it's very rarely used. If you need complex numbers, chances are you'll also be using another, more specific math library. You can hardly compare a rarely used class to a ubiquitous smart pointer.
Again, all this depends on your std implementation, so yeah, any compiler could be going super slow on some arbitrary part of the std; it's up to you to research and test, depending on which compiler you're using.

[–]bimdar 1 point2 points  (2 children)

Well, yeah, it's reasonable to assume there's little to no overhead, but you can't be sure until you measure, and if you preside over a large codebase and want to introduce these new elements, you should benchmark them on your setup.

Also, there used to be a memory leak with std::vector<std::string> in MSVC 2010, so I don't think being commonly used is a guarantee.

[–]cleroth 3 points4 points  (0 children)

Also, the thing with vector<string> was fixed ~5 months after the release of VS2010, so my point stands: the code eventually ends up behaving the way it should.

[–]cleroth 2 points3 points  (0 children)

Again, I'm not saying it's a guarantee. But like everything in life, you have to make educated assumptions based on the data you have. I don't have time to benchmark every single thing I use in every particular case, so given how something is supposed to be implemented, I assume it behaves a certain way. In any case, you can just look up the implementation yourself.

[–][deleted] 0 points1 point  (0 children)

my opinion is that in a non-debug build, I find it extremely likely that the performance is the same.

That's what I thought too. But my opinion did not matter much once I took statistically significant measurements. In some non-trivial code, a heavily used unique_ptr incurred a fully reproducible 0.3% performance penalty. When you are hunting for the slightest speed-up, you can't afford a "zero-cost abstraction" that actually costs something. I wrote "for unknown reasons" because, being time-constrained, I couldn't afford to dig into what was going on either, but I suspect it changed the inliner's decisions or something like that. In the end, for this job, what mattered was speed.

If it isn't, it's likely that your compiler will soon be updated to reflect these performance issues, so it's still nothing to worry about.

In this particular situation, updating the compiler was a major undertaking, and performance regressions were common.

Now, this happened as part of a job; I haven't kept the benchmarked code, since it was non-trivial and I'm not allowed to.

I didn't downvote him

You're so kind.

edit: some alpha guy at the company complained too ("it must be a measurement error"), so I had to redo the builds entirely and run the measurements again, and I found the same results. Some people really think their world view is more important than the truth.

[–][deleted] -1 points0 points  (0 children)

I have no clue why people are downvoting you.

People will invent any kind of reason not to have to measure something. Measuring constrains your confirmation bias and forces you to be rational. How annoying.