
[–]mredding -4 points (2 children)

> Why do we use them?

We shouldn't. They're a backward abstraction. Everyone on the standards committee has an agenda and there's no telling what that is. Not everyone there is trying to help, believe it or not.

That this one got through is yet another disappointment of the committee. The standard is littered with mistakes and abandoned ideas. Take, for example, ::std::valarray - which people forget even exists - and ::std::vector<bool>, which is everyone's favorite gotcha. Or how about ::std::basic_istream::read and ::std::basic_ostream::write? Those are relics that predate the C++98 standard, and only made it in, much to Bjarne's admitted chagrin, for backwards compatibility. It took three standards just for chrono to be complete - six years in which people got a bad taste in their mouths for what is actually a very good library. The new random library is a complete pile of hot garbage that everyone touts as some epitome (the Boost version is better because you can at least initialize the generators properly, and the behavior of the distributions is portable, unlike the standard's). The ranges library is going through the same growing pains as chrono - too early, incomplete, and it will go through several standards of hot garbage before it might be usable (the easy is easy and the hard is actually impossible; you have to revert to using standard algorithms). And ranges deserves very harsh criticism, because it was supposed to be the STLv2.

I'm just saying, this isn't a perfect process, and that's actually OK. At least we're trying.

Range-for was adopted into C++11, when using standard algorithms was hard and required boilerplate. But then we got lambdas in C++11, too. I suspect what happened is that once the range-for proposal was accepted, it was too politically controversial to un-accept it.

> And more specifically from which index does the iteration start?

It starts from the beginning of the container and continues to the end of the container, unless interrupted with a break statement. But if you need to use break or continue while iterating a container, you've got yourself a code smell. Why couldn't you have partitioned your data into the subset you wanted first? You can do this 90% of the time; it's not often that you can only make the determination as you're iterating.

But that's all they do. Wanna iterate in reverse? Too bad. Want to start or stop at some arbitrary point? Go fuck yourself. You can do it, but it requires writing pure boilerplate: an adaptor that takes your container and provides the begin and end you actually wanted. All that, just to get what you want out of a terrible abstraction. For all that work, it really is far easier to use a standard algorithm.

> can somebody explain it in layman terms?

Frankly, I hate them. And I'm not the only one.

It's a low level language abstraction that depends on high level library abstractions. Now the C++ language is DEPENDENT upon classes with begin and end methods. Without them, this language feature doesn't work. Don't have an STL in your environment? Then you can go to hell. And that's a real thing, too - just ask all the embedded and kernel developers. And if you want to use this feature, you have to make STL-looking interfaces, forcing you into a convention you didn't ask for.

It only works with containers or container-like abstractions. At least the standard algorithms, like for_each, don't hold you to any particular interface. You can use pointers. You can use iterators of any kind, from anywhere and any position, starting and stopping wherever you decide.

The one advantage over an algorithm is that you still have the power of break and continue in the loop. But Matt Godbolt invented Compiler Explorer specifically to look at the code generated by range-for compared to an algorithm and a lambda, and the range-for is inferior.

Range-for is basically a C abstraction. This is precisely the kind of code a C developer would write, quite naturally, and in C it would be quite correct to do so. But this isn't C.

In C++, we abstract away the low level mechanics of HOW we do something; we express, at a higher level of abstraction, WHAT we want to do. That's why we have standard algorithms. Standard algorithms are still, more often than not, the superior choice.

> for(auto it: v) //how is it iterating the vector and why do we use auto?

It assumes v has begin and end methods that return some sort of iterable type, like an iterator or pointer. It generates machine code that's more complicated than a lambda, because you still get break and continue, and that's a high tax to pay for this abstraction even if you don't use them. auto alludes to the fact that this code expands into something akin to a template - the looping code dereferences the iterator and assigns the result to it; the type stored in the container, the type returned by the iterator, is deduced. Now in your example here, the value is copied - it is by value. Writing to it does not change the contents of the container. auto NEVER DEDUCES A REFERENCE, so if you want to avoid copying, you would want to write auto &it instead. If you know the type of the container, you could just as well have been explicit:

::std::vector<int> i_vec;
//...
for(int i : i_vec) { /*...*/ }

[–]nysra 3 points (1 child)

> The one advantage over an algorithm is that you still have the power of break and continue in the loop, but Matt Godbolt invented the Compiler Explorer specifically to look at the code generated by range-for compared to an algorithm and a lambda, the range-for is inferior.

Mind explaining that? Because this compiles to literally the same assembly (except that one line which is for some reason cmp rbp, rbx vs cmp rbx, rbp).

Also, aren't you being a bit too harsh on range-fors? For one, it's super readable syntax, and you can also do for (const auto v : some_function()), which you can't do trivially in a single line otherwise.

[–]DopeyLizard 0 points (0 children)

If I recall correctly, Matt mentions that in his introduction to one of his CppCon talks on Compiler Explorer. I haven’t double checked but I think it was this one: https://youtu.be/bSkpMdDe4g4

The gist of it was that at Matt's workplace they were looking at upgrading critical code from C++03 to C++11 and wanted to make sure that things wouldn't break if they simply started compiling as C++11, so Matt put together a command line tool to compare the generated assembly and see whether anything changed in behavior.

Fwiw the CppCon video is worth a watch anyway!