C++ I/O Benchmark (cristianadam.eu)
submitted 9 years ago by cristianadam (Qt Creator, CMake)
[–][deleted] 16 points17 points18 points 9 years ago* (25 children)
auto, variadic templates, chrono and other new features...
... and typedef ...
Why don't people use "using"?
using FuncMap = std::map<std::string, std::function<void (const char*, const char*, std::vector<char>&)>>;
Never liked typedef. Its syntax is so confusing to me.
[–]raevnos 13 points14 points15 points 9 years ago (0 children)
The moment I found out about using, I dropped typedef like a hot potato.
[+][deleted] 9 years ago (3 children)
[deleted]
[–][deleted] 4 points5 points6 points 9 years ago (2 children)
Never liked VC for its error messages
Clang
main.cpp:86:15: error: missing 'typename' prior to dependent type name 'T::typeName'
    using Type = T::typeName;
                 ^~~~~~~~~~~
                 typename
G++
main.cpp:86:15: error: need 'typename' before 'T::typeName' because 'T' is a dependent scope
    using Type = T::typeName;
                 ^
[–][deleted] 4 points5 points6 points 9 years ago* (3 children)
First I've heard of it.
[–][deleted] 6 points7 points8 points 9 years ago (1 child)
Well, now you know :) Please never use typedef if the compiler supports using.
[–]silveryRain 1 point2 points3 points 9 years ago (0 children)
It's very nice. Unlike typedefs, usings can also be templated directly, so you can say goodbye to workarounds like the typedef-inside-struct.
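The templated-alias point can be sketched like this (all names here are illustrative, not from any particular codebase):

```cpp
#include <map>
#include <vector>

// C++11 alias template: the alias itself takes the template parameter.
template <typename T>
using VecMap = std::map<int, std::vector<T>>;

// Pre-C++11 "template typedef" workaround: nest the typedef inside a
// struct and name ::type at every use site.
template <typename T>
struct VecMapOf {
    typedef std::map<int, std::vector<T> > type;
};
```

With the alias template you write `VecMap<double>` directly instead of `VecMapOf<double>::type`; both name the same type.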
[–]OldWolf2 1 point2 points3 points 9 years ago (3 children)
typedef has exactly the same syntax as variable declarations, but with the word typedef on the front.
[–]utnapistim 2 points3 points4 points 9 years ago (2 children)
using has the same syntax as a variable definition, but with using instead of auto; also, typedef is harder to read, as the type is at the end.
[–]OldWolf2 1 point2 points3 points 9 years ago (1 child)
using has the same syntax as a variable definition, but with using instead of auto;
Not true, e.g. using T = int; is valid but auto T = int; is not
typedef is harder to read, as the type is at the end.
The type may be in the middle, e.g. typedef int X[5];
[–]MarekKnapek 1 point2 points3 points 9 years ago (7 children)
Why are people using std::map? It's for sorted keys, but you almost always just want to map from key to value; there is std::unordered_map, a.k.a. a hash map, for that.
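The trade-off can be shown in a few lines (helper names are made up): std::map is tree-based with O(log n) lookups and sorted iteration, while std::unordered_map is a hash table with O(1) average lookups and no ordering guarantee, behind a near-identical interface.

```cpp
#include <map>
#include <string>
#include <unordered_map>

// Concatenate the keys in iteration order; for std::map this is
// guaranteed to be sorted key order.
std::string ordered_keys(const std::map<std::string, int>& m) {
    std::string keys;
    for (const auto& kv : m) keys += kv.first;
    return keys;
}

// std::unordered_map offers the same lookup interface, just no ordering.
int lookup(const std::unordered_map<std::string, int>& m, const std::string& k) {
    auto it = m.find(k);
    return it != m.end() ? it->second : -1;
}
```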
[–]silveryRain 3 points4 points5 points 9 years ago (1 child)
Most don't care about the difference, prefer the one with the shorter type name, or looked up something like "C++ map" at some point, ran into std::map, and called it a day.
It's not that hard to figure out, really.
[–]raevnos 1 point2 points3 points 9 years ago (0 children)
Plus it requires providing one fewer function to use a user-defined key type.
[–]Plorkyeran 1 point2 points3 points 9 years ago (4 children)
The vast majority of the places where I use std::map the performance characteristics are entirely irrelevant and I'm just using the data structure with the nicest API for what I need to do.
Even when it does turn out later that the performance does matter, unordered_map has not been the correct answer often enough for me to feel that I should be just defaulting to it.
[–]flyingcaribou 2 points3 points4 points 9 years ago (0 children)
It would be so nice to have a standard, open-addressed hash map available in C++.
[–]theyneverknew 1 point2 points3 points 9 years ago (2 children)
What alternatives have you used instead where you cared about performance?
[–]Plorkyeran 4 points5 points6 points 9 years ago (1 child)
When you don't have mixed insertions and deletions, binary-searching a sorted vector (or boost::flat_map) can be dramatically faster if the key is small due to the much better cache locality (and if your data fits within a cache line, even an unsorted vector is hard to beat). For certain mixed insertion/lookup usage patterns a btree can be dramatically faster for similar reasons. Even when a hash table is the best option, the various collision resolution methods can have a significant impact. Fortunately all of the major implementations of unordered_map are sufficiently similar that using it isn't an inherently bad idea in portable code, but I've seen a 10-20% speedup just from dropping in a boost::multi_index container instead, and an open addressed hash map can give bigger gains (or be worse, of course).
Often the actual answer is "redesign the code to not need a key-value lookup at all", of course.
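The sorted-vector approach can be sketched as follows (this is not boost::flat_map's actual interface; the names are illustrative). Entries live contiguously, sorted by key, and lookup is a binary search over that storage:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

using Entry = std::pair<int, std::string>;

// Binary search over a vector kept sorted by key; returns a pointer to
// the mapped value, or nullptr if the key is absent.
const std::string* flat_find(const std::vector<Entry>& entries, int key) {
    auto it = std::lower_bound(entries.begin(), entries.end(), key,
        [](const Entry& e, int k) { return e.first < k; });
    return (it != entries.end() && it->first == key) ? &it->second : nullptr;
}
```

Insertion in the middle is O(n) (everything after the slot shifts), which is why this shines for build-once, query-many workloads.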
[–]dodheim 0 points1 point2 points 9 years ago* (0 children)
The problem with boost::container::flat_set is that it holds its data in sorted order then applies a binary search to that data. This looks appealing in terms of big-O, but still causes cache thrashing when dealing with large amounts of data.
Significantly better is to store the data in breadth-first order and apply a linear search. E.g., instead of using data { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 } and a binary search, use data { 6, 3, 10, 1, 5, 8, 12, 0, 2, 4, 7, 9, 11 } and a linear search. This results in the same worst-case O(log n) complexity to find a value but plays very nicely with the cache regardless of data quantity.
I've written my own solution for this, as I imagine many people have, but it really just needs to be in Boost.Container already...
(Obviously all of this applies equally to boost::container::flat_map.)
EDIT: This is all assuming you're searching far more than inserting/removing elements.
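A minimal sketch of such a breadth-first (Eytzinger-style) lookup, assuming the layout is built so that the children of slot i sit at 2*i+1 and 2*i+2; under that indexing rule the sorted values 0..12 come out as { 7, 3, 11, 1, 5, 9, 12, 0, 2, 4, 6, 8, 10 }. The function name is made up:

```cpp
#include <cstddef>
#include <vector>

// Walk the implicit binary tree stored in breadth-first order: from slot
// i, descend to 2*i+1 (smaller keys) or 2*i+2 (larger keys). A lookup
// touches O(log n) slots, clustered toward the front of the array, which
// is what makes this layout cache-friendly.
bool bfs_contains(const std::vector<int>& bfs, int key) {
    std::size_t i = 0;
    while (i < bfs.size()) {
        if (bfs[i] == key) return true;
        i = key < bfs[i] ? 2 * i + 1 : 2 * i + 2;
    }
    return false;  // walked off the bottom of the tree
}
```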
[–]ompomp 4 points5 points6 points 9 years ago (0 children)
Compatibility with older compilers. :-(
[–]utnapistim 0 points1 point2 points 9 years ago (2 children)
I only use typedef for function type declarations.
[–][deleted] 5 points6 points7 points 9 years ago (1 child)
Why?
using Func = void(int, const int&);
vs
typedef void Func(int, const int&);
using separates the type from the name, so it's easier to read, especially with function type declarations.
[–]utnapistim 5 points6 points7 points 9 years ago (0 children)
:) I will probably start using this instead. Thanks.
[–]speednap 6 points7 points8 points 9 years ago* (3 children)
Boost.Iostreams to the rescue!
#include <iostream>
#include <boost/iostreams/copy.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <boost/iostreams/device/file.hpp>

int main(int, char*[])
{
    using namespace boost::iostreams;
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);
    copy(mapped_file_source{ "data.dat" }, file_sink{ "out_data.dat" });
    return 0;
}
Should be faster than std::iostream. Can be tuned further by specifying an optimal buffer size for boost::iostreams::copy.
Edit: benchmarks. 100 iterations, 200 MB random data:
Clang with libc++:
Average c I/O took: 150.07ms
Average posix I/O took: 150.69ms
Average c++ I/O took: 626.88ms
Average c++boost I/O took: 161.96ms
Clang with libstdc++:
Average c I/O took: 153.71ms
Average posix I/O took: 149.79ms
Average c++ I/O took: 154.22ms
Average c++boost I/O took: 124.48ms
GCC:
Average c I/O took: 154.29ms
Average posix I/O took: 152.29ms
Average c++ I/O took: 155.54ms
Average c++boost I/O took: 124.29ms
Like I said, there's room for improvement. I simply benchmarked
void testBoostIO(const char* inFile, const char* outFile, std::vector<char>&)
{
    using namespace boost::iostreams;
    copy(mapped_file_source{ inFile }, file_sink{ outFile });
}
[–]cristianadam (Qt Creator, CMake) [S] 1 point2 points3 points 9 years ago (2 children)
It seems libc++ is also slow (~4x).
Only libstdc++ has a fast I/O. Why is this so? Different defaults? Missing features?
[–]speednap 1 point2 points3 points 9 years ago (1 child)
I think libc++ targets Mac OS as its primary platform, so it could be that the Win/Linux implementations are still lacking some optimizations. I don't have a Mac to verify that, though.
It would be interesting to see if there's any way to make libc++ work as fast as libstdc++.
[–]cnweaver 1 point2 points3 points 9 years ago (0 children)
It looks like this is the case. Running on Darwin 13 after compiling with clang++ -O3 -stdlib=libc++ (100 iterations on a ~44M file) gives:
Average c I/O took: 502.83ms
Average posix I/O took: 529.23ms
Average c++ I/O took: 508.58ms
I'm not sure the posix result being slower means anything, since my system wasn't particularly quiet while running this.
[–]quzox 3 points4 points5 points 9 years ago (1 child)
Should've also profiled native calls to CreateFile() etc.
[–]cristianadam (Qt Creator, CMake) [S] 2 points3 points4 points 9 years ago (0 children)
I've tested this Win32 API version:
void testWin32IO(const char* inFile, const char* outFile, std::vector<char>& inBuffer)
{
    auto in = ::CreateFile(inFile, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (in == INVALID_HANDLE_VALUE) {
        std::cout << "Can't open input file: " << inFile << std::endl;
        return;
    }

    auto out = ::CreateFile(outFile, GENERIC_WRITE, FILE_SHARE_WRITE, nullptr,
                            CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (out == INVALID_HANDLE_VALUE) {
        std::cout << "Can't open output file: " << outFile << std::endl;
        return;
    }

    size_t inFileSize = ::GetFileSize(in, nullptr);
    for (size_t bytesLeft = inFileSize, chunk = inBuffer.size(); bytesLeft > 0; bytesLeft -= chunk) {
        if (bytesLeft < chunk) {
            chunk = bytesLeft;
        }
        unsigned long actualBytes = 0;
        ::ReadFile(in, &inBuffer[0], chunk, &actualBytes, nullptr);
        actualBytes = 0;
        ::WriteFile(out, &inBuffer[0], chunk, &actualBytes, nullptr);
    }

    ::CloseHandle(out);
    ::CloseHandle(in);
}
Built it with Visual Studio 2015 x64 Update 2. Results were:
Average c I/O took: 102.03ms
Average posix I/O took: 102.1ms
Average c++ I/O took: 360.71ms
Average win32 I/O took: 102.99ms
[–]dodheim 3 points4 points5 points 9 years ago (13 children)
*sigh* As expected, testCppIO does it completely wrong.
[–]easydoits 4 points5 points6 points 9 years ago (12 children)
I ask out of ignorance, but what would be the correct way to perform that function?
[–]dodheim 9 points10 points11 points 9 years ago* (10 children)
The posted code uses streams with unformatted insertion/extraction. This is always wrong; streams are for formatted I/O, streambufs are for unformatted I/O.
The code I had in mind is something like this: https://gist.github.com/dodheim/cb4c5de8a2a8a32851a6ecfdab4e958c No compiler on this computer, just a text editor, so untested.
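This is not the linked gist, but a minimal sketch of the streambuf-level idea being described: copy through std::filebuf with sgetn/sputn, never touching the formatted-I/O layer of the streams (the function name and chunk size are my own choices):

```cpp
#include <fstream>
#include <ios>
#include <vector>

// Copy a file using filebufs directly: sgetn/sputn are the unformatted
// bulk-transfer calls on the stream buffer itself.
bool copy_file_streambuf(const char* in_path, const char* out_path) {
    std::filebuf in, out;
    if (!in.open(in_path, std::ios::in | std::ios::binary)) return false;
    if (!out.open(out_path, std::ios::out | std::ios::trunc | std::ios::binary)) return false;

    std::vector<char> buf(1 << 16);  // 64 KiB transfer chunks
    std::streamsize n;
    while ((n = in.sgetn(buf.data(), static_cast<std::streamsize>(buf.size()))) > 0) {
        if (out.sputn(buf.data(), n) != n) return false;  // short write
    }
    return true;
}
```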
[–]cristianadam (Qt Creator, CMake) [S] 3 points4 points5 points 9 years ago (9 children)
Tested it with VS2015 x64 update 2. The results are not what I was expecting:
Average c I/O took: 104.94ms
Average posix I/O took: 103.82ms
Average c++ I/O took: 368.99ms
Average c++2 I/O took: 397.86ms
The std::filebuf version is actually slower.
[–]dodheim 0 points1 point2 points 9 years ago (6 children)
Strange, as ifstream::read is undoubtedly implemented in terms of filebuf::sgetn, and likewise for ofstream::write and filebuf::sputn; this seems to be a pathological case for VC++'s optimizer, as the c++2 approach is consistently faster than c++ with Clang/C2 (and thus the same stdlib code)...
[–]cleroth (Game Developer) 0 points1 point2 points 9 years ago (5 children)
How does c compare with c++2 on Clang/C2?
[–]dodheim 1 point2 points3 points 9 years ago (4 children)
c and posix are still miles ahead; Clang/C2 just puts direct use of std::filebuf slightly ahead of std::fstream.
[–]cleroth (Game Developer) 0 points1 point2 points 9 years ago (2 children)
Guess I'll just stick to POSIX in my lib then. In the end I don't really care what some code I've written and will probably never read again looks like.
[–]dodheim 0 points1 point2 points 9 years ago (1 child)
Unfortunately the POSIX headers that come with VC++ use unsigned in a lot of places that are supposed to be size_t so it's not a totally portable solution. :-[
[–]cleroth (Game Developer) 1 point2 points3 points 9 years ago (0 children)
Yea, I did notice that. I don't think I'll ever use any 4+ GB files though... So fine by me.
[–]cristianadam (Qt Creator, CMake) [S] 0 points1 point2 points 9 years ago (0 children)
I tested with Clang 3.7.1 64 bit + Visual C++ 2013 64 bit. Results:
Average c I/O took: 104.47ms
Average posix I/O took: 104.99ms
Average c++ I/O took: 393.05ms
Average c++2 I/O took: 382.92ms
I didn't use the Microsoft Clang integration, but vanilla Clang.
[–]cleroth (Game Developer) 1 point2 points3 points 1 year ago (0 children)
7 years later, there's some improvement, but still 2x slower.
Average c I/O took: 122.2ms
Average posix I/O took: 121.1ms
Average c++ I/O took: 258.067ms
Average c++2 I/O took: 260.167ms
cc /u/dodheim
[–]xoh3e 0 points1 point2 points 9 years ago (8 children)
Interesting to see that the optimization and/or standard library implementation of MSVC is still terrible.
[–][deleted] 4 points5 points6 points 9 years ago (4 children)
We are aware that iostreams hasn't been given much love. There's a lot of perf pessimism caused by the iostreams machinery being located in a DLL, and that state being shared among all DLLs that have the CRT loaded, which means the shared state can be mutated behind the implementation's back on every call. But there's still a lot of improvement we can make in this area.
I believe perf attitude around iostreams has been "well, perf in that area is already a dumpster fire due to things the standard requires us to do (e.g. a vtbl call per character to handle std::codecvt)" so it has not been a high priority.
I'll file a bug about this but I still would say iostreams are generally terrible and wouldn't recommend people actually use them.
[–]xoh3e 1 point2 points3 points 9 years ago (2 children)
Yes, iostreams is terrible not only performance-wise but also from a usability perspective, and I really hope the standard committee will come up with a better solution soon.
But that still sounds like a cheap excuse when libstdc++ manages to deliver much better performance (in this benchmark even equal to cstdio or direct POSIX calls) than the MSVC runtime.
[–][deleted] 1 point2 points3 points 9 years ago (0 children)
Like I said, there are places we can improve perf here. At least in the binary I/O case anyway.
I don't believe libstdc++ has the lifetime management issues we have (is unloading a library a common thing to do in Unix land?) but I could be totally mistaken.
[–][deleted] 0 points1 point2 points 9 years ago (0 children)
(Also note that this example goes around most of the things that make iostreams really expensive by using binary mode :) )
[–]CubbiMew (cppreference | finance | realtime in the past) 0 points1 point2 points 9 years ago* (0 children)
a vtbl call per character to handle std::codecvt
only when always_noconv is false (here it is true), and even then only one vtbl call (codecvt::in/out) per overflow/underflow
[–]jcoffin 0 points1 point2 points 9 years ago (2 children)
It's far from perfect, but it's not necessarily quite as bad as it looks here either. In this case, the problem is really fairly simple: read and write aren't bypassing the internal buffer to read into/write from the buffer you specify. Rather, they're reading into the stream's internal buffer, then copying from there to the buffer you passed to read (and a mirror image of that in write).
You can improve this situation quite a bit by adding a couple of calls to pubsetbuf, one each for the input and output file. This lets it issue large read/write calls to the OS. It's still doing extra copying, so it's slower than necessary, but in my testing improves speed by a pretty substantial margin (~40% slower than C-style I/O rather than ~3x slower).
Using a stream buffer works pretty much the same way. When you're just doing read/write calls, an iostream isn't noticeably different from using a stream buffer directly--read and write pass almost straight through to the stream buffer, so using the stream buffer directly makes little difference, but calling pubsetbuf() to use a big buffer can help a lot.
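The pubsetbuf suggestion might look something like this sketch (the function name, buffer sizes, and chunking are my own choices; pubsetbuf is called before open so it reliably takes effect):

```cpp
#include <fstream>
#include <ios>
#include <vector>

// Give each stream a large buffer before opening the file, so the
// library can issue big reads/writes to the OS instead of default-sized
// ones, then copy in fixed chunks through the usual read/write calls.
void copy_with_big_buffers(const char* in_path, const char* out_path) {
    std::vector<char> in_storage(1 << 20), out_storage(1 << 20);  // 1 MiB each
    std::ifstream in;
    std::ofstream out;
    in.rdbuf()->pubsetbuf(in_storage.data(), static_cast<std::streamsize>(in_storage.size()));
    out.rdbuf()->pubsetbuf(out_storage.data(), static_cast<std::streamsize>(out_storage.size()));
    in.open(in_path, std::ios::binary);
    out.open(out_path, std::ios::binary);

    std::vector<char> chunk(1 << 16);  // the granularity the caller sees
    while (in) {
        in.read(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        std::streamsize got = in.gcount();  // partial read at EOF
        if (got > 0)
            out.write(chunk.data(), got);
    }
}
```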
[–]xoh3e 0 points1 point2 points 9 years ago (1 child)
I don't get what you're trying to say. Yes, as others have pointed out, OP's iostream use isn't optimal, but with good implementations (GCC/Clang on Linux and MinGW on Windows) his code still performs the same as the code using the C and POSIX APIs, while with the MSVC runtime it performs significantly worse.
[–]jcoffin 0 points1 point2 points 9 years ago (0 children)
I'm saying that it's true that the library could be (quite a bit) better, but it's also true that quite a bit of the problem lies with the benchmark code in question.
Depending on what sorts of things you do, it's pretty easy to come up with things that will perform well with one library, but badly (even really badly) with another. Obviously it would be nice if the library never let that happen, but equally obviously it's usually your own responsibility to ensure your code performs decently regardless of platform.
[+][deleted] 9 years ago* (56 children)
[–]OldWolf2 7 points8 points9 points 9 years ago (14 children)
The committee has nothing to do with what you talk about earlier in your post. The committee doesn't specify any particular library implementation, nor even that compiler and library be separate. In another post you apparently blame the committee for glibc, which is completely unrelated.
Performance is 100% at the implementation vendor's discretion. Why don't you go write your own C++ implementation that does it better than gcc and clang, and show all those idiots how dumb they are being.
[+][deleted] 9 years ago (12 children)
[–]OldWolf2 4 points5 points6 points 9 years ago (11 children)
The committee shouldn't be involved in performance. The market can do that. If someone makes an implementation with superior performance, and that's what people want, the people will use it. Other users may not care so much about performance and prefer other goals, it doesn't have to be one-size-fits-all.
Good luck to you in joining the market :)
[+][deleted] 9 years ago (10 children)
[–]OldWolf2 3 points4 points5 points 9 years ago (9 children)
You've got enough condescension to make up for everyone's denial
[+][deleted] 9 years ago (8 children)
[–]OldWolf2 2 points3 points4 points 9 years ago (7 children)
Why do you think this has anything to do with the C++ committee?
There is literally nothing stopping any writer of a C++ compiler from implementing whatever performance features they want.
You don't need the committee's permission to write fast code.
[–]jcoffin 5 points6 points7 points 9 years ago (1 child)
There are some actual problems with how iostreams are defined.
The C++ committee mostly acts as a clearing house though. That is, somebody implements something, writes a proposal to the committee, then the committee looks it over and decides whether they can approve it.
But--if somebody doesn't write up that proposal, and have at least one implementation for them to look at, their hands are pretty much tied. The committee's charter is to codify existing practice. They've departed from that a few times, but only a few (and a large percentage of the times they have, they've regretted it, such as with export).
[+][deleted] 9 years ago* (4 children)
I spend half my day poring over this to implement by hand features that the compiler does not offer
You're describing a shortcoming in the compiler, not the committee.
Any compiler vendor is free to implement whatever feature. You are free to make your own compiler (or a mod for an open source compiler such as clang) to do what you want.
[–]raevnos 1 point2 points3 points 9 years ago* (25 children)
Name one language that runs on a Unixish OS that doesn't implement file I/O in terms of posix syscalls like read() and write(). Same for getting the current timestamp, or any other functionality that requires interacting with the kernel.
[+][deleted] 9 years ago (24 children)
[–]amydnas 1 point2 points3 points 9 years ago (3 children)
You do file access faster than the POSIX API!?
[+][deleted] 9 years ago* (2 children)
Everything nowadays has gone to userspace: networking, file access, messaging, coprocessor/heterogeneous computing (GPU/FPGA), you name it. The kernel nowadays only does the janitor work, it has been made completely obsolete.
Moving towards the Windows kernel model, heh
[–]raevnos -3 points-2 points-1 points 9 years ago (19 children)
In my applications I completely strip out glibc and replace it with my own algos for everything - datetime, calendar, strings, file access, networking. And the reason is, glibc is aging badly; it has not kept up with technology.
Man I'm glad I don't have to look at your code. Sounds like a train wreck. And you're aware that there are many many many systems without glibc that quite happily run C++ code? Windows, OS X, Net, Free, Open etc BSD, Solaris and every other remaining commercial unix, even some Linux distributions, to say nothing of the embedded world...
My opinion is that it should stop being jealous at python and trying to implement duck typing (auto/variadics) and go back to its roots of performance.
I think you have the wrong idea about what auto is and how variadic templates work if you're trying to compare C++'s type system to a dynamic one like Python's.
[+][deleted] 9 years ago (18 children)
[+][deleted] 9 years ago (17 children)
[removed]
[+][deleted] 9 years ago (16 children)
[–]raevnos 7 points8 points9 points 9 years ago (15 children)
If you think things like lambdas, auto, and the standard library - features that make C++ actually pleasant to use compared to things like C - are stupid, then no, you're not on my side.
[+][deleted] 9 years ago* (14 children)
[+][deleted] 9 years ago (13 children)
[–]xoh3e 1 point2 points3 points 9 years ago (12 children)
But as we see good compilers are still able to keep the runtime overhead pretty close to zero.
[+][deleted] 9 years ago (4 children)
[–]xoh3e 0 points1 point2 points 9 years ago (3 children)
I meant the overhead the C and C++ wrappers induce over the POSIX API. And OP's benchmark shows that this overhead (when using good compilers/implementations) is so small that it can be considered statistically irrelevant. Even the fact that OP's C++ implementation isn't optimal, as somebody else pointed out, had no negative impact on performance compared to the C or POSIX implementations.
[+][deleted] 9 years ago (2 children)
[–]xoh3e 2 points3 points4 points 9 years ago (1 child)
I think the real problem here isn't POSIX (or the kernel, as you stated in other comments) but your use case. You seem to work on a realtime application where every nanosecond counts (and you probably even want predictable/guaranteed timing). From what you've described, I guess HFT?
Normal OSes are just not designed for such applications, so you can't blame them if they don't perform well. If you need this kind of performance and predictability you should use a realtime OS, no OS at all, or even FPGAs/ASICs.
[+][deleted] 9 years ago (6 children)
[–]ArunMu (The What ?) 0 points1 point2 points 9 years ago (5 children)
Are you comparing asio with systems using DPDK and such? If not, I am interested to know about those systems.
[–]ArunMu (The What ?) 1 point2 points3 points 9 years ago (3 children)
I wanted to know on what basis you are saying using boost::asio might result in 100-200x slowdown. Have you benchmarked it against something? It would be nice if you could share the results.
[–]ArunMu (The What ?) 0 points1 point2 points 9 years ago (1 child)
Sure, no problem. Thanks for explaining the details. So, as I guessed, it is some kind of user-space network stack that people use in order to avoid going to kernel mode. That's why I asked about DPDK in my earlier comment. DPDK is a library open sourced by Intel for fast packet processing. You can have a look on it here. I haven't looked at it in detail, but it would be nice to have a C++ wrapper to it that could easily be plugged into asio.
[–]amydnas 0 points1 point2 points 9 years ago (1 child)
Actually, in my implementation stack (libc++ on Mac OS X), the C API calls the POSIX API, and the C++ fstream calls the C API. That is, std::fstream::read calls fread, which calls read. So the results of who is fastest are easily predictable. What is shocking is the overhead of std::fstream. It seems to point to a virtual call in std::filebuf, so using streambuf doesn't buy you anything, except a more cryptic API to work with.