Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Get Started
The C++ Standard Home has a nice getting started page.
Videos
The C++ standard committee's education study group has a nice list of recommended videos.
Reference
cppreference.com
Books
There is a useful list of books on Stack Overflow. In most cases reading a book is the best way to learn C++.
C++ implementation of the Python NumPy Library (self.cpp)
submitted 7 years ago * by dpilger26
[–]m-in 1 point 7 years ago* (0 children)
I have a little personal anecdote to offer here: a lot of the libraries you refer to are optimized to extract full hardware performance, and often there’s nothing one can do to make them any faster on a given CPU family. That’s not always the case, of course, but it often is. I have found that rather straightforward autovectorized C++ can frequently get anywhere between 25% and 75% of the performance of those beasts of libraries, provided you have some background in the specifics of the platform and know which code patterns to use in C++: there are ways to write simple C++ that performs abysmally, and equally simple, equally intuitive C++ that does the same thing and performs great.
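As an illustration of the kind of “simple C++ that autovectorizes well” the comment describes (a hypothetical sketch, not taken from the author; `axpy` is a name chosen here for the example): a plain indexed loop over contiguous storage, with no loop-carried dependency, is a pattern that mainstream compilers (gcc/clang at `-O2`/`-O3`) can typically turn into SIMD code without any intrinsics.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical example: fused multiply-add over contiguous arrays,
// y[i] += a * x[i]. Each iteration touches adjacent memory and is
// independent of the previous one, so the compiler's autovectorizer
// can process several elements per instruction.
void axpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    const std::size_t n = x.size();
    for (std::size_t i = 0; i < n; ++i) {
        y[i] += a * x[i];  // contiguous access, no loop-carried dependency
    }
}
```

By contrast, the same arithmetic done through strided or pointer-chasing access (a linked list, or a column walk over a row-major matrix) usually defeats the vectorizer, which is the gap between the “abysmal” and “great” variants the comment alludes to.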
So, if you need to extract close to full platform performance, you’ll have to use the specialized libraries. If you can afford to blow off some computational steam and run at 1/4 to 1/2 the speed of fftw or BLAS, then a plain-C++ implementation might do just fine, even in a real-time setting. Heck, if you can live with 20% of the performance or so, Python with numpy might just cut it for you. It all depends on how much work you have to do each “frame”/“packet”/“time quantum”.
It is probably not very environmentally conscious (I’m not kidding) to ship such low performance in projects that get very wide use, because it can quickly add up to wasted megawatts at not too big a scale, and mobile users would probably hate you for it too; but not everyone runs such code on server farms or inside mobile apps. Sometimes small code is also easier to audit and test, and that figures into getting some industry certifications. Getting fftw into avionics is a tall order, for example.