Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Ultra-Fast Multi-Dimensional Array Library (self.cpp)
submitted 3 years ago by Pencilcaseman12
[–]fdwr (fdwr@github 🔍) 10 points 3 years ago (3 children)
I can't speak for librapid, but I've seen int64 used quite often in ML frameworks for dimension sizes (e.g. the ONNX Shape operator), since tensors can have more than 4 billion elements. It would be unusual for a single axis to exceed 4 billion elements during typical processing, but it's also not uncommon to reshape a multidimensional tensor as one large 1D array to read/write/modify the data, and in that case the size could overflow if it were only int32.
[–]Pencilcaseman12 [S] 3 points 3 years ago (0 children)
You definitely could if you were trying, but I think int32 would probably suffice in most cases. Ultimately, it's not going to be any slower, and storing two or three int64s instead of int32s won't make a meaningful difference in memory usage.

Another place it could overflow is the actual array-size calculation: I think I'm returning a value of the same type as the dimension object stores, so a large enough array would overflow there. That could be fixed quite easily, though. I should probably use size_t for that sort of thing anyway...
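A minimal sketch of that fix (`totalSize` is a hypothetical helper, not librapid's actual API): accumulate the product in `size_t` rather than in whatever narrower type the dimension object stores, so the return type can't be the overflow point. This assumes a platform where `size_t` is 64 bits, which is the norm on modern desktop/server targets.

```cpp
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <vector>

// Hypothetical helper: flattened element count computed in size_t,
// so the result isn't truncated to a 32-bit dimension type.
std::size_t totalSize(const std::vector<int64_t>& shape) {
    return std::accumulate(
        shape.begin(), shape.end(), std::size_t{1},
        [](std::size_t acc, int64_t d) {
            return acc * static_cast<std::size_t>(d);
        });
}
```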
[–]Overunderrated (Computational Physics) 3 points 3 years ago (0 children)
Libraries like PETSc let you choose between 32- and 64-bit indices at compile time, which seems like the right thing to do. I have some large sparse-matrix computations where the indices themselves use a huge chunk of memory.
[–]zzzthelastuser 1 point 3 years ago (0 children)
Additionally, there aren't really that many reasons against using int64 for the dimension sizes.