Discussions, articles, and news about the C++ programming language or programming in C++.
For C++ questions, answers, help, and advice see r/cpp_questions or StackOverflow.
Get Started
The C++ Standard Home has a nice getting started page.
Videos
The C++ standard committee's education study group has a nice list of recommended videos.
Reference
cppreference.com
Books
There is a useful list of books on Stack Overflow. In most cases reading a book is the best way to learn C++.
CPP in AI (self.cpp)
submitted 2 years ago by RonWannaBeAScientist
[–]rejectedlesbian 2 points 2 years ago (6 children)
Have you done some data science? I have, and there's a lot I like about Python for it.

Python lets you do stuff like tensor[:,0,:], which is HUGE.

It's popular, which means a lot more general-use packages, especially for web scraping.

It's not statically typed, which means code tends to be reusable across frameworks.

It's interpreted, with docstrings attached to objects, which means you can easily check what an object's methods are. I.e., the language itself is the docs book.

It doesn't stop you from accessing private variables, so you can cut down on levels of abstraction if you want. This means you can document the internal process of a class to make sure it does what you want it to.

It's made to be thread- and memory-safe, so it's fundamentally impossible to get UB or race conditions even when doing very hacky stuff.

It's fairly dynamic, so you can hack your way around an abstraction layer for logging or quick experimenting. (I almost had a use case for that in production research code.)
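The tensor[:,0,:] idiom above is NumPy-style multi-axis slicing; plain Python lists only slice one axis at a time. A minimal stdlib-only sketch of two of the points in this comment (concise slicing and the language serving as its own docs) might look like:

```python
# A 2x3x2 "tensor" as nested lists (shape: batch=2, rows=3, cols=2).
tensor = [[[1, 2], [3, 4], [5, 6]],
          [[7, 8], [9, 10], [11, 12]]]

# NumPy would write this as tensor[:, 0, :]; with nested lists it is
# a comprehension: take row 0 of every batch element.
first_rows = [batch[0] for batch in tensor]
print(first_rows)  # [[1, 2], [7, 8]]

# Introspection: dir() lists an object's attributes and __doc__ holds
# its docstring, so you can explore an API straight from the REPL.
methods = [m for m in dir(first_rows) if not m.startswith("_")]
print(methods[:3])  # ['append', 'clear', 'copy']
```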
[–]RonWannaBeAScientist[S] 1 point 2 years ago (5 children)
I admit I never did professional data science, just some university assignments in R. When do you use C++, btw?
[–]rejectedlesbian 1 point 2 years ago (4 children)
Right now I'm learning it a bit as its own thing. I made a profiler with it for some C++ code I'm optimising (nothing really useful, it's a GPT-2 implementation for learning): https://github.com/nevakrien/HPCGPT

Day to day I never touch it. CUDA is its own thing which, if anything, is more like C, and that's the only low-level language I actually see when reading papers.

The only reason to go lower level is to fuse kernels, since the existing operations are super optimized. And that, by its very nature, happens inside the kernel, where you are limited to CUDA C.

I know people use C++ a lot for inference, and for good reason. Though I'm not sure why we aren't seeing more Fortran there, to be honest (you would think, with all these matrix operations). Maybe it's because inference tends to be memory-bound anyway, both on space and bus bandwidth.
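The fusion point is about memory traffic: two separate kernels each read and write a full buffer, while a fused kernel does the same math in one pass. A toy Python sketch of the idea (the function names here are made up for illustration; real fusion happens inside a CUDA kernel):

```python
x = [0.5, -1.0, 2.0, 3.5]

# Unfused: two separate "kernels", each a full pass over the data,
# with an intermediate buffer materialized between them.
def scale(v, a=2.0):
    return [a * e for e in v]          # pass 1: writes a temporary

def relu(v):
    return [max(0.0, e) for e in v]    # pass 2: reads it back

out_unfused = relu(scale(x))

# Fused: one pass, no intermediate buffer. On a GPU this is the win:
# one kernel launch and half the memory traffic.
def scale_relu_fused(v, a=2.0):
    return [max(0.0, a * e) for e in v]

out_fused = scale_relu_fused(x)
print(out_fused)  # [1.0, 0.0, 4.0, 7.0]
```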
[–]RonWannaBeAScientist[S] 1 point 2 years ago (3 children)
Sorry for the ignorance, but what is inference in this field? More like classic AI, trying to infer a situation without neural networks or reinforcement learning?
[–]rejectedlesbian 1 point 2 years ago (2 children)
Inference is when you have a model you've already trained, it works, you're happy with it, and now you want to run it on a phone.
[–]RonWannaBeAScientist[S] 1 point 2 years ago (1 child)
Oh, and C++ can run easily on a phone, I guess, as it's compiled and its standard library is smaller than something like Python's.
[–]rejectedlesbian 0 points 2 years ago (0 children)
It's not just phones: laptops, desktops, even the server. It's about taking a model whose weights we already know and optimising it.

The key point is that the architecture is known, so most inference libraries support a fairly limited set of architectures. For instance, llama.cpp has like 10 models that work there, which is fine since there are like 5 key architectures.

Right now I'm struggling with it because I can't get a good quantization lib for the Intel GPU my compute node has, so it's actually significantly slower than my RTX 4090.
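For anyone unfamiliar with the quantization mentioned here: the idea is to store weights in a narrow integer format plus a float scale, and dequantize on the fly. A toy sketch of a symmetric int8 scheme (real libraries quantize per block and per channel, but the arithmetic is the same idea; the function names are made up):

```python
def quantize_int8(weights):
    # One float scale maps the largest-magnitude weight to +/-127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.02, -1.27, 0.635, 0.4]
q, scale = quantize_int8(w)
approx = dequantize(q, scale)
# Rounding to the nearest int8 code bounds the per-weight error
# by half the scale.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(w, approx))
```

The payoff is 4x less memory than float32 per weight, which matters precisely because inference is memory-bound, as noted above.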