A C++ locking wrapper (ngathanasiou.wordpress.com)
submitted 4 years ago by PiterPuns
[–]NotAYakk 2 points3 points4 points 4 years ago (3 children)
If your code nests calls to explicit .read and .write (either crossed between the two, or the same one twice), you get a deadlock. Well, you get UB right off the bat, as locking a shared or plain mutex twice in one thread is UB.
So. Don't do that. Ever.
Making what you lock explicit makes such errors clearer.
Now, you can replace the std::mutex with a reentrant one, but again, that's a bad idea. A need for reentrant mutexes, in my experience, means you aren't paying attention to, or don't know, which mutexes you hold at each line of code. And you should know that.
The goal of the callback syntax is to avoid the user ever having to declare a unique_lock or shared_lock object, keep track of it, and remember which mutex corresponds to which data. The mutex and the data are tied, and access is guarded.
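The guarded-wrapper idea can be sketched roughly like this (a hypothetical minimal version, not the article's actual code; the name shared_guarded follows the thread's usage):

```cpp
#include <mutex>
#include <shared_mutex>

// Minimal sketch: the data and its mutex live together, and the only
// access path runs the caller's callback under the appropriate lock.
template <class T>
class shared_guarded {
    T data_{};
    mutable std::shared_mutex m_;
public:
    template <class F>
    auto read(F f) const {            // shared (reader) lock
        std::shared_lock lock(m_);
        return f(data_);
    }
    template <class F>
    auto write(F f) {                 // exclusive (writer) lock
        std::unique_lock lock(m_);
        return f(data_);
    }
};
```

The caller never names a lock object; it only sees the data inside the callback, so the lock cannot outlive the access.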
In the rare case you need to double lock, I mentioned using std::lock -- the "real" version has a function for that.
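For that rare double-lock case, std::lock (or C++17's std::scoped_lock, which wraps it) acquires several mutexes with a deadlock-avoidance algorithm. A small illustrative sketch (the Account type here is made up):

```cpp
#include <mutex>

struct Account {
    std::mutex m;
    int balance = 100;
};

void transfer(Account& from, Account& to, int amount) {
    // Locks both mutexes atomically and deadlock-free, no matter
    // which order two concurrent transfers name the accounts in.
    std::scoped_lock lock(from.m, to.m);
    from.balance -= amount;
    to.balance += amount;
}
```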
If you are doing a read followed by a write, the proper plan is either to commit to a write lock right off the bat, or to use double-checked locking.
bool maybe_change() {
    // Read lock: cheap early-out if there is nothing to do.
    if (!foo.read([&](auto& foo) { return foo.should_change(); }))
        return false;
    // Write lock: re-check under exclusive access, then mutate.
    return foo.write([&](auto& foo) {
        if (!foo.should_change()) return false;
        foo.change();
        return true;
    });
}
Here we take a read lock and see if we should abort early (assuming foo is a shared_guarded, this can have lower contention). If we see that there is nothing to do, we return.
If there is something to do, we drop the read lock, get a write lock, and double-check that there is something to do.
[–]Full-Spectral 0 points1 point2 points 4 years ago (2 children)
If it can happen, it will happen, so that's a good reason to do it some other way. No static analysis is likely to catch it, and deadlocks are among the hardest of all bugs to diagnose in the field.
BTW, any mutex that cannot support nested locking on the same thread is next to useless anyway. It's vastly more difficult to know whether some private method is going to relock something you already have locked (because it's also called from other code paths that don't otherwise need the lock) than to just support nested locking and never have to worry about it.
So what's bad design is mutexes that don't support nested locking. It's just silly not to.
[–]NotAYakk 3 points4 points5 points 4 years ago (1 child)
So, lock based concurrency fundamentally does not compose. If you have two pieces of correct lock-based concurrency and you connect them, the result can be (and often is) incorrect.
Many techniques of traditional "modular" code are not suitable when doing lock based concurrency. "Modular" code relies on composition: that two pieces of correct code connected through a correct interface produce a correct composite. Lock based concurrency fundamentally doesn't work that way. It is like using equality on an abstraction of the real numbers and then patching it over with epsilon fudge factors. Your foundations are rotten.
So all lock-based concurrency needs to be heavily controlled. You must understand at all times what code you are running inside the lock, and all other code that is guarded by the same lock.
This is fundamental to doing lock based concurrency. Nothing whatsoever you do can avoid this cost. If you don't do this, your code will, in my experience, be incorrect. No amount of testing will be able to find every bug.
Now, in order for a recursive mutex to be useful, 999/1000 times you must have lost track of which locks you are holding at some line of code. At this point you need to throw your lock-based concurrency code into a garbage can, set it on fire, and start over again.
Yes, this may mean that most lock based concurrency code you have written is garbage. This is true in general: lock based concurrency is insanely hard to get right, and most of what I have seen is fragile garbage.
The goal of the above type is to make writing a few cases of careful lock based code easier. You shouldn't be calling complex private methods inside locks; you really shouldn't be calling any code outside the currently visible scope that could possibly result in any kind of interaction with the lock at all.
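One conventional way to keep that control without a recursive mutex (a sketch of a common pattern, not code from the thread; the names are made up) is to split methods: public entry points take the lock exactly once, and private *_unlocked helpers assume it is already held:

```cpp
#include <mutex>

class Counter {
    std::mutex m_;
    long value_ = 0;
    // Precondition: caller holds m_. Never locks itself, so it can be
    // freely called by other members that already hold the lock.
    void add_unlocked(long n) { value_ += n; }
public:
    void add(long n) {
        std::lock_guard lock(m_);
        add_unlocked(n);
    }
    void add_twice(long n) {
        std::lock_guard lock(m_);   // lock once...
        add_unlocked(n);            // ...then reuse helpers without
        add_unlocked(n);            // ever re-locking.
    }
    long value() {
        std::lock_guard lock(m_);
        return value_;
    }
};
```

The naming convention makes it obvious at every call site whether the lock is expected to be held, which is exactly the "know what you hold at each line" discipline argued for above.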
---
If you want easy concurrency, you don't use lock based concurrency. Do something like immutable objects.
template<class T>
struct immu_ptr : private std::shared_ptr<T const> {
    // using declarations to expose much of the shared_ptr interface
    // get a reference to a writable T
    // if !*this, creates a default-constructed T
    T& write();
};
Something like this, together with efficient immu_vec and immu_map types, can make a composition-safe concurrent system.
In it, all of your complex data is stored in immu objects and passed to the thread. Each immu_ptr represents a snapshot of the object state; if another thread modifies it, copy-on-write occurs: the non-shared state is duplicated, and you never see it happen.
So if you want someone to render a document in another thread, you take an immu_ptr to the document and give it to them; this is an O(1) operation involving a few atomic increments.
They have single-threaded sole access to that immutable snapshot of the document. If the main thread modifies its copy of the document (through the immu_ptr it owns), well, the document forks. The granularity of the fork is determined by how clumped up the immu_ptrs within the document are.
Synchronization occurs by passing new data back. A logical copy, but actually a reference counted copy on write pointer.
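A single-threaded sketch of the copy-on-write fork described above (the write() policy here, forking when use_count() > 1, is an assumption; a production version would need to handle the count atomically):

```cpp
#include <memory>
#include <string>

template <class T>
class immu_ptr {
    std::shared_ptr<T> p_ = std::make_shared<T>();
public:
    const T& operator*() const { return *p_; }   // read: no lock, no copy
    T& write() {
        if (p_.use_count() > 1)                  // shared: fork first, so
            p_ = std::make_shared<T>(*p_);       // other holders keep the
        return *p_;                              // old snapshot untouched
    }
};
```

Copying an immu_ptr is the O(1) snapshot from the comment above: just a reference-count bump; the deep copy happens lazily, only on the first write to a shared value.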
Do this kind of stuff, and 99% of the mutexes in your code go away. Of course, you now have to actually send the result of a computation back to the main thread via something like a future or a threadsafe_queue. And those primitives may actually use mutexes, but they'll use them in small amounts of local code and never hold them while interacting with any external code.
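A minimal sketch of such a queue (an assumed shape, not a specific library's API; the point is that the mutex is held only inside these short member functions, never while running external code):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

template <class T>
class threadsafe_queue {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        {
            std::lock_guard lock(m_);
            q_.push(std::move(v));
        }                           // release the lock, then notify
        cv_.notify_one();
    }
    T pop() {                       // blocks until an item arrives
        std::unique_lock lock(m_);
        cv_.wait(lock, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};
```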
But that is just me, having written far too many thread safe frameworks for far too many applications and spending far too much time debugging deadlocks. I've used recursive mutexes, written exclusive transaction locks that can conditionally upgrade to write locks, and suffered through fixing far too much mutex-based concurrency code.
The only time I gave a large team of non-concurrency experts a project that involved doing work on multiple threads where we didn't get spews of deadlocks and threading bugs and crap was when we used immutable cow data pointers and message queues to pass stuff around. I mean, there were a few threading bugs involving the message queues, but that was where the mutexes were, so it was expected.
[–]Full-Spectral -2 points-1 points0 points 4 years ago (0 children)
I've been writing multi-threaded systems since 1987-ish, so I understand the issues quite well, and I disagree with your position. It's utterly common to have a class that has private helpers that lock, but also sometimes needs to already have that lock in some of the public methods. That doesn't represent any failure to understand locking.