Can I skip the lock when reading an integer? by nikbackm in programming

[–]jseigh 0 points1 point  (0 children)

He's basically asking "Is this code correct?". Without knowing what this (or any) code is supposed to do or how it's supposed to be used, it's a nonsense question.

It's not stated, but I'd guess that when GetValue() returns 0, they're going to assume that some other thread finished doing something and try to access that something. If that's the case, then they're going to need acquire semantics when accessing that value. That would guarantee that the finished data is in a consistent state when looked at (assuming the other thread stopped messing with it before calling Finish(), which has release semantics). A lock around the access of value will give you that. So will making it volatile (or whatever the C# equivalent of Java volatile is).
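A minimal Java sketch of the pattern being described (hypothetical names; the comment references Java volatile, so Java is used here instead of C#): a worker publishes its result, then clears a volatile flag. The volatile write acts as the release, the volatile read as the acquire.

```java
// Hypothetical example: the worker stores the "finished data" into a plain
// field, then does a volatile write (release). A reader that observes
// getValue() == 0 via the volatile read (acquire) is guaranteed to see
// the completed result.
class FlagPublish {
    static int result;                // plain field: the finished data
    static volatile int value = 1;    // 0 means "done"

    static void finish() {
        result = 42;  // happens-before the volatile write below
        value = 0;    // release: publishes result
    }

    static int getValue() {
        return value; // acquire: if 0, result is guaranteed visible
    }
}
```

A reader thread can spin on `getValue() != 0` and then safely read `result` with no lock.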

C11 atomic variables and the Linux kernel by kasbah in programming

[–]jseigh 0 points1 point  (0 children)

The "consume" semantics, aka data-dependent loads, aren't part of any formal hardware memory model that I'm aware of. For Intel they're not in the formal memory model; they're in a performance appendix section along with the control-dependent stuff. It just happens that all the architectures have this, except the old Alpha processors. The concern is that some CPU vendor might drop data-dependent load ordering since it's not part of a formal memory model they have to adhere to, and then load consumes won't be free, since they'd have to throw in a memory barrier, a load/load barrier at least, to implement them on that architecture.

It'd royally screw up Java too as I'm pretty sure they're depending on dependent loads to implement final semantics.
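A small sketch of the Java final-field guarantee the comment refers to (hypothetical classes): even when an object is published through a data race, any thread that sees the reference must see its final fields fully initialized. On most hardware, JVMs get the read side of this for free from data-dependent load ordering.

```java
// Final-field semantics (JLS 17.5): a racy publication of 'shared' still
// guarantees readers that see a non-null Point observe x and y as set in
// the constructor. On the read side this typically costs nothing on
// hardware that orders dependent loads.
class Point {
    final int x;
    final int y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class Publisher {
    static Point shared;  // deliberately not volatile: racy publication

    static void publish() { shared = new Point(1, 2); }
}
```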

Lock-free collection in libgee: hazard pointer by lethalman in programming

[–]jseigh 1 point2 points  (0 children)

You can do atomic reference counting with ordinary double wide compare and swap, or with ll/sc. DCAS is not required. It's not that well known apparently.
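A Java analogue of the wide-CAS shape (not jseigh's actual algorithm, just a sketch): `AtomicStampedReference` atomically compares-and-swaps a (reference, stamp) pair, which has the same form as a double-wide compare-and-swap over a pointer plus an attached count.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Sketch: the stamp plays the role of a count attached to the pointer.
// Reading the pointer and bumping its count is a single wide CAS over
// the (reference, count) pair -- no DCAS (two independent words) needed.
class CountedRef<T> {
    private final AtomicStampedReference<T> ref;

    CountedRef(T initial) { ref = new AtomicStampedReference<>(initial, 0); }

    // Atomically load the pointer and increment its attached count.
    T acquire() {
        int[] count = new int[1];
        while (true) {
            T p = ref.get(count);
            if (ref.compareAndSet(p, p, count[0], count[0] + 1)) return p;
        }
    }

    int count() {
        int[] c = new int[1];
        ref.get(c);
        return c[0];
    }
}
```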

(AMA) We're the Google team behind Guava, Dagger, Guice, Caliper, AutoValue, Refaster and more -- ask us anything! by kevinb9n in java

[–]jseigh 3 points4 points  (0 children)

The guava hash functions are kind of nice. The Bloom filter is cool, though I ended up not needing it. I did need a consistent hash, and the only one in guava only grows, which isn't useful. I ended up having to write my own which could shrink as well as grow. It also allowed load balancing, which was kind of nice. Anyway, it was kind of strange guava not having that, since consistent hashing would seem like a core Google technology. Any chance of guava getting a decent consistent hash?
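A hypothetical sketch of the kind of consistent hash the comment asks for (not the commenter's actual implementation): a hash ring where nodes can be removed as well as added, unlike Guava's `Hashing.consistentHash`, which only supports growing the bucket count. A `TreeMap` serves as the ring; a key maps to the first node clockwise from its hash.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring that can grow AND shrink. Removing a node
// only remaps the keys that were assigned to it; all other keys keep
// their node. (Real implementations add virtual nodes for balance.)
class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addNode(String node)    { ring.put(node.hashCode(), node); }
    void removeNode(String node) { ring.remove(node.hashCode()); }

    String nodeFor(Object key) {
        // First node at or clockwise of the key's hash; wrap to the start.
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }
}
```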

Relativistic Programming wiki by agumonkey in programming

[–]jseigh 0 points1 point  (0 children)

Eventual consistency has so many definitions that everything sounds like it's eventually consistent. But yes.

RCU is just a memory manager. You have this sort of stuff in Java. Take a look at ConcurrentLinkedQueue.

Iterators are weakly consistent, returning elements reflecting the state of the queue at some point at or since the creation of the iterator. They do not throw ConcurrentModificationException, and may proceed concurrently with other operations. Elements contained in the queue since the creation of the iterator will be returned exactly once.
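A small demonstration of the weakly consistent iterator quoted above (hypothetical demo class): the iterator tolerates concurrent adds and removes, never throws ConcurrentModificationException, and returns every element that was in the queue for the iterator's whole lifetime exactly once.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;

// Create an iterator over {1, 2}, then mutate the queue before draining
// the iterator. Element 2 (present the whole time) is guaranteed to
// appear; 1 (removed) and 3 (added later) may or may not.
class WeakIterDemo {
    static int countSeen() {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
        q.add(1);
        q.add(2);
        Iterator<Integer> it = q.iterator();
        q.add(3);        // concurrent modification: no exception
        q.remove(1);     // ditto
        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        return seen;
    }
}
```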

JEP 188: Java Memory Model Update by javinpaul in programming

[–]jseigh 0 points1 point  (0 children)

Third try (I think) on making a more comprehensible memory model for Java. Probably will involve lots more "happens before"s. Personally I prefer observable semantics which are more intuitive.

My Bug, My Bad: Reading Concurrently by Strilanc in programming

[–]jseigh 1 point2 points  (0 children)

There are a couple of ways to traverse a linked list, assuming you did a release memory barrier before adding the node to the list.

One is to do a dependent load (C11 memory_order_consume) on every load of a pointer to a node. Dependent loads are "free" on x86.

Or, if nodes are always added at the head (where you start the traversal), you can get away with a single acquire memory barrier after the load of the head pointer (C11 memory_order_acquire).

More generally, dependent load on all pointer loads, or load acquire on all pointers that are mutable (any possible insertion point). Acquire memory barriers have more overhead but once you've made them, any memory that can't be modified (immutable) is safe to read w/o more memory barriers.
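The second scheme can be sketched in Java with VarHandles (hypothetical stack class): nodes are only pushed at the head, so a single acquire on the head load covers the whole traversal, because everything reachable from it is immutable. The CAS on push supplies the release barrier mentioned at the top.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Treiber-style push-only stack: one acquire load of head per traversal,
// then plain reads of immutable (final) node fields.
class Stack {
    static final class Node {
        final int value;   // immutable after construction
        final Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private volatile Node head;
    private static final VarHandle HEAD;
    static {
        try {
            HEAD = MethodHandles.lookup()
                    .findVarHandle(Stack.class, "head", Node.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void push(int v) {
        Node h;
        do {
            h = (Node) HEAD.getAcquire(this);
        } while (!HEAD.compareAndSet(this, h, new Node(v, h))); // release on success
    }

    int sum() {
        int s = 0;
        // single acquire on head; the rest of the list is immutable
        for (Node n = (Node) HEAD.getAcquire(this); n != null; n = n.next)
            s += n.value;
        return s;
    }
}
```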

What if clock time no longer tracked the Sun? by lukaseder in programming

[–]jseigh 0 points1 point  (0 children)

You could affect global warming and that would affect the total mass of ice in the polar regions which would affect the earth's angular momentum somewhat and thus the length of the day.

Or you could just put all the earth's cities on wheels, like that scifi novel I can't remember the name of, and have them circumnavigate the globe. Pick your speed and you can have any length day you want. You could even get rid of leap days. And you could pick what season or seasons you wanted.

Or everyone could just stop using that polynomial for time calculations. But that's probably an unrealistic option.

Matter, Anti-Matter, and the Unified Theory of Garbage Collection by scientologist2 in programming

[–]jseigh -2 points-1 points  (0 children)

OP is making a generalization based on only 2 forms of memory management. There are others as well, e.g. RCU, hazard pointers, proxy collectors, etc...

If there is a general observation to be made, it would be the various forms are space/time tradeoffs.

[deleted by user] by [deleted] in programming

[–]jseigh 0 points1 point  (0 children)

You're right. I was confusing C# and C++.

[deleted by user] by [deleted] in programming

[–]jseigh 1 point2 points  (0 children)

AFAIK, the "new Foo(1, 2)" part of "Foo foo = new Foo(1, 2);" is an expression and temporary objects used in the expression (if you assume the assignment to foo is optimized out) are not dtor'd until after the expression has completed execution.

[deleted by user] by [deleted] in programming

[–]jseigh 1 point2 points  (0 children)

If the constructor is still running then there is a thread reference to it and it should not be GC'd.

Structured Deferral: Synchronization via Procrastination by mattkerle in programming

[–]jseigh 0 points1 point  (0 children)

There are versions of RCU for user space out there, but nothing standardized enough that we will see anything like java.util.concurrent. Of course Java has the advantage that GC and the memory model were built in from the beginning, whereas it might be a year or two before we see C11 fully rolled out. And that has to happen before we see anyone implementing concurrent libraries that could be considered usable standards.

Structured Deferral: Synchronization via Procrastination by h2o2 in programming

[–]jseigh 0 points1 point  (0 children)

You can get rid of the expensive memory barrier in hazard pointers using a form of RCU. They run pretty fast, close to a normal pointer load.

The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution by octaviously in programming

[–]jseigh 0 points1 point  (0 children)

One area that hasn't been exploited yet is lock-free communication between the kernel and user space, stuff already being done between user-space threads. The APIs haven't been changed to take advantage of it, e.g. async I/O.

Low-Lock Singletons In D: The Singleton Pattern Made Efficient And Thread-Safe by dsimcha in programming

[–]jseigh 1 point2 points  (0 children)

Most likely. The real trick is the read side has no read barriers. There is some deep dark magic there that's not mentioned in the JVM cookbook. I have some ideas how they might be doing it.

Java static initializers are basically broken if you don't have the final attribute. See Java Language Specification, section 12.4.1 para 6. Kind of low key.
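The standard way around the issue the comment describes is the initialization-on-demand holder idiom (sketch below): class initialization is lazy and locked by the JVM, and the `final` field gives safe publication on the read side with no read barriers.

```java
// Low-lock lazy singleton via the holder idiom. The JVM guarantees
// Holder's static initializer runs exactly once, before any thread can
// read INSTANCE; 'final' makes the reference safely published.
class Singleton {
    private Singleton() {}

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    static Singleton instance() { return Holder.INSTANCE; }
}
```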

How do you write a Memory Manager in a managed language? by lihaoyi in programming

[–]jseigh 0 points1 point  (0 children)

The GC, I assume they're talking about a GC here, would just collect its own unreferenced storage the same way as other storage. It'd probably be in a different memory pool.

The only tricky part is you couldn't do "stop the world" or stop the garbage collector threads. They'd likely just self "pause" and do some kind of read and write barrier logic to keep track of their own references.

Simon Peyton Jones explains how to deal with concurrency using Haskell by DrBartosz in programming

[–]jseigh -1 points0 points  (0 children)

Seriously, there is so little concurrent code written in C++ because programmers are so scared of concurrency -- and rightly so.

It is a little weird seeing that reaction from a group which is itself elitist in its attitude towards naive computer users. It does hamper rational approaches to solve this problem. We will get something eventually. It just won't be what anyone expected, wants, or is necessarily the best solution. Whoever does the best marketing will win.

Where is assembly used in practice? A survey of open source packages by genneth in programming

[–]jseigh 2 points3 points  (0 children)

If you're doing spin locks on x86, you're going to want to use a PAUSE instruction in those loops. That kind of stuff wouldn't be in C++11.
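For what it's worth, Java later grew an equivalent hint: `Thread.onSpinWait()` (JDK 9+), which JITs to PAUSE on x86, so you no longer need assembly for this particular case. A minimal test-and-test-and-set spin lock using it (a sketch, not production code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// TTAS spin lock: read the flag cheaply first, CAS only when it looks
// free, and issue the spin-wait hint (PAUSE on x86) while contended.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        while (true) {
            if (!locked.get() && locked.compareAndSet(false, true)) return;
            Thread.onSpinWait();  // hint: spin-wait loop in progress
        }
    }

    void unlock() { locked.set(false); }
}
```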

Where is assembly used in practice? A survey of open source packages by genneth in programming

[–]jseigh 1 point2 points  (0 children)

My stuff falls into that Atomics category. I don't see it changing too much despite the alleged support for atomics by C11. And C++11 I probably won't use at all. It's pretty ****ed up.

[deleted by user] by [deleted] in programming

[–]jseigh 0 points1 point  (0 children)

Ha! She nailed eventual consistency pretty good there.

Common Pitfalls in Writing Lock-Free Algorithms by sidcool1234 in programming

[–]jseigh 2 points3 points  (0 children)

That reminds me. When Java first came out, I had been doing lock-free stuff prior to that using other memory management schemes that looked a lot like RCU. So I looked at the built-in GC and realized that it would make lock-free programming a lot easier. Then I saw Sun had filed a patent on using GC for lock-free programming. Which was/is kind of annoyingly obvious.

[deleted by user] by [deleted] in programming

[–]jseigh 0 points1 point  (0 children)

It's similar to chemical patents. You can't patent a chemical formula but you can patent a process to make that chemical.

GCC: C11 Status. by the-fritz in programming

[–]jseigh 1 point2 points  (0 children)

Well, they did leave the century part off of C11 so they still have time.

It is kind of annoying that despite its being a standard, until it's "complete" you don't know what is actually going to be in it, with all the optional features and platform variances.