In this post I would like to show that the WriterReaderPhaser is a variant of Left-Right and is, therefore, not a new synchronization primitive.
First of all, what is a "synchronization primitive"?
After some quick googling, here are the best definitions that I could come up with:
- "Languages and Compilers for Parallel Computing": IMO, this one seems a bit too strict.
- "Encyclopedia of Algorithms": This one talks about primitives applied to wait-free consensus. I'm not sure it's generic enough to cover a technique where some methods are wait-free and others are blocking.
- "Understanding the Linux Kernel": Although it is specific to the Linux kernel, it seems to be the one that makes the most sense.
- "Advanced Computer Architecture": This one seems to indicate that the algorithm itself is the "synchronization primitive".
- "C# Collections, a Detailed Presentation": This one has a longer list, even though it is specific to C#.
I'll let you draw your own conclusions on what a "synchronization primitive" is.
In the meantime, let's try to get somewhere by doing a "Reductio ad absurdum", so please turn on your bulls41t detector!
The WriterReaderPhaser uses a new type of ReadIndicator with three counters and a particular use of the getAndSet()/atomic_exchange() operations. From a logical point of view, this idea is similar to the ingress/egress ReadIndicator, but there is some novelty in the way it is done, so let's say it is indeed a new ReadIndicator.
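For context, an ingress/egress ReadIndicator can be sketched roughly like this (a minimal, hypothetical Java version for illustration only; the class and method names are ours, and the usual cache-line padding of the two counters is omitted for brevity):

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal ingress/egress ReadIndicator sketch (hypothetical, for illustration).
// A reader increments 'ingress' on arrive() and 'egress' on depart(); the
// indicator is empty when every reader that arrived has also departed.
class IngressEgressReadIndicator {
    private final AtomicLong ingress = new AtomicLong(0);
    private final AtomicLong egress  = new AtomicLong(0);

    public void arrive() { ingress.getAndIncrement(); }

    public void depart() { egress.getAndIncrement(); }

    // True iff no reader is currently inside the critical section.
    // Reading egress before ingress avoids a false "empty" answer
    // when a reader arrives between the two loads.
    public boolean isEmpty() {
        final long departed = egress.get();
        final long arrived  = ingress.get();
        return arrived == departed;
    }
}
```

A writer that needs to wait for readers simply spins until isEmpty() returns true; splitting the arrivals and departures into two monotonically increasing counters avoids the contention of a single shared counter being incremented and decremented.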
What if, applying a new ReadIndicator to an already existing concurrency mechanism is "a new synchronization primitive"?
If that is so, then Andreia and I have come up with many new synchronization primitives because we discovered (and re-discovered) several ReadIndicators, some of which are mentioned here.
We can then apply these ReadIndicators to existing mechanisms that use ReadIndicators, for example, the three algorithms in the "NUMA-Aware Reader-Writer Locks" paper: C-RW-RP, C-RW-NP, and C-RW-WP. We've discovered at least three new ReadIndicators (CLQ+Array+finalizers, DCLC, LongAdderExt), so 3x3 gives us 9 new synchronization primitives.
If that's not enough, then consider that the Left-Right variants RV, NV, and GT (with the ScalableGT ReadIndicator) also use different ReadIndicators, which means three more new synchronization primitives, giving us a total of 12... wow, that's great!
The WriterReaderPhaser uses a "role reversal" compared to Left-Right, where the threads doing modifications call arrive()/depart() on the ReadIndicators, and the thread doing read-and-write calls toggleVersionAndScan().
What if, doing a role reversal of an existing concurrency mechanism is "a new synchronization primitive"?
If that is so, then let me do role reversal on a Reader-Writer lock, where I do modifications inside the readLock()/readUnlock() block, and do read-only or read-and-write operations inside the writeLock()/writeUnlock() block.
If you need a concrete example, think of multiple threads (each acquiring the read-lock) doing getAndAdd()/fetch_add() on an entry in an array of counters, while one single thread (acquiring the write-lock) takes a "snapshot view" of the array. Yes, it will be "blocking", but multiple threads can increment entries in the array simultaneously, which means it scales reasonably well.
Moreover, we can apply the same kind of idea to an optimistic reader-writer lock (like StampedLock or seqlock), and even apply it to Double Instance Locking, thus providing lock-free properties to the threads incrementing entries in the array.
This gives a total of 3 new synchronization primitives that we've just discovered (plus all known variants of all reader-writer locks?)!
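The role-reversed counter array described above can be sketched as follows (a hypothetical Java example; the class and method names are ours, not from any published code):

```java
import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Role-reversed Reader-Writer lock sketch (hypothetical, for illustration):
// the *modifying* threads take the read-lock, so they can run concurrently,
// while the thread that wants a consistent snapshot takes the write-lock.
class RoleReversedCounters {
    private final AtomicLongArray counters;
    private final ReentrantReadWriteLock rwlock = new ReentrantReadWriteLock();

    public RoleReversedCounters(int size) {
        counters = new AtomicLongArray(size);
    }

    // Many threads may call this concurrently: they share the read-lock
    // and use getAndAdd() to increment their entry atomically.
    public void increment(int index) {
        rwlock.readLock().lock();
        try {
            counters.getAndAdd(index, 1);
        } finally {
            rwlock.readLock().unlock();
        }
    }

    // A single thread takes the write-lock to get a "snapshot view":
    // while it holds the lock, no increments can be in flight.
    public long[] snapshot() {
        rwlock.writeLock().lock();
        try {
            final long[] copy = new long[counters.length()];
            for (int i = 0; i < copy.length; i++) copy[i] = counters.get(i);
            return copy;
        } finally {
            rwlock.writeLock().unlock();
        }
    }
}
```

Notice that the lock's "read" and "write" names no longer describe what the threads do to the data, which is exactly the point of the role reversal.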
One of the things about the WriterReaderPhaser is that it doesn't apply the same code to both instances.
What if, applying different user code within a synchronization primitive is a "new synchronization primitive"?
In the Left-Right paper we showed an example of how to apply it as generically as the user wants. Just look at Algorithms 12, 13, 14, 15 and 16.
For a concrete example of applying different code to the two instances, just take a look at the LRLinkedListSingle (LROLLS) in this post, which we published a couple of months before the WriterReaderPhaser was made public:
http://concurrencyfreaks.com/2014/10/linked-lists-with-wait-free-reads-in-c.html
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/CPP/lists/LROrderedLinkedListSingle.h
which show that very different code is used before and after toggleVersionAndScan(), for example, in the add() function.
This means that if there is anything new about applying different code on the two instances, then we already came up with it long before the WriterReaderPhaser.
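To make the "different code on the two instances" idea concrete, here is a hypothetical Java sketch (our own names throughout; the ReadIndicator scanning inside the toggle is elided, and in the usual Left-Right both calls apply the same logical mutation to keep the instances in sync, whereas here the user is free to pass different code, as in the add() of LROrderedLinkedListSingle):

```java
import java.util.function.Consumer;

// Sketch of a Left-Right style mutation where *different* user code runs
// on the two instances (hypothetical, simplified, single-writer).
class TwoInstanceHolder<T> {
    private final T leftInstance;
    private final T rightInstance;
    private volatile int leftRight = 0; // 0 = readers use left, 1 = readers use right

    TwoInstanceHolder(T left, T right) {
        leftInstance = left;
        rightInstance = right;
    }

    // Placeholder for Left-Right's toggleVersionAndScan(): flip which
    // instance readers see, then (in the real algorithm) scan the
    // ReadIndicators until all readers of the old version have left.
    private void toggleVersionAndScan() {
        leftRight = 1 - leftRight;
        // ... scan ReadIndicators here in the real algorithm ...
    }

    // The code applied before the toggle runs on the instance readers are
    // NOT using; the code applied after it runs on the other instance.
    public void applyMutation(Consumer<T> beforeToggle, Consumer<T> afterToggle) {
        final T first = (leftRight == 0) ? rightInstance : leftInstance;
        beforeToggle.accept(first);
        toggleVersionAndScan();
        final T second = (leftRight == 0) ? rightInstance : leftInstance;
        afterToggle.accept(second);
    }
}
```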
WriterReaderPhaser uses the same algorithm as Left-Right, but it changes the method names. Here is a translation table from one to the other:
writerCriticalSectionEnter() -> arrive()
writerCriticalSectionExit() -> depart()
readerLock() -> writersMutex.lock()
readerUnlock() -> writersMutex.unlock()
flipPhase() -> toggleVersionAndScan()
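The core protocol that this table maps between can be sketched as follows, using the Left-Right names (a hypothetical, heavily simplified Java version: one counter per version, a busy-wait scan, and none of the handling of the arrive/flip race that real implementations need; the WriterReaderPhaser achieves the same flip with getAndSet() on its counters):

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the version-flip protocol shared by Left-Right
// and the WriterReaderPhaser (hypothetical, not race-free as written).
class VersionFlip {
    private final AtomicLong[] readIndicator = { new AtomicLong(0), new AtomicLong(0) };
    private volatile int versionIndex = 0;

    // Left-Right's arrive() (WriterReaderPhaser's writerCriticalSectionEnter()):
    // announce presence on the counter of the current version.
    public int arrive() {
        final int v = versionIndex;
        readIndicator[v].getAndIncrement();
        return v;
    }

    // Left-Right's depart() (WriterReaderPhaser's writerCriticalSectionExit()).
    public void depart(int v) {
        readIndicator[v].getAndDecrement();
    }

    // Left-Right's toggleVersionAndScan() (WriterReaderPhaser's flipPhase()):
    // flip the version, then wait until everyone who arrived on the
    // previous version has departed.
    public void toggleVersionAndScan() {
        final int prev = versionIndex;
        versionIndex = 1 - prev;
        while (readIndicator[prev].get() != 0) {
            Thread.onSpinWait();
        }
    }
}
```

Whichever set of names is printed on top, the flip-then-scan structure is the same.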
What if, changing the names of the methods in a pre-existing synchronization primitive qualifies as a "new synchronization primitive"?
If that is so, then I can create a random letter generator to make random names for the five methods mentioned above. If I restrict myself to names of 10 characters using only letters (lowercase and uppercase), that gives 52 "letters" raised to the power of 10 x 5 = 50, i.e. 52^50, which is roughly 6.3 x 10^85 combinations.
Holy cow, I've just invented more than 10^85 new synchronization primitives... amazing!
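As a sanity check on the counting: five independent 10-character names over a 52-letter alphabet give 52^(10 x 5) = 52^50 combinations, an 86-digit number (the helper below is ours, just to verify the arithmetic):

```java
import java.math.BigInteger;

// Quick arithmetic check: with 'letters' possible characters, names of
// 'nameLength' characters, and 'numNames' independent method names,
// the number of distinct name tuples is letters^(nameLength * numNames).
class NameCombinations {
    static BigInteger count(int letters, int nameLength, int numNames) {
        return BigInteger.valueOf(letters).pow(nameLength * numNames);
    }
}
```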
I hope the previous paragraphs were absurd enough to show the point.
Whether Left-Right is a new synchronization primitive or not may depend on which definition you choose from the ones at the beginning of this post, but one thing is certain: the WriterReaderPhaser is not a "new" synchronization primitive, because (at best) it is the same synchronization primitive as Left-Right, which was discovered several years earlier.