Sunday, September 29, 2013

Concurrency Freaks Library 0.5

We've made a new release of the Concurrency Freaks Library, version 0.5.
It is available on SourceForge:
https://sourceforge.net/projects/ccfreaks/files/

What's new in this release:
- More validation tests
- Added BlockingArrayList: An ArrayList protected with a LongAdderRWLock.
- Added "transient" keyword to member fields of lock classes so that they work with serialization.
- Added OffByXCounter: A counter that can be incremented from a single thread and uses relaxed atomics to improve performance.
- Added DistributedCacheLineCounter: A counter that can be incremented and read from multiple threads and that uses a set of counters distributed over several cache lines, which makes it very fast for incrementing but slow for reading.
- Added DCLCRWLock: A Reader-Writer lock based on a Distributed Cache Line Counter that uses an array of fixed size and a hash function to distribute the reader's states among different cache lines to avoid contention (and false-sharing).
- Added DCLCStampedRWLock: A lock similar to DCLCRWLock that "defaults" to a StampedLock in case of many Writers.
- Added LongAdderRWLock: A Reader-Writer lock that uses the same 2-state algorithm described in the NUMA paper but with LongAdder as counters for Readers.
- Added LongAdderStampedRWLock: A lock similar to LongAdderRWLock that defaults to a StampedLock in case of many Writers.
- Added LongAdderExtRWLock: A Lock that uses a single LongAdder instead of two LongAdder instances like in the case of the LongAdderRWLock, but it requires a small modification to the LongAdder provided in JDK 8.
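
The DistributedCacheLineCounter idea above can be sketched in a few lines. This is a hedged illustration, not the library's code: the slot count, the cache-line size, and the thread-id hash below are all assumptions, and the real class may pick slots differently.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Illustrative sketch of a cache-line-distributed counter: each thread hashes
// to a slot spaced one cache line apart, so increments rarely contend, while
// get() must walk every slot (fast writes, slow reads).
public class DistributedCounterSketch {
    private static final int PADDING = 8;     // 64-byte line / 8-byte long
    private static final int NUM_SLOTS = 16;  // fixed-size array (power of two)
    private final AtomicLongArray counters = new AtomicLongArray(NUM_SLOTS * PADDING);

    private int slotIndex() {
        // Spread thread ids over the slots; each slot starts a new cache line
        final long h = Thread.currentThread().getId() * 0x9E3779B97F4A7C15L;
        return (int) ((h >>> 33) & (NUM_SLOTS - 1)) * PADDING;
    }

    public void increment() {
        counters.getAndIncrement(slotIndex()); // usually uncontended
    }

    public long get() {
        long sum = 0;
        for (int i = 0; i < NUM_SLOTS; i++) {
            sum += counters.get(i * PADDING);  // must touch every cache line
        }
        return sum;
    }
}
```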

Friday, September 27, 2013

Scalable RW Lock with a single LongAdder

Hi everyone, this is my first post. Pedro made me do it ;).

In our search to improve "publishing of state", we have been drawn to distributed counters, and in particular, for this post, to the LongAdder. As Pedro has said, LongAdder is not sequentially consistent, and our first approach was to apply the ingress/egress technique described in the NUMA paper, yielding the LongAdderRWLock. But during our research we discovered a way to use a single LongAdder to publish state and still guarantee that the RW lock is linearizable.

Again, Pedro gave an example showing how the sum() function can return an incorrect value, allowing a Writer to proceed without the guarantee that all Readers have released the lock.

After careful thought, we realized that for this particular RW lock it is possible to give that guarantee, as long as the "rollback" by the Reader is done on the same Cell where the increment occurred. I know it can be difficult to believe, but it's true!!! By rollback I mean the section of code that the read operation runs after realizing that the lock is already acquired in write mode.

The code is very simple, but required a change to the LongAdder class to return the Cell where the increment occurred.
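As a hedged sketch of what such a modification might look like (the JDK's LongAdder stripes through the package-private Striped64, so the real change is more involved), here is a simplified stand-in where increment() reports where the add landed. The fixed stripe-by-thread-id policy and the names `Cell` and `getAndAddBase` are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for LongAdderExt: increment() returns the Cell it used,
// so a caller can later roll back on the exact same Cell. This sketch always
// stripes to a Cell; the real class may fall back to base (returning null).
public class LongAdderExtSketch {
    public static final class Cell extends AtomicLong {}

    private final Cell[] cells = new Cell[8];      // fixed stripe count for the sketch
    private final AtomicLong base = new AtomicLong();

    public LongAdderExtSketch() {
        for (int i = 0; i < cells.length; i++) cells[i] = new Cell();
    }

    /** Adds 1 and returns the Cell used, or null if the add went to base. */
    public Cell increment() {
        // Illustrative policy: stripe by thread id; the JDK picks cells on contention
        final Cell c = cells[(int) (Thread.currentThread().getId() & (cells.length - 1))];
        c.getAndIncrement();
        return c;
    }

    /** Subtracts 1 from the current thread's Cell. */
    public void decrement() {
        cells[(int) (Thread.currentThread().getId() & (cells.length - 1))].getAndDecrement();
    }

    /** Adds x directly on base (used by the rollback when no Cell was used). */
    public long getAndAddBase(long x) { return base.getAndAdd(x); }

    /** Sums base plus every Cell; not atomic, as with the JDK's sum(). */
    public long sum() {
        long s = base.get();
        for (Cell c : cells) s += c.get();
        return s;
    }
}
```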


    /**
     * Acquires the read lock.
     */
    public void sharedLock() {
        while (true) {
            final LongAdderExt.Cell c = readers.increment();
            if (!stampedLock.isWriteLocked()) {
                // Acquired lock in read-only mode
                return;
            } else {
                // Rollback the logical counter to avoid blocking a Writer
                if (c == null) {
                    // The increment was not done in a Cell but on base
                    readers.getAndAddBase(-1);
                } else {
                    c.getAndAdd(-1);
                }
                // If there is a Writer, wait until it is gone
                while (stampedLock.isWriteLocked()) {
                    Thread.yield();
                }
            }
        }
    }

    /**
     * Release the read lock.
     */ 
    public void sharedUnlock() {
        readers.decrement();
    }

    /**
     * Acquires the write lock.
     */
    public void exclusiveLock() {
        // Try to acquire the stampedLock in write-mode
        stampedLock.writeLock(); 
        while (true) {
            final long egressSum = readers.sum();
            if (egressSum == 0) return;
            if (egressSum < 0) throw new IllegalMonitorStateException(); 
            Thread.yield();
        }
    }

    /**
     * Release the write lock.
     */
    public void exclusiveUnlock() {
        if (!stampedLock.isWriteLocked()) {
            // ERROR: tried to unlock a non write-locked instance
            throw new IllegalMonitorStateException();
        }
        stampedLock.asWriteLock().unlock();
    }
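
For context, here is a hedged, runnable approximation of the whole lock built on the unmodified JDK LongAdder and StampedLock. Because the stock LongAdder cannot roll back on the exact Cell, the rollback below is a plain decrement(), which, as discussed above, does not give the linearizability guarantee under contention; this sketch only shows the calling pattern, and the names mirror the post's API.

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.concurrent.locks.StampedLock;

// Approximation of the RW lock using only JDK classes. The decrement()
// rollback is the weak spot that LongAdderExt fixes; everything else
// follows the methods shown above.
public class RWLockDemo {
    private final LongAdder readers = new LongAdder();
    private final StampedLock stampedLock = new StampedLock();
    private long protectedValue = 0;   // example state guarded by the lock

    public void sharedLock() {
        while (true) {
            readers.increment();
            if (!stampedLock.isWriteLocked()) return;
            readers.decrement();       // rollback (approximate, see above)
            while (stampedLock.isWriteLocked()) Thread.yield();
        }
    }

    public void sharedUnlock() { readers.decrement(); }

    public void exclusiveLock() {
        stampedLock.writeLock();       // block other Writers and new Readers
        while (readers.sum() != 0) Thread.yield();  // wait for Readers to leave
    }

    public void exclusiveUnlock() { stampedLock.asWriteLock().unlock(); }

    public long read() {
        sharedLock();
        try { return protectedValue; } finally { sharedUnlock(); }
    }

    public void write(long v) {
        exclusiveLock();
        try { protectedValue = v; } finally { exclusiveUnlock(); }
    }
}
```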


As usual, we ran our benchmark comparing both solutions, and the results follow.

The last plot shows results with the number of threads ranging from 1 to 256, even though our machine has only 32 cores.
The results show that both solutions have similar performance, but notice that LongAdderExtRWLock is expected to allocate about half the memory of LongAdderRWLock, since it uses a single LongAdder instead of two.

Tuesday, September 24, 2013

Clojure and Persistent Data Structures

Here is a nice talk about Persistent Data Structures from the creator of the Clojure language:
http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey



The stuff he says about "Time" seems to go to the core of what a consistency model is.
Also, I like the whole idea of immutability and immutable/persistent data structures. These ideas simplify the way we code, but I'm not a huge fan-boy of persistent data structures like the presenter is, mostly because their performance is not as good as that of other techniques.

Worth watching, at least to understand why the Object-Oriented model gets broken by Concurrency.