Saturday, February 16, 2013

Blocking, Lock-Free, and Wait-Free

Concurrent algorithms can be broadly classified into the following groups when it comes to progress conditions: Blocking, Lock-Free, Wait-Free, and Wait-Free Population Oblivious.


Most data-structures that use locks are Blocking; this includes both exclusive and shared (read-write) locks. When an algorithm is Blocking, there is no guarantee on when a thread will be able to access the resource or perform its intended operation.
An example is an object protected by an exclusive lock: if contention is low, the operation will be performed with little or no delay. If contention is high, it can happen that a single thread keeps accessing the resource, thus starving one or more of the other threads.
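As a minimal sketch of this pattern (the class name and API are made up for illustration), consider a counter protected by an exclusive lock: any thread calling increment() may block for an unbounded amount of time while another thread holds the lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical Blocking counter: all threads contend on one exclusive
// lock, so a thread's progress depends entirely on the lock holder.
class BlockingCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value = 0;

    long increment() {
        lock.lock();  // may block for an unbounded amount of time
        try {
            return ++value;
        } finally {
            lock.unlock();
        }
    }
}
```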


When an algorithm guarantees that, system-wide, some thread always completes its operation in a finite number of steps (even though an individual thread may retry an unbounded number of times), then it is Lock-Free. Most algorithms and data-structures that do not use a locking mechanism are Lock-Free.
An example is a Lock-Free queue like the one used in Java's ConcurrentLinkedQueue.
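The queue algorithm itself is more involved, but the core Lock-Free pattern can be sketched with a CAS-retry counter (a hypothetical class, not the ConcurrentLinkedQueue implementation): a thread may loop indefinitely, yet every failed compareAndSet() means some other thread's CAS succeeded, so the system as a whole always makes progress.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical Lock-Free counter using a CAS retry loop: one thread may
// starve, but each failed CAS implies another thread completed its
// increment, which is exactly the Lock-Free progress guarantee.
class LockFreeCounter {
    private final AtomicLong value = new AtomicLong(0);

    long increment() {
        long current;
        do {
            current = value.get();
            // retry if another thread changed the value in between
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }
}
```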


If an algorithm guarantees that every thread completes its operation in a finite number of steps, then it is Wait-Free.
An example of a Wait-Free algorithm (but not Population Oblivious) is one where each thread publishes its state in a thread-specific location that is readable by the other threads, and then each thread must read the states of all the other threads before making some decision related to the algorithm. In this case the number of operations is bounded by O(k), where k is the number of threads.
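The publish-then-scan pattern described above can be sketched as follows (the class and method names are invented for illustration): the publish is a single store, and the scan is exactly k reads, so both are bounded, but the bound grows with the number of threads.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical sketch of the publish-then-scan pattern: thread i writes
// its state to slot i, then any thread can scan all k slots. Every step
// is bounded, so this is Wait-Free, but the O(k) scan depends on the
// number of threads, so it is not Population Oblivious.
class StateBoard {
    private final int numThreads;
    private final AtomicLongArray states;

    StateBoard(int numThreads) {
        this.numThreads = numThreads;
        this.states = new AtomicLongArray(numThreads);
    }

    void publish(int threadId, long state) {
        states.set(threadId, state);  // one store: O(1)
    }

    long maxPublishedState() {
        long max = Long.MIN_VALUE;
        for (int i = 0; i < numThreads; i++) {  // exactly k reads: O(k)
            max = Math.max(max, states.get(i));
        }
        return max;
    }
}
```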

Wait-Free Population Oblivious

An algorithm that completes in a constant number of steps, or in a number of steps that does not depend on the number of threads currently running, is said to be Wait-Free Population Oblivious.
The best example of a pattern that is Wait-Free Population Oblivious is doing a getAndAdd() on a single variable (or, in C, a __sync_fetch_and_add()), which is guaranteed to complete in a single atomic operation, at least on x86 CPUs (see more info here).
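In Java this pattern looks like the following (a hypothetical wrapper class around AtomicLong): there is no retry loop and no lock, and on x86 the getAndAdd() is typically performed as a single LOCK XADD instruction, so the number of steps does not depend on how many threads are running.

```java
import java.util.concurrent.atomic.AtomicLong;

// Wait-Free Population Oblivious counter: getAndAdd() always completes
// in a constant number of steps, regardless of the number of threads.
class WaitFreeCounter {
    private final AtomicLong value = new AtomicLong(0);

    long getAndIncrement() {
        return value.getAndAdd(1);  // no retry loop, no lock
    }
}
```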

For more detailed definitions, take a look at Section 3.7, Progress Conditions, in The Art of Multiprocessor Programming.

Contrary to common intuition, under low contention, Blocking algorithms are usually the fastest, followed by Lock-Free, with Wait-Free being the slowest. This happens because Wait-Free algorithms usually involve more synchronization steps, which slow the algorithm down even when running on a single thread.

If you consider scenarios with high contention, then the ordering may be reversed, but even there it is not always the case. Generalization is not possible because it all depends on the algorithms themselves, and even on implementation details (see for example the posts about Scalable Read-Write Locks).

So why all the hype about Wait-Free and Lock-Free algorithms?
Well, if you want strong time guarantees for completion of a certain procedure, then you will need a Lock-Free, or even better, a Wait-Free algorithm. This is particularly true in the case of real-time systems and GUIs, where there is usually some bound on the time a task can take to complete.

When writing code for a system that needs to be responsive, it is better to give up a little performance in exchange for a guarantee that the task will complete within a certain amount of time, making Wait-Free algorithms a valuable tool for developers of GUIs and real-time systems (networking, embedded systems, medical devices, scientific data-acquisition, etc.).
