One notable example of an optimistic read-lock is the StampedLock in JDK 8 when used through the tryOptimisticRead()/validate() API. If you don't know what we're talking about, take a look at the JDK documentation, or at the presentation by Heinz Kabutz, at the links below:
http://download.java.net/jdk8/docs/api/java/util/concurrent/locks/StampedLock.html
http://vimeo.com/74553130 (the relevant part for this post starts at minute 28)
IMO, locks with optimistic reading have three major characteristics (issues?) that anyone using them should be aware of:
- Atomicity
- Memory Management
- Invariants
Atomicity
Any variable that is read within a tryOptimisticRead()/validate() block should be read atomically; this is particularly important for references. In Java and Scala this kind of guarantee is given by the JVM itself, but in C11 or C++1x it means that every variable read inside an optimistic read-lock block must be of type atomic<>, and, to take full advantage of the lock, it is better to avoid unnecessary barriers by reading those variables with atomic<>.load(std::memory_order_relaxed):
http://en.cppreference.com/w/cpp/atomic/memory_order
I know what some of you are thinking: the JVM doesn't give atomicity guarantees for long or double. Officially that's true, although in practice it mostly does give those guarantees, and the atomicity itself is not the biggest problem anyway. Check the Invariants section below.
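To make this concrete, here is a minimal Java sketch (the Counters class and its hits field are made up for illustration): if you want the guarantee on paper, declaring the long field volatile makes its reads and writes atomic according to the JLS, so an optimistic reader can never observe a half-written value:

import java.util.concurrent.locks.StampedLock;

class Counters {
    // Without volatile, the JLS allows a 64-bit read/write to be split in two,
    // so an optimistic reader could (in theory) observe a torn value.
    private volatile long hits = 0;
    private final StampedLock sl = new StampedLock();

    void increment() {
        long stamp = sl.writeLock();
        try {
            hits++;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    long read() {
        long h;
        long stamp;
        do {
            stamp = sl.tryOptimisticRead();
            h = hits;    // an atomic read, thanks to volatile
        } while (!sl.validate(stamp));
        return h;
    }
}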
Memory Management
Again, the JVM takes care of this problem thanks to the automatic Garbage Collector, which means this is not an issue in Java or Scala. In other languages like C11 or C++1x this is mostly a deal breaker for the use of optimistic read-locks.
And why is that you may ask?
Well, suppose we are freeing/deleting an object while holding the write-lock of the rw-lock, and at the same time another thread doing an optimistic read tries to access (read-only) that same object... boom!!! Your app just gave a core dump.
How do you fix this?
You either never clean up any object that may be accessed in any way by an optimistic read-lock, or you have to start using some memory management technique like "Hazard Pointers" or "Drop the Anchor". In C++1x you can probably get away with a shared_ptr.
If you're using an optimistic locking mechanism, it means you're interested in high performance, so using one of these solutions will defeat the purpose (even the shared_ptr?).
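To see why the GC saves us here, consider the following minimal sketch (the NodeList class below is made up for illustration): the writer unlinks a node while an optimistic reader may still be holding a reference to it. Since the JVM only reclaims the node once nobody references it anymore, the worst the reader gets is a failed validate() and a retry; in C/C++ the equivalent free()/delete could pull the memory out from under the reader.

import java.util.concurrent.locks.StampedLock;

class NodeList {
    static class Node { int key; Node next; }

    private Node head = null;
    private final StampedLock sl = new StampedLock();

    // Writer: unlinks the first node. In Java the unlinked node stays alive for
    // as long as some reader still holds a reference to it, and only then does
    // the GC reclaim it. In C/C++, freeing it here could crash a concurrent
    // optimistic reader.
    void removeFirst() {
        long stamp = sl.writeLock();
        try {
            if (head != null) head = head.next;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    // Optimistic reader: may look at a node that has just been unlinked, but it
    // can never touch freed memory, so the worst case is a failed validate()
    // followed by a retry under the pessimistic read-lock.
    int firstKeyOrZero() {
        long stamp = sl.tryOptimisticRead();
        Node first = head;
        int key = (first == null) ? 0 : first.key;
        if (!sl.validate(stamp)) {
            stamp = sl.readLock();
            try {
                first = head;
                key = (first == null) ? 0 : first.key;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return key;
    }
}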
Invariants
The first two issues are fairly obvious, but this one has some macabre subtleties. So what do we mean by "Invariants"?
Your code, the part you protect with the write-lock and the part you protect with a read-lock, is easy to reason about when using regular (pessimistic) read-locks, because it behaves the same way as single-threaded code, i.e. it behaves the way you expect it to. The problem with an optimistic read-lock is that another thread may be holding the write-lock and modifying some of the variables at the same time as the thread attempting the optimistic read. And what's wrong with that, you may ask?
Let's look at an example:
class Boooom {
    int x = 0;
    int y = 1;
}
where one thread is doing this:
while (true) {
    long stamp = stampedLock.writeLock();
    x++;
    y++;
    stampedLock.unlockWrite(stamp);
}
and another thread is doing this:
long stamp;
int z;
do {
    stamp = stampedLock.tryOptimisticRead();
    z = 1/(x-y);    // may throw before validate() ever runs
} while (!stampedLock.validate(stamp));
So what's wrong with the code above?
If we were using a regular (pessimistic) read-lock this would be fine, because x would always be seen as y-1. With an optimistic read-lock this is no longer true: x and y can be completely unrelated, due to y being incremented several times after x is read, or the compiler reordering the reads of x and y... pretty much anything can happen. In fact, one of the most likely outcomes is for x to be seen as equal to y, which means that z = 1/0, which throws an ArithmeticException in Java and in other languages just blows up in your face.
This is just one example of code where some invariant no longer holds when using optimistic read locks.
So how do you fix this?
Hmmm, I guess the old "How do porcupines mate?" joke applies here. You have to be extra careful about what code you place inside the write-lock and what code goes in the optimistic read-lock (see the sketch below). There are so many things that can go wrong that it's not even possible to give a "rule of thumb" for it :/
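For this particular example, the defensive shape is the one shown in the StampedLock documentation: do nothing but plain reads inside the optimistic section, copy the fields to locals, and only use them after validate() succeeds, falling back to the pessimistic read-lock otherwise. The readQuotient() method below is just a sketch of that idea, assuming it lives in Boooom next to the stampedLock used above:

int readQuotient() {
    long stamp = stampedLock.tryOptimisticRead();
    // Only plain reads in the optimistic section: copy the fields to locals
    // and do NOT compute anything with them yet.
    int localX = x;
    int localY = y;
    if (!stampedLock.validate(stamp)) {
        // A writer intervened, so the locals may be inconsistent: re-read them
        // under the pessimistic read-lock, where x == y-1 is guaranteed.
        stamp = stampedLock.readLock();
        try {
            localX = x;
            localY = y;
        } finally {
            stampedLock.unlockRead(stamp);
        }
    }
    // The snapshot is consistent, so localX - localY is always -1 here.
    return 1 / (localX - localY);
}

Even so, you still have to think through every read path like this by hand, which is exactly the kind of care the porcupine joke is about.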
So now what?
Well, this is where Double Instance Locking comes to the rescue:
http://concurrencyfreaks.blogspot.fr/2013/11/double-instance-locking.html
Double Instance Locking is a pessimistic technique, and just like with a regular read-lock you don't have to worry about Atomicity, Memory Management, or Invariants. On top of that, it is lock-free for read operations, which is one of the big theoretical advantages of optimistic read-locks. In practice, optimistic read-locks can easily starve even with low contention from Writers, which the Double Instance Locking will almost never do (it depends on the rw-lock it uses).
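For readers who don't want to click through right away, here is a rough sketch of the core idea: keep two copies of the state, each protected by its own reader-writer lock; writers modify the copies one at a time (so at most one write-lock is held at any moment), and readers simply try the read-lock of either copy until one of them succeeds. The DoubleInstanceCounter class below is a simplified illustration only, not the actual implementation; see the linked post for the real algorithm and its subtleties.

import java.util.concurrent.locks.ReentrantReadWriteLock;

class DoubleInstanceCounter {
    private final long[] value = new long[2];    // two copies of the protected state
    private final ReentrantReadWriteLock[] rwlock =
        { new ReentrantReadWriteLock(), new ReentrantReadWriteLock() };
    private final Object writersMutex = new Object();

    // Reader: a writer holds at most one of the two write-locks at any moment,
    // so a reader that keeps trying will always find a copy it can read from.
    public long read() {
        while (true) {
            for (int i = 0; i < 2; i++) {
                if (rwlock[i].readLock().tryLock()) {
                    try {
                        return value[i];
                    } finally {
                        rwlock[i].readLock().unlock();
                    }
                }
            }
        }
    }

    // Writer: writers are serialized among themselves and apply the same
    // modification to each copy in turn, never holding both write-locks at once.
    public void increment() {
        synchronized (writersMutex) {
            for (int i = 0; i < 2; i++) {
                rwlock[i].writeLock().lock();
                try {
                    value[i]++;
                } finally {
                    rwlock[i].writeLock().unlock();
                }
            }
        }
    }
}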
Let's do a summary table to compare both techniques then:
So what's the conclusion?
It depends on what you need; take a look at the table above and see which one matches your needs the closest.
If I had to give a rule of thumb it would be: if you want to play it safe, or need low latency for read operations, go with the "Double Instance Lock". If you need raw read performance, then neither of the other two will beat the "Optimistic Read" approach. If you have a lot of Writers, then go for a regular Reader-Writer lock (or even just a mutual exclusion lock).