Wednesday, January 24, 2018

A Lock-Free Persistent Queue


Lock-free queues were discovered many years ago, with the best-known and likely the simplest of all being the one by Maged Michael and Michael Scott, back in 1996:
http://www.cs.rochester.edu/~scott/papers/1996_PODC_queues.pdf

Nowadays, the trendy stuff is mixing concurrency with persistence, and when I talk about persistence, I mean Non-Volatile Memory, or Storage-Class Memory, or NVDIMMs.

The simplest way to have a persistent queue is to take a transactional persistency engine, or PTM as I like to call it (Persistent Transactional Memory), and wrap a sequential queue implementation in it. An example is to use PMDK:

https://github.com/pmem/pmdk
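For illustration, here is a minimal sketch of what that wrapping might look like with PMDK's libpmemobj-cpp bindings. The PNode and PQueueRoot types and txEnqueue() are hypothetical, just enough to show a sequential enqueue wrapped in a transaction:

#include <libpmemobj++/make_persistent.hpp>
#include <libpmemobj++/p.hpp>
#include <libpmemobj++/persistent_ptr.hpp>
#include <libpmemobj++/pool.hpp>
#include <libpmemobj++/transaction.hpp>

using namespace pmem::obj;

struct PNode {
    p<int> value;
    persistent_ptr<PNode> next;
};

struct PQueueRoot {
    persistent_ptr<PNode> head;   // sentinel node
    persistent_ptr<PNode> tail;
};

void txEnqueue(pool<PQueueRoot>& pop, int value) {
    // The whole sequential enqueue runs inside one transaction: either all
    // of it becomes persistent, or none of it does. Simple, but lock-based.
    transaction::run(pop, [&] {
        auto node = make_persistent<PNode>();
        node->value = value;
        auto root = pop.root();
        root->tail->next = node;
        root->tail = node;
    });
}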

However, PMDK and the other log-based techniques for consistent persistence are all lock-based. What if you want a lock-free data structure, something as simple as a queue?
Well then, you would be stuck, or at least until now you would have been stuck  :)

A few top names got together recently and decided to make a persistent lock-free queue, likely based on the MS queue. I haven't read the paper because it isn't out yet; there is only a brief announcement online:
http://drops.dagstuhl.de/opus/volltexte/2017/7968/pdf/LIPIcs-DISC-2017-50.pdf
Their paper has been accepted to PPoPP 2018 so hopefully in a month it will be available somewhere.
Now, these names are well known in Concurrency. I mean, we're talking about Maurice Herlihy (who invented wait-freedom and linearizability) and Erez Petrank (he and Alex Kogan made the first MPMC wait-free queue). If these guys have decided to team up to make a queue, it's something worth noticing!

Although no lock-free persistent queue existed until now, since 2016 there has been a kind of recipe, provided by these other guys:
https://www.cs.rochester.edu/u/jhi1/papers/2016-spaa-transform
Notice that one of the authors is none other than Michael Scott, one of the authors of the original lock-free queue.
Their recipe for transforming lock-free algorithms into persistent lock-free algorithms is simple and elegant, even if overkill for most usages.
On x86, you can think of a pwb as being a CLFLUSHOPT instruction, and a pfence or psync as being an SFENCE instruction.
Basically:
  • Add a pfence before every store-release and a pwb after;
  • After every load-acquire add a pwb and then a pfence;
  • Before and after every CAS/fetch_add/exchange add a pfence;
And that's it, it's so simple that even a compiler could do it for you... and probably one day a compiler will do it for you!
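To make the recipe concrete, here is a minimal sketch of the transformation applied to a single store-release, with the primitives mapped to x86 instructions as described above (the PWB/PFENCE/PSYNC names match the ones used in the code later in this post):

#include <atomic>
#include <immintrin.h>   // _mm_clflushopt and _mm_sfence (compile with -mclflushopt)

inline void PWB(void* addr) { _mm_clflushopt(addr); }   // flush the cache line towards NVM
inline void PFENCE()        { _mm_sfence(); }           // order the flushes
inline void PSYNC()         { _mm_sfence(); }           // wait for the flushes to reach NVM

std::atomic<int> x {0};

void transformedStoreRelease() {
    // Original code:  x.store(1, std::memory_order_release);
    PFENCE();                                 // pfence before the store-release...
    x.store(1, std::memory_order_release);
    PWB(&x);                                  // ...and a pwb after it
}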

... or maybe it isn't so simple. Things are never as easy as they seem once you start to actually implement stuff.
You see, this automatic transformation works well in most lock-free cases (there are some for which it doesn't, but that's a subject for another post), but a lock-free queue is not just lock-free code: there is also the code in the constructor and the destructor, and that is sequential code.

Implicitly, the MS queue needs head and tail (persistent) variables, and they must be initialized to point to the same sentinel node.
We can't really expect this to happen magically in some atomic way. A failure may occur anywhere during the initialization, leaving these two variables in an inconsistent state; therefore, the initialization and de-initialization must follow a particular sequence, an algorithm.
In fact, this algorithm is quite complex, and I would say it is the trickiest part of getting a correct persistent lock-free queue implementation.
Hopefully, that is what Maurice, Erez, Virendra and Michal will show in their paper next month at PPoPP 2018, but until then, here is my take on it, available on github:
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/CPP/pqueues/PMichaelScottQueue.hpp

Andreia and I discussed this a bit, and I've done a very preliminary implementation of a persistent lock-free queue.
First, we followed the transformation rules with pwb/pfence/psync, but they add too many fences. Andreia is awesome at reducing algorithms to their bare essentials, and I made some contribution as well, the end result being that we got rid of most of the pfences.
This algorithm was designed such that, on enqueue(), a successful CAS on ltail->next implies that the pwbs for newNode->item, newNode->next and tail have been done, and a successful CAS on tail means that the pwb on ltail->next has been done. This kind of implicit happens-before guarantee means the queue is always in (at worst) a semi-consistent state, which the next operation can safely recover from, without the need for an explicit recovery method.

Here is the code for enqueue(), and yes, it's just the MS algorithm plus some strategically placed persistence fences and pwbs  ;)
void enqueue(T* item, const int tid) {
    if (item == nullptr) throw std::invalid_argument("item can not be nullptr");
    Node* newNode = new Node(item);   // TODO: replace this with NVM allocator
    PWB(&newNode->item);
    PWB(&newNode->next); // Just in case 'item' and 'next' are not on the same cache line
    while (true) {
        Node* ltail = hp->protectPtr(kHpTail, tail, tid);
        if (ltail == tail.load()) {
            Node* lnext = ltail->next.load();
            if (lnext == nullptr) {
                PWB(&tail);   // Persist tail so that a successful casNext() implies the pwb on tail is done
                if (ltail->casNext(nullptr, newNode)) {
                    PWB(&ltail->next);   // Persist the link to the new node
                    casTail(ltail, newNode);
                    PWB(&tail);          // Persist the advanced tail...
                    PSYNC();             // ...and wait, so the effects are durable on return

                    hp->clear(tid);
                    return;
                }
            } else {
                PWB(&ltail->next);   // Persist the lagging link before helping advance tail
                casTail(ltail, lnext);
            }
        }
    }
}


And here is the code for dequeue():
T* dequeue(const int tid) {
    Node* node = hp->protect(kHpHead, head, tid);
    while (node != tail.load()) {
        Node* lnext = hp->protect(kHpNext, node->next, tid);
        PWB(&tail);   // Persist tail and head so that a successful
        PWB(&head);   // casHead() implies both pwbs are done

        if (casHead(node, lnext)) {
            PWB(&head);   // Persist the advanced head...
            PSYNC();      // ...and wait, so the effects are durable on return

            T* item = lnext->item;  

            hp->clear(tid);
            hp->retire(node, tid); 
            return item;
        }
        node = hp->protect(kHpHead, head, tid);
    }
    hp->clear(tid);
    return nullptr;                  // Queue is empty
}


The PWB() and PSYNC() calls above are the fences added to guarantee persistence. In fact, this queue is not just durable, it is Durable Linearizable, which is (in my view) the easiest model to reason about for durability, as important to Persistence as Linearizability is to Concurrency.

The reason the pfences were taken out is that we're assuming the CAS has persistence semantics similar to a pfence, one which doesn't act on the load/store of the CAS itself, only on the other loads and stores. In other words, it's as if a CAS were equivalent to:
  PFENCE();
  CAS()     // concurrent
  PFENCE();

The reason we assume this is that, on x86, LOCKed instructions and read-modify-write instructions like CAS ensure ordering for CLFLUSHOPT and CLWB (our pwbs). For more details, see Intel's manual for CLFLUSHOPT:
https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
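In code, this assumption means a persistent CAS needs no explicit fences of its own on x86. A minimal sketch (PCAS is a hypothetical name, just for illustration):

#include <atomic>

// No PFENCE() before or after: the LOCK CMPXCHG generated by
// compare_exchange_strong() already orders the surrounding CLFLUSHOPTs,
// giving the pfence-CAS-pfence semantics shown above for free.
template<typename T>
bool PCAS(std::atomic<T>& var, T expected, T desired) {
    return var.compare_exchange_strong(expected, desired);
}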

As for the pwb and psync before returning, they're not always needed, but they help with reasoning in terms of composability.
The only way to observe effects from this queue is to call enqueue() or dequeue(); therefore, the next call to the same method will flush the cache line and persist it. However, if you want to do something like:
   q.enqueue(a);
   a_is_persisted = true;
   PWB(&a_is_persisted);

then the only way to guarantee correct ordering of a_is_persisted with the element 'a' actually being in the queue and persistent is to have the pwb and a psync (or pfence) before returning from enqueue()/dequeue().


Adding the persistence fences and then reducing them to a minimum was the easy part. The tough part is gluing that together with a constructor and a destructor that recover after a failure.
Here is what the constructor and destructor look like:
PMichaelScottQueue() {
    PWB(&head);     // head and tail are expected to be zero-initialized (nullptr) by the allocator
    PWB(&tail);
    PFENCE();
    recover();      // Re-use the same code as the recovery method
}

~PMichaelScottQueue() {
    destructorInProgress = true;
    PWB(&destructorInProgress);
    PFENCE();
    recover();  // Re-use the same code as in the recovery method
}


Simple, huh? ... not so fast: now we need to show the recover() method:
void recover() {
    if (destructorInProgress) {
        if (head.load(std::memory_order_relaxed) != nullptr) {
            while (dequeue(0) != nullptr);   // Drain the queue
            Node* lhead = head.load(std::memory_order_relaxed);
            head.store(nullptr, std::memory_order_relaxed);
            PWB(&head);
            PFENCE();
            delete lhead;   // Delete the last node    // TODO: replace this with NVM deallocator
        }
        PSYNC();
        return;
    }
    hp = new HazardPointers<Node>{2, maxThreads};
    // If head is null, then a failure occurred during the constructor
    if (head.load(std::memory_order_relaxed) == nullptr) {
        Node* sentinelNode = new Node(nullptr);    // TODO: replace this with NVM allocator
        head.store(sentinelNode, std::memory_order_relaxed);
        PWB(&head);
        PFENCE();
    }
    // If tail is null, then fix it by setting it to head
    if (tail.load(std::memory_order_relaxed) == nullptr) {
        tail.store(head.load(std::memory_order_relaxed), std::memory_order_relaxed);
        PWB(&tail);
        PFENCE();
    }
    // Advance the tail if needed
    Node* ltail = tail.load(std::memory_order_relaxed);
    Node* lnext = ltail->next.load(std::memory_order_relaxed);
    if (lnext != nullptr) {
        tail.store(lnext, std::memory_order_relaxed);
        PWB(&tail);
    }
    PSYNC();
}


Yep, now things are starting to get complicated, which I'm not a big fan of, but it's as good as I can get it  :(
Hopefully the approach which will be shown at PPoPP will be simpler than this.

About the constructor:
As long as the allocator returns a zeroed-out memory region, 'head' and 'tail' will be nullptr even if there is a crash immediately at the start of the call to the constructor. If the allocator can't guarantee that the data is zeroed out, then there is no 100% safe way to distinguish between a proper initialization and trash immediately after allocating the queue.
A crash occurring after the 'head' and 'tail' are made persistent (with nullptr) is always recoverable, although there are a few different cases:
- If the head is nullptr then the sentinel node must be allocated;
- If the head is non-null but tail is null, then the sentinel was allocated and assigned to head but not to tail;
- If both head and tail are non-null and tail->next is non-null then tail is not pointing to the last node and we need to advance tail;

About the destructor:
The destructor must first drain the queue, to avoid leaking as much as possible. Then, it needs to de-allocate the last node and zero out the head, to make sure that, in the event of a failure, the recovery operation will not try to "recover" the queue. For this, we have a persistent variable named 'destructorInProgress' which is set before starting the destruction operation.
After destructorInProgress has been set to true and ordered with a pfence, we can clear head, and only then can we de-allocate the last node.
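For reference, here is a sketch of the persistent members that this construction/destruction/recovery sequence relies on (matching the code above; the exact declarations in the repository may differ):

std::atomic<Node*> head {nullptr};    // Persistent: nullptr means not-yet-constructed (or destroyed)
std::atomic<Node*> tail {nullptr};    // Persistent: fixed up by recover() if a crash left it lagging
bool destructorInProgress = false;    // Persistent: set and pfenced before draining the queue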


How fast is it?
Well, on DRAM emulating NVDIMMs, implementing pwb/pfence/psync as CLFLUSHOPT/SFENCE/SFENCE, we get some not too bad results when compared with the regular MS queue:

[Plot: throughput of the persistent MS queue vs. the regular MS queue]

For the most attentive of you, this queue seems to have slightly better performance than the one shown in this brief announcement:
http://drops.dagstuhl.de/opus/volltexte/2017/7968/pdf/LIPIcs-DISC-2017-50.pdf
However, our implementation has integrated memory reclamation, which may be acting as a kind of back-off... or maybe we have a smaller number of fences. Whatever the reason, when the other queue is out in a month, I'll try to re-run the benchmark to compare them  ;)
Btw, our synthetic benchmark is very simplistic: do an enqueue, followed by a dequeue, and then repeat 10 million times. I'm not a big fan of this benchmark, but it's what everybody uses in academic papers to measure queue performance, so that's what we use too.
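Here is a minimal sketch of that measurement loop (the timing harness is illustrative; the queue class is the one linked above):

#include <chrono>
#include <cstdio>
#include "PMichaelScottQueue.hpp"   // from the repository linked above

void runBenchmark(PMichaelScottQueue<int>& queue, const int tid) {
    static int item = 42;                     // dummy payload
    const long iterations = 10000000;         // 10 million enqueue/dequeue pairs
    const auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < iterations; i++) {
        queue.enqueue(&item, tid);            // one enqueue...
        queue.dequeue(tid);                   // ...followed by one dequeue
    }
    const auto stop = std::chrono::steady_clock::now();
    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::printf("%.0f pairs/second\n", iterations * 1e6 / us);
}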


One last note: all allocation and de-allocation operations in this queue are prone to leaking, if a failure occurs immediately before a de-allocation or immediately after an allocation. There is no way around this problem without transactions, and seeing as we're trying to get lock-free progress, the transactional mechanism would have to be lock-free as well, and there is no lock-free PTM published (yet).
I'm not the only one complaining about this. Paul McKenney points this out as one of the fallacies in the "lock-free data structures are resilient" argument, typically touted as one of the advantages of lock-free (by myself included). Maurice Herlihy has some interesting stuff to say around that topic as well:
https://www.youtube.com/watch?v=94ieceVxSHs&t=44m40s



We'll talk more about transactions in a future post; for today, that's all. In the meantime, have fun with the code:
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/CPP/pqueues/PMichaelScottQueue.hpp
