Today I want to talk about correctness and the creative process of designing algorithms.
Let's start with the latter.
A couple of weeks ago a link was shared on Hacker News to a blog post by the folks at QuestDB, in which they describe a concurrency technique they discovered for a single writer and multiple readers: https://questdb.io/blog/concurrent-lockfree-datastructure-design-walkthrough/
Turns out, their concurrency control is a poor-man's version of Left-Right, a universal construction that Andreia and I discovered more than a decade ago and published in a peer-reviewed conference:
https://hal.science/hal-01207881
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/left-right-2014.pdf
Left-Right provides linearizable, irrevocable execution, with starvation-free progress for the single writer and wait-free progress for the readers, by keeping two replicas of the data.
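To give a rough idea of the technique, here is a minimal sketch of Left-Right protecting a std::map. This is illustration-only code with assumed names, loosely following the paper's leftRight/versionIndex/readIndicator terminology; see the paper for the real algorithm and its proof.

#include <atomic>
#include <map>
#include <string>

class LeftRightMap {
    std::map<std::string,int> instance[2];        // two full replicas of the data
    std::atomic<int>  leftRight{0};               // which replica readers should read
    std::atomic<int>  versionIndex{0};            // which read-indicator new readers use
    std::atomic<long> readIndicator[2]{{0},{0}};  // how many readers arrived on each indicator

    void waitForEmpty(int vi) { while (readIndicator[vi].load() != 0) { /* spin */ } }

public:
    // Wait-free read: announce arrival on the current read-indicator, read the
    // published replica, announce departure. No loops, no retries.
    int get(const std::string& key) {
        int vi = versionIndex.load();
        readIndicator[vi].fetch_add(1);
        int lr = leftRight.load();
        auto it = instance[lr].find(key);
        int ret = (it != instance[lr].end()) ? it->second : -1;
        readIndicator[vi].fetch_sub(1);
        return ret;
    }

    // Single-writer modification (a mutex would be needed to serialize multiple writers).
    void put(const std::string& key, int val) {
        int lr = leftRight.load();
        instance[1 - lr][key] = val;          // 1. mutate the replica nobody is reading
        leftRight.store(1 - lr);              // 2. publish it to new readers
        int prevVI = versionIndex.load();
        int nextVI = 1 - prevVI;
        waitForEmpty(nextVI);                 // 3. drain readers on the other indicator
        versionIndex.store(nextVI);           // 4. steer new readers to it
        waitForEmpty(prevVI);                 // 5. drain readers that may still see the old replica
        instance[lr][key] = val;              // 6. apply the same change to the old replica
    }
};

The key property is that readers never wait: the writer does all the waiting, draining readers from each read-indicator before it touches the replica they might still be reading.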
Obviously, it's a bit sad when due credit is not given where it is due, but smart readers have pointed out in the blog's comments and on Hacker News that what is being described is indeed Left-Right (or a simpler, less powerful version of it).
This is ok because the good stuff will be re-discovered several times over, and Left-Right is certainly one of those gold nuggets. In the end what matters is not my ego being massaged, but that developers get one more great tool in their toolbox to handle concurrency. As I've said many times, concurrency is hard, which means we need all the tools and help we can get.
Not all developers will read my blog posts or see my CppCon talk about Left-Right, which means there is high value for the community in having an extra entry point in the form of the QuestDB blog, which has a wider audience than my blog or my papers, and explains the technique in a more detailed and accessible way.
Ultimately, what matters is that developers learn something new and useful, not who came up with it first.
Let me share a funny story about how (our) creative process works.
After discovering Left-Right as a way to solve a particular problem in my day job, Andreia and I started actively working on concurrency. We would dream up new ideas, bounce them off each other and, when we felt they were solid, then and only then, go look them up in the existing literature. Usually we would end up finding the same idea, or something very similar, in an existing paper. More often than not, that paper was by Maurice Herlihy, yes, the guy who literally wrote the book on concurrent algorithms https://www.amazon.com/Art-Multiprocessor-Programming-Maurice-Herlihy/dp/0124159508
Perhaps other people would have been discouraged by this, but we were new to the field, and being able to rediscover something which the great Maurice Herlihy had discovered 20 years before actually felt like "a win", so we kept working, coming up with ideas, and implementing and testing the best ones.
As the months passed by, Andreia and I kept (re-)discovering algorithms, but we noticed a pattern emerge: our latest discoveries had been published by Maurice only a handful of years previously. This meant we were catching up with the state of the art!
Eventually, we discovered one of the reader-writer lock algorithms in this paper, just a few months after the paper was published: https://cs.brown.edu/~irina/papers/ppopp13.pdf
and some months after that we discovered other stuff that was actually new. I think CX was one of the first ones:
https://github.com/pramalhe/CX
although it was too complex for most people to understand and therefore was only published much later in PPoPP 2020.
After this, we were on a roll and kept doing stuff in this field https://dblp.uni-trier.de/search?q=pedro+ramalhete
and still have a lot of cool unpublished work to publish ;)
Going through the peer-review process is important for these complex algorithms because it's one of the few ways to validate their correctness. From the start, we spent years adding and perfecting our correctness tests to make sure that our implementations are bug-free, or at least don't have any bugs that can be observed by invariant testing and stress-testing. But testing is not enough; the work needs to be seen and validated by other experts. That is one of the few advantages of the long and painful process we call peer review.
To this day, I believe there are only a handful of people who really understand how CX works, and the only reason that paper was accepted is that one or two of them were reviewing it. This is not to say that CX is like "General Relativity" or something; it's just that universal constructions are rare and complex beasts which most people don't care about, or don't have the inclination to spend the time looking into, and that's fine.
Although over the years I've become pretty good at finding bugs in concurrent code and concurrent algorithms (see later in this post), I am lucky to have access to one of the best tools ever created to find bugs in concurrent algorithms: Andreia's brain ;)
I've lost count of how many times I've used this powerful tool to validate designs and find bugs in algorithms which would otherwise have cost me months or even years of work on a (in retrospect) dumb idea.
In summary, it's ok to re-discover existing algorithms: it validates the usefulness and correctness of those algorithms, and it also shows that you, as a researcher, are on the right track to making important discoveries in that field. That is, assuming there are still important discoveries to be made in that field, but that's a topic for a different post.
But what is not ok is to read other people's papers and code, make a stripped-down version of it, claim it as your own and, when the stripping-down makes it incorrect, still claim that it is correct and lock-free. And this is where we get to the "Peredvizhnikov Engine".
The "Atomic Struct" in the "Peredvizhnikov Engine" is in essence a stripped down version of the OneFile STM (the lock-free) variant: https://github.com/pramalhe/OneFile/
https://ieeexplore.ieee.org/document/8809531
OneFile was published back in 2019 and it was the first wait-free STM.
It's not super novel, in the sense that it uses ideas which already existed:
- It uses double-word CAS, where each data word has an adjoining unique version number to ensure progress and ABA-free execution (sketched right after this list). This had been used previously in multi-word CAS algorithms;
- OneFile wait-free uses Herlihy's combining consensus, discovered by Herlihy almost two decades earlier and used before in the P-Sim universal construction;
- It uses a variant of Hazard Eras to do memory reclamation in a wait-free way;
- It shares code between threads using std::function. Rachid Guerraoui had shown some years before that sharing code among threads is the only way to have wait-free generic execution;
- etc
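To make the first point a bit more concrete, here is a minimal sketch (assumed names, not the actual OneFile code) of the "data word plus adjacent version" idea: every data word is paired with a unique, monotonically growing version number, and updates are done with a double-word CAS over the pair. Because the version never repeats, a slow helper can never ABA an old value back in. Note that a 16-byte std::atomic is only lock-free on x86-64 when built with cmpxchg16b support (e.g. -mcx16); otherwise it may fall back to a lock.

#include <atomic>
#include <cstdint>

struct TMWord {
    uint64_t val;   // the application's data
    uint64_t seq;   // version of the write-transaction that last wrote it
};

// Applies one redo-log entry on behalf of the write-transaction 'txSeq'.
// If another helper already installed this value (word.seq == txSeq), our CAS
// fails harmlessly: either way the word ends up holding {newVal, txSeq}.
void apply_entry(std::atomic<TMWord>& word, uint64_t newVal, uint64_t txSeq) {
    TMWord cur = word.load();
    if (cur.seq >= txSeq) return;               // already applied by someone else
    TMWord expected = cur;
    word.compare_exchange_strong(expected, TMWord{newVal, txSeq});  // double-word CAS
}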
But OneFile is novel because it combines these ideas into something that no one had achieved before: a single-replica lock-free/wait-free STM, with integrated lock-free/wait-free memory reclamation. Previous wait-free universal constructions keep multiple replicas of the entire data, with CX using 2N replicas in the worst case, and they may need to make copies of the entire dataset, which can be extremely inefficient.
In short, OneFile executes the code of the transaction speculatively, saving all modifications in a redo-log. At the end of the transaction it does a CAS on a central variable, curTX, to announce that it wants to commit: 1 bit in curTX indicates that a write-transaction is now in-flight and 8 bits hold its thread-id, so that other threads may copy the redo-log and help apply the modifications with double-word CASes.
When one of the threads sees that all the DCASes are done, it CASes the central variable curTX again to clear the in-flight bit and increment the sequence number, after which a new write-transaction can execute.
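As an illustration only, here is roughly what that commit path could look like. The bit layout below is an assumption based on the description above, not the actual OneFile code.

#include <atomic>
#include <cstdint>

constexpr uint64_t IN_FLIGHT = 1ull << 0;   // a write-transaction is in flight
constexpr uint64_t TID_SHIFT = 1;           // 8 bits for the writer's thread-id
constexpr uint64_t SEQ_SHIFT = 9;           // remaining bits for the sequence number

std::atomic<uint64_t> curTX{0};

// A writer announces itself: CAS from "seq, not in flight" to
// "seq, in flight, my thread-id". Only one writer wins per sequence number.
bool try_begin_commit(uint64_t seq, uint64_t tid) {
    uint64_t expected = seq << SEQ_SHIFT;
    uint64_t desired  = (seq << SEQ_SHIFT) | (tid << TID_SHIFT) | IN_FLIGHT;
    return curTX.compare_exchange_strong(expected, desired);
}

// Whoever finishes applying the redo-log clears the in-flight bit and bumps the
// sequence number. Several helpers may race here; only one CAS succeeds and the
// others fail harmlessly, which is exactly what we want.
void finish_commit(uint64_t seq, uint64_t tid) {
    uint64_t expected = (seq << SEQ_SHIFT) | (tid << TID_SHIFT) | IN_FLIGHT;
    uint64_t desired  = (seq + 1) << SEQ_SHIFT;
    curTX.compare_exchange_strong(expected, desired);
}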
Disjoint read-transactions read the sequence number in curTX and, if the version of every word they read is below this number, the data is consistent and the transaction doesn't need to abort. Otherwise it aborts, helps the current in-flight writer, and restarts.
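A small sketch of that read-side validation, again with assumed names and the same value-plus-version word as above (re-declared here so the snippet stands on its own):

#include <atomic>
#include <cstdint>

struct TMWord { uint64_t val; uint64_t seq; };   // data word + version, as sketched earlier

// Returns true and fills 'out' if the word is consistent with the reader's
// snapshot of curTX's sequence number; returns false if a newer write-transaction
// touched it, in which case the reader should help the in-flight writer and restart.
bool read_word(std::atomic<TMWord>& word, uint64_t readSeq, uint64_t& out) {
    TMWord w = word.load();
    if (w.seq > readSeq) return false;   // modified after our snapshot: abort and help
    out = w.val;
    return true;
}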
This behavior is lock-free, but it's possible to add a wait-free consensus and make it wait-free; the details do not matter for this particular post.
The "Atomic Struct" described here https://github.com/eduard-permyakov/peredvizhnikov-engine/blob/master/docs/lockfree.pdf
uses single-word CAS instead of double-word CAS because it splits the 64 bits into 32 bits of data and 32 bits for the unique version number. This wraps around faster than using 64+64 bits, but in practice it can be made safe.
A write operation tries to install a pointer to its function in a central variable, along with a unique sequence number, and other threads follow this pointer and execute the same function. Like in OneFile, each write to memory is replaced by a compare-and-exchange, so as to prevent ABA.
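A rough sketch of what such a packed word looks like, based on my reading of the document (names and the exact check are assumptions for illustration):

#include <atomic>
#include <cstdint>

// One 64-bit word split into 32 bits of data and 32 bits of version, updated
// with a plain single-word CAS instead of OneFile's double-word CAS.
inline uint64_t pack(uint32_t val, uint32_t ver) { return (uint64_t(ver) << 32) | val; }
inline uint32_t value_of(uint64_t w)             { return uint32_t(w); }
inline uint32_t version_of(uint64_t w)           { return uint32_t(w >> 32); }

// Each "write" in the operation becomes a CAS that only succeeds if the word
// still carries an older version, so a helper that arrives late simply fails.
bool write_word(std::atomic<uint64_t>& word, uint32_t newVal, uint32_t txVersion) {
    uint64_t cur = word.load();
    if (version_of(cur) >= txVersion) return false;   // already updated for this operation
    return word.compare_exchange_strong(cur, pack(newVal, txVersion));
}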
However, in Peredvizhnikov there is no redo-log: threads just CAS their writes directly into memory. Unfortunately this is incorrect, i.e. it is not linearizable. Consider the following example:
Suppose there is an operation that reads a memory location into a temporary variable and later uses this variable to write into a different memory location. Any helping thread will not be able to recover the original value.
1: void txn_function() {
2: int t = a; // suppose 'a' is initially 0
3: a = 42;
4: b = t; // b must be 0 (otherwise this is not linearizable)
5: }
In this example, if thread 1 executes line 3 and then thread 2 enters and executes line 2, thread 2 will see variable 'a' as 42 and there is no way for it to obtain the original value of 0 to place in its own copy of 't', which it needs in order to help execute line 4 and write into variable 'b'.
The Peredvizhnikov Engine is not lock-free because if thread 1 dies after executing line 3, no other thread will be able to make progress ever again.
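For contrast, here is a minimal sketch (assumed names, not the OneFile API) of how a redo-log sidesteps this problem: writes go into a per-transaction log instead of memory, and reads consult the log first, so every helper that re-executes the function sees the original memory values plus its own tentative writes, never another helper's partial stores.

#include <cstdint>
#include <unordered_map>

struct RedoLog {
    std::unordered_map<uint64_t*, uint64_t> writes;  // addr -> tentative value

    uint64_t read(uint64_t* addr) {
        auto it = writes.find(addr);
        return (it != writes.end()) ? it->second : *addr;  // log first, then memory
    }
    void write(uint64_t* addr, uint64_t val) { writes[addr] = val; }  // defer the store
};

// The example transaction re-written against the log: 'b' always gets the
// original value of 'a', no matter how many helpers run this concurrently,
// because nothing touches memory until the commit phase applies the log.
void txn_function(RedoLog& log, uint64_t* a, uint64_t* b) {
    uint64_t t = log.read(a);
    log.write(a, 42);
    log.write(b, t);
}

Only after the commit CAS on curTX succeeds does the log get applied to memory, and at that point it can be applied by any thread, which is what makes helping, and therefore lock-freedom, possible.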
You could argue that it is lock-free and it is correct if, and only if, it is applied to functions where this kind of behavior doesn't happen. The problem with that argument is twofold:
First, you can no longer claim that it is an STM, or a generic concurrency control, or a universal construction. All of these things are meant for generic sequential code, which means that if it doesn't work with generic code then it's not an STM/UC/generic-CC.
Second, it's actually really hard to read a block of code and understand whether one such issue exists for that particular block of code, to the point where only experts can do it. And such experts would rather design their own lock-free code to suit their needs than go and use the Peredvizhnikov engine. I know this is a nuanced argument, but another way to think of it is that this is a tool so hard to use correctly that you would rather do everything "by hand" without the help of such a tool.
What I believe happened, though this is just a guess, is that the author of this work read the OneFile paper and tried to make something better. This is a good mindset to start from, but the problem is that the author didn't understand why OneFile needs a redo-log, or the sharing of the redo-log, and therefore thought it was unnecessary and decided to make a better (incorrect) version of OneFile. This is unfortunately what happens when your ideas don't go through peer review.
I know peer review is awful, and I've complained about it many times myself, but there is a reason why it exists: so that the information we disseminate is reliable, or at least has been validated by someone who knows a bit about what they're talking about.
I blame Hacker News and its sensationalist take on the world of Computer Science, because it gives a voice to anyone who shouts loud enough. I know that's how you get "disruption", which Silicon Valley loves, but the downside is that we're infecting everyone's brains with broken tools that simply don't work, and Peredvizhnikov is one such case.
Claiming that an algorithm is correct when it isn't is bad for our community, because it devalues the ones that are correct: the algorithms where people spent many months or years of effort writing tests, writing a formal proof, and going through the peer-review process.
Claiming that an algorithm is lock-free when it isn't is even worse, because it devalues how hard it is to make correct lock-free algorithms. It makes it look easy, and it diminishes the work I've done for the past decade. So yes, I'm human and I feel wronged and saddened when other people say their algorithm is lock-free when it isn't.
But my feelings don't matter; what matters is that, when applied to generic code (or even non-generic code), the technique described in the Peredvizhnikov Engine is neither correct nor lock-free.