The other day, I was daydreaming about what the "perfect language" for concurrency would look like, and one of the design questions I was considering was how useful coroutines are, and whether they can coexist with transactions?
Spoiler alert: the answer is almost certainly no.
Both the transactional model and the coroutine model have their uses. Transactions are a nice way to solve problems that need to share data across different execution contexts, while coroutines are a way to reduce latency for problems that are naturally asynchronous, and perhaps a way to increase scalability for fully disjoint-access-parallel problems, such as sessions on a web server, though I would argue this scalability stems from the "task" where the coroutine runs, not from the coroutines themselves.
Transactions running in threads can also scale, but when a thread executes a blocking call, that thread is now blocked, and threads are costly to fire up and maintain, while tasks (which run the coroutines) are cheap to fire up in terms of both latency and memory usage.
Leaving that aside for a moment, let's take a look at what happens when we mix coroutines with transactions.
Clearly, calling a transaction from inside a coroutine has no semantic issues:
funcA();
result = spawn({
    transaction { funcB(); }
});
funcC();
result.wait_for();
This assumes that only transactions can access shared data, and that spawn() starts a new task (possibly on a different thread).
A new task (or thread) will be spawned, and the coroutine will execute in it, at which point it will start a transaction. The transactional scope is fully inside the coroutine's lifetime. All is good.
The other way around is not as obvious:
transaction {
    funcA();
    result = spawn({ funcB(); });
    funcC();
}
result.wait_for();
This seems correct as well.
The transaction starts, executes funcA(), then it starts a new task. Here there are two design choices: either we say that the code inside the coroutine is not part of the transaction, or we say that it is part of the same transaction which spawned it.
The second choice (making the coroutine part of the same transaction) is tricky, because it means we need to carry the transaction's context over to another task or thread and continue the transaction there as if it were the same one, which is technically hard. Also, we can't have the same transaction being executed simultaneously from two parallel execution contexts, which means we would not be able to run both at the same time, and that defeats the whole point of having coroutines; in that case, we may as well make spawn() a no-op and just execute its contents on the same thread.
Semantically, it makes more sense to have funcB() execute in its own execution context, i.e. outside the transaction, and this is fine: the two contexts (the main one and the coroutine) must not share any data with each other, but otherwise everything should work, at least in this simple example.
The trouble starts when we use transactions both outside and inside a coroutine:
transaction {
    funcA();
    result = spawn({
        transaction { funcB(); }
    });
    funcC();
}
result.wait_for();
Should this be treated the same as a flat-nested transaction?
The answer is no, this is not possible, because the coroutine's transaction may be executing in a different thread, and the developer asked for the call to funcB() to be atomic, i.e. not interleaved with whatever the outer transaction is doing, namely the calls to funcA() and funcC().
These are two separate transactions, possibly executed in two different threads, and with their own contexts.
Notice that using a coroutine only makes sense if there is no shared data between the caller and the coroutine, which, when true, allows both to execute at the same time. On the other hand, if they do happen to share data, then the two transactions will conflict and serialize with each other, which is not efficient but is still correct.
At this moment you may be asking: if a coroutine is not supposed to share data with the caller, then why does it need to execute a transaction?
The answer is: the coroutine may not share data with the caller thread, but it may need to access data shared with other threads.
Another question: is it ok to wait inside the transaction for a coroutine to complete?
transaction {
    funcA();
    result = spawn({
        transaction { funcB(); }
    });
    funcC();
    result.wait_for();
}
At first sight this may seem innocuous enough, but suppose that funcB() accesses data in common with funcC(). This means the two transactions will conflict, which means one of them will be executed after the other. Remember that transactional conflicts can only be detected at run-time.
If the Transactional Engine decides to execute the transaction with funcB() first, and then execute the transaction with funcA() and funcC(), that's ok.
But what if it decides to execute the outer transaction first? Well, then we've deadlocked, haven't we? The call to wait_for() will wait indefinitely for the transaction with funcB() to execute, but that transaction won't run until the outer transaction has either committed or aborted and restarted, and neither will happen while it sits waiting in wait_for().
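To make the shape of this deadlock more tangible, here is a small Java sketch that reproduces it with a plain lock instead of a transactional engine: the lock stands in for the conflict, and Future.get() stands in for wait_for(). None of this is a transactional API, it is just the same waiting-while-holding-the-conflict pattern, and running it hangs on purpose.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.locks.ReentrantLock;

// The same deadlock shape with a plain lock: the lock stands in for the
// transactional conflict, Future.get() stands in for wait_for().
public class WaitInsideConflict {
    static final ReentrantLock conflict = new ReentrantLock();

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        conflict.lock();                      // the outer context "wins" the conflict first
        try {
            Future<?> result = pool.submit(() -> {
                conflict.lock();              // the inner work needs the same conflict
                try { /* funcB() */ } finally { conflict.unlock(); }
            });
            /* funcC() */
            result.get();                     // waits for work that cannot proceed: deadlock
        } finally {
            conflict.unlock();                // never reached
            pool.shutdownNow();
        }
    }
}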
To make matters worse, take the following change:
transaction {
    sometimes = funcA(); // might return true or false
    if (sometimes) {
        result = spawn({
            transaction { funcB(); }
        });
    }
    funcC();
    if (sometimes) result.wait_for();
}
Consider what happens if, on the first attempt, sometimes is true, funcB() and funcC() conflict, and both transactions restart; on the second execution of the outer transaction we get sometimes=false, which means funcB() shouldn't even be executed.
This means we can't start executing the funcB() transaction until we've executed funcA(), which means we need to execute funcC() too, i.e. the whole funcA()+funcC() transaction; but for that transaction to finish, it must wait for the funcB() transaction to complete, which deadlocks again… see where this is going?
It's tempting to think that we can simply forbid calls to wait_for() inside a transaction, but this is only possible at run-time, and by then it's too late: the deadlock may already be happening. The compiler is not capable of checking for this, unless the language forbids passing "result" into other functions inside the transaction, because inside those functions there may be a wait_for() call.
And how would the compiler know such a thing? How would it know that inside an object being passed around there is a sub-sub-sub-sub-object which is a result from a coroutine, on which some sub-sub-sub-function will call wait_for()?
This doesn't seem possible to identify at compile-time.
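For the sake of argument, here is roughly what such a run-time check could look like, in a Java sketch where Result, waitFor() and the inTransaction flag are all made up for illustration, standing in for the coroutine handle and for state a transactional engine would keep. It also shows why the compiler is blind to the problem: all it sees is an opaque call chain.

// Hypothetical sketch: Result, waitFor() and the inTransaction flag are made up,
// standing in for the coroutine handle and for transactional-engine state.
final class Result {
    // A transactional engine would set this flag when a transaction begins on this thread.
    static final ThreadLocal<Boolean> inTransaction = ThreadLocal.withInitial(() -> false);

    void waitFor() {
        if (inTransaction.get()) {
            // Detected only at run time, after the outer transaction has started,
            // i.e. possibly after the deadlock has already been set up.
            throw new IllegalStateException("wait_for() called inside a transaction");
        }
        // ... otherwise block until the coroutine completes ...
    }
}

// The compiler, by contrast, only sees an opaque call chain such as
//   transaction { session.flush(); }   // flush() -> helper() -> result.waitFor()
// and cannot tell that a waitFor() is buried several calls deep.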
Another tempting option is to forbid transactions inside coroutines, but this has similar limitations: it's hard for the compiler to identify which functions contain transactions deeply nested in sub-sub-sub-functions, and therefore whether code passed to spawn() may have a transaction buried deep in one of its calls.
It may be possible to overcome these obstacles using function coloring, a technique commonly associated with coroutines, but that has its own limitations, which I personally would not like to tackle: https://quuxplusone.github.io/blog/2018/03/16/async-roundup/
So I guess the answer is that we are not able to mix coroutines and transactions; we have to pick one or the other.
I find this kind of disappointing, as both models have their strengths and weaknesses, but if I have to choose only one, then the choice is extremely clear: go for transactions ;)
The good news is that tasks (i.e. structured concurrency) can be safely mixed with transactions.
And the truth is that tasks have clear entry and exit points, typically the "yield" function, which is where the current task suspends and resumes, while coroutines do not: the coroutine starts at the spawn(), but it's the wait_for() that determines the flow of execution.
Tasks by themselves already give the same scalability advantages as coroutines. The only added benefit of using coroutines is the reduced latency on an execution context, which has limited applicability.
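To illustrate what I mean by clear entry and exit points, here is a plain Java sketch with placeholder task bodies (Java 19+ assumed for the try-with-resources on ExecutorService); there are no transactions involved, only the task structure.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A task has a clear entry point (submit) and a clear exit point: leaving the
// try-with-resources scope waits for every submitted task to finish.
public class TaskStructure {
    public static void main(String[] args) {
        try (ExecutorService tasks = Executors.newFixedThreadPool(2)) {
            tasks.submit(() -> System.out.println("task 1")); // entry point
            tasks.submit(() -> System.out.println("task 2")); // entry point
            // ... the caller can keep doing its own work here ...
        } // single exit point: close() waits for both tasks to complete
    }
}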
The problem with using tasks is that they don't play well with blocking calls, including locks, but there are ways around this. For example, the Java folks did a great job with Virtual Threads, which are just another name for tasks: https://www.youtube.com/watch?v=zPhkg8dYysY
They changed all the blocking code in the JDK and the JVM to follow a kind of if-this-is-a-virtual-thread-then-yield-to-the-executor pattern: https://github.com/openjdk/jdk/blob/master/src/java.base/share/classes/java/util/concurrent/locks/LockSupport.java#L216
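To give a taste of what that buys you, here is a minimal sketch (Java 21+, with the thread count picked arbitrarily for illustration): ten thousand virtual threads all block in Thread.sleep() at the same time, yet only a handful of carrier threads are needed, because the retrofitted blocking code parks the virtual thread and hands its carrier back to the scheduler.

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

// 10,000 virtual threads blocking at once, carried by only a few platform threads.
public class BlockingOnVirtualThreads {
    public static void main(String[] args) {
        try (ExecutorService vts = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                vts.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1)); // blocks only the virtual thread
                    return i;
                }));
        } // close() waits for all 10,000 tasks; this takes roughly one second
    }
}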
This vanilla approach has the disadvantage of losing fairness if the lock you're using is fair but the scheduler is unfair. However, there are ways to overcome this, so unfairness is not a big concern.
I'm not as radical as Brian Goetz when it comes to coroutines and the reactive programming model https://youtu.be/9si7gK94gLo?t=1359
IMO, there is some usefulness in coroutines because they can reduce latency, not by a lot, but some. The problem is that, as Brian points out, the bang-for-buck is really low: coroutines introduce a high cognitive overhead when applied in large code bases, for a small performance gain, and they make debugging harder. Indeed, we can get most of the gains (scalability-wise) from using tasks, like Java's Virtual Threads. When we then take into consideration that none of these models provides sharing of data across threads (or tasks), that you need another model for that, and that IMO transactions are the best model for sharing data and reasoning about code sequentially, the choice becomes clear.
If you add on top of that the fact that coroutines seem incompatible with transactions while tasks are compatible with them, then the conclusion is that coroutines are not what you want when designing a new language. A better model is to use tasks for dealing with blocking/asynchronous events and transactions for dealing with shared data; or at least, that is my conclusion.