https://github.com/pramalhe/ConcurrencyFreaks/blob/master/C11/locks/tidex_mutex.h
https://github.com/pramalhe/ConcurrencyFreaks/blob/master/C11/locks/tidex_mutex.c
We ran some benchmarks to compare the Tidex Lock against a pthread_mutex_t and our own Ticket Lock implementation.
--- Opteron 32 cores ---
1 thread:
pthread_mutex_t = 5583126 ops/second
Ticket Lock = 18606375 ops/second
Tidex Lock = 17374496 ops/second
16 threads:
pthread_mutex_t = 1418309 ops/second
Ticket Lock = 5348964 ops/second
Tidex Lock = 5322226 ops/second
32 threads:
pthread_mutex_t = 1338859 ops/second
Ticket Lock = 4004952 ops/second
Tidex Lock = 3775166 ops/second
--- Intel i7 ---
1 thread:
pthread_mutex_t = 13720671 ops/second
Ticket Lock = 37627282 ops/second
Tidex Lock = 39492774 ops/second
4 threads:
pthread_mutex_t = 3418376 ops/second
Ticket Lock = 18810728 ops/second
Tidex Lock = 24807009 ops/second
8 threads:
pthread_mutex_t = 4679020 ops/second
Ticket Lock = 17078066 ops/second
Tidex Lock = 18310183 ops/second
The first thing to notice is that the results for the Tidex Lock are very similar to those of the Ticket Lock (at least in this benchmark). Another is that the Tidex Lock is at a small disadvantage on the AMD Opteron but at a small advantage on the Intel i7-3740QM. This benchmark was done without any spinning: the waiting thread yields immediately instead of busy-waiting.
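For reference, the numbers above come from a microbenchmark of the usual shape: each thread repeatedly acquires the lock, touches shared state, releases it, and we count operations per second. The exact harness is not shown here, so the sketch below is only an approximation, with an arbitrary thread count and duration, and it assumes an init function like the one in the repository linked above:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>
#include "tidex_mutex.h"   // from the repository linked above

#define NUM_THREADS 4      // arbitrary choice for illustration
#define RUN_SECONDS 2

static tidex_mutex_t mutex;
static long long counter = 0;   // shared state protected by 'mutex'
static atomic_bool stop;

static void *worker(void *arg)
{
    long long ops = 0;
    while (!atomic_load_explicit(&stop, memory_order_relaxed)) {
        tidex_mutex_lock(&mutex);
        counter++;                // trivial critical section
        tidex_mutex_unlock(&mutex);
        ops++;
    }
    *(long long *)arg = ops;      // report this thread's op count
    return NULL;
}

int main(void)
{
    tidex_mutex_init(&mutex);     // assumed init function, see the header above
    atomic_init(&stop, false);
    pthread_t threads[NUM_THREADS];
    long long ops[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, &ops[i]);
    sleep(RUN_SECONDS);
    atomic_store(&stop, true);
    long long total = 0;
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
        total += ops[i];
    }
    printf("%lld ops/second\n", total / RUN_SECONDS);
    return 0;
}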
So what does it look like in C11?
void tidex_mutex_lock(tidex_mutex_t * self)
{
    long long mytid = (long long)pthread_self();
    // If egress still holds our tid from a previous acquisition, negate it:
    // otherwise a thread arriving behind us could match the stale egress
    // value and enter the critical section at the same time as us.
    if (atomic_load_explicit(&self->egress, memory_order_relaxed) == mytid)
        mytid = -mytid;
    // Publish our ticket and find out who was ahead of us
    long long prevtid = atomic_exchange(&self->ingress, mytid);
    // Wait until the thread ahead of us releases the lock
    while (atomic_load(&self->egress) != prevtid) {
        sched_yield(); // Replace this with thrd_yield() if you use <threads.h>
    }
    // Lock has been acquired
    self->nextEgress = mytid;
}
void tidex_mutex_unlock(tidex_mutex_t * self)
{
    // Publishing our ticket in egress releases the lock
    atomic_store(&self->egress, self->nextEgress);
}
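The two functions above reference three fields that are declared in tidex_mutex.h (first link at the top). Inferred from how they are used, the struct would look more or less like this, with ingress and egress starting at the same value so the first arriving thread acquires immediately; this is a sketch, not the exact contents of the header:

#include <stdatomic.h>

typedef struct {
    _Atomic(long long) ingress;   // last ticket taken (written with exchange)
    _Atomic(long long) egress;    // ticket of the thread allowed to enter
    long long nextEgress;         // set by the lock holder, published on unlock
} tidex_mutex_t;

// Sketch of an initializer: ingress == egress means the lock is free
void tidex_mutex_init(tidex_mutex_t * self)
{
    atomic_init(&self->ingress, 0);
    atomic_init(&self->egress, 0);
    self->nextEgress = 0;
}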
As you can see, this lock is very easy to implement (and to understand).
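It is just as easy to use. A minimal (hypothetical) example protecting a shared variable, assuming the init sketched above has been called once at startup:

#include "tidex_mutex.h"   // lock/unlock shown above

static tidex_mutex_t lock;     // initialized once with tidex_mutex_init(&lock)
static long long balance = 0;  // shared state protected by 'lock'

void deposit(long long amount)
{
    tidex_mutex_lock(&lock);
    balance += amount;         // critical section: at most one thread in here
    tidex_mutex_unlock(&lock);
}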