Scalability litmus test for serializable databases:
Do concurrent increments on the same key contend (i.e., force their txns to wait or retry)? If these increments are blind (i.e., the old or new value is never used), then they need not conflict (same applies to any blind RMW op). Yet the only production database I know that satisfies this test is FoundationDB--are there others?
(BTW, commutativity of increment is a red herring: noncommutative blind RMW ops don't conflict either, e.g. destructive string append.)
(I think that OCC is required for this to work, but not MVCC, as long as your OCC isn't pointlessly doing WW conflict detection.)
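To make the FoundationDB case concrete, here is a minimal sketch using the Python bindings (not from any of the posts above; the key names, counter encoding, API version, and the availability of append_if_fits in the bindings are assumptions for illustration). The transaction issues atomic mutations without ever reading the keys, so it records no read-conflict ranges on them and concurrent calls commit without conflicting:

```
import struct

import fdb

fdb.api_version(710)
db = fdb.open()

@fdb.transactional
def blind_increment(tr, key):
    # Atomic ADD mutation: the key is never read, so no read-conflict
    # range is added for it and concurrent increments on the same key
    # do not conflict; the add is applied at commit time.
    tr.add(key, struct.pack('<q', 1))

@fdb.transactional
def blind_append(tr, key, suffix):
    # A noncommutative blind RMW: append bytes to the existing value
    # (assumes the APPEND_IF_FITS mutation is exposed by this binding
    # version) without ever reading it.
    tr.append_if_fits(key, suffix)

blind_increment(db, b'counters/page_views')
blind_append(db, b'logs/today', b'another line\n')
```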
=> More information about this toot | More toots from tobinbaker@discuss.systems
@tobinbaker "blind" seems like a better name than "nil-externalizing" as used in https://ramalagappan.github.io/pdfs/papers/nilext.pdf . This seems like a good property to be explicit about in all sorts of contexts.
=> More information about this toot | More toots from shachaf@y.la
@shachaf @tobinbaker GPUs generally implement their atomic RMWs as "remote atomics" rather than relying on local cache-line locking, which works nicely for these kinds of operations. Although it sounds like by "blind" you mean that they don't return a result at all; remote atomics can return one, but the result latency is decoupled from throughput.
=> More information about this toot | More toots from pervognsen@mastodon.social
@shachaf @tobinbaker I think part of the reason it works so well in GPUs is that they're already designed for throughput-oriented computing with the means to hide a few hundred cycles of latency like it's nothing, so you don't really care too much if an individual atomic operation has high latency, but you do care about aggregate throughput across all GPU hardware threads.
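As an illustration of the same blind pattern on a GPU, here is a sketch using Numba's CUDA bindings (the histogram kernel, array names, and launch configuration are made up for the example). Each thread ignores the value returned by the atomic add, so nothing stalls waiting on a result, and on NVIDIA hardware a result-less atomic can typically be lowered to a reduction instruction rather than a round-trip atomic:

```
import numpy as np
from numba import cuda

@cuda.jit
def blind_histogram(values, counts):
    i = cuda.grid(1)
    if i < values.size:
        # cuda.atomic.add returns the old value, but it is ignored:
        # the RMW is blind, so the thread never waits on the result
        # and only aggregate atomic throughput matters.
        cuda.atomic.add(counts, values[i], 1)

values = np.random.randint(0, 256, size=1_000_000).astype(np.int32)
counts = np.zeros(256, dtype=np.int32)
blind_histogram[(values.size + 255) // 256, 256](values, counts)
```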
=> More information about this toot | More toots from pervognsen@mastodon.social