
Benchmarking Very Fast Things With Criterion

There’s a pervasive myth that Bryan O’Sullivan’s excellent haskell benchmarking library criterion is only useful for benchmarks that take some significant chunk of time (I’ve even heard people claim it’s only reliable down to the millisecond scale). In fact criterion is useful for almost anything you’d want to benchmark.

At a high level, criterion makes your benchmark the inner loop of a function, runs that loop a large number of times, measures the total elapsed time, and then divides by the number of iterations it performed. The approach is useful both for comparing alternative implementations and as probably the only meaningful way of answering “how long does this code take to run”, short of looking at the assembly, counting the instructions, and consulting your processor’s manual.

If you’re skeptical, here’s a benchmark we’d expect to be very fast:

import Criterion.Main

main :: IO ()
main = do
    defaultMain [ 
        bench "sum2" $ nf sum [1::Int,2]
      , bench "sum4" $ nf sum [1::Int,2,3,4]
      , bench "sum5" $ nf sum [1::Int,2,3,4,5]
      ]

And indeed it’s on the order of nanoseconds:

benchmarking sum2
time                 27.20 ns   (27.10 ns .. 27.35 ns)
                     0.994 R²   (0.984 R² .. 1.000 R²)
mean                 28.72 ns   (27.29 ns .. 32.44 ns)
std dev              6.730 ns   (853.1 ps .. 11.71 ns)
variance introduced by outliers: 98% (severely inflated)

benchmarking sum4
time                 58.45 ns   (58.31 ns .. 58.59 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 58.47 ns   (58.26 ns .. 58.66 ns)
std dev              654.6 ps   (547.1 ps .. 787.8 ps)
variance introduced by outliers: 11% (moderately inflated)

benchmarking sum5
time                 67.08 ns   (66.84 ns .. 67.33 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 67.04 ns   (66.85 ns .. 67.26 ns)
std dev              705.5 ps   (596.3 ps .. 903.5 ps)

The results are consistent with each other; sum seems to be linear, taking 13-14ns per list element, across our different input sizes.

Trying to measure even faster things

This is what I was doing today which motivated this post. I was experimenting with measuring the inner loop of a hash function:

fnvInnerLoopTest :: Word8 -> Word32
{-# INLINE fnvInnerLoopTest #-}
fnvInnerLoopTest b = (2166136261 `xor` fromIntegral b) * 16777619

These were the results criterion gave me:

benchmarking test
time                 9.791 ns   (9.754 ns .. 9.827 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 9.798 ns   (9.759 ns .. 9.862 ns)
std dev              167.3 ps   (117.0 ps .. 275.3 ps)
variance introduced by outliers: 24% (moderately inflated)

These are the sorts of timescales at which we may well be measuring the overhead of function calls, boxing/unboxing, etc., and that should make you skeptical of criterion’s result. So I unrolled 4- and 8-iteration versions of the loop and measured the results:

main :: IO ()
main = do
    defaultMain [ 
        bench "test"  $ nf fnvInnerLoopTest   7
      , bench "test4" $ nf fnvInnerLoopTest4 (7,8,9,10)
      , bench "test8" $ nf fnvInnerLoopTest8 (7,8,9,10,11,12,13,14)
      ]
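
The unrolled variants aren’t reproduced here; the following is only a sketch of what fnvInnerLoopTest4 might look like, with the tuple shape inferred from the benchmark above (the real definitions may well differ):

import Data.Bits (xor)
import Data.Word (Word32, Word8)

-- Hypothetical 4-byte unrolling of the inner loop above: fold each byte into
-- the running hash with xor, then multiply by the FNV prime.
fnvInnerLoopTest4 :: (Word8, Word8, Word8, Word8) -> Word32
{-# INLINE fnvInnerLoopTest4 #-}
fnvInnerLoopTest4 (b0, b1, b2, b3) =
    let h0 = (2166136261 `xor` fromIntegral b0) * 16777619
        h1 = (h0 `xor` fromIntegral b1) * 16777619
        h2 = (h1 `xor` fromIntegral b2) * 16777619
    in  (h2 `xor` fromIntegral b3) * 16777619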

benchmarking test
time                 9.380 ns   (9.346 ns .. 9.418 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 9.448 ns   (9.399 ns .. 9.567 ns)
std dev              240.4 ps   (137.9 ps .. 418.6 ps)
variance introduced by outliers: 42% (moderately inflated)

benchmarking test4
time                 12.66 ns   (12.62 ns .. 12.72 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 12.68 ns   (12.64 ns .. 12.73 ns)
std dev              158.8 ps   (126.9 ps .. 215.7 ps)
variance introduced by outliers: 15% (moderately inflated)

benchmarking test8
time                 17.88 ns   (17.82 ns .. 17.94 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 17.89 ns   (17.81 ns .. 17.97 ns)
std dev              262.7 ps   (210.3 ps .. 349.7 ps)
variance introduced by outliers: 19% (moderately inflated)

So this seems to give a clearer picture of how good our bit twiddling is in that inner loop. I was curious whether I could measure the overhead directly in criterion, though. Somewhat surprisingly to me, it seems I could!

I added the following benchmark to my list:

  , bench "baseline32" $ nf (\x-> x) (777::Word32)

The idea being to isolate the overhead of applying the most trivial function and calling nf on an example value of our output type (Word32 in this case).

benchmarking baseline32
time                 9.485 ns   (9.434 ns .. 9.543 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 9.509 ns   (9.469 ns .. 9.559 ns)
std dev              155.8 ps   (122.6 ps .. 227.8 ps)
variance introduced by outliers: 23% (moderately inflated)

If we consider this value the baseline for the measurements initially reported, the new results are both linear-ish, as we would expect, and the resulting absolute measurements fall about where we’d expect from the assembly we’d hope for (I still need to verify that this is actually the case). Subtracting the baseline from the reported means gives roughly 12.7 - 9.5 ≈ 3.2 ns for test4 and 17.9 - 9.5 ≈ 8.4 ns for test8, i.e. on the order of 1 ns per byte of inner loop, about what we’d expect from a couple of instructions per iteration.

I thought this was compelling enough to open an issue to see whether this technique might be incorporated into criterion directly. It’s at least a useful technique that I’ll keep playing with.

Anyway, benchmark your code.

Announcing Unagi-chan

Today I released version 0.2 of unagi-chan, a haskell library implementing fast and scalable FIFO queues with a nice and familiar API. It is available on hackage and you can install it with:

$ cabal install unagi-chan

This version provides a bounded queue variant (and closes issue #1!) that has performance on par with the other variants in the library. This is something I’m somewhat proud of, considering that the standard TBQueue is not only significantly slower than e.g. TQueue, but also was seen to livelock at a fairly low level of concurrency (and so is not included in the benchmark suite).
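
Here’s a minimal usage sketch of the bounded variant; I’m assuming the Unagi.Bounded API mirrors the other variants, with newChan taking the requested bound (check the haddocks for the exact signatures):

import Control.Concurrent (forkIO)
import Control.Monad (forM_, replicateM)
import qualified Control.Concurrent.Chan.Unagi.Bounded as UB

main :: IO ()
main = do
    -- create a bounded channel; writers block once roughly 1024 writes are
    -- outstanding, applying backpressure to the producer thread
    (inChan, outChan) <- UB.newChan 1024
    _ <- forkIO $ forM_ [1 .. 100000 :: Int] $ UB.writeChan inChan
    xs <- replicateM 100000 (UB.readChan outChan)
    print (sum xs)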

Here are some example benchmarks. Please do try the new bounded version and see how it works for you.

[benchmark charts]

What follows are a few random thoughts more or less generally-applicable to the design of bounded FIFO queues, especially in a high-level garbage-collected language. These might be obvious, uninteresting, or unintelligible.

What is Bounding For?

I hadn’t really thought much about this before: a bounded queue limits memory consumption because the queue is restricted from growing beyond some size.

But this isn’t quite right. If, for instance, we implement a bounded queue by pre-allocating an array of size bounds, then a write operation need not consume any additional memory; indeed, the value to be written has already been allocated on the heap before the write even begins, and it will persist whether the write blocks or returns immediately.

Instead, constraining memory usage is a knock-on effect of what we really care about: backpressure. When the ratio of “producers” to their writes is high (the usual scenario), blocking a write may limit memory usage by delaying the heap allocations associated with the elements of future writes.

So bounded queues with blocking writes let us:

  • when threads are “oversubscribed”, transparently indicate to the runtime which work has priority
  • limit future resource usage (CPU time and memory) by producer threads

We might also like our bounded queue to support a non-blocking write which returns immediately with success or failure. This might be thought of (depending on the capabilities of your language’s runtime) as more general than a blocking write, but it also supports a distinctly different notion of bounding, that is bounding message latency: a producer may choose to drop messages when a consumer falls behind, in exchange for lower latency for future writes.
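
For example, a producer that prefers dropping to blocking might look like the following sketch. This assumes a tryWriteChan :: InChan a -> a -> IO Bool in Unagi.Bounded that returns False when the queue is full; verify the actual export against the haddocks.

import Data.IORef (IORef, modifyIORef')
import qualified Control.Concurrent.Chan.Unagi.Bounded as UB

-- Try to enqueue; if the queue is full, drop the message and count it rather
-- than blocking, trading lost messages for bounded producer latency.
sendOrDrop :: UB.InChan a -> IORef Int -> a -> IO ()
sendOrDrop inChan droppedCount msg = do
    ok <- UB.tryWriteChan inChan msg
    if ok
      then return ()
      else modifyIORef' droppedCount (+ 1)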

Unagi.Bounded Implementation Ideas

Trying to unpack the ideas above helped in a few ways when designing Unagi.Bounded. Here are a few observations I made.

We need not block before “writing”

When implementing blocking writes, my intuition was to (when the queue is “full”) have writers block before “making the message available” (whatever that means for your implementation). For Unagi that means blocking on an MVar, and then writing a message to an assigned array index.

But this ordering presents a couple of problems. First, we need to be able to handle async exceptions raised while the writer is blocked: if its message isn’t yet “in place” then we need to somehow coordinate with the reader that would have received this message, telling it to retry.

By unpacking the purpose of bounding it became clear that we’re free to block at any point during the write (because the write per se does not have the memory-usage implications we originally, naively assumed it had), so in Unagi.Bounded writes proceed exactly as in our other variants until the end of the writeChan, at which point we decide whether to block.

This is certainly also better for performance: if a wave of readers comes along, they need not wait (themselves blocking) for previously blocked writers to make their messages available.

One hairy detail from this approach: an async exception raised in a blocked writer does not cause that write to be aborted; i.e. once entered, writeChan always succeeds. Reasoning in terms of linearizability this only affects situations in which a writer thread is known-blocked and we would like to abort that write.
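
To illustrate just that ordering (this is a toy, not unagi-chan’s implementation): below is a bounded wrapper around a plain Chan and a QSem in which the element is always made available before the writer can block, so a write that has executed writeChan has already succeeded. The same hairy detail shows up here: an async exception delivered while blocked in waitQSem leaves the write completed (and slightly relaxes the bound).

import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.QSem (QSem, newQSem, signalQSem, waitQSem)

-- Toy bounded channel: a plain Chan plus a semaphore counting free slots.
data BoundedToy a = BoundedToy QSem (Chan a)

newBoundedToy :: Int -> IO (BoundedToy a)
newBoundedToy bound = do
    sem <- newQSem bound
    ch  <- newChan
    return (BoundedToy sem ch)

writeBoundedToy :: BoundedToy a -> a -> IO ()
writeBoundedToy (BoundedToy sem ch) x = do
    writeChan ch x   -- the message is made available first...
    waitQSem sem     -- ...only then may the writer block for backpressure

readBoundedToy :: BoundedToy a -> IO a
readBoundedToy (BoundedToy sem ch) = do
    x <- readChan ch
    signalQSem sem   -- each read frees a slot, unblocking at most one writer
    return x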

Fine-grained writer unblocking is probably unnecessary and harmful

In Unagi.Bounded I relax the bounds constraint to “somewhere between bounds and bounds*2”. This allows me to eliminate a lot of coordination between readers and writers by using a single reader to unblock up to bounds number of writers. This constraint (along with the constraint that bounds be a power of two, for fast modulo) seemed like something everyone could live with.
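
The power-of-two requirement is what keeps the index arithmetic cheap: reducing a counter modulo a power of two is just a bit mask, e.g.

import Data.Bits ((.&.))

-- For bounds a power of two, ix `mod` bounds equals ix .&. (bounds - 1),
-- avoiding a division in the hot path.
fastMod :: Int -> Int -> Int
fastMod ix bounds = ix .&. (bounds - 1)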

I also guess that this “cohort unblocking” behavior could result in some nicer stride behavior, with more consecutive non-blocking reads and writes, rather than having a situation where the queue is almost always either completely full or empty.

One-shot MVars and Semaphores

This has little to do with queues specifically, but this seems a good place for the observation: garbage-collected languages permit some interesting non-traditional concurrency patterns. For instance I use MVars and IORefs that only ever go from empty to full, or follow a single linear progression of three or four states in their lifetime. Often it’s easier to design algorithms this way, rather than by using long-lived mutable variables (for instance I struggled to come up with a blocking bounded-queue design that used a circular buffer and could be made async-exception-safe).
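
For example, a write-once “gate” built from an MVar that only ever goes from empty to full (a generic sketch of the pattern, not code from unagi-chan):

import Control.Concurrent.MVar (MVar, newEmptyMVar, readMVar, tryPutMVar)
import Control.Monad (void)

-- A one-shot signal: the MVar only ever transitions from empty to full, so
-- there is no long-lived mutable state to reason about.
newtype Gate = Gate (MVar ())

newGate :: IO Gate
newGate = fmap Gate newEmptyMVar

openGate :: Gate -> IO ()
openGate (Gate v) = void (tryPutMVar v ())  -- idempotent; racing openers are fine

awaitGate :: Gate -> IO ()
awaitGate (Gate v) = readMVar v             -- blocks until opened, never empties it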

Similarly the CAS operation (which I get from the atomic-primops package) turns out to be surprisingly versatile, far beyond the traditional read/CAS/retry loop, and to have very useful semantics when used on short-lived variables. For instance throughout unagi-chan I do both of the following (sketched after this list):

  • CAS without inspecting the return value, content that we or any other competing thread succeeded.

  • CAS using a known initial state, avoiding an initial read
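
A rough sketch of both patterns using Data.Atomics from atomic-primops (the Slot type and its states are invented for illustration; unagi-chan’s real cells look different):

import Data.Atomics (Ticket, casIORef, readForCAS)
import Data.IORef (IORef, newIORef)

-- A hypothetical one-shot cell with a known initial state.
data Slot = Empty | Full Int

-- Capture the Ticket for the known Empty state once, at creation time, so a
-- later CAS needs no fresh read of the variable (the second pattern).
newSlot :: IO (IORef Slot, Ticket Slot)
newSlot = do
    ref <- newIORef Empty
    emptyTicket <- readForCAS ref
    return (ref, emptyTicket)

-- CAS and ignore the result (the first pattern): whether we or a racing
-- thread won, the slot is Full afterwards, which is all the caller needs.
fillSlot :: IORef Slot -> Ticket Slot -> Int -> IO ()
fillSlot ref emptyTicket n = do
    _ <- casIORef ref emptyTicket (Full n)
    return ()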

Thoughts on FIFO-ness

I’ve been doing a lot of experimenting with concurrent operations in haskell and in particular playing with and thinking about the design of concurrent FIFO queues. These structures are difficult to make both efficient and correct, due to the effects of contention on the parts of the structure tasked with coordinating reads and writes from multiple threads.

These are my thoughts so far on FIFO semantics.

FIFO? And how!

In the interesting paper “How FIFO is your concurrent FIFO queue?” (PDF), A. Haas et al. propose that an ideal FIFO queue has operations that are instantaneous (think of each write having an infinitely accurate timestamp, and each read taking the corresponding element in timestamp order). They then measure the degree to which real queues of various designs deviate from this platonic FIFO semantics in their message ordering, using a metric they call “element-fairness”. They experimentally measure the element-fairness of both so-called “strict FIFO” as well as “relaxed FIFO” designs, in which elements are read in more or less the order they were written (some providing guarantees on the degree of re-ordering, others not).

The first interesting observation they make is that no queue actually exhibits FIFO semantics by their metric; this is a consequence of the way atomic memory operations like CAS may arbitrarily reorder a set of contending writes.

The second interesting result is that the efficient-but-relaxed-FIFO queues which avoid contention by making fewer guarantees about message ordering often perform closer to ideal FIFO semantics (by their metric) than the “strict” but slower queues!

Observable FIFO Semantics

As an outsider reading papers on FIFO queue designs, I get the impression that what authors mean by “the usual FIFO semantics” is often ill-defined. Clearly they don’t mean the platonic zero-time semantics of the “How FIFO…” paper, since their queues can’t be called FIFO by that measure.

I suspect what makes a queue “strict FIFO” (by the paper’s categorization) might simply be

If write x returns at time T, then x will be read before the elements of any writes that have not yet started by time T.

The idea is difficult to express, but is essentially that FIFO semantics is only observable by way of actions taken by a thread after returning from a write (think: thread A writes x, then tells B which writes y, where our program’s correctness depends on the queue returning y after x). Note that since a queue starts empty this is also sufficient to ensure writes don’t “jump ahead” of writes already in the queue.
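
Concretely, the coordination that makes ordering observable might look like the following (using a plain Chan just to show the shape of the dependency between A’s write and B’s write):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Thread A writes 'x' and only then signals thread B, so B's write of 'y'
-- starts strictly after A's write has returned; any queue with observable
-- FIFO semantics must therefore deliver 'x' before 'y'.
main :: IO ()
main = do
    q    <- newChan :: IO (Chan Char)
    done <- newEmptyMVar
    _ <- forkIO $ do writeChan q 'x'; putMVar done ()
    _ <- forkIO $ do takeMVar done;   writeChan q 'y'
    a <- readChan q
    b <- readChan q
    print (a, b)   -- always ('x','y')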

Imagine an absurd queue whose write never returns; there’s very little one can say for certain about the “correct” FIFO ordering of writes in that case, especially when designing a program with a preempting scheduler that’s meant to be portable. Indeed the correctness criterion above is actually probably a lot stricter than many programs require; e.g. when there is no coordination between writers, an observably-FIFO queue need only ensure that no reader thread sees two messages from the same writer thread out of order (I think).

The platonic zero-time FIFO ordering criterion used in the paper is quite different from this observable, correctness-preserving FIFO criterion; I can imagine it being useful for people designing “realtime” software.

Update 04/15/2014:

What I’m trying to describe here is called linearizability, and is indeed a well-understood and common way of thinking about the semantics of concurrent data structures; somehow I missed or misunderstood the concept!

Conclusion

At a certain level of abstraction, correct observable FIFO semantics shouldn’t be hard to make efficient; after all, the moments during which we have contention (and horrible performance) are also the moments during which we don’t care about (or have no way of observing) correct ordering. In other words (although we have to be careful of the details), a thread-coordination scheme that breaks down (w/r/t element-fairness) under contention isn’t necessarily a problem. Compare-and-swap does just that; unfortunately it breaks down in a way that is slower rather than faster.

Announcing Shapely-data v0.1

This is the first real release of shapely-data, a haskell library up on hackage for working with algebraic datatypes in a simple generic form made up of haskell’s primitive product, sum, and unit types: (,), Either, and ().

You can install it with

cabal install shapely-data

Motivation and examples

In order from most to least important to me, here are the concerns that motivated the library:

  • Provide a good story for (,)/Either as a lingua franca generic representation that other library writers can use without dependencies, encouraging abstractions in terms of products and sums (motivated specifically by my work on simple-actors).

  • Support algebraic operations on ADTs, making types composable

      -- multiplication:
      let a = (X,(X,(X,())))
          b = Left (Y,(Y,())) :: Either (Y,(Y,())) (Z,())
          ab = a >*< b
       in ab == ( Left (X,(X,(X,(Y,(Y,()))))) 
                  :: Either (X,(X,(X,(Y,(Y,()))))) (X,(X,(X,(Z,())))) )
    
      -- exponents, etc:
      fanout (head,(tail,(Prelude.length,()))) [1..3]    == (1,([2,3],(3,())))
      (unfanin (_4 `ary` (shiftl . Sh.reverse)) 1 2 3 4) == (3,(2,(1,(4,()))))
    
  • Support powerful, typed conversions between Shapely types

      data F1 = F1 (Maybe F1) (Maybe [Int]) deriving Eq
      data F2 = F2 (Maybe F2) (Maybe [Int]) deriving Eq
      f2 :: F2
      f2 = coerce (F1 Nothing $ Just [1..3])
    
      data Tsil a = Snoc (Tsil a) a | Lin deriving Eq
      truth = massage "123" == Snoc (Snoc (Snoc Lin '3') '2') '1'
    

Lowest on the list is abstracting over different recursion schemes or supporting generic traversals and folds, though some basic support is planned.

Finally, in at least some cases this can completely replace GHC.Generics and may be a bit simpler. See examples/Generics.hs for an example of the GHC.Generics wiki example ported to shapely-data. And for a nice view on the changes that were required, do:

git show 3a65e95 | perl /usr/share/doc/git/contrib/diff-highlight/diff-highlight

Why not GHC.Generics?

The GHC.Generics representation has a lot of metadata and a complex structure that can be useful in deriving default instances; more important to us is to have a simple, canonical representation such that two types that differ only in constructor names can be expected to have identical generic representations.

This supports APIs that are type-agnostic (e.g. a database library that returns a generic Product, convertible later with to), and allows us to define algebraic operations and composition & conversion functions.
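
For instance, two product types that differ only in their names should share the same canonical representation built from (,) and (); a small illustration (the type names here are made up):

-- Both of these would have the canonical shapely form (Int, (Int, ())),
-- so a generic function, or a conversion like coerce, can treat them
-- interchangeably.
data Point  = Point  Int Int
data Coords = Coords Int Int

type CanonicalRep = (Int, (Int, ()))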