A TypeFamilies Primer

TypeFamilies is a GHC extension that lets you create formerly-impossible abstractions in a very straightforward way. It took me several tries before they clicked for me though, so this is the introduction to TypeFamilies that I wish I had read first (although I just found Brent Yorgey’s, which would have done the trick).

I’m treating the subject very narrowly for most of this post, and will try to round things out a little at the very end.

Primal Ignorance

If this isn’t the first thing you’ve read about TypeFamilies, it might be helpful to forget a few things. The question “what precisely is a type family?” isn’t going to be very helpful; in general, the terminology for the constellation of constructions that TypeFamilies gives you is a huge mess, with multiple partially-overlapping terms in the wild, none of which are helpful for developing an intuition about what all of this is about.

I also found various analogies I’d read to be useless, so forget those too.

Type synonyms as type-level functions

Consider the familiar type synonym:

type PairOf a = (a,a)

Normally this is presented as syntactic sugar, with little to do with the type system.

A more interesting way of thinking about PairOf is as a function (as suggested by the =), where evaluation involves substituting occurrences of the left hand side (LHS) with the right, in the usual way. These functions are evaluated in your type signatures at compile time.
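
For instance (swapPair is a made-up example), the signature below evaluates to (a,a) -> (a,a) at compile time:

swapPair :: PairOf a -> PairOf a  -- i.e. (a,a) -> (a,a)
swapPair (x,y) = (y,x)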

The analogous regular term-level function would of course be:

pairOf a = (a,a)

Simple enough. Now let’s think about a simple term-level function, and see what an analogous type-level type synonym/function might look like:

last (a: [])     = a
last (a: (b:bs)) = last (b:bs)

For our type-level Last we need something like lists at the type-level, so we’ll use the common nested tuple representation of (,) as cons and () as the empty list, e.g.:

x :: (Int,(Int,(Int,())))  -- like [1,2,3]

Hopefully I didn’t just lose you. Remember for now we just care about using this list-like tuple thing in our type signatures.

If you were to charge ahead and try to define Last using type synonyms treated as full blown functions, you might come up with:

-- this isn't okay:
type Last (a, ())     = a
type Last (a, (b,bs)) = Last (b,bs)

Unfortunately the compiler will laugh at you. Type synonyms can only take abstract type variables as arguments on the LHS, whereas above we’ve tried to deconstruct those arguments with pattern matching, and to define a different RHS for each case. Further, we’ve made the definition recursive. None of that is okay.

In fact the humble type synonym is only a very simple sort of function (a natural transformation, or something close), which is very easy to evaluate but also very limited.

Finally, Enter TypeFamilies

The TypeFamilies extension lets us define Last successfully almost exactly as we did above.

{-# LANGUAGE TypeFamilies #-}

-- we have to "declare" `Last` separately, and the "family"
-- here distinguishes the syntax from a normal type synonym:
type family   Last l  

-- ...and then can define our "cases":
type instance Last (a,())     = a
type instance Last (a,(b,bs)) = Last (b,bs)

At this point when the type-checker sees Last (a,(b,bs)) in a type signature it will replace it with Last (b,bs), and continue until all of these “type functions” are evaluated. I may be fudging things a bit but that’s the general idea.
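
For instance, a reduction might proceed like this (a sketch of the substitutions, not literal compiler output):

--   Last (Int,(Char,()))
-- ~ Last (Char,())         -- by our second instance
-- ~ Char                   -- by our first instance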

Since these are a more general sort of type function, they can even be used to replace traditional type synonyms:

type family   PairOf a
type instance PairOf a = (a,a)

But what is that good for?

It would be neat to be able to work with “lists” that look like e.g. (1,('a',("hello",()))); they are heterogeneous, operations like head would be type-safe, etc. So imagine we want to define a last on types of this list-like tuple sort of data.
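
As a warm-up, a type-safe head on these lists needs no fancy machinery at all, since the type of the head is sitting right there in the tuple:

head' :: (a, rest) -> a
head' (a, _) = a

last is the interesting case.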

What would the type of last look like? We know it has to be polymorphic, since its arguments might look like () or (1,(2,())), different types of course. So we’ll need a type-class (and a couple other standard extensions):

{-# LANGUAGE FlexibleInstances, FlexibleContexts #-}
class LastOfTupleList l where
    last' :: l -> Last l      -- < we use our `Last` type function

Our instances are trivial:

instance LastOfTupleList (a,()) where
    last' (a,()) = a  -- < the type-checker can see that, indeed, `a`
                      --   is equal to `Last (a,())`

instance (LastOfTupleList (b, bs))=> LastOfTupleList (a,(b,bs)) where
    last' (a,(b,bs)) = last' (b,bs)

Letting us do:

>>> last' (1,(2,()))
2
>>> last' (1,('a',("hello",())))
"hello"

Notice how our instances of Last and last' have almost the same structure; this is very common.

Functions… but open!

If you’re a programmer type you were probably irritated by my initial clumsy definition of last; why not:

last (a:[]) = a
last (a:as) = last as -- `as` matches whatever can fall through the pattern
                      -- above; in this case only non-empty lists

Well, I was being sneaky because the type-level analogue isn’t allowed!

type instance Last (a,()) = a
type instance Last (a,as) = Last as -- BAD!

This is because, unlike functions, type families are open, meaning that, like typeclasses, a new instance can be added at any moment. Therefore there’s no way to define a “default” instance to use after all other matches fail; you simply get illegal overlapping synonym instances such as the pair above, and the order in which we defined the two doesn’t matter.

For some use cases this openness is exactly what we need; for others (such as our last') we’d really prefer that type synonym families be closed so that we can pattern match in the usual way. This feature is apparently coming soon.
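
For the curious, here’s a sketch of that closed syntax (this is the form that landed in GHC 7.8); equations are tried in order, so the catch-all becomes legal:

type family Last l where
    Last (a, ()) = a
    Last (a, as) = Last as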


Other details

That should give you an intuition. At this point you might want to stop and read through the GHC documentation on type families, or continue reading below before coming back to it for a more refined understanding and additional details.

Associated type synonyms

Since we so often define a type class in terms of one or more type families, we’re given a simplified syntax for combining them in one place.

class LastOfTupleList l  where
    type Last l        -- an *associated* type family
    last' :: l -> Last l

instance LastOfTupleList (a,()) where
    type Last (a,()) = a
    last' (a,()) = a
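
The recursive case follows the same pattern, mirroring our standalone instances above:

instance (LastOfTupleList (b,bs))=> LastOfTupleList (a,(b,bs)) where
    type Last (a,(b,bs)) = Last (b,bs)
    last' (a,(b,bs)) = last' (b,bs)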

When people say “associated types” they mean type functions that are associated with a typeclass using the syntax above.

Injectivity

Type synonym family instances are said to be not injective, meaning a type function can map two different arguments to the same type on the RHS, e.g.

type family   F a
type instance F Int  = Bool
type instance F Char = Bool

It’s easy to forget this when building new abstractions, and assume that the typechecker will infer from the RHS (e.g. Bool above) the argument passed in to the type function (Int or Char).
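
For instance, given the instances above, this hypothetical function can never actually be used: from an argument of type Bool the checker can’t tell whether a was Int or Char, so every call is ambiguous:

-- `a` appears only under F, so it can't be pinned down at a call site:
sink :: F a -> Bool
sink _ = True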

Data families

I’ve completely focused on the type synonym flavor of TypeFamilies above, but there is also a data/newtype flavor in which, for each instance definition, the RHS is a brand new type declaration, rather than mapping to an existing type:

-- from http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/type-families.html
data family T a
data    instance T Int  = T1 Int | T2 Bool -- new constructors T1 and T2 defined here
newtype instance T Char = TC Bool
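
Each instance declares perfectly ordinary constructors we can pattern match on; for instance (a made-up function using the declarations above):

describe :: T Int -> String
describe (T1 n) = "an Int: " ++ show n
describe (T2 b) = "a Bool: " ++ show b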

Because each instance maps to a unique type, data families are injective, allowing the type checker to infer the LHS of the equation from the RHS.

More other things

  • TypeFamilies provides the syntax a ~ b to indicate type equality constraints; this is especially useful with type synonym functions, but can be useful on its own as well.
  • kind signatures are required for type family arguments that themselves take type arguments, e.g. Maybe (both are sketched below).
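
Here’s a quick sketch of both, with made-up names for illustration:

-- an equality constraint: `lastIsInt` accepts any list-like tuple
-- whose `Last` is Int
lastIsInt :: (Last l ~ Int)=> l -> l
lastIsInt = id

-- a kind signature, needed since the argument `f` itself takes an
-- argument:
type family   Unwrap (f :: * -> *) a
type instance Unwrap Maybe a = a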

Clearing a Clogged Drain With a Draino Bottle

After pouring 1.5 bottles of horrid caustic chemicals down the drain to no avail, I found I was able to clear a tough clogged shower drain with just the empty bottle:

  • pour kettle full of boiling water into drain
  • stuff overflow drain with plastic wrap (so that pressure is directed toward the clog, not out the top)
  • invert empty draino bottle placing mouth around drain, forming a tight seal
  • squeeze bottle rapidly and forcefully

Don’t get boiling water in your eyes.

Thinking About an Economy of Bit Flips

Disclaimer: un-researched, under-educated, fast and loose hacking follows.


The other day I was thinking about counting in binary and how bits cascade when incrementing a counter.

000
001
010
011
100

I wondered: what if each bit flip was very “expensive”, for the hardware say? Are there other methods we could use that would result in a “cheaper” increment operation, by avoiding those costly carries?

First, here’s a little bit of boring code that we’ll use below to print a list of Ints as padded binary, and some requisite imports:

import Data.Bits
import Data.Word
import Numeric
import Text.Printf
import Data.Char

prettyPrintBits :: (Integral i, Show i)=> [i] -> IO ()
prettyPrintBits = mapM_ putStrLn . prettyListBits

prettyListBits :: (Integral i, Show i)=> [i] -> [String]
prettyListBits l = map (printPadded . toBin) l where
    toBin i = showIntAtBase 2 intToDigit i ""
    printPadded = printf ("%0"++width++"s")
    -- pad to the number of bits needed for the largest element:
    width = show $ ceiling $ logBase 2 $ (+1) $ fromIntegral $ maximum l

Anyway I started playing with this by hand, with a 3-bit example, and drawing funny pictures like these:

[image: notebook sketch]

On the left is a table with the cost, in bit-flips, to increment a binary counter to that number, from the previous.

Then (in the middle of the pic above) I drew an undirected graph with edges connecting the numbers that differed by only a single bit (lots of symmetry there, no?); so what we’d like to do is walk that graph without repeating, to cycle through all the 3-bit combinations as “efficiently” as possible.

After doodling for a second I found a nice symmetrical path through the graph, and wrote the numbers we pass through in sequence (on the right side of the pic above).

Okay, but can we write a program to generate that sequence? And generalize it?

Generating Sequences

After I marked the bit-flip locations in the “bit-economical” binary sequence, I noticed the pattern from top to bottom is the same as what we’d use to build a binary tree from an ordered list, numbered from the bottom up.

Furthermore, glancing back at my original table of “increment costs” I noticed they follow the same pattern; it turns out we can use a few simple bitwise operations on a normal binary count to enumerate all n-bit numbers with optimally-efficient bit flips. The algorithm is:

  • start the sequence at 0
  • for each n in an ordinary count 0,1,2.. do
    • find the difference (XOR) of n and n+1
    • find the number of 1s in the result (POPCNT)
    • flip the bit at that offset, minus one, to get to the next number in the sequence

Here’s an implementation in Haskell that represents an infinite (sort of) efficient bit count:

lazyBits :: [Word]
lazyBits = scanl complementBit 0 bitsToFlip where
    ns = [0..] :: [Word]
    bitsToFlip = zipWith (\x-> subtract 1 . popCount . xor x) ns $ tail ns

We can play with this & pretty-print the result:

*Main Text.Printf> take (2^3) $ lazyBits 
[0,1,3,2,6,7,5,4]
*Main Text.Printf> prettyPrintBits $ take (2^3) $ lazyBits 
000
001
011
010
110
111
101
100

Aside: Applications and variations

This sequence is in fact the classic reflected binary Gray code, and it’s similar in spirit to de Bruijn sequences; I’m sure it has some interesting applications. For instance just as with de Bruijn sequences I could see this being useful in cracking an imaginary lock consisting of toggling push-button digits.

But most real locks like that have buttons that lock in place and have a single “reset” button. To model those we could do something similar to what we just did, only we’d want to study a directed graph where we only ever have edges resulting from flipping a 0 to a 1, and all numbers have an edge back to 0.

Back to thinking about counting

We’ve gotten side-tracked a bit. We started wanting to find a more efficient binary number system for incrementing, but is that what we got?

Well, no. In particular we have no algorithm for incrementing an arbitrary number in our sequence. For instance, given only the number 010 we have no way (that I’ve been able to figure out) of knowing that the left-most bit should be flipped. In other words we need some amount of state to determine the next bit to be flipped. At least that’s my assumption.

Comparison

One thing we do have is a method for comparing numbers in the sequence, something that’s pretty much essential if you want to do anything with a counter.

Here is some code to do just that, defined here in terms of bit strings (as output by prettyListBits); I haven’t figured out a clever set of bitwise ops to do this, and have only a somewhat foggy idea of why it works:

compareNums :: String -> String -> Ordering
compareNums [] [] = EQ
compareNums ('0':as) ('0':bs) = compareNums as bs
compareNums ('1':as) ('1':bs) = invert $ compareNums as bs
    where invert EQ = EQ
          invert GT = LT
          invert LT = GT
compareNums (a:_) (b:_) = compare a b
compareNums _ _ = error "this assumes 0-padded, aligned numbers"
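
For example, using strings from the 3-bit sequence above (010 appears before 110 in the sequence, and 111 before 101):

*Main> compareNums "010" "110"
LT
*Main> compareNums "111" "101"
LT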

Counting, amortized increment cost, state

So how much does it really cost on average to increment an arbitrary number in the usual binary number system? We know it’s more than one, but we need a function from bit length to average cost.

To do that efficiently we’ll once again use the pattern we discovered above when we sketched out a 3-bit example. Looking at the way the different costs break down we have:

4 operations at cost 1
2 operations at cost 2
1 operation  at cost 3

…out of a total of 8 - 1 increment operations, so the average cost is:

(4*1 + 2*2 + 1*3) / (8 - 1) = 1.5714

So a general function for b bits would look like:

amortizedIncrCost :: Integer -> Double
amortizedIncrCost b = avgd $ sum $ zipWith (*) (iterate (2*) 1) [b,b-1 ..1]
    where avgd n = (fromIntegral n) / (2^b - 1)

We’d like to know how this behaves as our number of bits b approaches infinity. We could take the limit of the function, but who knows how to do that, so let’s just experiment:

*Main> amortizedIncrCost 3
1.5714285714285714
*Main> amortizedIncrCost 4
1.7333333333333334
*Main> amortizedIncrCost 8
1.968627450980392
*Main> amortizedIncrCost 16
1.9997558556496529
*Main> amortizedIncrCost 32
1.9999999925494194
*Main> amortizedIncrCost 64
2.0

Seems quite clearly to approach 2 as we exceed the accuracy of our Double, i.e. the average cost to increment an arbitrary-size binary counter is two bit flips. (Indeed, the sum above has the closed form 2^(b+1) - b - 2, so the average is (2^(b+1) - b - 2) / (2^b - 1), which tends to 2.) Interesting!

Remember above how we mentioned our counting system required some additional state to determine the next bit to flip? Well a single bit is the smallest unit of state we know about; therefore even if we could figure out how to determine bit-flips with a single bit of state, or could derive a way of calculating the offset from the number itself, we’d still never be able to do better than two bit-flips per increment.

This is probably all pretty obvious information theory stuff, but was a fun little diversion.

Using GHC’s RebindableSyntax for a Dynamic State “Fauxnad”

I just learned about GHC’s RebindableSyntax extension from chrisdoner’s thread on reddit and wanted to play around with scratching a couple of itches I’ve had. In this post I’ll illustrate using RebindableSyntax to let us use Haskell’s do notation in a State-monad-like construction, in which our state type is allowed to change (I’ve played with this idea previously).

The dynamic state construction looks like the traditional State, but with separate types for input and output state:

{-# LANGUAGE DeriveFunctor, MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
newtype DynState s1 s2 a = DynState { runState :: s1 -> (a,s2) }
    deriving Functor

get :: DynState s s s
get = DynState $ \s-> (s,s)

put :: s -> DynState x s ()
put = DynState . const . (,) ()

modify :: (s1 -> s2) -> DynState s1 s2 ()
modify = DynState . fmap ((,) ())

We can compose these by defining a (for the moment, very verbose) function dynStateBind. Interestingly, this is actually easier to understand IMHO than the State monad because the type signature makes explicit the fact that our lambda-bound state s1 is the initial state value that we run the construction on:

infixl 1 `dynStateBind` 
dynStateBind :: DynState s1 s2 a -> (a -> DynState s2 s3 b) -> DynState s1 s3 b
dynStateBind x f = DynState $ \s1-> let ~(a,s2) = runState x s1
                                     in runState (f a) s2

We also need stand-ins for (>>) and return:

-- It would be nice if >> inherited default instances in terms of bind, as in
-- an instance declaration
infixl 1 `dynThen`
dynThen :: DynState s1 s2 a -> DynState s2 s3 b -> DynState s1 s3 b
dynThen x y = x `dynStateBind` \_ ->  y

dynReturn :: a -> DynState s s a
dynReturn a = DynState $ \s-> (a,s)

So then we can use all the nonsense above as follows:

{-# LANGUAGE RebindableSyntax #-}
module Main
    where

import Prelude -- since RebindableSyntax implies NoImplicitPrelude
import DynState

-- avoid lame "Not in scope: `ifThenElse'"
ifThenElse b x y | b = x
                 | otherwise = y
-- oops, infinite loop fail!
--ifThenElse b x y = if b then x else y 

test :: DynState Int Int String
test = let (>>=)  = dynStateBind
           (>>)   = dynThen
           return = dynReturn
        in do i <- get             -- state :: Int
              put (even i, show i) -- state :: (Bool, String)
              (b,i_str) <- get
              put (i+1)            -- state :: Int
              if b then return "uh oh, even!"
                   else return $ "pre-incremented: "++i_str
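
Running it behaves as we’d hope:

*Main> runState test 41
("pre-incremented: 41",42)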

If we want to we can even enrich our bind and add some machinery to support composing traditional StateT computations with our dynamic state:

import qualified Control.Monad.State as M
import Data.Functor.Identity (Identity)

-- | a silly class to support composing regular State monad computations with
-- dynamic state computations.
class Stateful n s1 s2 | n -> s1 s2 where
    expand :: n a -> DynState s1 s2 a

instance Stateful (DynState s1 s2) s1 s2 where
    expand = id

instance Stateful (M.StateT s Identity) s s where
    expand = DynState . M.runState

-- category with bind sort of situation...
polyDynStateBind :: (Stateful x s1 s2, Stateful y s2 s3)=> x a -> (a -> y b) -> DynState s1 s3 b
polyDynStateBind x f = expand x `dynStateBind` fmap expand f

-- | convert a dynamic stateful computation into the usual State monad, so we
-- can compose it with normal state computations
flatten :: DynState s s a -> M.State s a
flatten = M.state . runState 
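
Here’s a quick sketch of how the pieces compose (mixed is a made-up example; the annotation on M.modify pins down the monad so instance resolution can fire):

mixed :: DynState Int Int String
mixed = (M.modify (+1) :: M.State Int ())
          `polyDynStateBind` \()->
            get `polyDynStateBind` (dynReturn . show)

*Main> runState mixed 41
("42",42)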

Zippers

At some point while working on zippo (you should check out this instead) I made up detailed notes on a DSL for zipper operations that I wished I could shoe-horn into do or Arrow notation, but never quite could.

Lost the notes, but the idea would be to have bind expose the “focus” of the zipper and allow motions down and up to be sequenced nicely with minimal boilerplate, as in the state monad. Maybe I’ll find those and fill in this section properly.

CRRVA Poster Designs

A couple posters for the Classical Revolution RVA events at Balliceaux in November and December. Great programs both.

[image: poster for Nov 11]

These both play with some photograms of found glasses I’d made prior.

[image: poster for Dec 2]

dilly.js: A Library for Loops With Delays in JavaScript

I’ve released a new library for writing sort-of-idiomatic-looking loops with delays in javascript. This was motivated by this work, in which I was implementing an algorithm/simulation; I wanted the code to read as straightforward loops, but I needed a delay for the visual portion.

Here is an example of its usage:

withDelay(1000).endingWith(other_stuff)                  
    .for("x",1,2)                // range: 1, 2
        .for("y",1,3 , 8)        // range(with step of 2): 1, 3, 5, 7
            .for("z",['a','b'])  // foreach: 'a', 'b'
                .do(function(){ 
                    console.log(this.var("x"), 
                                this.var("y"), 
                                this.var("z"));
                 });

The inner do block runs with a one-second delay between executions.

You can pick up the code with

$ git clone https://github.com/jberryman/dilly.js.git

The design is actually much closer conceptually to list comprehensions. Also, I’m fairly certain that the approach I took is far messier than necessary, but I was suffering from elegant coder’s block.
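
To make the comprehension analogy concrete, the example above covers roughly the same iteration space as this Haskell comprehension, where each for corresponds to a generator:

example = [ (x, y, z) | x <- [1,2], y <- [1,3..8], z <- "ab" ]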

Send me any pull requests or bug reports you have.