Friday, June 26, 2015

Failover Trigger: Strictly ordered concurrency

I've stumbled on this concurrency pattern a few times now, and figured it's worth a quick write-up. The basic setup is a number of tasks that must not run simultaneously, and later tasks are only triggered if/when an earlier task fails. With fully-nonblocking code it can be tricky to synchronize everything correctly. Enter the Failover Trigger.

void inputSucceeded(Input input) { /* cleanup... */ }
void noInputSucceeded() { /* cleanup... */ }

void runTask(Input input, SettableFuture<Void> failover) {
  // Calls either failover.set(null) or inputSucceeded(input).
}

final AtomicReference<ListenableFuture<Void>> failovers =
    new AtomicReference<>(immediateFuture(null));

void addInput(final Input input) {
  final SettableFuture<Void> failover = SettableFuture.create();
  failovers.getAndSet(failover)
      .addListener(() -> runTask(input, failover), directExecutor());
}

void noMoreInputs() {
  failovers.getAndSet(null)
      .addListener(() -> noInputSucceeded(), directExecutor());
}
This now allows any number of addInput calls to run concurrently, while ensuring that everything in runTask (up to the failover.set call, at least) is fully synchronized. Inputs are handled on a first-come-first-served basis. If noMoreInputs is called after all inputs are added, then exactly one of inputSucceeded or noInputSucceeded will run to perform whatever cleanup is necessary.

I call this the "failover trigger" pattern because it provides each task with a trigger it can use to optionally start the next task.
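The same pattern can be sketched outside Guava, too. Here's a minimal Python analogue (the class name FailoverChain is my own, and threading.Event stands in for SettableFuture, so this is a sketch of the idea rather than the code above):

```python
import threading

class FailoverChain:
    """Runs tasks one at a time; each task starts only if its predecessor fails over."""

    def __init__(self):
        seed = threading.Event()
        seed.set()  # an already-fired trigger, like immediateFuture(null)
        self._lock = threading.Lock()
        self._prev = seed
        self.threads = []

    def add_input(self, task):
        failover = threading.Event()
        with self._lock:  # the analogue of AtomicReference.getAndSet
            prev, self._prev = self._prev, failover

        def wait_then_run():
            prev.wait()      # fires only if the previous task failed over
            task(failover)   # task either calls failover.set() or succeeds

        t = threading.Thread(target=wait_then_run, daemon=True)
        self.threads.append(t)
        t.start()

results = []

def make_task(name, fail):
    def task(failover):
        results.append(name)
        if fail:
            failover.set()  # trigger the next task in line
    return task

chain = FailoverChain()
chain.add_input(make_task("a", fail=True))
chain.add_input(make_task("b", fail=True))
chain.add_input(make_task("c", fail=False))  # succeeds, so "d" never runs
chain.add_input(make_task("d", fail=True))
for t in chain.threads[:3]:
    t.join()
```

Because each task's trigger is the previous task's failover event, the tasks are strictly ordered no matter how many threads call add_input concurrently.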

Tuesday, March 11, 2014

git ir: an alternative to git rebase --interactive

My name is Steve and I'm a gitaholic. Git is pervasive in my workflow, both in my job and in my hobby projects. I'm also addicted to rebasing. At work, I tend to have ten to twenty branches open at a time. Many of these are chained in a giant stack, waiting for code reviews one after another. A few are branched out directly from the master branch. When I sync my repository or submit one of my changes, I then go about rebasing all the rest of the outstanding branches on top of that.

Unfortunately, git rebase just doesn't cut it for this. A while back I wrote git rebase-tree, which would rebase a whole branched tree from one root onto another (bringing all the branches along with it), but its conflict handling was a black eye: it required a prompt for "continue/skip/abort", so I would need to open up a second terminal to git status and resolve the conflict, etc. (or more typically, run git conflicts in emacs). I had been meaning to redo git rebase-tree with its own --continue flag, but my experience with another custom function (git diff-mail, which I use at work to prevent the changelist mailer from counting all the earlier changes that were already counted in a previous changelist) taught me that this can cause problems: I often needed to git rebase --continue; git diff-mail --continue, and --abort often got confusing and sometimes actually clobbered my real changes (fortunately they've been easy to reconstruct so far). But it occurred to me that git rebase already has a queue, and with some clever manipulation I can make it do what I want.

Enter git ir.

This function takes a list of branches, a commit to rebase them onto, and optionally a commit to rebase them from (in case they're already on top of the destination). It sets up an interactive rebase session but completely ignores the plan git prepares, instead substituting its own plan, which supports a few additional commands:
# Extended Commands:
# !, exec!  = mandatory command (the rest of the line), reinserted on failure
# b, branch = sets the named branch to the current commit
# (, push   = pushes the current commit onto the stack
# ), pop    = pops the current commit from the stack
The initial plan is effectively equivalent to git rebase-tree (though slightly more permissive). But with some quick edits (adding or removing parentheses, for instance) a tree of single-commit branches all off the same base can be instantly converted into a chain of dependent branches, and vice versa.
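For concreteness, here's a hypothetical plan (the commit hashes, messages, and branch names are invented) that rebuilds two sibling branches off the same base:

```
(
pick aaa1111 Add feature A
b feature-a
)
(
pick bbb2222 Add feature B
b feature-b
)
```

Deleting the middle `)` `(` pair turns this into a chain: feature-b would then be rebuilt on top of feature-a instead of beside it.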

Once the initial plan is complete, it's ordinary git rebase the rest of the way, so when conflicts arise, it's back to the normal git rebase --continue (or --skip or --abort) workflow to handle them! Aborting happens for free. Moreover, no branches are moved until all commits have been successfully rebased, so aborting before that point brings everything exactly back to where it started. Finally, if you don't want it to be interactive, just call it with :: (which I alias to 'EDITOR=: ').

Saturday, December 14, 2013

Installing the Haskell Platform

It all started with the 2013 Christmas Tree Lecture. Professor Knuth challenged the audience to explore skew ternary trees and their relationship with a subclass of rooted planar graphs. I decided it might be fun to rewrite his SKEW-TERNARY-CALC program in Haskell, but in order to maximize its utility, it should target its front-end to the web browser. I learned about the Fay monad and decided to give it a try (see <future post>, hopefully). So as a first step, I tried to cabal install fay, only to find that I had a Dependency Mess™. Fine, I could use a fresh Haskell install. Unfortunately my laptop is currently stuck on Ubuntu 12.04, which is stuck at ghc-7.4. So I purged all my GHC and cabal data, temporarily installed a ghc-7.4 binary to bootstrap the new GHC, and went on building it. In light of the confusing package database issues, I figured it would be easiest to simply install everything to a user-writable directory, ~/local/opt/ghc-7.6.3 and ~/local/opt/haskell-platform-2013.2.0.0, so that I can then add --global as a default cabal-install option and operate out of a single package db.
$ cd ~/Downloads/ghc-7.6.3
$ sudo aptitude install ghc
$ OPT=$HOME/local/opt
$ ./configure --prefix=$OPT/ghc-7.6.3
$ make && make install
This went fine (though it took a few hours on my laptop), so I was now ready to install the Haskell platform. I eventually tracked down the right configure flags:
$ cd ~/Downloads/haskell-platform-2013.2.0.0
$ sudo aptitude purge ghc
$ ./configure \
    --prefix=$OPT/haskell-platform-2013.2.0.0 \
    --enable-shared \
    --enable-profiling \
    --disable-user-install
$ make && make install

Now the fun starts

Unfortunately, make failed. It turns out that scripts/build.sh sets GHC_PACKAGE_PATH in its build_pkg function when it builds with Cabal, and Cabal is incompatible with this option. Fine, we can fix that:
$ sed -i '/GHC_PACKAGE_PATH=/ s/^/#/' scripts/build.sh
$ make && make install
Not surprisingly, this also fails, a little later in the process. This time, alex is failing to build without happy. But happy is supposed to be part of the Haskell platform, and it should be smart enough to build its dependencies in the right order. Indeed, there's a bug on this that's been open for a year (and the original reporter even included a patch!). I rearranged the dependencies and went on my way. Another error. Apparently happy also needs happy to build. Great, so there's no way around bootstrapping with a binary distribution. I found out right afterwards that alex has the same bootstrapping problem.
$ sudo aptitude install happy alex
$ make && make install
$ sudo aptitude purge happy alex
Now it looks like we're good to go. Cleaning things up a bit,
$ cd ~/local/opt
$ ln -s ghc-7.6.3 ghc
$ ln -s haskell-platform-2013.2.0.0 haskell-platform
$ for a in ghc/bin/*; do ln -s ../opt/$a ~/local/bin/; done
$ echo PATH=$OPT/haskell-platform/bin:\$PATH >> ~/.bashrc
$ . ~/.bashrc
$ sed -i '/user-install/ s/.*/user-install: False/g' ~/.cabal/config

Installing Fay

Finally, it's time to actually install Fay.
$ cabal update
$ cabal install fay
The first invocation of cabal update tells me to run cabal install cabal-install, but I'm a bit paranoid: ghc-pkg list tells me that Cabal-1.16.0 is installed, while the new cabal-install wants to bring in Cabal-1.18.0. Let's not start screwing up the dependencies just yet, thank you. However, there was a failure installing pretty-show. Cabal didn't give me a useful message, but attempting to install pretty-show directly reveals that it needs a higher version of happy. Happy doesn't need to install any libraries, so I'm not as worried about installing it.
$ cabal install happy
$ cabal install fay
This works, but tells me I also need to install fay-base. No problem.
$ cabal install fay-base
Stay tuned for another post later about skew ternary trees!

Thursday, March 07, 2013

Obfuscated Life

f=lambda g,r,n,s:s if n==0 else'\n'.join([s,'='*50
,g(g,r,n-1,r(s))]);l=lambda q:'\n'.join([''.join([
'*'if x in(5,6,7) else' 'for x in a])for a in [map
(sum,zip(*b))for b in zip(*[[z[x:]+z[:x]for z in [
[(2-(1-abs(x))*(1-abs(y)))*w for w in r]for c in[[
[1 if s == '*' else 0 for s in p] for p in q.split
('\n')]]for r in c[y:]+c[:y]]] for x in(-1,0,1)for
y in(-1,0,1)])]]); iters=65 ;print(f(f,l,iters,"""
**************************************************
*                                                *
*         ***      ** **      ***    **          *
*         *  *    *  *  *    *       **          *
*         ***      *   *      **     **          *
*         *  *      * *         *                *
*         ***        *       ***     **          *
*                                                *
**************************************************
""".strip()))  # LIFE # 2013/3/7 # Stephen Hicks #
You can fill in any initial position you want, but make sure it is a rectangle (i.e. the same number of characters on every line) and that the first and last characters of each line are not spaces (any non-asterisk character will do to signify an empty cell).
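For the curious: the one-liner encodes the standard rules (a dead cell with exactly 3 live neighbors is born; a live cell with 2 or 3 survives) on a wrap-around grid, with the z[x:]+z[:x] slices doing the row and column rotation. Here's a deobfuscated sketch of a single generation (my own rewrite, not the original code):

```python
def step(grid):
    """One Game of Life generation on a toroidal (wrap-around) grid of strings."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        line = []
        for c in range(cols):
            # Count live neighbors, wrapping around the edges.
            neighbors = sum(
                grid[(r + dr) % rows][(c + dc) % cols] == '*'
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
            alive = grid[r][c] == '*'
            # Born with exactly 3 neighbors; survives with 2 or 3.
            line.append('*' if neighbors == 3 or (alive and neighbors == 2)
                        else ' ')
        out.append(''.join(line))
    return out
```

A blinker (three cells in a row) demonstrates the rules: it flips between horizontal and vertical every generation.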

Tuesday, September 25, 2012

Solving the Rubik's Cube, Part 2: Exploration

Now that we have a better idea what we're dealing with, we can start rotating things to see what happens.

Fortunately, these cubes come solved. When working with a solved cube, the effect of any transformation is much more obvious, since you can immediately tell (roughly) where each block came from. In order to take advantage of the solved state as long as possible, we start by only making small excursions of 4-8 operations.

Before we can talk about these operations, we need to define a language for describing operations and positions on the cube. We'll start by naming the six directions with their unique first letters: Front, Back, Up, Down, Left, and Right. Each block can be described by three coordinates, using capitalization as follows: we write 'F' for front-most, 'f' for front-middle, 'b' for back-middle, and 'B' for backmost; and likewise for the other two axes. Thus, the front-top-left corner cube will be called 'FUL', the edge cube directly beneath it is 'FuL', and the face cube to the right of it is 'Ful'. Continuing diagonally right and down the same face we have 'Fdr' and finally the opposite corner 'FDR'. We always use the same order F/f/b/B, followed by U/u/d/D, followed by L/l/r/R, so that each block has a single unique name. Note that corner blocks are all caps, edge blocks are two caps and a lowercase, and face blocks are two lowercase and one capital. A block with all three lowercase is impossible as it would be on the inside of the cube.
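The naming scheme is easy to sanity-check mechanically. A quick sketch (in Python, my own addition) enumerates every block name and counts each type:

```python
from itertools import product

# One letter per axis, in the fixed order F/f/b/B, then U/u/d/D, then L/l/r/R.
axes = [('F', 'f', 'b', 'B'), ('U', 'u', 'd', 'D'), ('L', 'l', 'r', 'R')]

# All-lowercase names would be interior blocks, which don't exist.
blocks = [''.join(p) for p in product(*axes)
          if any(ch.isupper() for ch in p)]

corners = [b for b in blocks if sum(ch.isupper() for ch in b) == 3]
edges   = [b for b in blocks if sum(ch.isupper() for ch in b) == 2]
faces   = [b for b in blocks if sum(ch.isupper() for ch in b) == 1]
```

This recovers the 8 corner, 24 edge, and 24 face blocks of the 4x4x4, 56 blocks in all, and confirms that FUL, FuL, and Ful are a corner, an edge, and a face respectively.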

We now name the elementary rotations by the direction of rotation, subscripted by which layer is turned. Thus, turning the top-most layer to the right is RU. Turning the next lower layer to the left is Lu. Turning the right-most layer down is DR and turning the middle-left layer up is Ul. To name the final axis, we use the right-hand-rule: a counter-clockwise rotation of the front layer is FF since this is a right-hand rotation through the F-axis. A clockwise rotation of the middle-front layer is Bf. We could name the other four rotations by their axis, but I find the consistency to not be worth the increased confusion.

Now that we have names, what sort of explorations can we do? Our goal now is to build up a catalog of simple (and hopefully easy-to-remember) operations that make small changes to the cube that are easy to understand and reason about. For instance, a single rotation Dr affects 12 blocks on four separate faces, which is a bit unwieldy. If we follow it up with Rd we've now affected 22 blocks, touching all the faces: even worse. But if we continue with Ur and then Ld, we arrive mostly back to where we started, but with six face blocks cycled a bit. If we repeat this series of four rotations three more times, we arrive back at a completely-solved cube. Another example is DrRDUrLD. [Note that I write compositions from left to right, which is unconventional from a mathematical point of view: if you wrote these as functions f and g operating on a state s then one might expect (fg)(s) to mean f(g(s)); but I find this more difficult to read, and it leads to more errors in executing operations on an actual cube.] This operation also affects six blocks, but must be repeated more times to return to the initial state.

We've seen 6×4=24 named operations, to which we can add 6 more full-cube rotations: OF, OB, OU, OD, OL, and OR, again using the right-hand rule to determine rotation direction. Note that these are effectively just relabelings: OFUrOB is equivalent to just Ld. In this way, one might consider them a sort of orthogonal (or unitary, for the physicists) transformation.

This is a small sampling of short excursions. In the next installment, we'll look at many more, and discuss some heuristics of how we come up with possible operations and how to represent their effects using cycle notation.

Monday, September 24, 2012

Solving the Rubik's Cube, Part 1: Introduction

I was given a 4x4x4 and 5x5x5 Rubik’s cube as a bachelor gift (“should be easier to figure out than your wife”) and recently began working on the smaller one. I was taught the algorithm for solving the original 3x3x3 cube as a teenager, and somewhat regret not having the chance to solve it completely myself.  Since then I have solved a few other puzzle cubes. As it turns out, there are a number of phases of solving Rubik-style puzzles, and the techniques are actually quite generalizable and are very mathematical. I thought it might be interesting to document the solution process in a blog series, from the point of view of one who has never solved this cube before, and on the way we'll explore the various concepts that arise.  I will not be providing a thorough solution, and will try to warn about specific spoilers, in case anyone wants to solve it themselves.

To get started, we’ll first look at the anatomy of a cube. As everyone should remember from geometry, all polyhedra have vertices, edges, and faces. In a standard Rubik’s cube, this gives us three distinct types of blocks: corner blocks, edge blocks, and face blocks. Note that this is not the only possible mapping: the Skewb (pictured left) has only two types of blocks—corner and face—due to its diagonal cuts. The importance of recognizing the component blocks is that no operation will ever exchange (for example) a corner block with an edge block, or any other mismatch. Even more importantly, it is critical to see the cube as an arrangement of blocks, and not just stickers. Every operation moves and reorients whole blocks: stickers do not move independently.

In the 4x4x4 cube, we have four face blocks on each of six faces, giving us 24 face blocks with four-fold degeneracy (degeneracy simply means that we can’t tell the difference if the “degenerate” blocks are exchanged: the cube is still solved if these are “out of place” from where they started). We have two blocks on each of 12 edges, giving us 24 edge blocks with two-fold degeneracy and two different possible orientations (conversely, the cube is not solved if a block is in the correct location but the wrong orientation). Finally, there are eight unique corner blocks, each of which has three different orientations. Thus the total number of distinguishable configurations is ostensibly
(24!/(4!)⁶) × (24!/(2!)¹² × 2²⁴) × (8! × 3⁸)/48 ≈ 4.5×10⁴⁹,
although a large number of these are impossible to actually achieve without disassembling and reassembling the cube. So the puzzle space is actually several orders of magnitude smaller.
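Carrying out the arithmetic (a quick check of my own, not part of the original post):

```python
from math import factorial as f

face   = f(24) // f(4)**6             # 24 face blocks, four-fold degenerate per face
edge   = (f(24) // f(2)**12) * 2**24  # 24 edge blocks, paired, 2 orientations each
corner = f(8) * 3**8                  # 8 unique corners, 3 orientations each
total  = face * edge * corner // 48   # divide out whole-cube symmetries

# total is approximately 4.5e49, matching the figure above
```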

In the next installment we'll look at some initial explorations of cube operations, and define some notation for referring to the different blocks and rotations.

Tuesday, March 15, 2011

Version controlling my home directory

I recently rearranged my home directories on my work machines and decided it was time to finally get my configuration under a proper version control. I found several blogs about this, but one was particularly useful. I've changed it a little, but here's the basic set up.
$ mkdir ~/.config.git
$ cd ~/.config.git
$ git init --bare
$ git config core.worktree ../
Now I have a git repository called ~/.config.git, but since it lives outside the standard location, git needs special environment variables to find it, which also means that accidentally calling plain git outside a normal repository won't do things to my config repo. The next step is to make it easier to use this repo. I wrote the following to ~/bin/git-home:
#!/bin/bash
export GIT_DIR=$HOME/.config.git
export GIT_WORK_TREE=$HOME
git "$@"
Now calling "git home ..." explicitly sets the repository to my config repo. But this still isn't so convenient. So I added the following to my (now version-controlled) .bashrc file, at a point after completions are loaded:
alias hgit='git home'
complete -o bashdefault -o default -o nospace -F _git hgit 2>/dev/null \
|| complete -o default -o nospace -F _git hgit
This sets up hgit with completions. Finally, I don't want hgit status to show tons of untracked files. So I added the following to ~/.config.git/info/exclude:
[^.]*
!.*/*
*~
Beneath that is a list of specific exclusions, which I'll keep expanding as I discover files and directories I don't want to track.