pup is great!

I definitely do more HTML scraping than is healthy for a person.  And inevitably, I end up spending more time doing the scraping than I would have spent doing some copy-pasting.

So of course I’m excited to have a new tool to make wasting time even easier: pup.

I’m sure I’ve worked with various HTML command line scrapers in the past, but pup seems to have hit a sweet spot — reusing CSS selectors makes it pretty intuitive, and the set of operations to extract what you want is just enough to get your data, without overwhelming you.  Nice!

coding like it’s 1979

It’s old news, but I just discovered Cathode and it’s pretty much made my week.

There’s something so charming about the feel of analog devices and Cathode comes very close to the real thing. Or at least how I remember it. I installed mutt just so I could stay in the console longer.

Computing the optimal stop placement for transit

Like many people, I take the bus to work most days. My commute isn’t actually that far (about 3 miles), but I am incredibly lazy, and the bus lets me catch up on the magazines that would otherwise be accumulating dust on my table. (And if I keep up on my Harper’s, I can at least pretend I’m up to date on what’s going on.)

Anyway, here’s my bus route:


[figure: my bus route]

The main thing to notice here is that it stops an awful lot.  During peak commute hours, I can sometimes walk faster than the bus.  Given that I’m out of shape and my commute involves a big hill, that’s not a good sign.

It’s been pointed out many times that perhaps stops are placed too close together in many locales.

So there are potentially good reasons why you’d want stops closer together than what might be optimal, though I would mostly bucket these into having a customer base that is old, fat, lazy, grumpy, or some combination of the four.  You can see in the humantransit post the outrage expressed at having stops more than 300 meters apart.  The horror of having to walk more than 1.5 blocks to your stop!

But let’s go ahead and assume we live in a world where people are happy to walk longer distances.  Let’s go further and assume they’re willing to walk as far as needed to minimize their overall trip time.  If we have such a cooperative public, what’s our optimal stop distance?  I made a trivial model of what happens in this case in an IPython notebook here:


Here’s the resulting plot:

This model is incredibly contrived, but still, it’s interesting to toy with the tradeoffs.  Note that even with a very slow walking pace (2 minutes/block, or 50 meters a minute), the optimal distance is over 5 blocks apart.  (Compare that with the spacing on my route, at roughly 2 blocks between stops.)
If you have ideas on how to improve the model, please let me know!
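The basic tradeoff is easy to sketch in a few lines. Below is a toy version of my own (the parameters and the model itself are made up for illustration and may not match the notebook’s): riders walk to the nearest stop, and the bus pays a fixed time penalty at every stop it makes.

```python
import numpy as np

def trip_time(spacing, route_m=5000, walk_mps=50 / 60.0,
              bus_mps=8.0, stop_penalty_s=20.0):
    """Expected door-to-door time (seconds) for a given stop spacing (meters).

    Riders walk spacing/4 on average at each end of the trip; the bus
    loses stop_penalty_s at every stop along the route.
    """
    walk = (spacing / 2.0) / walk_mps    # average walk, both ends combined
    stops = route_m / spacing            # stops the bus makes en route
    ride = route_m / bus_mps + stops * stop_penalty_s
    return walk + ride

# Sweep candidate spacings and pick the one minimizing total trip time.
spacings = np.linspace(50, 1000, 200)
best = spacings[int(np.argmin([trip_time(s) for s in spacings]))]
```

Even this toy version shows the shape of the curve: closely spaced stops are dominated by per-stop penalties, widely spaced ones by walking time, with a sweet spot somewhere in the hundreds of meters.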


We use Slack extensively at Iodine. It’s where most of our communication takes place, even within the same office! (You can argue that this isn’t especially healthy…)

Sadly, we’re still forced to use email to interact with some services that haven’t added a hook for Slack. These include some monitoring services, or just when we want a “read-only” address for people to send us updates.

The pain point I was feeling recently was with the Luigi package, which makes building data pipelines a snap. We use it extensively to build the data that backs the openfda API. Since some build steps can take several hours, it’s nice to start up a pipeline and go off to lunch (or to sleep). But when there’s a failure we want to know about it: Luigi conveniently supports email notifications for this task, but there’s no way to add a webhook.

Now the reasonable thing to do would be to just add support for webhook notifications to Luigi and send them a PR to get it merged. But then we’d still be stuck with email for all of our other services. So instead, I thought: why not just proxy email to Slack directly? I know absolutely nothing about SMTP or MX records, so how hard can it be?  And if I made such a service, we could use it with any service that supports email notifications. (My guess is that Slack will add support for such notifications like… 3 minutes after I post this, but it will be useful until then.)

With that in mind, I whipped up slackmail: an SMTP server that forwards messages to Slack. To make this super-extra-convenient, there are two servers: one you can use locally for testing, and another that supports adding and removing hooks dynamically. To avoid getting spammed, you can specify an authorization token which must be present in the incoming mail for it to be forwarded. The code illustrates my almost perverse lack of knowledge about SMTP, but it nonetheless appears to work.
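The core forwarding step is small enough to sketch here. This is my own illustration of the idea, not slackmail’s actual code (the names and structure are hypothetical): parse the incoming message, check for the shared authorization token, and POST the body to the webhook URL.

```python
import email
import json
import urllib.request

def forward_to_slack(raw_message, webhook_url, auth_token, post=None):
    """Forward one raw email to a Slack incoming webhook.

    Returns False (dropping the mail) unless auth_token appears in the body.
    """
    msg = email.message_from_string(raw_message)
    body = msg.get_payload()
    if auth_token not in body:
        return False                      # drop mail missing the token
    payload = {'text': '%s\n%s' % (msg['Subject'], body)}
    if post is None:
        # Default: a real HTTP POST to the Slack incoming webhook.
        def post(url, data):
            req = urllib.request.Request(
                url, data=json.dumps(data).encode('utf-8'),
                headers={'Content-Type': 'application/json'})
            urllib.request.urlopen(req)
    post(webhook_url, payload)
    return True
```

(The `post` argument exists purely so the HTTP call can be stubbed out when testing.)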

If you don’t want to run your own server, feel free to play with my example server. To register a new hook, email [email protected]. Here’s an example using the `mail` command line client:

mail -s 'Add me, yo.' '[email protected]' <<HERE
target_email: [email protected]
webhook_url: https://hooks.slack.com/services/abc/def/awdalwidajwm
authorization_token: you can't guess me
HERE

To remove a hook, just email [email protected] with the same content. Once you’ve registered a hook, any emails to `[email protected]` will be forwarded to your webhook. Feel free to email me at [email protected] to say hi!

(You can also email [email protected] if you like building interesting tools and helping make healthcare better!)

on dreams

The other night I dreamed about a plane full of people.  Somehow, I even knew the exact number (240).

Then I dreamed about something else.

Then I started feeling bad, because I realized the lives of those 240 people depended on me dreaming about them, and once I stopped, they disappeared.

On distractions

I just purchased a new phone.  I’m convinced that this was not a brilliant time to introduce a great source of distraction, given I’m frantically working towards finishing my thesis and preparing for my defense.  But it’s so shiny!

Transaction Chain Visualization

We had a paper at the last SOSP on transaction chains.  Our original analysis of chains was done by hand, which is quite a silly way to do it.  We then wrote a simple script to do the graph analysis, but it’s still difficult to picture the interaction of chains (a script telling you that you have an S-C cycle is great, but what should you do about it?)

To make this a bit easier, I made up a little webpage that lets you enter a list of chains and indicate commutative links.  This page very effectively illustrates three things:

  • My ineptness at Javascript
  • My lack of graph theory knowledge
  • That there are some neat Javascript libraries out there (hello Dagre!)

Try it out here: http://rjpower.org/transaction-chain/

Creating fancy images with Matplotlib

I have to give a short presentation at SOSP next week, and for it I needed some nice pictures representing a distributed array. After trying several tools to create these, I began to lament and cry over the state of Linux drawing software. But that’s a different story. I ended up writing a simple matplotlib script to generate the pictures I needed, and since it worked out pretty well, I thought I’d share it here.

Here’s the kind of picture I’m referring to:


It turns out this is pretty straightforward using matplotlib. Here’s the basic function:

import numpy as np
import pylab

def draw_array(a, target=None):
    fig = pylab.gcf()
    fig.frameon = False

    ax = fig.gca()
    ax.set_aspect('equal', 'box')
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)

    size = 1.0
    z_scale = 1.4
    i = 0
    for z in reversed(range(a.shape[2])):
        for (x, y), v in np.ndenumerate(a[:, :, z]):
            i += 2
            alpha = a['transparency'][x, y, z]
            color = tuple(a['color'][x, y, z])
            off_x = 0.01 + x + size + z / z_scale
            off_y = y + size + z / z_scale

            rect = pylab.Rectangle([off_x, off_y], size, size,
                                   facecolor=color, edgecolor=(0, 0, 0),
                                   zorder=i, alpha=alpha)
            ax.add_patch(rect)

            cx = off_x + size / 2
            cy = off_y + size / 2

            # sigh -- measure the rendered label so we can center it
            label = str(a['name'][x, y, z])
            w, h = pylab.matplotlib.text.TextPath((0, 0), label).get_extents().size / 30

            text = pylab.Text(cx - w / 2, cy - h / 2, label, zorder=i + 1)
            ax.add_artist(text)

    if target is not None:
        pylab.savefig(target)
    return ax

The first part of this just turns off the various lines for the axes. We then iterate through the elements of the array and create a Rectangle() for each one; each “layer” (z-axis) is shifted off to the right a little from the previous one, giving the illusion of depth. (We don’t want a normal perspective projection, as it would hide too much of the deeper layers.)

The “sigh” comment is where I use a hack to determine the size of the text we’re about to draw, so I can center it in the array cell. I couldn’t find an easier way to do this, and no, I don’t know why I have to divide the result by 30.
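For what it’s worth, there may be a simpler route (my suggestion, not what the script above does): matplotlib’s Text accepts horizontal and vertical alignment keywords, which center a label on a point without measuring a TextPath at all.

```python
import matplotlib
matplotlib.use('Agg')          # render off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ha/va center the text on (0.5, 0.5) directly -- no TextPath measuring.
label = ax.text(0.5, 0.5, '42', ha='center', va='center')
```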

The input array has three fields which specify how to render each rectangle:

dtype = [('color', 'f,f,f'), ('name', 'i'), ('transparency', 'f')]

Now we can construct an arbitrary array and feed it into our function:

shape = (3, 3, 5)
a = np.ndarray(shape, dtype=[('color', 'f,f,f'), ('name', 'i'), ('transparency', 'f')])
a['name'] = np.arange(np.prod(shape)).reshape(shape)
a['transparency'] = 1.0
a['color'] = (1, 1, 1)

draw_array(a, target='array.pdf')

Once we have the basics out of the way, we can do some fancy rendering really easily. First, let’s make a little helper class to draw slices:

class draw_slice(object):
    def __init__(self, a, target=None):
        self.a = a
        self.target = target

    def __getitem__(self, slc):
        slice_z = np.copy(self.a)
        slice_z['color'][slc] = (0.9, 0.5, 0.3)
        slice_z['transparency'] = 0.9
        draw_array(slice_z, self.target)

We can wrap an array in draw_slice() to make it easy to construct pictures of slices:



We can be fancier if we like, too, drawing the results of a filter operation:

draw_slice(a)[a['name'] <= 1]


If you are interested, the full code for creating these figures is here: https://gist.github.com/rjpower/7249729. All you need is matplotlib and numpy.

statically linking shared libraries with libtool

I run a lot of experiments on our local cluster.  Unfortunately, over time, the library versions on the cluster tend to diverge from those on my local machine. As a result, I’ve gotten used to seeing this:

/usr/bin/python: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.17' not found 
(required by /home/power/w/spartan/build/.libs/libspartan.so.0)

If I were using Go this wouldn’t be a problem, as it statically links everything.  Despite the crowd of people who think shared libraries are the bee’s knees, I agree with this approach: it’s just far simpler than trying to deal with errors like the one above.

You can solve this most of the time by simply statically linking everything (--enable-static).  Sadly, if you’re trying to build a shared library (in my case, an extension module for Python), you can’t really go this route. (Statically linking the Python API calls into a library which is then hoisted into Python is going to end very, very badly.) So what am I supposed to do with this error? If you trawl around the web, you find that, depending on the exact error, you should either:

  • find an old version of GLIBC and link against that
  • insert assembly directives to indicate the old symbol

If you happen to have a single symbol that’s pulling in the newer version, the latter is an easy fix (though it’s a bit annoying to ensure you’ve always got the directive declared before you use the function). If you’ve somehow acquired a dependency on the whole library, things become more annoying. In my case, the dependency seemed to result from the chain:

_spartan_wrap.so -> libspartan.so -> libstdc++.so -> libc.so

Oddly enough, depending directly on libstdc++ isn’t the problem. If I remove the dependency on libspartan (just linking directly against all of the objects), we’re fine:

ldd -v .libs/_spartan_wrap.so

        Version information:
                librt.so.1 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/librt.so.1
                libgcc_s.so.1 (GCC_3.0) => /lib/x86_64-linux-gnu/libgcc_s.so.1
                libm.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libm.so.6
                libc.so.6 (GLIBC_2.15) => /lib/x86_64-linux-gnu/libc.so.6
                libc.so.6 (GLIBC_2.14) => /lib/x86_64-linux-gnu/libc.so.6
                libc.so.6 (GLIBC_2.4) => /lib/x86_64-linux-gnu/libc.so.6
                libc.so.6 (GLIBC_2.3.2) => /lib/x86_64-linux-gnu/libc.so.6
                libc.so.6 (GLIBC_2.3.4) => /lib/x86_64-linux-gnu/libc.so.6
                libc.so.6 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libc.so.6
                libpthread.so.0 (GLIBC_2.3.2) => /lib/x86_64-linux-gnu/libpthread.so.0
                libpthread.so.0 (GLIBC_2.2.5) => /lib/x86_64-linux-gnu/libpthread.so.0
                libstdc++.so.6 (GLIBCXX_3.4.14) => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
                libstdc++.so.6 (GLIBCXX_3.4.15) => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
                libstdc++.so.6 (GLIBCXX_3.4.10) => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
                libstdc++.so.6 (CXXABI_1.3) => /usr/lib/x86_64-linux-gnu/libstdc++.so.6
                libstdc++.so.6 (GLIBCXX_3.4) => /usr/lib/x86_64-linux-gnu/libstdc++.so.6

Notice: no reference to GLIBC_2.17. Luckily, in this case there’s a simple solution: make all of the helper libraries into convenience libraries. This causes them to be statically linked and avoids pulling in the extra dependencies:

# old
# lib_LTLIBRARIES = _wrap.la liba.la libb.la

# new 
noinst_LTLIBRARIES = liba.la libb.la
lib_LTLIBRARIES = _wrap.la

If we weren’t lucky (say we’re actually using a symbol from the later version), then we can force static linking of the dependent library by listing it explicitly by name in the LIBADD variable:

_spartan_wrap_la_LDFLAGS = -module
_spartan_wrap_la_LIBADD = -lrt /usr/lib/gcc/x86_64-linux-gnu/4.8/libstdc++.a

libtool will complain that linking against a static library isn’t portable (which is true), but it should work correctly as long as the static library was built with -fPIC.

the Onion

We used to have the Onion (America’s Finest News Source) available for free here in New York, in those little weekly newspaper boxes you see lying around. After the hurricane last year, the supply seems to have dried up. This was a sad event. Not only did I get high quality journalism from the paper, they also had a very nice crossword puzzle for lazy Saturday mornings.

Enterprising individuals have begun reusing the boxes for temporary clothing storage, and some of them have been commandeered for other, less interesting papers. (New York real estate is valuable, after all.)

I don’t often check their website, as it adds to the already crippling amount of distraction that I experience in a day, but this article caught my eye. As a person who finds comfort in the knowledge that the universe will eventually empty out into a cold cinder, I can’t help but approve when others notice the same.