With me: print(life)

Syncing Safari Downloads — an Intro to Screen Scraping

In honor of the upcoming PyCon (which I’ll be attending on behalf of The Open Planning Project) I decided to write about Python today.

Some time back I wrote myself a simple utility for synchronizing Safari downloads (the book service, not the web browser), and I decided to polish it up, release it, and write about the process.  This is the first of two parts where I will talk about my first time handling the start to finish of publishing an open source python package.  The next part will be a tutorial on how to screen scrape the web, from inspecting the HTTP headers to using CSS selectors with lxml to parse out the interesting data.

Anyway, back to the topic at hand.  If you’ve never heard of Safari, and you’re a tech professional, then I hope it’s because you have personal access to the Library of Alexandria.  If not, then let me be your personal cluestick.  For about the price of five tech books (per year), you can maintain an online bookshelf that gives you access to up to 120 books in that year.  In practice, I think I average about 30, but the subscription also gives you the ability to search through their entire library to find the answers you need.  When you find a book, you can add it to your bookshelf with two clicks (Thanks Amazon!) and then start reading.  What’s more, the service includes 5 downloads per month (usually one chapter or section of a book) that give you a personalized PDF for offline reading.

My only problem with the service is managing the downloads.  Once you’ve downloaded a chapter, it will always be available to you (at least as long as you have an account), but the PDFs are auto-generated on demand, and when you save them, you end up with files named something like 0EITGkillY6ALIkill3kHfWkillC4RwjkillwKb69kill736MGkillY4UuykillEJTsC.pdf.  I tried to give them sensible names, and organize them, but it was always a pain, and I always had the weirdest urges just afterward.  To top it all off, the last time I changed computers, I decided not to copy the files (knowing that I could re-download them), so I was left with a lot of manual work to do.

Well, I’ve been telling myself for some time that I wanted to play with lxml (it’s the fast python library for working with XML and HTML).  Also, I’ve been working entirely in javascript lately, so I felt that it was time to stretch some mental muscles and get something done in python.  For the impatient, you can get a copy of the script by typing the following at a terminal:

Install Safari Sync from PyPI
export STATIC_DEPS=true # Only necessary on a Mac
easy_install safarisync

If the output you get looks something like this (easy_install failing on Windows):

Fear not, poor windows user, I intend to release a simple executable to coincide with the second part of this article.  If you don’t feel like waiting, you can download and install python, then download and install setuptools, and finally fix up your PATH environment variable.

For everyone else, you can start playing along.  Just type safarisync to start the process, or safarisync --help to get a list of options.

Since I’ve only worked with lxml peripherally before (as it was embedded into other projects I was working on), I ended up writing three completely different versions.  The first version was fully functional, using the cookie handling that I learned from this well-written tutorial.  It also iterated through all of the elements in the tree to find the ones we were interested in.  Just after finishing it up, I stumbled across this quick intro to lxml, written by a colleague of mine (Ian Bicking).  If you haven’t heard him speak somewhere already, then chances are high that either you’ve used something he’s written, or used something based on something he’s written.

His article introduced me to the CSS selector engine and form handling now built into lxml.  Thus was born the second version of safarisync.  The only problem was that it usually didn’t work.  In the debug shell, I could usually get the code to run, after some tinkering, but never standalone.

The first problem I always had was unnecessarily hard to diagnose.  I was consistently receiving a UnicodeDecodeError from lxml.  I was confused by this because the string I passed in had the proper encoding specified within:

<?xml version="1.0" encoding="utf-8"?>

I received the help I needed from my colleague Luke Tucker (of Melkjug fame, which by the way, you should check out, they just released a new version).  As it turns out, there was a problem in the error handling of lxml such that if you had a bug AND you had unicode data, instead of getting the correct bug reported, you got a UnicodeDecodeError.  He suggested I strip any unicode data and try the same operations to get to the real error.  Thankfully, I’ve been told that this is fixed in the latest version.
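His suggestion can be sketched in a couple of lines (the helper name here is mine, not from the safarisync code):

```python
# Strip any non-ASCII data and retry the operation, so that the real error
# can surface instead of the masking UnicodeDecodeError.
def strip_non_ascii(text):
    return text.encode("ascii", "ignore").decode("ascii")

print(strip_non_ascii("caf\u00e9 <b>broken"))  # caf <b>broken
```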

Solving the last problem took me outside of the debugging shell, and into the bowels of lxml.  It’s partially written in Cython, which is a python-like language that compiles down to C.  This means (in theory) that you get the speed of C with the beauty of Python.  In practice, this is only half true.  You get the speed of C.  Beauty, however, is in the eye of the beholder.  In any case, peering through the code showed me that while the new form handling code uses python for network access, the rest of lxml uses the built-in downloading facilities of libxml, the C library it wraps.  This means that you have to avoid lxml’s network helpers almost entirely if you need to handle cookies.
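The takeaway can be sketched with the standard library (the commented URL and selector are hypothetical): keep all fetching in Python, where cookies work, and hand lxml only the raw markup.

```python
import urllib.request
from http.cookiejar import CookieJar

# Keep all network access in Python so session cookies persist across
# requests, instead of letting libxml fetch URLs (and drop the cookies).
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

def fetch(url):
    with opener.open(url) as response:
        return response.read()

# The raw bytes can then be parsed without lxml ever touching the network:
# doc = lxml.html.document_fromstring(fetch("https://example.com/bookshelf"))
# for link in doc.cssselect("a.download"):
#     print(link.get("href"))
```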

The third version of the code can be found at my public source repository.  The interesting code is found in safarisync.py.  I’ve tried to comment it well enough that you can follow through, even without my help.  I’ve had it reviewed by Ian and Robert Marianski, another colleague of mine and talented python programmer.  He helped me with the details necessary to publish the package on PyPI. (For example, if you want your package to have an executable shortcut, you need to create a specially named entry point in setup.py).

Well, thanks for tuning in.  Come back next week for a detailed tutorial teaching you how to write your own screen scraping tools.

Xinha4WP — Wordpress With the Power of Xinha

This won’t be one of my usual blog posts, because I just want to announce the 0.96 beta release of Xinha4WP.  It’s a wordpress plugin that installs Xinha as a drop-in replacement for TinyMCE.  (For those not in the know, Xinha is a community-driven open source web WYSIWYG editor.)

At my employer (The Open Planning Project), we switched to wordpress from Blogger back in 2006 for Streetsblog.  After a short trial, our writers and editors became increasingly frustrated with the state of WYSIWYG editing (powered by TinyMCE).  At the time, TinyMCE  was almost unusable (at least as it was embedded into wordpress).  Our writers and editors were about to give up and switch back when we came across Mike Baptiste‘s wonderful plugin.

Here at TOPP, we believe in open source software not only for idealistic reasons but also for pragmatic ones.  Being in control of our stack gives us much more flexibility in terms of site design and functionality.  By using Mike’s plugin, we were able to keep our writers happy while still maintaining control of our entire platform.

Well, fast forward to the beginning of 2009, and the crew over at Wordpress and Moxiecode have put an amazing amount of work and polish into TinyMCE.  It is now the first-class option it should be, no longer lagging behind other platforms.  At the same time, Mike no longer has time to update the Xinha4WP plugin, and Xinha’s last stable release was over 8 months ago.

That’s where I come in.  I’m now one of the core Xinha developers and we’ve just published a new beta release (0.96 Phoenix beta).  In addition, we’ve received Mike’s blessing to take over the Xinha4WP plugin and bring it up to date.

Because of the time lapse, we’re now playing catch up to TinyMCE in terms of integration into wordpress, but we’ve added some long needed features for our first new release.

  • Autosave was added to wordpress 2.2, two years and five versions ago.  Now we sync textareas to allow this feature to work.

  • Up until recently, Xinha4WP always enabled Xinha, and required the user to disable TinyMCE.  Now we auto-disable TinyMCE and respect the user’s visual editing preference.

There are some outstanding issues in this release as well.  While Xinha normally supports autoresizing, our embedded version doesn’t correctly resize with the page (requiring a page refresh).  What’s worse, since TinyMCE supports user-draggable resizing, the default size of the visual editor is a bit cramped for normal use.  Together, these make this release a bit of a pain point for writers.  While we work on a fix, you can use Xinha’s full screen mode to provide a comfortable editing space for your blog post.

In brief, it’s been awhile, the people over at wordpress have put a lot of effort into TinyMCE integration, and it has become a viable option.  If, however, you want more for your users, take a look at Xinha.  We’ve got some catching up to do, but we’ve got a great alternative, and we can only go up from here.

If you’re interested in finding out more, download the release and come join our mailing list!

“\n” — One of the Web’s Tough Problems

So I’ve got you.  This doesn’t make much sense, does it?  All except the oldest among you used a newline in your first program.  The tough problem is not re-displaying the page as it changes, that’s easy.  Okay, maybe not easy, but at least it’s already been solved.

Imagine that you’re typing an email to a lovely lady you want to move here from St. Petersburg.  You’ve finished a paragraph about naked scuba diving, you’ve told her about your pet rock collection, and now it’s time to add your closing line.  “Sincerely yours” is a bit too formal, and “With all my love” might scare her off.  In any case, if you can’t hit enter to type that line, then she’s never going to move here and marry you.

Well, it can’t be too tough a problem, can it?  Olav Kjær wrote a great article about the problems and inconsistencies involved.  HTML is a great language for documents (I write software for the web, they make me say that), but its rules for containing text are pretty lax.  And when you give a user a mouse and allow him to just click anywhere on the page?  That’s just crazy talk!

Why do I care?  Well, I was editing a *cough* wiki page on OpenPlans.org using the Firefox 3 Beta (took me a while to finish off this post, eh?).  When I clicked in the middle of the page and hit enter to start typing a new paragraph, half of my page disappeared.  Expected results?  Uh, a new paragraph?  We use Xinha, the open source WYSIWYG editor, and a pretty old version at that, but there was no problem in any other browser, or in previous versions of Firefox.

So I did what any self-respecting software engineer does when the problem’s not in his code, and he can’t understand it: I blamed someone else.  I was so worried about living with the problem for the life of Firefox 3 that I even called John Resig, Mozilla’s Javascript Evangelist.  You’ll notice that he’s removed the phone number from his site (sorry John).

After filing the bug, I started to search through snapshots of Minefield (the testing and development version of Firefox), and was able to narrow it down to one commit.  Looking at the source (in nsRange.cpp), it turns out that the change was a bugfix that caused Firefox to correctly implement the W3C Range standard.  Before the fix, trying to create a backwards selection raised an exception, and after the fix it returned an empty selection, as it was supposed to.  That meant the Xinha code depended on broken behavior; and that I had work to do.

My first job was to find out what was wrong.  That’s easy, I have access to the code, I just have to find out where things went wrong.  Eeek.  “processRng” and “processSide”.  Well, it’s pretty obvious what those two functions do.  The first processes a range, and the second processes a side.  Thanks to some helpful comments, I know that it returns a neighbor node, an insertion type, and a “roam”.  What that means?  No idea.  My favorite comment?

“I do not profess to understand any of this, simply applying a patch that others say is good” — ticket:446

After stepping through the code I was finally able to figure out what it was trying to do.  It divides the document into two pieces: it cuts out everything from the current cursor to the end of the document, inserts a break, and then pastes it all back in again.  It sounds like a simple enough idea, but I couldn’t for the life of me figure out what was going wrong.  So again I did what any self-respecting software engineer would do.  I decided to rewrite the algorithm from scratch.

It’s now six months later, and I’ve finally nailed this bug.  Of course, other things happened in between, but that’s always the case.  Let’s take a look at what’s so tough about newlines.

  1. The first difficult problem is determining user intent.  If the user finishes typing a heading and then hits enter, they probably want to start typing text in a new paragraph.  If, however, the cursor is in the middle of two sentences in that heading, they probably want to split it into two headings.  In a table cell, they probably just want a line break.  If they’re editing a definition list, they might want to insert a new definition, a new term, or even split two sentences into two separate definitions or terms.

  2. The second problem is cursor position.  Since a cursor is defined as a pointer to a node and an offset, in the following HTML snippet, the position just before the letter ‘T’ can be targeted with two different cursors.

    <em>Two cursors, one position</em>

    The first would be a pointer to the <em> element with an offset of zero.  This would mean we were pointing at the text node.  The second would be a pointer to the text node with a zero offset.  In this case, we are pointing at the characters of text, and not at a node.

  3. Third is what it means to break a line.  In a list, breaking a line means creating a new list item.  In a pre-formatted block, it means a newline character.  In a table cell, you want a <br> element, and in a paragraph you want a new paragraph.  I won’t even get into how this changes for shift-enter.

  4. The final tough problem is inline elements.  The formatting of the text at a given cursor position is the result of a tree of inline elements that heads up towards the containing block.  When splitting that block, you have to create a duplicate of this tree with all of the same elements, and you have to split each inline element into the parts that come before the cursor, and the parts that come after.
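The fourth problem is easier to see with a toy model, where the inline tree is a chain of (tag, child) pairs instead of real DOM nodes and the cursor is a character offset.  This is only an illustration of the splitting idea, not Xinha’s actual code:

```python
def split_chain(node, offset):
    """Split a chain of inline elements at a text offset, duplicating
    every enclosing tag on both sides of the cut."""
    if isinstance(node, str):
        return node[:offset], node[offset:]
    tag, child = node
    left, right = split_chain(child, offset)
    return (tag, left), (tag, right)

# Splitting <em><strong>hello world</strong></em> just after "hello":
left, right = split_chain(("em", ("strong", "hello world")), 5)
# left  == ("em", ("strong", "hello"))
# right == ("em", ("strong", " world"))
```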

After having finished the majority of this back when I found the bug, I shelved the code and moved on to other things.  With the help of my colleague, Nicholas Bergson-Shilcock, I’ve picked this up again and finished it off.  This means that the new Phoenix Release (0.96) of Xinha will get a bugfix that makes Firefox 3 usable again.

All of the code for this fix is pluggable, and should be usable by anyone needing to break lines in HTML.  The only dependency is on W3C Ranges and DOM Selections.  Luckily, there’s been talk of a cross-platform W3C Range and DOM Selection library.

When the guys over at 37signals released their own super-light-weight WYSIWYG editor WysiHat, they talked about wanting to help with the problem.  Mozile, the Mozilla Inline Editor, actually has one, but it’s too tied to the editor to be useful elsewhere.  TinyMCE goes the other way and has an IE TextRange implementation for Firefox, and I’ve recently been told that FCKEditor has the beginnings of a usable library.  I’ve implemented the tough parts twice now (finding the DOM node and offset of the range’s start and end points) and learned the best way to do it.  For the next release of Xinha (0.97) I hope to bring my work together with the work of all other interested parties and release it as a library.  When we do that, users will finally be able to go back and forth between browsers and not have to fight to edit a document.

Until then…

Blah-blah Blog Post — Getting It Out

No more excuses, I’m publishing this.  This is a post about getting it done; something to help me write.  Something to help get over writer’s block.

I enjoy writing, and I think it helps me to be better organized.  When I started working for TOPP, we were encouraged to blog, which is one of the things I like about working here.  (The wild, swinging-from-the-rafters parties aren’t so bad either.)  Great policy, but it only helped me to keep up blogging for about five minutes (10 if you count a drunken blog post about Calimocho).  The funny thing is that I’ve always wanted to write more, and I’ve often wanted to come back and just do it.

Well, I finally found my voice this summer, and started posting about more technical issues.  (Not that I’ve been prolific.  Unless you count my drafts folder.)  I’ve done some blogging on this site as well as a couple of the work blogs, and I’m really looking forward to a guest post for my favorite political blog, Digifesto.

However, like anyone trying to start a new habit, especially one in an area of non-expertise, I found excuses not to write, or I’d start and never finish.  I knew this was going to happen.  It’s pretty well known that writing is not easy, especially keeping it up regularly.  I thought I’d be clever by getting started on a couple of downtime posts.  That way, when I hit a slowdown, I could just pick one and finish it.  It turns out, however, that for me having a bunch of unfinished posts wasn’t helping.  When I got to a tough point in a post, I’d turn away, or start a new draft for later.  All of that “unfinished” work started to drag on me, and 50 pounds of blog posts really make your muscles sore.

Well, this is a kick in my pants.  Each line you read is one giant boot to my tookus.  (You’re still reading?  Kind of cruel, don’t you think?  What does that say about you?)

Well, I recently finished reading Pragmatic Thinking and Learning, a book about personal productivity (Thanks, Whit!).  (It’s by the same authors as The Pragmatic Programmer, which you might be familiar with.)  Something they spend a considerable amount of ink on (and carriage returns) is how to focus, and how to transition mentally from the part of your mind that sticks up roadblocks to the part that really flows and has the great ideas.

Inside is the story of a client they were trying to get started with morning writing.  (It’s a tool to harness some of the great ideas that you have and forget about, or just plain ignore.) He thought the exercise was a bit ridiculous, and so couldn’t get anything written down.  They told him to just fill a couple of pages with nonsense sentences (Blah blah blah, I’m writing a sentence) to get over the mental block.  Well, it took him a couple of weeks, but he started having some great ideas and got to actually writing.

Luckily for me, I’m not fighting this quite so overtly, but I still seem to get in my own way.  That’s why I’ve decided to take the same advice every time I want to blog.  This post started with a couple of notes (since I already had the topic) but each time I got blocked, I just wrote down a couple of paragraphs of nonsense (Blah blah, software bugs are good, people like using Windows, etc.).  It didn’t matter if I just didn’t have any ideas, or something interrupted my train of thought. I kept belting it out, and managed to make it all the way through.

So, here it is. I hope you find the idea useful as well.

The Wild West of Javascript

Just last week, I was working on the new version of Xinha.  If you don’t know, Xinha’s a web-based document editor.  Embed it in your blog, your web software, so that you and your users can create web documents. Xinha is WYSIWYG, so there’s no need to know HTML.  The Open Planning Project, my employer, uses Xinha to power OpenPlans, which is why I get to work on it.  Xinha is Open Source Software, so we use it, and contribute fixes and enhancements back to the original project.

I was working with Nicholas Bergson-Shilcock, my colleague, on his new plugin for Xinha.  With this plugin, you can finally make great footnotes in your documents.  We were testing his code on Internet Explorer, and we noticed IE acting strange.  Now I don’t mean normal IE strange; IE is the bane of all web developers, so I’m used to strange.  (If you use IE, then please don’t.  I don’t care whether you use Mozilla Firefox, Google Chrome, Opera, Apple Safari, or if you connect to web servers directly with telnet.  Just do all web developers a favor and stop using IE.)

When I say strange, I mean screwy.  Certain places in the document just didn’t seem to exist.  His code used Xinha in different ways than the rest of the plugins, so we were expecting edge cases.  But black holes?  Nobody expects black holes!

Editable documents are still the wild west of web development, and so I shouldn’t be surprised.  Javascript and the DOM have their Wyatt Earp and Doc Holliday, but document editing is too new to have seen the same kind of law enforcement.  When it comes to selection, manipulation, and document processing, the browser differences aren’t well defined, and there are no libraries to abstract the problems away.  Even Peter-Paul Koch (of QuirksMode) told me that “IE’s TextRange is a disaster” when I asked for help.

After a bit of exploring the problem we figured out exactly what happens.  In Internet Explorer, you can’t select the end of a text node (in javascript) if it’s followed by a block node.  That means that for the valid HTML snippet:

  This is my first line
  <p>This is my second line</p>

You can’t touch the end of the first line.  Let me say that again, you can’t touch the end of the first line. What does that mean?  All of you DOM jockeys know how to get a reference to the node, and could manipulate the elements, but that’s no help for the user.

Your user pushes that cursor beyond the event horizon.  They click on your footnote button to bring up a dialog.  You insert the text they type, and BAM!  The cursor’s not where the user left it; you’ve just crapped markup at some other place in the document.  When you do things like that, users start to fear pressing buttons, and we can’t have that.

Why haven’t we seen it before?  Xinha was using pop-ups for dialogs, and they don’t change the original selection.  Now that we’ve moved to a lightbox-style dialog system, we’re moving the cursor about on the page, and we don’t have a way to move it back.

How do we fix it?  Our first step was to test in IE8 beta to see if it was fixed.  No such luck; sometimes I wonder why I’m an optimist. ;-)  My next step was to try out StackOverflow, the new Jeff Atwood / Joel Spolsky software development community.  It’s pretty hot right now, so I thought it would be a good place to get help, but again, no go.  The only answer I got was someone who seemed to remember some comments related to this bug in Javascript.  I tried to find the software he was referring to, but no bugfix there.  FCKeditor doesn’t have a fix.  Neither does TinyMCE. Wikipedia offered up this link to a list of 5000 web-based editors.  I tried them all, and all of the software not using pop-ups had the exact same bug.

So, what can we do?  First, I tried to see if there was a way to trick IE into moving the selection to where we want.  I tried moving the selection left, or right, and then back again.  I tried inserting content, then deleting it, but there was no direct way to solve the problem.  We ended up with three different workarounds, all of which have drawbacks, but are better than no solution at all:

Change the justification
If you change the justification on the current selection, IE modifies the document so that the selection continues to work.  Set it to no justification, and you even get valid HTML! Unfortunately, it re-parents the following element, moving it one node closer to the root of the document.
Insert an empty span
This works by making sure that you are attempting to select the span element, rather than a text node, and element selection actually works in IE.  It craps spans all over the document, though, and even though we try to clean these up, you never know.
Insert a visual cue
The final method works by inserting a visual cue for the user in the form of a little block (□), then selecting it.  If we're about to modify the document, or the user begins to type, the block will be removed automatically.  In any other case, the user will see the block and naturally want to delete it from the text.

All three are written into the code, but we decided to default to the visual cue, because it’s the safest in terms of damaging the markup.  Otherwise, we’ve done everything we could to avoid triggering the error, so we hope it won’t affect too many users; it’s always a trade off.

I wrote this to get some visibility for this problem.  This is probably just some sort of off-by-one error, and IE8 is still in beta, so maybe it can still get fixed.  If not, at least you’ll have a way to work around the problem when you run into it.

Finding the Location of the Current Bash Script

In my work for TOPP, I’m in the middle of some changes to our build system.  We’re using an in-house build tool called fassembler.  Considering that it’s completely specific to our needs, and was written mostly from scratch, it’s got some pretty great features (e.g. color coded output, database initialization).  Our config files are stored in subversion, checked out, and then compared against when there’s an update.  If they differ, you’re prompted to either replace, discard, view the diff, or merge the files.  This is great when you’re running a build.

As the Deployment Manager for openplans.org, however, I’m running tens or hundreds of builds.  My goal is to make building and maintaining a deployment easier, and so I need to be able to run the build unattended, and not in a way that blindly discards or overwrites those changes.

Enter Gentoo Linux.  Gentoo is a distribution of linux where all of the packages are built from source.  On a system-wide level, or for each individual package, build options can be set before installing a piece of software.  A fully installed Gentoo system, whether a server or desktop, can contain hundreds of packages, and users don’t have the time to sit interactively through the building and updating of each package.

Gentoo uses a script called etc-update to handle the merging of configuration files separately from the building of software.  It works by saving the new configurations with a mangled name (e.g. httpd.conf would become ._cfg0000_httpd.conf), building the list of these files, and then allowing the user to diff, overwrite, discard, or merge any of the new configurations.  It allows you to configure which tools to use, defaulting to diff, smerge, and nano.  I’m a vi user, but I have that set at a system level, so that’s picked up by the script.  smerge is just fine for me, but I prefer colordiff (some screenshots), because of its nicely readable output, and so I have that overridden in a configuration file.

etc-update is licensed under version 2 of the GPL, and so we will be redistributing it bundled with the rest of our build software.  Where our situation is different, however, is that we can build in a myriad of locations, and the configuration files are specific to each build.  In Gentoo’s version of the script, portage (their packaging system) is queried for the location of configuration files, but we don’t have the luxury of a system level tool to perform that work for us. I looked at a couple of possible solutions to the problem:

The command line
etc-update already includes a way to pass directories on the command line, but this requires too much typing by the user.
Building a custom script
Easy to type, but it means installing modified versions of the script all over the place, which is just harder to maintain.
Reading from the environment
It requires the user to set the environment somehow, adding extra steps, and is very hacky.
Look in a path relative to the current script
Some magic involved, but if we at least use a configuration file relative to the script, it's relatively straightforward, and the only magic involved is in expecting where the list of directories is saved.

Based on these options, I decided on the last one.  But this all hinges on knowing, during execution, where the script is located.  Well, I know how the script has been called.  That’s available as Arg0 ( $0 ) in the shell, so I figured it would be pretty easy to go from there to the actual location of the script.

Being a python programmer, my first instinct was to code the logic in python.  This wasn’t too tough.  I took advantage of the fact that you can pipe a script to the python shell, but used bash string interpolation to pass the argument hardcoded into the script.  Since it was a multiline program, I used a bash here document to make it readable.  Here’s an example script (that just returns Arg0).

Script location v1
RESULT=`python << EOF
print '$0'
EOF`

It took me about five minutes to put together a final script.  It first checked to see if the script was called with any path information (e.g. relative: ../script.sh or absolute: /home/script.sh).  If not, it looked for the script file in the $PATH environment variable.  Failing that, it tried to join the current directory to Arg0 to find the actual location.  (Python’s os.path.join will discard the base path if the path being joined is absolute.)
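The os.path behavior relied on here is easy to verify from a python shell:

```python
import os.path

# os.path.join discards the base path when the second argument is absolute,
# and normpath collapses any relative segments in the result.
print(os.path.join("/home/user", "/usr/bin/script.sh"))  # /usr/bin/script.sh
print(os.path.normpath("/home/user/../script.sh"))       # /home/script.sh
```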

This script worked, and was easy to read for python programmers.  It bothered me a bit, however, because: 1) I was embedding a python script into a bash script, which could be rather confusing, and 2) it was 32 lines long, not exactly the shortest of solutions.  This is that script:

Script location v2

# Python script to figure out where this file is located.
HERE=`python << EOF

import os
import sys

# The path environment variable as a list.
path = os.environ['PATH'].split(os.pathsep)

# How the script was called.
arg0 = '$0'

# The current working directory.
working_dir = os.getcwd()

# If the script was called in any way that includes path information
# (relative or absolute), we will not look in the system path.
search_in_path = (arg0 == os.path.basename(arg0))

if search_in_path:
    for dir in path:
        if os.path.exists(os.path.join(dir, arg0)):
            print os.path.join(dir, arg0)
            sys.exit(0)

fullpath = os.path.normpath(os.path.join(working_dir, arg0))
if os.path.exists(fullpath):
    print fullpath
EOF`


My next thought was to re-implement the script algorithm natively in bash.  Unfortunately, bash doesn’t have the python standard library at its disposal.  Thankfully, however, there are a number of commands that allow us to achieve more or less what I wrote above.  I use readlink -f /basepath/../somepath to convert two joined paths into a normalized path.  The only problem with this is that when we execute a symlink to a shell program, it returns the location of the actual file and not the symlink.  I’m not really sure if this is a problem that merits any worrying, but I could imagine having a single “source” script, and symlinking it into different environments.  The second command I needed to replicate was os.path.basename (used to strip the directory from the script’s full path); luckily the basename program handles this identically.

I ran into one final problem in interpreting this algorithm in bash, and that was splitting the $PATH variable.  Normally the for..in control structure in bash splits a string by spaces.  We could use sed or tr to convert the colon separated path into a space separated path, but that’s going to run into problems when you have spaces in your directory names.  Here’s where the $IFS variable saves us.  $IFS tells bash what characters to use to split a string into a set.  For our purposes, we temporarily save $IFS and set it to a single colon.  This allows you to perform a simple “for DIR in $PATH”.  If you’ve got colons in your directories, well hey, you could have used python… ;-) Here’s that script:

Script location v3
# The same algorithm implemented almost purely in bash
if [ "$0" == "`basename $0`" ]; then
    # The IFS internal variable tells bash how to split a string into
    # variables for a list.  Since the PATH variable is colon separated, we
    # will temporarily change this variable in order to interpret the path.
    export SAVED_IFS="${IFS}";
    export IFS=":";

    for DIR in $PATH; do
        if [ -f "${DIR}/$0" ] || [ -L "${DIR}/$0" ]; then
            THERE="${DIR}/$0" ;
            break ;
        fi
    done

    # We restore the saved IFS variable to return string handling to normal.
    export IFS="${SAVED_IFS}"
else
    THERE=`readlink -f $0` ;
fi

The same script is 20 lines in bash, which is an improvement.  At this point I was happy enough with the result that I started to embed it into our local copy of etc-update.  In doing so, however, I ran across a usage of the type built-in command that piqued my interest.  It was being used to test for the existence of egrep on the system.  It turns out that "type -p command" looks for a file-based command and prints its path if it exists.  I figured that this could be used in an even shorter bash-only script, and wrote a test script to try it.  In checking out the various permutations (via a symlink, from the path, etc.) I found out something interesting: when you invoke a script found through the path, bash sets $0 to the full path.  "Great!" I thought, combine that with readlink from above, and I have a one-liner.

And then it hit me.


From the which man page:

Which takes one or more arguments. For each of its arguments it prints to stdout the full path of the executables that would have been executed when this argument had been entered at the shell prompt. It does this by searching for an executable or script in the directories listed in the environment variable PATH using the same algorithm as bash(1).

The captain obvious award of the day goes to me.  which $0 will always return the full path, as bash sees it, of the script file.

Greylisting for Comments

Greylisting is an interesting idea that comes from the world of mail servers.  It's an ingenious system used to combat SPAM, and at least on my mail server, it's 99% effective.  It blocks SPAM so well for three reasons:

  1. The internet protocol used for sending mail (SMTP) is quite complex.  Most spammers don't have the time to write complete mail servers; they instead take shortcuts to cover the majority of cases.
  2. Spam is about turning computer time into money.  Spammers send out millions of mails per day, so if you increase the cost (in time) of sending mail, then you make spamming less attractive.
  3. While both whitelisting and blacklisting require humans to maintain lists of good and bad servers, greylisting is completely automated.  Since it’s automated, it’s easy to use.

The way greylisting works is by keeping a database of people sending mail to your server.  For each mail it receives, it looks at three things:

  1. The person sending the mail
  2. The person receiving the mail
  3. The computer performing the delivery

If the server doesn’t already recognize all three of these properties, it responds with an error that tells the sender to try back a little bit later.  Real email servers will try again shortly, usually in less than 15 minutes.  A good number of spammers are stopped right here because their spam tools don’t handle this case.  When the real server tries again, this time the mail will just pass right through and be delivered.

That’s it!  That’s the magic of it all.  For any mail coming from people that your users already know, there’s no wait; they don’t see any difference, and mail just keeps coming in.  The first time someone sends a mail to your users, there will be a short wait, normally less than 15 minutes, and since mail isn’t guaranteed to be immediate, most people don’t notice the difference.
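The mechanics above can be sketched in a few lines of Python.  This is only an illustration of the triplet check, not any particular mail server's implementation; the class name and the 15-minute retry window are my own choices:

```python
import time

RETRY_WINDOW = 15 * 60  # seconds a new sender must wait before retrying


class Greylist(object):
    def __init__(self):
        # Maps (sender, recipient, client_ip) -> time we first saw it.
        self.seen = {}

    def check(self, sender, recipient, client_ip, now=None):
        """Return True to accept the mail, False to ask for a retry."""
        now = time.time() if now is None else now
        triplet = (sender, recipient, client_ip)
        first_seen = self.seen.get(triplet)
        if first_seen is None:
            # Never seen this triplet: record it and defer the mail.
            self.seen[triplet] = now
            return False
        # Accept once the retry window has elapsed; known senders pass.
        return now - first_seen >= RETRY_WINDOW


greylist = Greylist()
# The first delivery attempt is deferred...
print(greylist.check("a@example.com", "b@example.org", "203.0.113.5", now=0))
# ...a legitimate server retries 15 minutes later and gets through.
print(greylist.check("a@example.com", "b@example.org", "203.0.113.5", now=900))
```

A real server would keep the database on disk and expire old entries, but the accept/defer decision is no more complicated than this.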

Now on top of greylisting, people often throw in tarpitting.  A tarpit is a technique that makes a server respond slowly, as if it were under a heavy load.  When combined with greylisting, this means that each mail coming from a new source costs the sender a whole lot more in computer time.  In the case of someone who will be sending you mails regularly, this one-time cost is quickly amortized, costing the sender nothing in the long run.  Spammers, however, who depend on sending millions of unique mails, see this cost with each email they send, and so your server becomes an unattractive target.
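Layered on the greylist check, a tarpit amounts to stalling before you even issue the "try again later" error.  A sketch, where the delay value and reply strings are illustrative rather than taken from any real SMTP server:

```python
import time

TARPIT_DELAY = 5  # seconds to stall an unknown sender (illustrative value)


def respond(is_known_triplet, delay=TARPIT_DELAY, sleep=time.sleep):
    """Answer known senders immediately; stall unknown ones first."""
    if not is_known_triplet:
        # Burn the sender's time before issuing the temporary error.
        sleep(delay)
        return "451 Temporary failure, try again later"
    return "250 OK"


# Known senders pay nothing; new senders pay the delay once.
print(respond(True))
print(respond(False, sleep=lambda s: None))  # skip the real sleep in a demo
```

The sleep is injectable here only so the demo runs instantly; a real server would genuinely block the connection for the delay.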

How does this relate to comments, you may ask?  Well, I’ve written a greylisting/tarpitting Django app for this and patched the code for this blog to use it.  For now, you can download it here: http://douglas.mayle.org/files/greylist.tgz

If you’d like the patch to enable this for your byteflow blog, it’s available at Byteflow Trac Ticket #93

OpenID Server

So the next step in my setup plans has been to get the OpenID server correctly working on my blog.  This wasn’t the easiest, as there is no documentation, but I created a trusted site root via the admin interface and wrote a small patch to serve the correct header.  Talk about fast response: it’s already been applied to trunk :-)  Here’s the relevant ticket:


Starting a New Blog With Byteflow

I’ve just set up this new Byteflow installation.  There was some work involved, but I’ve learned what was necessary, and I should be able to help out some others in the process.  I’ve written a Gentoo ebuild file that I intend to submit to Gentoo.

The documentation is a bit sparse, but I look forward to getting this set up as my OpenID server and allowing users to comment by OpenID.