Webcomics

Somebody asked me to make a list of webcomics that I subscribe to (using Feedly these days) and I started writing little descriptions next to each one. I figured I’d just place it here in case anyone else is interested. These are the webcomics that have made the cut after having tried out and abandoned many more. Here they are, in no particular order (well, the order of the tabs in Chrome at the moment). I read all of these but some I recommend more whole-heartedly than others (recommended comics have a * next to them).

  • * Hark! A Vagrant – The panels require basic knowledge of literature and history, but it’s Kate Beaton’s art style that usually clinches the joke. She draws insanity and idiocy extremely well.
  • * Left Handed Toons – The conceit is that the right-handed artist draws with his left hand, but he’s been doing it so long that it doesn’t seem like much of a hindrance anymore. The comic relies on brutally dumb puns and literalisms, as well as a recurring cast of characters like Fridaynosaur, Whale, and General Twobabies.
  • Penny Arcade – Mainly of interest to people who play video games. Obscure references to recent video games and lots of gross-out humor.
  • PhD – Only occasionally funny. This is basically Cathy for academics. The jokes are all: graduate students are overworked; dissertations are stressful; advisors are clueless.
  • * Pictures for Sad Children – Grim strips that follow a depressive logic where events often take a surreal turn but nobody acts surprised. People end up inside dogs and idly discuss what to do, etc. Pretty great if you have the fortitude for this kind of thing. Not updated these days.
  • * Poorly Drawn Lines – Just started reading this, but so far the gags are good, absurd but not particularly dark.
  • *  Saturday Morning Breakfast Cereal – SMBC’s Zach Weiner is on par with Gary Larson and xkcd’s Randall Munroe in terms of raw creativity. He does both one-panel gags and longer multi-page stories. The comics generally have a science and science fiction bent with a solid grasp on related philosophical issues. Weiner is also a significantly better illustrator than most webcomic artists.
  • Sheldon – This is more of a traditional syndicated comic, but it occasionally has some interesting storylines.
  • * Subnormality – These are irregularly updated bizarrely dense comics that typically take up several screens. The comics are extremely dialogue-heavy, to the point where I frequently skip them not because I dislike the stories but simply because I don’t have the time to read them all. The stories typically take the form of externalized inner dialogues about insecurity, projection, prejudgment, etc.
  • * Gutters – One-off comics about the comic book industry by comic book artists.
  • The Trenches – Episodic strip about game testers.
  • * Wondermark – Hilarious comic assembled from old illustrations to create absurd hybrid 19th- and 21st-century jokes. Has an alt text joke.
  • * xkcd – Should need no introduction. Alt text joke.
  • * Girls with Slingshots – A well-drawn sitcom about two girls who do polyamory. Sexy but not explicit.
  • * Cyanide & Happiness – Incredibly crass jokes that could be fairly criticized as promoting all kinds of bad culture, but I would be lying if I said I didn’t find many of them to be amusing.
  • * Dinosaur Comics – Dinosaur Comics is what it is: goofy (secretly rather intelligent) rambling about language and culture overlaid on an invariant set of six panels.
  • * Dilbert – Just because Dilbert cartoons are a cubicle cliche doesn’t mean that they don’t generally speak a certain truth. It’s the same stuff over and over about pointy haired bosses and lazy coworkers, but the punchline is usually pretty fresh and often surprisingly edgy.
  • Diesel Sweeties – People seem to love this comic. It’s sufficiently interesting but doesn’t blow me away. It’s mainly jokes about killer robots and coffee.
  • * Crocodile in water tiger on land – Commentary about Indian society, delivered in a self-satirizing manner by a cast of Indian archetypes that I have to partially construct from context (e.g., the hipster, the religious zealot, the business fat cat, etc.). I don’t totally follow the issues being discussed, but it gives me some insight into Indian society and provides a valuable example of cultural self-criticism.
  • * Cat and Girl – I can’t say that Cat and Girl is usually or even often laugh out loud funny. In fact I can’t say that I generally 100% understand Dorothy’s comics. The strips tend to reward taking time to do an analysis of the words and symbolism to derive something like a thesis statement. The thesis statement is often about the nature of authenticity as an ineffable criterion for value in the context of postindustrial society and the age of digital reproduction and social networks. This will certainly not be everyone’s cup of tea, but it definitely is mine.
  • * Blaster Nation – This is another geeky narrative sitcom. Digging the story so far.
  • * Bad Machinery – Scooby-doo-style mystery stories set in England. The fun though is in the hilarious banter between the kids.
  • * Abstruse Goose – Similar to xkcd, jokes about being geeky with a focus on math and science.
  • * A Softer World – Three-image strips with some text that is generally a funny-sad statement about love and loss.
  • * Hijinks Ensue – Violent-gross jokes about geek dude culture.
  • * Scenes from a Multiverse – Strips rotate through several universes. The most popular ones get to come back.
  • * Perry Bible Fellowship – Wonderfully evil, beautifully drawn comic. Every punchline is designed to disturb you. Not updated these days.
  • * Achewood – I’m not going to bother describing this comic. It’s about some dog people, and the writer Chris Onstad has a wonderful ear for spoken English. Sadly not updated for more than a year.

Helpful commands

As a follow-up to my last post, here are some commands that I use throughout the day. They are admittedly nothing special, but they help me out.

.bashrc aliases:

alias grc='git rebase --continue'
alias gca='git commit -a --amend'
alias gs='git status -sb'
alias gd='git diff'
alias gsn='git show --name-only'

The one worth explaining is gca. It stages and commits everything into the previous commit; I use it constantly to keep adding work to my WIP commits. One thing to watch out for: using gca to finish a merge-conflict fix during a rebase will mess things up, because you’ll end up folding everything into the commit before the conflicted one. You want to do grc instead.
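For example, resolving a conflict in the middle of a rebase looks like this (a minimal sketch):

# a conflict has stopped the rebase
# ...fix the conflicted files...
git add .
grc   # i.e. git rebase --continue; gca here would amend the wrong commit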

scripts:

force_push — I use this to automate the process of updating my remote branch and most importantly to prevent me from force pushing the one branch that I must NEVER force push.

#!/usr/bin/env bash
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
# never force push master
if [ "$CURRENT_BRANCH" = "master" ]; then
  echo "YOU DO NOT WANT TO DO THAT"
  exit 1
fi

echo "git push origin $CURRENT_BRANCH --force"
read -p "Are you sure? [Yn] "
if [ "$REPLY" != "n" ]; then
  git push origin "$CURRENT_BRANCH" --force
fi

rebase_branch — There’s not really a lot to this, but I use it reflexively before I do anything.

#!/usr/bin/env bash
git fetch
git rebase -i origin/master

merge_to_master — I do this when I’m done with a branch. This makes sure that there will be a clean fast-forward push. Notice how it reuses rebase_branch.

#!/usr/bin/env bash
rebase_branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "git checkout master"
git checkout master
echo "git pull origin master"
git pull origin master
echo "git merge $CURRENT_BRANCH"
git merge "$CURRENT_BRANCH"

git-vim — This one is still a bit of a work in progress, but the idea is to grab the files you’ve changed in Git and open them in separate tabs inside Vim. You can then run it with git vim, which I alias as gv.

#!/usr/bin/env ruby
 
# uncommitted files
files = `git diff HEAD --name-only`.split("\n")
if files.empty?
  # no uncommitted changes, so fall back to the files touched by the last (WIP) commit;
  # --pretty=format: suppresses the commit header so only filenames come through
  files = `git show --name-only --pretty=format:`.split("\n").reject(&:empty?)
end
 
system("vim", "-p", *files) # pass each file as its own argument so paths with spaces survive

Of course, all these scripts need to be put somewhere in your executable path. I put them in ~/bin and include this location in my path.
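For example, the relevant .bashrc lines might look like this (a minimal sketch; ~/bin is just where I keep them). The gv alias works because Git will run any executable named git-<something> that it finds on your PATH:

export PATH="$HOME/bin:$PATH"
alias gv='git vim'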

So my workflow would look like this:

git checkout -b new_branch
# hack hack hack
git commit -a
# hack hack hack
gca
# hack hack hack
gca
# all done now
rebase_branch
# whoops a merge conflict
# resolve it
git add .
grc
# Time to get this code reviewed on Github
force_push
# Code accepted, gonna merge this
merge_to_master

Git workflow

In my last post I described how at my work we use code review feedback to iteratively improve code. I want to describe how Git fits into this process, because this is probably the biggest change I had to make to my preexisting workflow. Basically I had to relearn how to use Git. The new way of using it (that is, it was new to me) is extremely powerful and in a strange way extremely satisfying, but it does take a while to get used to.

Importance of rebasing

I would describe my old approach and understanding as “subversion, but with better merging”1. I was also aware of the concept of rebasing from having submitted a pull request to an open source project at one point, but I didn’t use it very often for reasons I’ll discuss later. As it turns out understanding git rebase is the key to learning how to use Git as more than a ‘better subversion’.

For those who aren’t familiar with this command, git rebase <branch> takes the commits that are unique to your branch and places them “on top” of another branch. You typically want to do this with master, so that all your commits for your feature branch will appear together as the most recent commits when the feature branch is merged into master.

Here’s a short demonstration. Let’s say this is your feature branch, which you’ve been developing while other unrelated commits are being added to master:

[Image: Feature branch with ongoing mainline activity]

If you merge without rebasing you’ll end up with a history like this:

[Image: History is all jacked up!]

Here is the process with rebasing:

# We're on `feature_branch`
git rebase master # Put feature_branch's commits 'on top of' master's
git checkout master
git merge feature_branch

This results in a clean history:

[Image: Feature branch commits on top]

Another benefit of having done a rebase before merging is that there’s no need for an explicit merge commit like you see at the top of the original history. This is because — and this is a key insight — the feature branch is exactly like the master branch but with more commits added on. In other words, when you merge it’s as though you had never branched in the first place. Because Git doesn’t have to ‘think’ about what it’s doing when it merges a rebased branch it performs what is called a fast forward. In this case it moved the HEAD2 from 899bdb (More mainline activity) to 5b475e (Finished feature branch).
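If you want to be explicit about it, you can even tell Git to refuse anything other than a fast forward (a small sketch using the branch name from the example):

git checkout master
git merge --ff-only feature_branch   # succeeds only if master can simply be moved forward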

The above is the basic use case for git rebase. It’s a nice feature that keeps your commit history clean. The greater significance of git rebase is the way it makes you think about your commits, especially as you start to use the interactive rebase features discussed below.

Time travel

When you call git rebase with the interactive flag, e.g. git rebase -i master, git will open up a text file that you can edit to achieve certain effects:

[Image: Interactive rebase menu]
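For reference, the file Git opens looks roughly like this (hashes and messages invented for illustration; the exact comment legend varies by Git version):

pick 1a2b3c4 First commit on my branch
pick 5d6e7f8 Second commit on my branch
pick 9a0b1c2 Third commit on my branch

# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit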

As you can see there are several options besides just performing the rebase operation described above. Delete a line and you are telling Git to disappear that commit from your branch’s history. Change the order of the commit lines and you are asking Git to attempt to reorder the commits themselves. Change the word ‘pick’ to ‘squash’ and Git will squash that commit together with the commit on the preceding line. Most importantly, change the word ‘pick’ to ‘edit’ and Git will pause the rebase and drop you into the history just after the selected commit.

I think of these abilities as time travel. They enable you to go back in the history of your branch and make code changes as well as reorganize code into different configurations of commits.

Let’s say you have a branch with several commits. When you started the branch out you thought you understood the feature well and created a bunch of code to implement it. When you opened up the pull request the first feedback you received was that the code should have tests, so you added another commit with the tests. The next round of feedback suggested that the implementation could benefit from a new requirement, so you added new code and tests in a third commit. Finally, you received feedback about the underlying software design that required you to create some new classes and rename some methods. So now you have 4 commits with commit messages like this:

[Image: A messy commit history]
  1. Implemented new feature
  2. Tests for new feature
  3. Add requirement x to new feature
  4. Changed code for new feature

This history is filled with useless information. Nobody is going to care in the future that the code had to be changed from the initial implementation in commit 4 and it’s just noise to have a separate commit for tests in commit 2. On the other hand it might be valuable to have a separate commit for the added requirement.

To get rid of the tests commit all you have to do is squash commit 2 into commit 1, resulting in:

  1. Implemented new feature
  2. Add requirement x to new feature
  3. Changed code for new feature
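Concretely, the squash happens in the interactive rebase todo: run git rebase -i master and change commit 2’s line from pick to squash (hashes invented for illustration):

pick   1a2b3c4 Implemented new feature
squash 5d6e7f8 Tests for new feature
pick   9a0b1c2 Add requirement x to new feature
pick   3c4d5e6 Changed code for new feature

When you save and quit, Git melds the second commit into the first and asks you to combine their commit messages.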

New commit 3 has some code that belongs in commit 1 and some code that belongs with commit 2. To keep things simple, the feature introduced in commit 1 was added to file1.rb and the new requirement was added to file2.rb. To handle this situation we’re going to have to do a little transplant surgery. First we need to extract the part of commit 3 that belongs in commit 1. Here is how I would do this:

# We are on HEAD, i.e. commit 3
git reset HEAD^ file1.rb
git commit --amend
git stash
git rebase -i master
# ... select commit 1 to edit
git stash apply
git commit -a --amend
git rebase --continue

It’s just that easy! But seriously, let’s go through each command to understand what’s happening.

  1. The first command, git reset, is notoriously hard to explain, especially because there’s another command, git checkout, which seems to do something similar. The diagram at the top of this Stack Overflow page is actually extremely helpful. The thing about Git to repeat like a mantra is that Git has a two-step commit process: staging file changes and then actually committing. Basically, when you run git reset REF on a file it stages the file’s content as it was at that ref. In the case of the first command, git reset HEAD^ file1.rb, we’re saying “stage the file as it looked before HEAD’s change”; in other words, revert the changes we made in the last commit.
  2. The second command, git commit --amend commits what we’ve staged into HEAD (commit 3). The two commands together (a reset followed by an amend) have the effect of uncommitting the part of HEAD’s commit that changed file1.rb.
  3. The changes that were made to file1.rb aren’t lost, however. They were merely uncommitted and unstaged. They are now sitting in the working directory as an unstaged diff, as if they’d never been part of HEAD. So just as you could do with any diff you can use git stash to store away the diff.
  4. Now I use interactive rebase to travel back in time to commit 1. Rebase drops me right after commit 1 (in other words, the temporary rebase HEAD is commit 1).
  5. I use git stash apply to get my diff back (you might get a merge conflict at this point depending on the code).
  6. Now I add the diff back into commit 1 with git commit --amend -a (-a automatically stages any modified files, skipping the git add . step).

This is the basic procedure for revising your git history (at least the way I do it). There are a couple of other tricks that I’m not going to go into detail about here, but I’ll leave some hints. Let’s say the changes for the feature and the new requirement were both on the same file. Then you would need to use git add --patch file1.rb before step 2. What if you wanted to introduce a completely new commit after commit 1? Then you would use interactive rebase to travel to commit 1 and then add your commits as normal, and then run git rebase --continue to have the new commits inserted into the history.
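Here is a sketch of that last trick, inserting a brand-new commit right after commit 1 (the file name is just a placeholder):

git rebase -i master
# ...change commit 1's line from 'pick' to 'edit', save and quit...
# make your changes, then commit them as usual
git add some_new_file.rb
git commit -m "A brand-new commit inserted after commit 1"
git rebase --continue   # replays the rest of the branch on top of the new commit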

Caveats

One of the reasons I wasn’t used to this workflow before this job was that I thought rebasing was only useful for the narrow case of making sure that the branch commits are grouped together after a merge to master. My understanding was that other kinds of history revision were to be avoided because of the problems they cause for collaborators who pull from your public repos. I don’t remember the specific blog post or mailing list message, but I took away the message that once you’ve pushed something to a public repo (as opposed to what’s on your local machine) you are no longer able to touch that history.

Yes and no. Rebasing and changing the history of a branch that others are pulling from can cause a lot of problems. Basically, any time you amend a commit message, reorder commits or otherwise alter a commit, you create a new object with a new SHA reference. If someone else naively pulls from your branch after having pulled the pre-revision history, they will get a weird set of duplicate code changes and things will get worse from there. In general, if other people are pulling from your public (remote) repository you should not change the history out from under them without telling them. Linus’ guidelines about rebasing here are generally applicable.

On the other hand, in many Git workflows it’s not normal for other people to be pulling from your feature branch and if they are they shouldn’t be that surprised if the history changes.  In the Github-style workflow you will typically develop a feature branch on your personal repository and then submit that branch as a pull request to the canonical repository. You would probably be rebasing your branch on the canonical repository’s master anyway. In that sense even though your branch is public it’s not really intended for public consumption. If you have a collaborator on your branch you would just shoot them a message when you rebase and they would do a “hard reset” on their branch (sync their history to yours) using git reset --hard remote_repo/feature_branch. In practice, in my limited experience with a particular kind of workflow, it’s really not that big a deal.
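For the collaborator, that resync might look something like this (the remote and branch names are placeholders):

git fetch your_fork
git checkout feature_branch
git reset --hard your_fork/feature_branch   # throw away the old history in favor of the rewritten one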

Don’t worry

Some people are wary of rebase because it really does alter history. If you remove a commit you won’t see a note in the commit history that so-and-so removed that commit. The commit just disappears. Rebase seems like a really good way to destroy your own and other people’s work. In fact you can’t actually screw up too badly using rebase, because every Git repository keeps a log of the changes that have been made to the repository’s history, called the reflog. Using git reflog you can always back out misguided recent history changes by returning to a point before you made them.
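A minimal recovery sketch (the reflog entry number is illustrative; pick whichever entry predates the damage):

git reflog
# say HEAD@{5} is where the branch pointed before the botched rebase
git reset --hard HEAD@{5}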

Hope this was helpful!

---

  1. Not an insignificant improvement, since merging in Subversion sucks.[back]
  2. Which I always think about as a hard drive head, which I in turn think about as a record player needle.[back]

Work

My new work is great1. The main thing I like about it is the consummate professionalism of the team. Everyone I work with is interested in improving their craft and is eager to engage in discussions about software principles, code quality, development processes and tools. In general there is a noticeable group ethos that seems guided by the following set of principles:

Take the time to make your work as high-quality as possible given the scope of the task at hand. This means writing tests and undertaking refactorings to manage complexity in the normal course of development. It also means often having to partially or fully revisit a design based on new findings that come out during development. This may seem like a recipe for low productivity but our team is pretty consistent in its output: to use a decidedly fake and wobbly measurement, each team member seems capable of producing about two significant user-facing features per week. I know from personal experience that the alternative, programming to hard deadlines and arbitrary deployment cycles2, may at times produce the superficial appearance of a greater amount of output (“With Monday’s release we closed ten stories and five bugs!”) but this is just a loan taken out against the day when you have to devote entire “sprints” to fixing the earlier rushed implementations. Taking the requisite time with each task is consonant with the recognition that growth in complexity in a system has to be managed, either all at once in a great “redesign” crusade, or in small thoughtful efforts as you go.

Even sufficiently good work can be improved dramatically through constructive criticism and iterative development. This was something I’ve had considerable trouble with at times, as a person who avoids conflict and criticism at all cost. The key to being able to accept the criticism (which normally comes in the form of code reviews on Github), I’ve found, is to focus on the improvement to the final product brought about by incorporating the criticism. During each round of code review feedback instead of fighting the suggestions I try to let go of my ego (not at all easy) and visualize how the code will look after I implement the changes; I’m generally much happier with the final code than I am with the initial submission. I also try to approach criticism as containing information about what I should do and what I should avoid in the future, so that even when I am receiving brutal criticism about the quality of my code I receive some comfort from the fact that I am learning how to avoid similar criticism in the future3.

There’s more than one way to do it, but you should be able to articulate the reason for why you’re doing something a certain way, even if the articulation of that reason is “because that’s the way I learned it initially and there’s no compelling reason to change”. This of course opens you up to the possibility that somebody else has a positive argument for a certain practice, in which case it’s hard to argue for the original practice purely on the basis of apathy. Often the process of asking oneself why one prefers one practice to another ends up extracting motivations and values that are more interesting than the actual issue of practice in question. As a team we discuss many issues great and small, from high-level questions about design patterns and object composition to nitty-gritty questions about code block indentation. Sometimes these discussions result in some binding decision regarding style or best practice, but on other occasions we’ve articulated dueling reasons and concluded that the arguments for both are strong enough that both practices or styles are valid for different contexts.

Work to improve the product and support your teammates. I know this one sounds like a cheerlead-y cliché, but all I mean by this is that the primary motivator for working hard is to make the application and its codebase better, both because these support one’s self-interest of having the company succeed and remaining employed, but also because you see your teammates working hard and want to make a similar contribution. The codebase and app are a communal creation and as one works with it one begins to feel a sense of loyalty to the whole, as well as a sense of pride for the sections of code that you’ve played a major role in shaping. This is worth mentioning mainly because it stands in contrast to other motivational systems, like those that depend on fear of punitive action or explicit intra-team competition. There are still elements of fear and competition at work in our system, but they are sublimated into cooperative and high-productivity behaviors4.

This ethos doesn’t come from nowhere. It is the product of specific values that are held and promoted by my boss, and it is sustained through a combination of the personalities of the people he chose to hire and the practices (often enforced/encouraged by technological tools) he put in place. It is an impressive accomplishment that is easier to describe than it would be to replicate.

Anyway, it’s a pretty cool job.

---

  1. New as in I’ve been there for ~6 months [back]
  2. These deadlines and cycles often end up being Procrustean beds for feature requirements.[back]
  3. Did I mention that I don’t like criticism?[back]
  4. That is, people compete to be the most helpful![back]

I should update that my current testing workflow is the one described here. At my new work the testing suite is so large (a good problem to have!) that it’s not really feasible to rerun all or even a large subset of the tests on every write. I also realized that you end up mentally waiting for the full suite to finish and send its notification before continuing your work. Now I test just the file or just the test I’m working on, using the rails.vim commands and the turbux plugin to run the test in a small tmux pane. Then I run the full test suite on CI, which can take 20-40 minutes.

Local Memes

I have meant to undertake this project for a long time: the compilation of a list of “local memes” in my household. As I get older (and older and older) I become more and more aware of this condition of being composed out of bits and pieces that originated in other people. I am also aware of various recurring gags, puns, patterned exchanges, etc. – memes – that gradually died out through disuse or replacement. Every once in a while my wife and I will recall some old joke that we used to do and we’ll momentarily resurrect it, like the dinosaurs in Jurassic Park. We flatter ourselves that we have an unusual number of these active or recalled memes stored up, but probably most couples have equally rich archives. Anyways, ours are special to us, and I’d like to partially preserve them by sharing them now. I’ll just update this post as I think of more.

I’ll be right Barack Obama. Pretty self-explanatory. Also, Barack to the future, etc.

Thass nice. Thass real nice. – Slurring lecher #1 in the bar scene in the terrible movie Eve of Destruction. Denotes lustful appreciation of something (in the meme, usually something innocent like food or drink).

snoring/narcolepsy — This is where you decide that the point you’re trying to make is too boring, obscure and convoluted to actually finish and you stop in mid-sentence and nod off. The use of this has decreased because it was really off-putting for my wife and she basically forbade it.

“Oh God” – this was my sort of Joey Lawrence “Whoa” during college. I also briefly tried to promote my own pre-packaged meme: embrace the chaos/channel the void.

revolution in your stomach — when you mix foods causing gastrointestinal distress. From a Salvadoran family member. Also the phrase Don’t be lazy.

“What? What did you say?” — I tried to find a relevant video clip from the Ozu movie Good Morning to demonstrate this very simple and satisfying gag, but looks like you’ll have to check out the movie yourself.

cat names — We have two cats, Mackerel/Mack and Ruby. Ruby is variously Poofy, Miss Fluffy-shanks, and Rubifer Jenkins. Mackerel is usually just Mack, but his friendly, chill demeanor often conjures up declarations that Mack is a buddy and that Mack is a mackimal.

monkey dancing — A monkey dance is a disrespectful display where you show your interlocutor that you are not paying attention to what they are saying because you are no longer a rational being with language. Traditionally you literally pretend to be a monkey by putting your arms above your head and bouncing back and forth on your feet while sticking your tongue out, but any elaborate discourse-destroying dance qualifies.

hot socks – This one is based on the Lords of Acid song “Rough Sex”, where the refrain is a litany of “deep sex, hard sex”, etc. So you replace the word sex with socks and try to come up with descriptions of socks that sound dirty but still make sense in the context of socks. “wet socks, hot socks, smelly socks”, etc. I actually can’t remember how we kept this one going long enough to be interesting, but we came up with quite a few.

“Bob” Damon – If you’re watching a movie where an actor looks like a more famous actor, you come up with a fake first name and assert that the actor is the more famous actor’s sibling struggling in obscurity.

“And it looked just like a checkerboard!” – The punchline to a ridiculous true story once told to me by a friend. When uttered (in a high-pitched, incredulous voice) it denotes the discovery of an absurd and wonderful fact.

haspiration – This is where you append h’s before all beginning vowels in words and all silent h’s, as in “That’s hannoying” and “hu-wat hare hu-you doing?”

Y’see… Rudy — apocryphal quoting from the Bill Cosby Rap, which doesn’t actually include the name Rudy. Denotes when you’re saying something self-consciously condescending.

Stompy McStomperson — This is apparently me. Related to Messy McMesserson, which is a pretty universal meme.

“_ Joe, everyone’s favorite Joe.” – e.g., Hey look it’s Self-pitying Joe, everyone’s favorite Joe. This came from my roommate in college.

Sometimes a man has to do things that don’t make any sense, nevertheless he must do them, because he is a man. – half-remembered paraphrase from the great movie Fighting Elegy.

“I look good, I smell good, I feel good … I’m a cat!” – half-remembered line from Red Dwarf. Also, “The only thing that can kill a vindaloo: a lager.” Also, from The IT Crowd: “You’re making it go back in!”

deathwork — Term from an eccentric book by the cranky sociologist Philip Rieff, which he uses to describe everything from Piss Christ to Ulysses to this image of a person made out of vegetables. A way to flippantly dismiss something. Related to the phrase The death of meaning.

You’re probably going to die. Used by a friend a lot, the joke being that instead of trying to reassure someone that their neurotic fears aren’t real, you just agree with them that they’re probably right in their catastrophic estimates. This friend also says Happy family, happy family whenever there’s slight social conflict, which is probably from something. Also, the whole concept of ice cream cake being inherently more desirable than regular cake, which is apparently from Modern Family.

basic portalology — Recent meme from playing through Portal 2 with same friend, as in That’s basic portalology!, exclaimed when you realize you’ve been overthinking a level.

I think there was something wrong with the beer. The joke being that the beer being skunked is what made you sick, which is why you should always drink out of heinecans.

It’s important. — Said of things that aren’t important. See note.

an all too possible future – a reference to the offensively ridiculous Heinlein book Farnham’s Freehold, read by my friend and me and, thankfully, few other living people. Also, with the same friend, the principle that only one person can take off his shirt in a room at a time.

I’m sorry that happened to you — tepid expression of sympathy for an unfortunate event that is either extremely mild or entirely self-induced and preventable.

“Pretty. Pretty. Pretty good” – From Curb Your Enthusiasm, of course.

Uuuuuuuuuuuuumm – Open-ended contemplation sound that a friend’s five-year-old daughter would make. Also from same little girl: “Poop on your head!” And from that friend: “Wake up!”

“Train.” We live next to a railroad, so when the train comes by we translate what its warning horn is saying, which is clearly: “Train. Train. Traiiiiiin. Train.”, etc.  

Goodbye, Autotest

Update (10/15/2012): This isn’t how I’m doing it now. See this aside for my current workflow.

(That Latour article is in a half-finished limbo state, but I’ll get around to posting it eventually).

I wrote in a recent post about how easy it is to configure Autotest these days. And Autotest has been an essential part of my toolset as I’ve developed my humble todo app, Method Todo. I’ve only been using it to run my Rspec stories (and not Cucumber features, since they take too long). Now that I’m adding another testing discipline, testing javascript code with Jasmine, I find that there’s an even simpler way to run all your tests — specs, features and Jasmine specs — continuously and in the background.

I decided to use Guard because I couldn’t get Jasmine to access the necessary asset pipeline files to run Javascript tests, I guess because those files needed to be precompiled. Once I started using guard-rails-assets I decided to try out the other plugins for Rails, Rspec and Cucumber. It seems to all just work and the ecosystem of plugins seems like it will provide quick solutions to obscure problems down the road.

So after deleting the Autotest dependencies out of my Gemfile, I have:

# other sections for rails, rspec, spork, etc., etc. 
 
group :guard do                                                           
  gem 'guard'                                                                   
  gem 'guard-rails'                                                             
  gem 'guard-spork'                                                             
  gem 'guard-rspec'                                                             
  gem 'guard-cucumber'                                                          
  gem 'guard-rails-assets'                                                      
  gem 'guard-jasmine-headless-webkit'                                           
end

Then I run

bundle exec guard init

to create the Guardfile with sensible defaults.

Then running

bundle exec guard

will start running all of these tasks together. The default for the Cucumber guard task seems to only watch changes in feature files before re-running Cucumber, so I’ll leave this guard in as long as it doesn’t delay the re-running of specs. A bonus is that the Rails guard starts up the development server and restarts it when you change major configuration files.

Update (7/25/2012):

I found that guard-rails-assets didn’t seem to be necessary for guard-jasmine-headless-webkit to process assets. I removed it to see if it would fix a strange issue with the bootstrap-modal library that only occurs in development mode (it didn’t, but it also didn’t seem to break anything).

I also found that guard-rails was loading into the test environment when paired up with the other testing guard tasks. Changing the initial generated line to

guard 'rails', :environment => 'development' do                                 
  watch('Gemfile.lock')                                                         
  watch(%r{^(config|lib)/.*})                                                   
end

seemed to fix things.

Reading Habits

I read a lot. Mostly I read blog posts, picked out from my Google Reader stream. I read these all in a blur, constantly hitting the ‘j’ key (a shortcut for next), generally restricting my gaze to the headline, only occasionally stopping to read the article text itself. Often the entry is a description of a longer article or something involved like a photo gallery or video. I open these out into new tabs in my browser and continue on: ‘j’, ‘j’, ‘j’. Then when I’m done gulping down the stream I move on to the opened tabs, reading each article (or determining that I don’t need to read it after all), closing the tab, moving to the next one, from left to right. Sometimes with especially long articles I will click a button associated with the Readability service that will send the article to my tablet or my Kindle for reading later. This process can take from 45 minutes to 2 hours depending on my interest in the articles (this assumes that I’ve ‘kept up’ by ‘clearing’ my list on the previous day). On average my eye passes over something like 150 items, 150 headlines I have to make determinations about. It has the feel of a daily chore; it is, according to the local meme in my house, “important”1 .

In addition to this bizarre, harrowing ritual I read the newspaper. These days I am trying to keep up some hard-won French reading skills by reading Le Monde on my Kindle. I try to give about an hour to this. I have always found artifacts from other cultures, other things being equal, to be more interesting, simply because of the formal differences. A stupid pop song in another language is more interesting than a song in English where I can appreciate the lameness of the lyrics. A conventional comedy or action film from another culture can be enlivened by the strangeness of ambient details which take on their own sociological interest for me. If the dialogue or story are boring I can try to figure out what the characters are eating or the political context behind certain statements. Likewise reading Le Monde is like reading the New York Times, except that it takes longer for me to read, there are certain expressions and allusions that are mysteries to me, and it’s mostly about Europe. But it’s so fun to read! There’s an extra intellectual and creative challenge built in to the task of trying to figure out what the hell is going on. The best moments are when you pick up shades of gallic irony and humor in the diction, like the chicken company on the verge of bankruptcy in danger of being ‘plucked’.

Finally, I try to read longer texts. The truth is that my ability to read books has been in descent ever since college. I was seduced away by that stream of blog posts, also by a tendency to read magazines (I always read Wired cover to cover and used to do the same with Harper’s and Atlantic Monthly), also by video games, consuming television series and movies on Netflix and a general uptick in busy-ness. The Kindle has absolutely resurrected my desire and capacity to read books. It’s hard to say why. It might be something as stupid as a habituated preference for screens. The fact that it’s one of the devices that I bring everywhere and I resort to it whenever bored could be another reason. It’s also very easy to ‘stream’ books by purchasing the next book in a series or another book by the same author as soon as you’re done with a book2. This streaming of books is similar to the rapid “conquering” of a television series or a film genre that Netflix makes possible.

With the Kindle I’ve gotten back into the habit of reading long works, but mostly fiction. I read plenty of long-form political and sociological analysis on the Web, but it’s been a long time since I read a long nonfiction work from beginning to end (I’ve almost never read any history). This is something I’m fairly ashamed about, because I have a liberal arts education and a master’s in philosophy. When these institutions have pushed me to read texts for class I’ve been set ablaze by their ideas, perhaps even going overboard in the degree to which I incorporated their outlooks (in college I was in turn a Stoic, then a Humeian, then a Kantian, then a Hegelian, then a Nietzschean). I found writing papers in graduate school — trying to understand the system of the work and setting up a controversy to hang a paper on — extremely satisfying, extremely fun. I liked the experience of becoming a mini-expert on some marginal issues in the systems of these heavy thinkers. To this day some chance comment might elicit from me an impromptu (and probably unwanted) discourse about Maimonides’ theory of scriptural interpretation or my understanding of the Absolute in Hegel. But time moves on, these institutions no longer terrorize my life, and I’m not reading these texts anymore. I hate what this says about me and about the original importance of those texts for me.

The total impression this account should leave is that my reading is eclectic and geared toward the timely (blog posts and newspaper articles) and my longer reading is dominated by fiction. The blog reading in particular takes the form of a senseless consumption. With how much credibility can I claim that I have a lasting impression from any of the 150 items that I’ve encountered during my feedreader power hour? I’m pretty sure my brain is shutting down at some point, not actually recording anything, for want of any consistency in the subject of the posts. I’m probably just whipping up a batch of ADT3 in there. I get the same feeling to a lesser degree with the other reading and video watching I do. I’m not doing anything with the information or content or knowledge that I’m taking in, I’m just collecting and consuming, and oftentimes I’m hurrying ahead with what I’m currently reading or watching in order to conquer the next thing. This is part of a larger pattern in my life, which is that I keep taking in more and more rich experiences (books and movies but also trips, intense work experiences, major life changes like getting married or learning how to drive, etc.) but I’m doing nothing to process or differentiate these experiences. It seems like you can get away with not processing experiences if you live a relatively simple life but not when you’re a world-devouring monster like me.

Anyway, there are elements of my reading regimen that I hope to change. I recently stopped subscribing to Techcrunch, because I realized that I was literally skipping through almost all the headlines (also the personalities and quality of analysis are awful). I probably don’t need to know about every upcoming Android handset. I would have trouble foregoing Google Reader completely, especially BoingBoing4, but if I could pare my list down even more I could probably transform it into a less soul- and attention-destroying activity. In general, though, I want to start committing to longer, more serious texts5 on the condition that I will also put the work in to process them. The best way I know to do this is to write about my understanding of the text. This post is really a preamble for my next post, which is going to be about a philosophical text I read in full. Something I loved about the experience of reading this book, which at times was quite a slog, was that I genuinely had trouble understanding some of the points. With most things I read there’s a kind of automatic and superficial understanding (that is, it might fall apart if challenged but it never is) but with this text I know that I need to spend more time to have a meaningful understanding, yet I am sufficiently excited about what I did understand to want to make the effort. This is a good feeling because it suggests to me that what I’m reading might actually matter.

---

  1. As in “I can’t leave yet. I have to finish reading my feed. It’s important.”[back]
  2. That’s how I read the first five books in Song of Ice and Fire so quickly.[back]
  3. Attention Deficit Trait, I read a post about it once.[back]
  4. I like to be down with the freshest memes.[back]
  5. I know this doesn’t preclude fiction. It’s just that fiction is so easy for me to read, consumptively, that I don’t have to make a conscious resolution about it. I would like to write more about fiction, though.[back]

Ruby, Linux, Autotest, Rspec 2, Cucumber

Update: I’ve moved on to using Guard

Of the posts I’ve written this article about Rspec, Autotest, etc. has been one of the more visited. Time to update that ooold information.

If you don’t know what any of this is, the idea of Autotest is to get a test suite to run continuously in the background and provide ambient notifications about the failure/success status whenever a file is saved. You get feedback as soon as you break your test suite. The Linux part of this post involves getting Linux to pop up an on-screen display listing the number of failed tests.

Cucumber is a BDD (Behavior-Driven Development) tool where you write out features in plain language and then bind that natural language to test code with regexp-matched step definitions.

Rspec is a nice expressive test assertion library that can be used both in Cucumber feature step definitions and for traditional unit tests.

The process is much faster and cleaner now due to better gems and the convenience of Bundler. I’ll update with details about non-Rails projects.

The Gemfile section (note autotest-standalone and autotest-notification):

group :test do
  gem 'rspec'
  gem 'rspec-rails'
  gem 'cucumber'
  gem 'cucumber-rails'
  gem 'database_cleaner'
  gem 'autotest-standalone'
  gem 'autotest-rails'
  gem 'autotest-notification'
end

Setup commands:

# install the gems of course
bundle install
# steps for Rails
rails generate rspec:install
rails generate cucumber:install
# complained if I didn't do this
rake db:migrate
# Do some magic for the notifications plugin
an-install

Then to run the tests:

export AUTOFEATURE=true; autotest

It’s working for me on Ubuntu 12.04.

Installing Ubuntu Pangolin on Beagle Bone

Just a quick note if you’re like me and you want to put Ubuntu on your Beagle Bone. The Beagle Bone is a sweet little palm-sized motherboard/processor guy that’s nice for little hardware projects. It comes with the Angstrom operating system loaded onto the SD card. This operating system was fine for initial development but I ran into an issue where I couldn’t leave a server to run and log out. If I checked back the server was always dead no matter what I did to detach it (maybe it was just pegging itself and restarting). So I wanted to see if Ubuntu would handle any better.

Anyways, I kept trying to just flash an image from the official Ubuntu page onto an SD card and boot it, but it wasn’t working. Then I saw that there’s a script, mentioned on this page, that does all the work for you:

http://elinux.org/BeagleBoardUbuntu#Canonical.2FUbuntu_Images

Works like a charm, and Ubuntu does seem to be more stable than Angstrom on the Beagle Bone. Thanks to the author of the script, Robert C. Nelson.