Jumble of thoughts about Ruby and programming

I really like the Ruby language. I like it for its syntax and its idioms and the programming culture. Recently I’ve been thinking about the things that I don’t like about the language. It’s a relatively short list.

Ruby: some things I don’t like about you

The slow speed. It really is bad, but we get away with it because reasons.

The symbol-string distinction. This seems to exist only to introduce silly bugs involving mismatches (and so Rails users can make jokes about HashWithIndifferentAccess). I know that symbols are singletons and that helps with memory usage, but 1) most Ruby programmers in my experience aren’t profiling memory that much and 2) that argument is less relevant now that there is a plan to treat string literals the same way.
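Here is a minimal sketch of the kind of mismatch bug I mean (hash and key names are made up):

```ruby
# A hash keyed by strings silently misses lookups done with symbols.
h = { "name" => "Ada" }
h[:name]   # => nil -- the symbol key doesn't match the string key
h["name"]  # => "Ada"

# Symbols are interned singletons; plain string literals are not:
same_symbol = :name.equal?(:name)    # true -- one object
same_string = "name".equal?("name")  # false, unless the literals are frozen/deduplicated
```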

The poor immutability story. I’m not sure that I buy 100% into FP (in part because I don’t know yet how to integrate it with OOP, which I do buy into) but I’ve become more and more disenchanted with mutation of objects. This was an actual pain point on a recent project, where I had a deeply nested hash that was being mutated from afar and I eventually used ice_nine to freeze the whole thing just to figure out where the mutation was taking place.
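The trick described above can be sketched without the gem. This is a hand-rolled stand-in for IceNine.deep_freeze, and the nested config hash is made up for illustration:

```ruby
# Recursively freeze a nested structure so that any later mutation
# raises FrozenError right at the offending call site.
def deep_freeze(obj)
  case obj
  when Hash
    obj.each { |k, v| deep_freeze(k); deep_freeze(v) }
  when Array
    obj.each { |e| deep_freeze(e) }
  end
  obj.freeze
end

config = { api: { key: "secret", retries: [1, 2, 3] } }
deep_freeze(config)

begin
  config[:api][:retries] << 4  # the "mutation from afar"
rescue FrozenError => e
  # the backtrace now points directly at the mutating line
  puts "caught: #{e.class}"
end
```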

The MRI codebase. The core interpreter and low-level libraries are implemented in C (also the project uses Subversion for version control). I’ve tried subscribing in my RSS reader to the new commits on master on the Github mirror, but I eventually gave up because I never understood what I was looking at. On the other hand, I was able to jump in and contribute to the Crystal standard libraries with a very minor PR just by glancing around the source code. I realize this isn’t necessarily fair, since that part of the MRI standard lib would probably be equally easy to hack on, but in general it’s relatively easy to understand what is happening even in the “low-level” Crystal AST parsing code. It won’t happen, but I wish Ruby were implemented in a higher-level and more OO language like Go, Rust, or even C++ (or Crystal!).

The poor auto-completion and static analysis. Obviously this is the cost we pay for having an untyped, dynamic, late-binding language. More generally I agree with Avdi Grimm’s point here that Ruby has bad development tools. Rubocop is a notable exception that when integrated with tools like Syntastic and more recently Neomake has dramatically improved my development experience (if only because I get instant warnings when I have an unused variable, usually meaning I have a typo).

Peering at the greener grass

So, recently I have been exploring other languages, in particular compiled languages (not for the first time, but my entire working career has been with dynamic languages so it’s most of my programming experience). Not that much; just a little puttering around here and there with Crystal, Rust, and Go. It’s a little too early for me to say anything meaningful, but here are my early reactions:

  • Something I already knew: good autocompletion is really nice. Static typing really pays off when you can see at a glance all the methods and properties on the object that you’re dealing with.
  • Code-compile-execute is awkward (although to be fair, in Ruby we run our tests constantly, almost like a compile step).
  • I really miss REPL-style investigation with a debugger (actually I should mention Pry as another great Ruby tool). Right now I’m experiencing this lack with a Go program, where I want to know what this struct instance that I’m dealing with is, and the suggestions on the web seem to be to marshal it to a JSON object and then print it? I know that there are a couple things going on here and I’m probably making noob mistakes, but my point is that a REPL is a really big part of productivity for me. Right now I have the joy of knowing all the types of my objects through static analysis but I feel blind when it comes to runtime values.
  • You have to think about so many low-level details, like specifying the size and element types of arrays (or the maximum size of slices in Go). There’s your big productivity tradeoff there. These low-level details might help you be more precise about performance, but they also distract from whatever domain you’re usually working in.
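For contrast, here is the kind of declaration Ruby never asks for: an array with no fixed size or element type (a trivial, made-up sketch):

```ruby
# No declared length, no declared element type: the array just grows.
mixed = []
mixed << 1 << "two" << :three  # << returns the array, so appends chain
mixed.length  # => 3
```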

Enfin

This is sort of the point of this disorganized jumble of incomplete thoughts. You will sometimes see sentiments like “Ruby is dying because everyone is going to go to X because it’s faster/better”. If X is Go or Rust you’re just talking about a different kind of thing. They’re really interesting and useful languages but you can’t get away from the fact that they impose a certain style and pace of programming that departs from what makes Ruby a great language.

Actually, I thought this guy expressed things pretty well:

I would just add a couple more nuances. I’m not saying that Ruby will never go out of fashion. I’m just dubious that it is currently “dying” or that it will “die”. Apart from the fact that major programming languages never actually die, there’s also the assumption that Ruby can’t improve and overcome many of its pain points. But there is still an incredible vibrancy and degree of communal care in the Ruby world that creates tools like Rubocop and that can mount efforts like Matz’s proposal to improve Ruby’s speed 3x.

Also I’m not saying that it would be unusual for someone to prefer another language, static or dynamic, over Ruby. It sounds like Elixir is really special, and the fact that it is dynamic, has Ruby-like syntax, AND is fast makes it really intriguing. And as I’ve mentioned before I’m especially interested in Crystal’s approach that combines Ruby-like syntax with compilation and inferred types (although the project is moving toward more explicit type declarations to improve compilation times). Really what I’m saying is that we should keep innovating to make the best programming language, which includes the concomitant development experience, with the caveat that you can’t actually make one language that everyone will prefer for every occasion and use case. And that’s just fine.

Bootstrapping

Crystal is a cool new language that I like. One thing that’s neat about it is that even the compiler (as opposed to the runtime libraries) is written in Crystal. That is, the compiler is written in the same high level language that the compiler is intended to compile! As an aside, this is really powerful and important for the future of the language, because of the way that it enables OOP programmers to contribute to the language core. Compare with MRI Ruby, which is written in C, so that you have a divide between core developers who can follow what’s going on in the C code and everyone else.

Anyway, I intend to write more about Crystal later, but here’s the quick point. A version of Crystal that has the compiler written in Crystal has to be compiled… by Crystal. So the way this works is that Crystal version 0.9.0 is compiled by a precompiled binary of Crystal version 0.8.0. If you install crystal-lang from Brew you will see this happening: it pulls down a tar file of 0.8.0 in order to compile 0.9.0.

Obviously this leads to a chicken-and-egg regress. What compiled 0.8.x? 0.7.x, and so on. So where did 0.1.0 come from then? It came from Ruby! That is, Crystal’s compiler used to be written in Ruby, before Crystal learned how to bootstrap itself.

It might seem that the regress stops there, at the point before the Crystal bootstrap, but MRI Ruby’s C code has to be compiled by something as well, for example the gcc compiler. And how is a compiler like gcc compiled? It is bootstrapped in another regress that goes back to an assembler. Honestly I get fuzzy at this point, but my understanding is that at some point in the past you arrive at humans physically etching circuitry or manually feeding punch cards into machines. That is, humans had to start the sequence before programs could generate other programs and machines could build other machines. And of course humans bootstrap each other, down through the generations.

Anyways, I know I just recapitulated the concept of the historical regress, from which Aristotle inferred the existence of a Prime Mover. What’s striking for me though is how this account of bootstrapping conflicts with the dominant sense in computing of infinite, cheap, meaningless reproducibility. Every program and document is capable of being reproduced millions of times across the Internet. No particular bit or line of code residing on a particular computer is special. But bootstrapping shows how all code is indebted to a certain lineage, even as it is also potentially free from that legacy to develop in new directions. Well, all the parallels and analogies flow from there.

Helpful commands

As a follow-up to my last post here are some commands that I use throughout the day. They are admittedly nothing special but they help me out.

.bashrc aliases:

alias grc='git rebase --continue'
alias gca='git commit -a --amend'
alias gs='git status -sb'
alias gd='git diff'
alias gsn='git show --name-only'

The one worth explaining is gca. It stages all my changes and amends them into the previous commit. I use this constantly to keep adding stuff to my WIP commits. One thing to watch out for: if you run it while fixing a merge conflict inside a rebase, you’ll amend your fix into the previously applied commit instead of completing the conflicted one. You want to do grc instead.

scripts:

force_push — I use this to automate the process of updating my remote branch and most importantly to prevent me from force pushing the one branch that I must NEVER force push.

#!/usr/bin/env bash
CURRENT_BRANCH=`git rev-parse --abbrev-ref HEAD`
if [ "$CURRENT_BRANCH" == 'master' ]; then
  echo "YOU DO NOT WANT TO DO THAT"
  exit 1
fi

echo "git push origin $CURRENT_BRANCH --force"
read -p "Are you sure? [Yn] "
if [ "$REPLY" != "n" ]; then
  git push origin "$CURRENT_BRANCH" --force
fi

rebase_branch — There’s not really a lot to this, but I use it reflexively before I do anything.

#!/usr/bin/env bash
git fetch
git rebase -i origin/master

merge_to_master — I do this when I’m done with a branch. This makes sure that there will be a clean fast-forward push. Notice how it reuses rebase_branch.

#!/usr/bin/env bash
rebase_branch
CURRENT_BRANCH=`git rev-parse --abbrev-ref HEAD`
echo "git checkout master"
git checkout master
echo "git pull origin master"
git pull origin master
echo "git merge $CURRENT_BRANCH"
git merge "$CURRENT_BRANCH"

git-vim — this one is still a bit of a work in progress, but the idea is to grab the files you’ve changed in Git and open them in separate tabs inside Vim. You can then run it with git vim, which I alias as gv.

#!/usr/bin/env ruby
require "shellwords"

# uncommitted files
files = `git diff HEAD --name-only`.split("\n")
if files.empty?
  # fall back to the files in the WIP (latest) commit; --pretty=format:
  # suppresses the commit header so only file names are listed
  files = `git show --pretty=format: --name-only`.split("\n").reject(&:empty?)
end

system("vim -p #{Shellwords.join(files)}") unless files.empty?

Of course, all these scripts need to be put somewhere in your executable path. I put them in ~/bin and include this location in my path.

So my workflow looks like this:

git checkout -b new_branch
# hack hack hack
git commit -a
# hack hack hack
gca
# hack hack hack
gca
# all done now
rebase_branch
# whoops a merge conflict
# resolve it
git add .
grc
# Time to get this code reviewed on Github
force_push
# Code accepted, gonna merge this
merge_to_master

Git workflow

In my last post I described how at my work we use code review feedback to iteratively improve code. I want to describe how Git fits into this process, because this is probably the biggest change I had to make to my preexisting workflow. Basically I had to relearn how to use Git. The new way of using it (that is, it was new to me) is extremely powerful and in a strange way extremely satisfying, but it does take a while to get used to.

Importance of rebasing

I would describe my old approach and understanding as “Subversion, but with better merging”[1]. I was also aware of the concept of rebasing from having submitted a pull request to an open source project at one point, but I didn’t use it very often for reasons I’ll discuss later. As it turns out, understanding git rebase is the key to learning how to use Git as more than a ‘better Subversion’.

For those who aren’t familiar with this command, git rebase <branch> takes the commits that are unique to your branch and places them “on top” of another branch. You typically want to do this with master, so that all your commits for your feature branch will appear together as the most recent commits when the feature branch is merged into master.

Here’s a short demonstration. Let’s say this is your feature branch, which you’ve been developing while other unrelated commits are being added to master:

Feature branch with ongoing mainline activity

If you merge without rebasing you’ll end up with a history like this:

History is all jacked up!

Here is the process with rebasing:

# We're on `feature_branch`
git rebase master # Put feature_branch's commits 'on top of' master's
git checkout master
git merge feature_branch

This results in a clean history:

Feature branch commits on top

Another benefit of having done a rebase before merging is that there’s no need for an explicit merge commit like you see at the top of the original history. This is because — and this is a key insight — the feature branch is exactly like the master branch but with more commits added on. In other words, when you merge it’s as though you had never branched in the first place. Because Git doesn’t have to ‘think’ about what it’s doing when it merges a rebased branch, it performs what is called a fast-forward. In this case it moved the HEAD[2] from 899bdb (More mainline activity) to 5b475e (Finished feature branch).

The above is the basic use case for git rebase. It’s a nice feature that keeps your commit history clean. The greater significance of git rebase is the way it makes you think about your commits, especially as you start to use the interactive rebase features discussed below.

Time travel

When you call git rebase with the interactive flag, e.g. git rebase -i master, git will open up a text file that you can edit to achieve certain effects:

Interactive rebase menu

As you can see there are several options besides just performing the rebase operation described above. Delete a line and you are telling Git to disappear that commit from your branch’s history. Change the order of the commit lines and you are asking Git to attempt to reorder the commits themselves. Change the word ‘pick’ to ‘squash’ and Git will squash that commit together with the commit on the preceding line. Most importantly, change the word ‘pick’ to ‘edit’ and Git will pause the rebase just after applying the selected commit, so you can amend it.

I think of these abilities as time travel. They enable you to go back in the history of your branch and make code changes as well as reorganize code into different configurations of commits.

Let’s say you have a branch with several commits. When you started the branch out you thought you understood the feature well and created a bunch of code to implement it. When you opened up the pull request the first feedback you received was that the code should have tests, so you added another commit with the tests. The next round of feedback suggested that the implementation could benefit from a new requirement, so you added new code and tests in a third commit. Finally, you received feedback about the underlying software design that required you to create some new classes and rename some methods. So now you have 4 commits with commit messages like this:

A messy commit history
  1. Implemented new feature
  2. Tests for new feature
  3. Add requirement x to new feature
  4. Changed code for new feature

This history is filled with useless information. Nobody is going to care in the future that the code had to be changed from the initial implementation in commit 4 and it’s just noise to have a separate commit for tests in commit 2. On the other hand it might be valuable to have a separate commit for the added requirement.

To get rid of the tests commit all you have to do is squash commit 2 into commit 1, resulting in:

  1. Implemented new feature
  2. Add requirement x to new feature
  3. Changed code for new feature

New commit 3 has some code that belongs in commit 1 and some code that belongs with commit 2. To keep things simple, the feature introduced in commit 1 was added to file1.rb and the new requirement was added to file2.rb. To handle this situation we’re going to have to do a little transplant surgery. First we need to extract the part of commit 3 that belongs in commit 1. Here is how I would do this:

# We are on HEAD, i.e. commit 3
git reset HEAD^ file1.rb
git commit --amend
git stash
git rebase -i master
# ... select commit 1 to edit
git stash apply
git commit -a --amend
git rebase --continue

It’s just that easy! But seriously, let’s go through each command to understand what’s happening.

  1. The first command, git reset, is notoriously hard to explain, especially because there’s another command, git checkout, which seems to do something similar. The diagram at the top of this Stack Overflow page is actually extremely helpful. The thing about Git to repeat like a mantra is that Git has a two-step commit process: staging file changes and then actually committing. Basically, when you run git reset REF on a file it stages the file as it was at that ref. In the case of the first command, git reset HEAD^ file1.rb, we’re saying “stage file1.rb as it looked before HEAD’s change”; in other words, revert the changes we made in the last commit.
  2. The second command, git commit --amend commits what we’ve staged into HEAD (commit 3). The two commands together (a reset followed by an amend) have the effect of uncommitting the part of HEAD’s commit that changed file1.rb.
  3. The changes that were made to file1.rb aren’t lost, however. They were merely uncommitted and unstaged. They are now sitting in the working directory as an unstaged diff, as if they’d never been part of HEAD. So just as you could do with any diff you can use git stash to store away the diff.
  4. Now I use interactive rebase to travel back in time to commit 1. Rebase drops me right after commit 1 (in other words, the temporary rebase HEAD is commit 1).
  5. I use git stash apply to get my diff back (you might get a merge conflict at this point depending on the code).
  6. Now I add the diff back into commit 1 with git commit --amend -a (-a automatically stages any modified changes, skipping the git add . step).

This is the basic procedure for revising your git history (at least the way I do it). There are a couple of other tricks that I’m not going to go into detail about here, but I’ll leave some hints. Let’s say the changes for the feature and the new requirement were both on the same file. Then you would need to use git add --patch file1.rb before step 2. What if you wanted to introduce a completely new commit after commit 1? Then you would use interactive rebase to travel to commit 1 and then add your commits as normal, and then run git rebase --continue to have the new commits inserted into the history.

Caveats

One of the reasons I wasn’t used to this workflow before this job was that I thought rebasing was only useful for the narrow case of making sure that the branch commits are grouped together after a merge to master. My understanding was that other kinds of history revision were to be avoided because of the problems they cause for collaborators who pull from your public repos. I don’t remember the specific blog post or mailing list message, but I took away the message that once you’ve pushed something to a public repo (as opposed to what’s on your local machine) you are no longer able to touch that history.

Yes and no. Rebasing and changing the history of a branch that others are pulling from can cause a lot of problems. Basically any time you amend a commit message, reorder commits, or alter a commit, you actually create a new object with a new SHA reference. If someone else naively pulls from your branch after having pulled the pre-revised history, they will get a weird set of duplicate code changes and things will get worse from there. In general, if other people are pulling from your public (remote) repository you should not change the history out from under them without telling them. Linus’ guidelines about rebasing here are generally applicable.

On the other hand, in many Git workflows it’s not normal for other people to be pulling from your feature branch, and if they are they shouldn’t be that surprised if the history changes. In the Github-style workflow you will typically develop a feature branch on your personal repository and then submit that branch as a pull request to the canonical repository. You would probably be rebasing your branch on the canonical repository’s master anyway. In that sense even though your branch is public it’s not really intended for public consumption. If you have a collaborator on your branch you would just shoot them a message when you rebase and they would do a “hard reset” on their branch (sync their history to yours) using git reset --hard remote_repo/feature_branch. In practice, in my limited experience with a particular kind of workflow, it’s really not that big a deal.

Don’t worry

Some people are wary of rebase because it really does alter history. If you remove a commit you won’t see a note in the commit history that so-and-so removed that commit. The commit just disappears. Rebase seems like a really good way to destroy your work and other people’s. In fact you can’t actually screw up too badly using rebase, because every Git repository keeps a log of the changes that have been made to the repository’s history, called the reflog. Using git reflog you can always back out of misguided recent history changes by returning to a point before you made them.

Hope this was helpful!

---

  1. Not an insignificant improvement, since merging in Subversion sucks.[back]
  2. Which I always think about as a hard drive head, which I in turn think about as a record player needle[back]

Non-Rails Autotest + RSpec + LibNotify + Linux

Update: The information below is VERY outdated. Take a look at new instructions for getting this set up.

Each of those terms actually means something! If you don’t know what any one of those terms means, then you probably won’t enjoy this post.

It’s surprising that something so cool and so central to the Ruby TDD world as Autotest is completely undocumented. Autotest with Rails and RSpec just works, and there are many googleable examples of how to add hooks for using Growl or libnotify. I couldn’t find any reliable advice for how to use Autotest with a non-Rails project and how to use RSpec instead of Test::Unit. The libnotify part is easy once the other two are solved, but I’ll share what I have going on anyway. This information may be available in pieces somewhere else, but I’ll just put it all here to help the other lost googlers.

The first thing to know is that this blog post is out of date. Autotest now speaks RSpec as one of its native “styles”. The trick is that you have to tell autotest about this style by creating a file called autotest/discover.rb in your project root. This contains:

Autotest.add_discovery do
  "rspec"
end

The second thing to know is that Autotest looks in your lib/ folder for your files and in your spec/ folder for your tests, which should be named file_name_spec.rb.

There’s not much more to know. You can add awesome heads-up notification using libnotify. You need to get libnotify-bin using something like sudo apt-get install libnotify-bin. Then in the root of your project directory create an .autotest file like this:

module Autotest::GnomeNotify
 
  # Time notification will be displayed before disappearing automatically
  EXPIRATION_IN_SECONDS = 2
  ERROR_STOCK_ICON = "gtk-dialog-error"
  SUCCESS_STOCK_ICON = "gtk-dialog-info"
 
  # Convenience method to send an error notification message
  #
  # [stock_icon]   Stock icon name of icon to display
  # [title]        Notification message title
  # [message]      Core message for the notification
  def self.notify stock_icon, title, message
    options = "-t #{EXPIRATION_IN_SECONDS * 1000} -i #{stock_icon}"
    system "notify-send #{options} '#{title}' \"#{message}\""
  end
 
  Autotest.add_hook :red do |at|
    example_text = ""
    num_examples = 0
    at.files_to_test.each_pair do |key, values|
      example_text += "- #{key}\n"
      values.each do |value|
        num_examples += 1
        example_text += "  * #{value}\n"
      end
    end
    notify ERROR_STOCK_ICON, "Tests failed", "<strong>#{num_examples} examples failed in #{at.files_to_test.size} files</strong>\n#{example_text}"
  end
 
  Autotest.add_hook :green do |at|
    notify SUCCESS_STOCK_ICON, "All tests passed, good job!", ""
  end
end

I’ve modified mine to have very verbose notifications. I got it from here.

Rails and Ext non-Ajax Signup Form with Password Confirmation

This is, uh, a technical post.

Probably there are others who want to do the same somewhat senseless thing: use Ext to do form validation while keeping a boring non-Ajax post-and-response. The bottom line is that Ext favors doing it the Ajax way, and the Ajax way isn’t that hard to set up with Rails (just handle the form submission as normal but return JSON or XML to signal success or failure). But if you’re like me and working on a deadline, there can be a cognitive burden to switching to Ajax posting that you might want to avoid. Paradoxically, you might find yourself wasting a lot of time trying to figure out how to do it the “old-fashioned” way. Well, here’s one working standard-submission Signup Form, with fancy validations and all the kinks worked out.

Here’s the top half of the file users/new.html.erb, which is nearly the same as the code generated by restful-authentication:

<% @user.password = @user.password_confirmation = nil %>
<%= error_messages_for :user %>
<div id="no-js-form">
    <% form_for :user, :url => users_path, :html => {:id => "signup-form"} do |f| -%>
    <p>
        <label for="signup_name_field">
            Real Name
        </label>
        <br/>
        <%= f.text_field :name, :id => "signup_name_field" %>
    </p>
    <p>
        <label for="signup_login_field">
            User Name
        </label>
        <br/>
        <%= f.text_field :login, :id => "signup_login_field" %>
    </p>
    <p>
        <label for="signup_email_field">
            Email
        </label>
        <br/>
        <%= f.text_field :email, :id => "signup_email_field" %>
    </p>
    <p>
        <label for="signup_password_field">
            Password
        </label>
        <br/>
        <%= f.password_field :password, :id => "signup_password_field" %>
    </p>
    <p>
        <label for="signup_password_confirmation_field">
            Confirm Password
        </label>
        <br/>
        <%= f.password_field :password_confirmation, :id => "signup_password_confirmation_field" %>
    </p>
    <p>
        <label for="signup_role_field">
            Role
        </label>
        <br/>
        <%= f.select :role, [["consumer","consumer"],["vendor","vendor"]], :id => "signup_role_field" %>
    </p>
    <p>
        <%= submit_tag 'Sign up', :id => "signup_submit_button" %>
    </p>
    <% end -%>
</div>
<div id="js-form-panel">
</div>

The only differences are a div wrapping the form (“no-js-form”) and the empty “js-form-panel” at the end. You’re going to laugh at me, but this form is buzzword-friendly; it’s unobtrusive in an ugly way. If JavaScript is turned off, the plain form still works; if it’s on, the following script hides it and replaces it with the Ext version:

<script type="text/javascript">
    /* 
     Thanks to:
     http://www.extjswithrails.com/2008_03_01_archive.html for standardSubmit tip (hard to find!)
     http://extjs.com/forum/showthread.php?t=23068 for password confirmation
     Anyone else I stole semantics from
     */
    // Look, I'm copying over the authenticity token to send in the JS-generated form. LOL!
    var authenticity_token = document['forms'][0]['authenticity_token'].value;
 
    Ext.onReady(function(){
        $('no-js-form').hide();
 
        var myForm;
 
        function submitHandler(){
            form = myForm.getForm();
            form_as_dom = form.getEl().dom;
            form_as_dom.action = form.url;
            form_as_dom.submit();
        }
        myForm = new Ext.form.FormPanel({
            monitorValid: true,
            standardSubmit: true,
            url: "/users",
            applyTo: "js-form-panel",
            title: "Signup as a New User",
            width: 310,
            autoHeight: true,
            items: [new Ext.form.TextField({
                allowBlank: false,
                msgTarget: 'side',
                name: "user[name]",
                id: 'js_signup_name_field',
                fieldLabel: "Real Name"
            }), new Ext.form.TextField({
                allowBlank: false,
                vtype: 'alphanum',
                msgTarget: 'side',
                name: "user[login]",
                id: 'js_signup_login_field',
                fieldLabel: "Username"
            }), new Ext.form.TextField({
                allowBlank: false,
                vtype: 'email',
                msgTarget: 'side',
                name: "user[email]",
                id: 'js_signup_email_field',
                fieldLabel: "Email"
            }), new Ext.form.TextField({
                allowBlank: false,
                inputType: 'password',
                vType: 'password',
                msgTarget: 'side',
                name: "user[password]",
                id: 'js_signup_password_field',
                fieldLabel: "Password"
            }), new Ext.form.TextField({
                allowBlank: false,
                inputType: 'password',
                name: "user[password_confirmation]",
                initialPasswordField: 'signup_password_field',
                vType: 'password',
                msgTarget: 'side',
                id: 'js_signup_password_confirmation_field',
                fieldLabel: "Confirm Password",
                validator: function(value){
                    return (value == document.getElementById("js_signup_password_field").value) 
|| "Your passwords do not match";
                }
            }), new Ext.form.Hidden({
                name: "authenticity_token",
                value: authenticity_token
            }), new Ext.form.Hidden({
                name: "user[role]",
                value: "consumer"
            }), ],
            buttons: [{
                handler: submitHandler,
                text: "Signup",
                formBind: true
            }]
        });
 
    });
 
</script>

The noteworthy steps are: first, I hide the ‘no-js-form’, then I copy the authenticity_token that gets generated by the Rails form to put in the JS-generated form. Then, standardSubmit: true is the config option that makes a FormPanel submit as a normal POST rather than an XmlHttpRequest. The funny code in submitHandler gets the underlying form object, sets the action on its DOM element, and calls submit on it (as I write this I’m still not sure why both steps are necessary). Finally, formBind: true causes the submit button to be deactivated while there are failing validations, and there’s some handy code for making sure that the password_confirmation matches password (totally lifted from somewhere else, see above).

Setup for Alexandria Development: Part II

(…after too much grief today installing Mephisto and mucking with Apache virtualhosts; I’ll get Part I back from the ether eventually) Update: Done. Update: This is a post moved over from the short-lived Mephisto blog, and ported back in time.

First of all, the alexandria binary is just a Ruby script that does a require ‘alexandria’ and runs Alexandria.main.

Alexandria.main is a method on the Alexandria module that is used throughout the code (modules act as ‘namespaces’ to avoid naming conflicts). This method is found in lib/alexandria.rb:

As you should be able to see, this method isn’t doing anything but setting up some global variables (like $DEBUG) and logging, and doing something weird with http_proxy. The real line is Alexandria::UI.main. That’s in lib/alexandria/ui.rb:

module Pango
  def self.ellipsizable?
    @ellipsizable ||= Pango.constants.include?('ELLIPSIZE_END')
  end
end
 
module Alexandria
  module UI
    def self.main
      Gnome::Program.new('alexandria', VERSION).app_datadir =
        Config::MAIN_DATA_DIR
      Icons.init
      MainApp.new
      Gtk.main
    end
  end
end

Gtk.main starts the main loop of a Gtk program. You set up your windows and widgets before running it, and it keeps them all spinning until you exit. So, after Icons.init runs (guess what that does), MainApp.new does all the work from here on.

The Pango code above this is interesting for seeing some Ruby syntax and features. Pango is a text-rendering and layout library inside Gtk. The code is adding an ellipsizable? “question” method (returning true/false) to the Pango module. def self.ellipsizable? means that it's defining a class method, a method on the module itself that doesn't depend on instance data. ||= is a way of saying, “set this variable unless it's already been set to something else (i.e., unless it's currently nil or false).”
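Here's a toy sketch of the same two idioms (the module and constant names are made up for illustration, and modern Ruby's constants returns symbols rather than strings):

```ruby
# A module-level "question" method that memoizes its answer with ||=.
module Features
  FANCY = true

  def self.fancy_rendering?
    # ||= assigns only when @fancy_rendering is nil (or false),
    # so the constant lookup runs once and the result is cached.
    @fancy_rendering ||= Features.constants.include?(:FANCY)
  end
end

Features.fancy_rendering?  # => true (computed once, then memoized)
```

One caveat the original glosses over: because ||= also reassigns on false, this memoization trick only works cleanly when the cached value is truthy.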

Unfortunately, MainApp.new is in the massive MainApp class at lib/alexandria/ui/main_app.rb. This class does a lot (too much). The main thing it does is handle all the callbacks from the main window and its widgets. Let’s just take a look at the top:

 
module Alexandria
  module UI
    class MainApp < GladeBase
      attr_accessor :main_app, :actiongroup, :appbar
      include Logging
      include GetText
      GetText.bindtextdomain(Alexandria::TEXTDOMAIN, nil, nil, "UTF-8")
 
      module Columns
        COVER_LIST, COVER_ICON, TITLE, TITLE_REDUCED, AUTHORS,
        ISBN, PUBLISHER, PUBLISH_DATE, EDITION, RATING, IDENT,
        NOTES, REDD, OWN, WANT, TAGS = (0..16).to_a
      end
 
      # The maximum number of rating stars displayed.
      MAX_RATING_STARS = 5
 
      def initialize
        super("main_app.glade")
        @prefs = Preferences.instance
        load_libraries
        initialize_ui
        on_books_selection_changed
        restore_preferences
      end
    #... snip
    end
    # ... snip
  end
end

A couple of points here. MainApp inherits from GladeBase. attr_accessor is a declaration that makes the @main_app, @actiongroup, and @appbar instance variables publicly readable and writable. super(“main_app.glade”) calls the initialize method on GladeBase with the Glade file that contains the definitions for all the widgets Alexandria uses. The names of the methods tell you roughly what they do (good!). Because these methods need to know about the user's preferences, @prefs is made available before they are called.
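A quick illustration (not Alexandria code) of what attr_accessor actually generates:

```ruby
class Widget
  attr_accessor :label   # defines both a #label reader and a #label= writer

  def initialize(label)
    @label = label
  end
end

w = Widget.new("Save")
w.label           # => "Save"
w.label = "Open"  # the generated writer assigns to @label
w.label           # => "Open"
```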

To understand what MainApp is doing, it seems like we need to understand what GladeBase is.

module Alexandria
  module UI
    class GladeBase
      def initialize(filename)
        file = File.join(Alexandria::Config::DATA_DIR, 'glade', filename)
        glade = GladeXML.new(file, nil, Alexandria::TEXTDOMAIN) { |handler| method(handler) }
        glade.widget_names.each do |name|
          begin
            instance_variable_set("@#{name}".intern, glade[name])
          rescue
          end
        end
      end
    end
  end
end

So GladeBase is using GladeXML to get the widgets out of the XML file and load them into memory. It then iterates through them, adding each one to MainApp as an instance variable (instance_variable_set is doing the work). So if there's a widget called main_menu, MainApp gets a @main_menu variable to work with. These widgets work exactly as though they had been created “by hand”.
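A minimal sketch of the trick, with invented names:

```ruby
# Setting instance variables dynamically by name, as GladeBase does
# for each widget pulled out of the Glade file.
class WidgetHolder
  attr_reader :main_menu

  def initialize(widgets)
    widgets.each do |name, widget|
      # "@main_menu" etc. are built from the widget names at runtime
      instance_variable_set("@#{name}", widget)
    end
  end
end

holder = WidgetHolder.new("main_menu" => :a_menu_widget)
holder.main_menu  # => :a_menu_widget
```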

If you’ve been following, take a look at load_libraries and see if the code there makes sense. Here’s a short snippet:

      def load_libraries
        completion_models = CompletionModels.instance
        if @libraries
          @libraries.all_regular_libraries.each do |library|
            if library.is_a?(Library)
              library.delete_observer(self)
              completion_models.remove_source(library)
            end
          end
          @libraries.reload
        else
          #On start
 
          @libraries = Libraries.instance
          @libraries.reload
# ...

This is where things start to get confusing. load_libraries is also used to reload libraries, so first it checks whether @libraries has already been defined (refactoring opportunity). In the normal case, the Libraries object is obtained by invoking Libraries.instance. To understand this, you have to know that Libraries uses a factory class method to make sure that only one Libraries ever gets created (making the Libraries instance a “singleton”).
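Here's a minimal sketch of that pattern using Ruby's standard Singleton module (class name invented):

```ruby
require 'singleton'

# Including Singleton makes .new private and exposes a single
# shared object through the .instance factory method.
class Registry
  include Singleton
  attr_reader :items

  def initialize
    @items = []
  end
end

a = Registry.instance
b = Registry.instance
a.equal?(b)  # => true: always the very same object
```

Calling Registry.new directly raises NoMethodError, since Singleton makes new private.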

At the bottom of load_libraries is some interesting code:

# ...
        @libraries.all_regular_libraries.each do |library|
          library.add_observer(self)
          completion_models.add_source(library)
        end
# ...

This is telling each library in @libraries (the Libraries singleton) to add self (the MainApp) as an “observer”. What does that mean? It means that the Library class is “observable”. To see what that means you have to look at Library. First, though, let's look at Libraries, in lib/alexandria/library.rb:

  class Libraries
    attr_reader :all_libraries, :ruined_books
 
    include Observable
    include Singleton
 
# ... snip
 
    #######
    private
    #######
 
    def initialize
      @all_libraries = []
    end
 
    def notify(action, library)
      changed
      notify_observers(self, action, library)
    end
  end
end

Libraries includes the Observable and Singleton modules to pick up extra methods (this technique is called a “mixin”). Singleton gives it the instance method. Observable gives it the notify_observers method, which “calls up” all the observers of this instance by invoking their update methods.
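A minimal observer sketch using Ruby's standard observer library (the Shelf/Watcher names are invented): the watcher's update method is called whenever the observable announces a change.

```ruby
require 'observer'

class Shelf
  include Observable

  def add_book(title)
    changed                                # mark state as dirty
    notify_observers(self, :book_added, title)
  end
end

class Watcher
  attr_reader :events

  def initialize
    @events = []
  end

  # Called by notify_observers with whatever arguments it was given
  def update(_shelf, action, title)
    @events << [action, title]
  end
end

shelf = Shelf.new
watcher = Watcher.new
shelf.add_observer(watcher)
shelf.add_book("Eloquent Ruby")
watcher.events  # => [[:book_added, "Eloquent Ruby"]]
```

Note that changed must be called before each notify_observers, or the notification is silently skipped; that's exactly why Libraries#notify above calls changed first.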

Libraries has many Librarys (it’s a little weird to give a class a plural name). Each library is an observer of Libraries. Library is also Observable:

 
  class Library < Array
    include Logging
# ...
    include Observable

As we saw above, MainApp adds itself as an observer to each library. If you look on MainApp you’ll see that it has an update method:

  def update(*ary)
  # ...
  end

*ary means that it accepts any number of arguments, which get collected into an array. This method gets called from many places in Library, like this:
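A toy example of the splat parameter (not the Alexandria method):

```ruby
# The * collects however many arguments the caller passes into one array
def update(*ary)
  ary
end

update(:library, :BOOK_REMOVED, :book)  # => [:library, :BOOK_REMOVED, :book]
```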

        source_library.notify_observers(source_library,
                                        BOOK_REMOVED,
                                        book)

That’s all for now. To learn more, see the documentation for Ruby’s observer standard library.