Review: Django 1.0 Web Site Development

Introduction

Several months ago, a UK-based book publisher, Packt Publishing, contacted me to ask if I would be willing to review one of their books about Django. I gladly jumped at the opportunity, and a couple of weeks later I received a copy of the book in the mail. This happened at the beginning of September 2009. It just so happened that I was in the process of being hired by ScienceLogic right when all of this took place. The subsequent weeks were filled to the brim with visitors, packing, moving, finding an apartment, and commuting to my new job. It was pretty stressful.

Things are finally settling down, so I've taken the time to actually review the book I was asked to review. I should mention right off the bat that this is indeed a solicited review, but I am in no way influenced to write a good or bad review. Packt Publishing simply wants me to offer an honest review of the book, and that is what I intend to do. While reviewing the book, I decided to follow along and write the code the book introduced. I made sure that I was using the official Django 1.0 release instead of using trunk like I tend to do for my own projects.

The title of the book is Django 1.0 Web Site Development, written by Ayman Hourieh, and it's only 250 pages long. Ayman described the audience of the book as follows:

This book is for web developers who want to learn how to build a complete site with Web 2.0 features, using the power of a proven and popular development system--Django--but do not necessarily want to learn how a complete framework functions in order to do this. Basic knowledge of Python development is required for this book, but no knowledge of Django is expected.

Ayman introduced Django piece by piece, with the end goal of building a social bookmarking site à la del.icio.us and reddit. In the first chapter of the book, Ayman discussed the history of Django and why Python and Django are a good platform upon which to build Web applications. The second chapter offers a brief guide to installing Python and Django and getting your first project set up. Not much to comment on here.

Digging In

Chapter three introduced the reader to the basic structure of a Django project and described the initial data models. Chapter four discussed user registration and management. We made it possible for users to create accounts, log into them, and log out again. As part of those additions, the django.forms framework was introduced.
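For readers who haven't seen django.forms, here is a minimal sketch of the flavor (the fields here are hypothetical, not the book's exact code):

from django import forms

# A hypothetical registration form, just to show what django.forms looks like.
class RegistrationForm(forms.Form):
    username = forms.CharField(max_length=30)
    email = forms.EmailField()
    password = forms.CharField(widget=forms.PasswordInput())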

In chapter five, we made it possible for bookmarks to be tagged. Along with that, we built a tag cloud, restricted access to certain pages, and added a little protection against malicious data input. Next up was the section where things actually started getting interesting for me: enhancing the interface with fancy effects and AJAX. The fancy effects include live searching for bookmarks, being able to edit a bookmark in place (without loading a new page), and auto-completing tags when you submit a bookmark.

This chapter really reminded me just how simple it is to add new, useful features to existing code using Django and Python. I was thoroughly impressed at how easy it was to add the AJAX functionality mentioned above. Auto-completing the tags as you type, while jQuery and friends did most of the work, was very easy to implement. It made me happy.

Chapter seven introduced some code that allowed users to share their bookmarks with others. Along with this, the ability to vote on shared bookmarks was added. Another feature that was added in this chapter was the ability for users to comment on various bookmarks.

The ridiculously amazing Django Administration utility was first introduced in chapter eight. It kinda surprised me that it took 150 pages before this feature was brought to the reader's attention. In my opinion, this is one of the most useful selling points when one is considering a Web framework for a project. When I first encountered Django, the admin interface was one of maybe three deciding factors in our company's decision to become a full-on Django shop.

Bring on the Web 2.0

Anyway, in chapter nine, we added a handful of useful "Web 2.0" features. RSS feeds were introduced. We learned about pagination to enhance usability and performance. We also improved the search engine in our project. At this stage, the magical Q objects were mentioned. The power behind the Q objects was discussed very well, in my opinion.
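For anyone who hasn't met them, Q objects let you express OR logic (and other boolean combinations) in your queries, which plain filter() keyword arguments can't do on their own. A minimal sketch, using a hypothetical Bookmark model in place of the book's:

from django.db.models import Q
from bookmarks.models import Bookmark  # hypothetical app and model

# Match bookmarks whose title OR URL contains the search term.
results = Bookmark.objects.filter(
    Q(title__icontains='django') | Q(url__icontains='django')
)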

In chapter 10, we were taught how to create relationships between members on the site. We made it possible for users to become "friends" so they can see the latest bookmarks posted by their friends. We also added an option for users to invite their friends to join the site via email, complete with activation links. Finally, we improved the user interface by providing a little bit of feedback to the user at various points, using the messages framework that is part of the django.contrib.auth package in Django 1.0.
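That API stored simple, one-time notification strings directly on the User model (it was deprecated and later removed in favor of django.contrib.messages). A minimal sketch of what a view might do, with a hypothetical message:

# Inside a view: queue a one-time message for the logged-in user.
request.user.message_set.create(message='Your friend invitation was sent.')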

More advanced topics, such as internationalization and caching, were discussed in chapter 11. Django's special unit testing features were also introduced in chapter 11. This section actually kinda frustrated me. Caching was discussed immediately before unit testing. In the caching section, we learned how to enable site-wide caching. This actually broke the unit tests. They failed because the caching system was "read only" while running the tests. Anyway, it's probably more or less a moot point.
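For reference, enabling site-wide caching in Django 1.0 boiled down to a few settings. This is a sketch from memory, with example values, not the book's exact configuration:

# settings.py
MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',
)
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'  # example backend
CACHE_MIDDLEWARE_SECONDS = 600                  # cache pages for 10 minutes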

Chapter 11 also briefly introduced things to pay attention to when you deploy your Django projects into a production environment. This portion was mildly disappointing, but I don't know what else would have made it better. There are so many functional ways to deploy Django projects that you could write books just to describe the minutia involved in deployment.

The twelfth and final chapter discussed some of the other things that Django has to offer, such as enhanced functionality in templates using custom template tags and filters and model managers. Generic views were mentioned, and some of the other useful things in django.contrib were brought up. Ayman also offered a few ideas of additional functionality that the reader can implement on their own, using the things they learned throughout the book.

Afterthoughts

Overall, I felt that this book did a great job of introducing the power that lies in using Django as your framework of choice. I thought Ayman managed to break things up into logical sections, and that the iterations used to enhance existing functionality (from earlier chapters) were superbly executed. I think that this book, while it does assume some prior Python knowledge, would be a fine choice for those who are curious to dig into Django quickly and easily.

Some of the beefs I have with this book deal mostly with the editing. There were a lot of strange things that I found while reading through the book. However, the biggest sticking point for me has to do with "pluggable" applications. Earlier I mentioned that the built-in Django admin was one of only a few deciding factors in my company's choice to become a Django shop. Django was designed to allow its applications to be very "pluggable."

You may be asking what I mean by "pluggable." Well, say you decide to build a website that includes a blog, so you build a Django project and create an application specific to blogging. Then, at some later time, you need to build another site that also has blog functionality. Do you want to rewrite all of the blogging code for the second site? Or do you want to use the same code that you used in the first site (without copying it)? If you're anything like me and thousands of other developers out there, you would probably rather leverage the work you've already done. Django allows you to do this if you build your applications properly.

This book, however, makes no effort to teach the reader how to turn all of their hard work on the social bookmarking features into something they could reuse over and over with minimal effort in the future. Application-specific templates are placed directly into the global templates directory. Application-specific URLconfs are placed in the root urls.py file. I would have liked to see at least some effort to give the bookmarking application the potential to be reused.
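To make that concrete: a reusable app ships its own urls.py, and the project merely hooks it in with include(). A sketch in Django 1.0's URLconf style, with hypothetical names:

# bookmarks/urls.py -- lives inside the app itself (hypothetical)
from django.conf.urls.defaults import *

urlpatterns = patterns('bookmarks.views',
    (r'^$', 'bookmark_list'),
)

# urls.py at the project root -- the only site-specific wiring needed
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^bookmarks/', include('bookmarks.urls')),
)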

Finally, the most obvious gripe is that the book is outdated. That's understandable, though! Anything in print media will likely be outdated the second it is printed if the book has anything to do with computers. However, with the understanding that this book was written specifically for Django 1.0 and not Django 1.1 or 1.2 alpha, it does an excellent job at hitting the mark.

Another Bash Tip

I just learned yet another goodie about the Bash shell that I must share with you. This trick made my day on so many levels.

You know how annoying it is when you get those ridiculously long commands in a terminal window? You know how much more annoying it is when you generally can't Ctrl+arrow around the command to change bits and pieces when you're on OSX? If you've ever been in that boat, this tip is for you.

Bash allows you to hit Ctrl+x Ctrl+e to edit your current command in your "preferred" editor. Your "preferred" editor is determined from the EDITOR environment variable. Since I'm a fan of VIM, all I need to do is make sure I've got export EDITOR=vim in my .bashrc or something along those lines. Once I do that, I can hit Ctrl+x Ctrl+e anytime I am using Bash and have a smelly, long command I want to manipulate.
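If you want to try it, this is all the setup it takes (assuming Bash reads your ~/.bashrc):

$ echo 'export EDITOR=vim' >> ~/.bashrc
$ source ~/.bashrc
$ # now type out a long command and press Ctrl+x Ctrl+e to edit it in vim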


Tip: easy_install / pip

With all of the exciting updates to Mercurial recently, I've been on a rampage, updating various boxes everywhere I go. I'm in the habit of using easy_install and/or pip to install most of my Python-related packages. It's pretty easy to install packages that are in well-known locations (like PyPI or Google Code, for example). It's also pretty easy to update packages using either utility. Both take a -U parameter, which, to my knowledge, tells them to check for updates and install the latest version.
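For example, either of these should fetch and install the latest released version of Mercurial from PyPI:

$ easy_install -U Mercurial
$ pip install -U Mercurial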

That's all fine and dandy, but what happens when you want to install an "unofficial" version of some package? I mean, what if your favorite project all of a sudden includes some feature that you simply cannot live without, and the next official version is weeks or months in the future? There are typically a few avenues you can take to satisfy your needs, but I wanted to bring up something that I think not many people are aware of: easy_install and pip can both understand URLs to installable Python packages.

What do I mean by that, you ask? Well, when you get down to the basics of what both utilities do, they just take care of downloading some Python package and installing it with the setup.py file contained therein. In many cases, these utilities will search various package repositories, such as PyPI, to download whatever package you specify. If the package is found, it will be downloaded and extracted.

In most cases, you can do all of that yourself:

$ wget http://pypi.python.org/someproject/somepackage.tar.gz
$ tar zxf somepackage.tar.gz
$ cd somepackage
$ python setup.py install

Both easy_install and pip obviously do a lot of other magic, but that is perhaps the most basic way to understand what they do. To get back to the question above, you can help your utility of choice by specifying the exact URL of the specific package you want it to install for you:

$ easy_install http://pypi.python.org/someproject/somepackage.tar.gz
$ pip install http://pypi.python.org/someproject/somepackage.tar.gz

For me, this feature comes in very handy with projects that are hosted on BitBucket, for example, because you can always get any revision of the project in a tidy .tar.gz file. So when I'm updating Mercurial installations, I can do this to get the latest stable revision:

$ easy_install http://selenic.com/repo/hg-stable/archive/tip.tar.gz

It's pretty slick. Here's a full example:

[user@web ~]$ hg version
Mercurial Distributed SCM (version 1.2.1)

Copyright (C) 2005-2009 Matt Mackall <mpm@selenic.com> and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[user@web ~]$ easy_install http://selenic.com/repo/hg-stable/archive/tip.tar.gz
Downloading http://selenic.com/repo/hg-stable/archive/tip.tar.gz
Processing tip.tar.gz
Running Mercurial-stable-branch--8bce1e0d2801/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Gnk2c9/Mercurial-stable-branch--8bce1e0d2801/egg-dist-tmp--2VAce
zip_safe flag not set; analyzing archive contents...
mercurial.help: module references __file__
mercurial.templater: module references __file__
mercurial.extensions: module references __file__
mercurial.i18n: module references __file__
mercurial.lsprof: module references __file__
Removing mercurial unknown from easy-install.pth file
Adding mercurial 1.4.1-4-8bce1e0d2801 to easy-install.pth file
Installing hg script to /home/user/bin

Installed /home/user/lib/python2.5/mercurial-1.4.1_4_8bce1e0d2801-py2.5-linux-i686.egg
Processing dependencies for mercurial==1.4.1-4-8bce1e0d2801
Finished processing dependencies for mercurial==1.4.1-4-8bce1e0d2801
[user@web ~]$ hg version
Mercurial Distributed SCM (version 1.4.1+4-8bce1e0d2801)

Copyright (C) 2005-2009 Matt Mackall <mpm@selenic.com> and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Notice the version change from 1.2.1 to 1.4.1+4-8bce1e0d2801. w00t.

Edit: devov pointed out that pip is capable of installing packages directly from a project's version control repository. I've never used this functionality, but I'm interested in trying it out sometime! Thanks devov!
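I believe the syntax looks something like this for a Mercurial-hosted project (untested on my part, so treat it as a sketch):

$ pip install -e hg+http://selenic.com/repo/hg-stable#egg=mercurial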

Mercurial 1.4.1 Released

I just noticed that Mercurial 1.4.1 was released today. Most of the changes are pretty minor, but I wanted to voice my appreciation for a new extension that is included with this release: schemes.

This extension basically makes your life easier by shortening redundant URLs for you. For example, you can now use the following command to snag my simple Mercurial extensions repo from BitBucket:

hg clone bb://codekoala/hgext

Without hgext.schemes, that command would look something like one of the following:

hg clone http://bitbucket.org/codekoala/hgext
hg clone ssh://hg@bitbucket.org/codekoala/hgext

Not the most ground-breaking of extensions, but still pretty slick!
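Enabling the extension is just an hgrc entry. It ships with several schemes predefined (bb among them), and you can define your own under a [schemes] section; a sketch with a made-up host:

[extensions]
schemes =

[schemes]
work = ssh://hg@hg.example.com/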

Bash Time Saver

The other day I was helping a friend get their Django site back online, and I found myself doing a couple of very similar commands when backing up the databases:

$ pg_dump -U some_database -f some_database_backup.sql

I use Bash as my shell in Linux, and apparently it's capable of doing some neat string substitution. I wanted to try something I had seen a few days prior to save myself a few keystrokes. Here's what I was trying to do:

$ pg_dump -U some_database -f some_database_backup.sql
$ ^some_database^some_other_database

I had hoped this would replace all instances of some_database from the previously executed command. It turns out that the ^search-for^replace-with syntax only replaces the first occurrence. That means that my sneaky attempt at saving keystrokes just overwrote the first database's backup with the backup of the second database. In other words, I was getting this:

$ pg_dump -U some_database -f some_database_backup.sql
$ pg_dump -U some_other_database -f some_database_backup.sql

...when I wanted this...

$ pg_dump -U some_database -f some_database_backup.sql
$ pg_dump -U some_other_database -f some_other_database_backup.sql

I did a little more digging today, and it turns out that I should have been using the following syntax:

$ pg_dump -U some_database -f some_database_backup.sql
$ !!:gs/some_database/some_other_database/

The !! is shorthand for executing the previous command, and the :gs is used for global string replacement. And there you have it!

Out of curiosity, does anyone know of similar functionality in other shells?

OSX, Growl, And Subversion

Today I found myself trying to figure out how to keep a terminal window permanently on my desktop or Dashboard on OSX, similar to what I've done in the past with Linux. I just wanted to have the terminal window monitoring things in the background for me. Actually, all I wanted to do was keep track of when my local working copy of our Subversion repository was out of sync. I wanted a solution that would keep out of my way, but I also wanted it to be easy.

My search for a solution seemed to be over quickly when Google suggested a dashboard widget for the Terminal application. The problem was that the widget's download server was either dead or blocked by my company's Internet filter. One way or another, it wasn't long before I went in search of another solution.

At that very instant, I received a Growl notification from some program. That's when it dawned on me--I could tell Growl to tell me when my working copy was out of sync. I had done stuff like that in the past, so I set out to write my solution. This is what I came up with:

#!/bin/bash
MY_BOX=[my IP address]
DEV_ROOT='/path/to/svn/working copy'
cd "$DEV_ROOT"  # quoted, since the path contains a space

# Latest revision in the local working copy vs. the latest in the repository.
MY_REV=`svn log --limit 1 | awk '/^r/ {print $1}' | sed 's/[^0-9]//g'`
SVN_REV=`svn log --limit 1 -r HEAD | awk '/^r/ {print $1}' | sed 's/[^0-9]//g'`

if [[ $MY_REV != $SVN_REV ]]; then
    ssh username@$MY_BOX "growlnotify -s -d47111 -n 'iTerm' -t 'Out Of Sync' -m 'Your working copy is out of sync.  Repository is at revision $SVN_REV, and your working copy is at $MY_REV.'"
fi

Now, a little bit about my environment. As I've mentioned before, all of our development really takes place on Linux-powered virtual machines. We simply use our Macs as the system to interact with those virtual machines. That is why there's the ssh line in that script.

Basically, this script just checks the most recent revision in your local working copy. Then it checks the latest revision in the repository itself. It compares the two revision numbers, and if it finds a difference, it will SSH into my OSX box to send me a Growl notification. On the OSX side, I have Growl and growlnotify installed. Here's a summary of the options to growlnotify:

  • -s: make the notification sticky--don't hide the notification until the user specifically closes it.
  • -d47111: a unique identifier for the notification. This makes it so you can send the same message over and over, and Growl will update the existing notification with that ID instead of creating a new one each time.
  • -n 'iTerm': I believe this was supposed to be the "source" application. I don't remember right now.
  • -t 'Out Of Sync': The title for the notification.
  • -m 'Your working copy...': The message to send to my Mac.

This is a fabulous little reminder to me. I have it set up as a cronjob that runs every minute on my Linux-powered development virtual machine, as shown below. Hopefully this will help others!
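For reference, the crontab entry boils down to a single line (the script path here is hypothetical):

# run the sync check every minute
* * * * * /path/to/svn_growl_check.sh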

Automatic Config Replication With Mercurial

I've done a lot of neat things since I started my new job earlier this month. I'm really excited about the things I've learned and experimented with, and I would like to share some of the concepts with my visitors.

At work we use a lot of virtual machines in our individual development environments. Most of these virtual machines use very similar configuration settings, but the settings are not a standard part of the installation. That is because we build our virtual machines using the same installation tools that our customers would use. The configuration I'm talking about is just stuff specific to our development environment.

Creating and configuring these virtual machines is one of the first things my mentor showed me how to do on my first day on the job. He commented on how quickly I would probably learn all of the configuration tasks, because we tend to set up our development VMs several times a month. That was all fine and dandy, and I did get a pretty good feel for what needed to go into a development VM that first day.

However, after doing it so many times, I realized how much time I was spending just trying to get each VM set up just right. It wasn't hard to configure--it was just time-consuming. It wasn't long before I started thinking of ways to optimize the process.

One of the ideas I came up with, which seems to be serving my purposes perfectly, is that of using Mercurial to quickly and easily get the exact same configuration from one box to another. It also has the added benefit of keeping a history of the changes I make to my configuration as time goes on.

I won't go into exact detail on how I have things set up at work, but I would like to try to describe a similar scenario that should illustrate my goal just as well.

Getting Started

One of the first things I would encourage you to do is follow along. It will make the concept sink in much faster, and you will probably see other applications very quickly. Please note, however, that if you're following along exactly, it could be a very time-consuming process. I will be using 3 virtual machines as I write this, but you could just as easily use 5, 10, or 100,000. Likewise, you could eliminate the virtual machines altogether if you're in an environment with several physical computers.

One virtual machine will act as the "master" server, or the one that will be configured first. The other virtual machines will act as "slave" servers, which will simply receive configuration updates that happen on the master server. We will also modify this behavior to be a bit more interesting toward the end of the article.

Virtual Machines Galore!

First off, I will create some basic virtual machines using the net install version of Debian 5.0.3. I really only need to create 1 VM and then clone it a couple of times. I am willing to furnish my virtual machines to those who are interested in using them. I will install some additional software in the VM to make sure the demo works smoothly. Among the packages that I will install are:

  • Python
  • Mercurial
  • OpenSSH server

Initialize a Repository

Once I have all of that set up in my virtual machines, I will initialize a Mercurial repository on the master server to maintain the configuration files that I am interested in. Let's just use the /etc directory for the time being. There's a pretty good chance that most of our system-wide configuration will all be contained somewhere beneath /etc.

cd /etc
hg init

Now let's have a gander at the files that we can have Mercurial manage for us:

hg st

Wow! That is quite a set of files, isn't it? Thankfully, they should mostly be plain text files. Mercurial is very efficient at managing text files. Let's now add all of the files in /etc to our repository, so they can be tracked and easily pushed out to other systems.

hg add

That command will happily add everything that hg st printed. Obviously, we can get a little more picky about what we do and do not add to our repository, but that's not the goal of this article. Now, this step merely tells Mercurial that it needs to pay attention to changes in these files. The files have not yet been committed to the repo. Let's do that, so we have a backup of our configuration files in their pristine state:

hg ci -m "Initial import"

The -m "Initial import" is just a comment, to describe what happened to warrant a commit to the repository. It is for your use and the use of anyone who has access to your repo.

Clone The Configuration

Now let's try to push the configuration we just committed on the master server to one of the slave servers. Since my virtual machines are all essentially in the same state, there should be no conflicts, right? Try running the following command on the master server:

hg push ssh://root@slave1//etc
root@slave1's password:
remote: abort: There is no Mercurial repository here (.hg not found)!
abort: no suitable response from remote hg!

Blast! We can't simply push the configuration files out to another computer. For that to work, the repository itself would first have to exist on the slave server. Let's try this another way. On the slave server, run this command:

hg clone ssh://root@master//etc /etc
root@master's password:
abort: destination '/etc/' is not empty

Doh! Mercurial won't let us clone the repository from the master server! That's because Mercurial wants to clone to a new directory, with nothing already in it. One way to get around this hairball of a show-stopper is to just copy the repo using conventional UNIX utilities. Execute this command on one of your slave servers:

scp -r root@master:/etc/.hg /etc/

The .hg directory contains all of the repository information, and it's really all we need to snag in order to clone the repository. This might not be the most elegant solution in the world, but it will suffice for the time being. Once the scp command completes, we should have a full copy of the configuration file repository. Run this command to verify:

hg st

If your setup is anything like mine, you'll probably have a few files that are listed as being modified. Chances are that these files will vary from host to host anyway, and they are probably not worth keeping in a version control system. That would just be begging for conflicts.

I wrote an extension for Mercurial that should make this part of my tutorial a little less hacky. On your other slave server, run the following commands:

hg clone http://bitbucket.org/codekoala/hgext /root/hgext
echo "[extensions]" >> /root/.hgrc
echo "neclone = /root/hgext/neclone.py" >> /root/.hgrc

This extension gives you a new Mercurial command called neclone (N. E. Clone, or "not empty clone"). As we saw earlier, Mercurial doesn't let us clone a repository into a directory that is not empty. This extension allows us to do that. It works almost identically to the regular clone command... takes the same options and everything.

Still on your second slave server, run these additional commands:

hg neclone ssh://root@master//etc /etc
cd /etc
hg up -C

The last step is optional, and soon to be included as part of the extension. It will update your working copy to the latest revision in the repository. Beware that it overwrites any uncommitted changes you may have made to files that are tracked by Mercurial.

So now both slave servers should have a clone of the configuration repository from the master server.

Being Picky

Let's start to be a little picky about the files we are tracking in our repository. Some of the files that appear as modified on my slave server after copying the .hg directory from the master server are:

  • adjtime
  • alternatives/pager
  • alternatives/pager.1.gz
  • mailcap
  • network/run/ifstate
  • udev/rules.d/70-persistent-net.rules

I think it's safe to remove these from the repository to avoid conflicts with other systems. To tell Mercurial to stop tracking a file without actually deleting it from the filesystem, you can use the hg forget command:

hg forget adjtime
hg forget mailcap

And so on. Go ahead and do that for each of the files that appeared to be modified on your slave server immediately after copying the .hg directory. I'm going to add /etc/hostname to the list of files to forget too.
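As a side note (not something we do in this walkthrough), you could also list host-specific files in /etc/.hgignore so they stop showing up as untracked in hg st output later; a sketch:

syntax: glob
adjtime
mailcap
hostname
network/run/ifstate
udev/rules.d/70-persistent-net.rules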

After doing that, each of those files should appear as being marked for removal when you run hg st. Don't worry, this is normal. The files will not be deleted from the filesystem, but they will be deleted from the repository. Go ahead and commit those changes to the repository on your slave server.

hg ci -Am "Removed some files from version control"

Now let's push those changes out to the master server:

hg push
abort: repository default-push not found!

Since we copied the .hg directory directly using scp, our slave won't know where the changes need to go when we run the push command with no explicit destination repository. To fix that, let's create a file in /etc/.hg/ called hgrc on the slave server. In that file, put the following text:

[paths]
default = ssh://root@master//etc

The hg push command should now push directly to the master server. Yay! The problem we face now is that every other slave server in the group is out of date. How can we fix that? We'll use Mercurial hooks.

Automating Config Replication

Mercurial offers some very useful hooks that we can use to automatically push configuration changes out to each of our slave servers. We will use the commit and changegroup hooks to do the magic. Let's create a script that will live on the master server to take care of pushing our changes out to each slave server. Create a new file in /etc/ on the master server called propagate.sh:

#!/bin/bash
# Update the master's own working copy, then have each slave pull in the
# new changesets and update its files.
hg up
for node in 'slave1' 'slave2'
do
    ssh root@$node "cd /etc; hg pull -u"
done

Let's also make sure this script is executable:

chmod +x /etc/propagate.sh

This script assumes that your /etc/hosts file or your nameserver is configured appropriately to resolve slave1 and slave2 to IP addresses. The reason we're SSH'ing into each slave server and using hg pull instead of simply using hg push ssh://root@$node//etc is that you can't force an update on a remote server using push. You can, however, request an update when you're using pull.

Obviously, this script is not the most sophisticated of scripts. It might work well for my demonstration, with only a few servers, but once you get beyond that it would be a nightmare to maintain the list of servers the script has to connect to. You can use whatever means you'd like to keep track of the servers you want to replicate your configuration to. I don't want to bother with all of the crap I'd get for suggesting one thing over another, so it's now your call.

Now it's time to configure the Mercurial hook to execute that script when the master server sees a changeset get into its repository. Open up /etc/.hg/hgrc on the master server, or create it if it doesn't exist. Make sure it has at least the following in it:

[hooks]
commit.propagate = /etc/propagate.sh
changegroup.propagate = /etc/propagate.sh

Let's try it out! Run these commands on your master server:

echo "" >> /etc/hosts
hg ci -m "Added a blank line to the hosts file"
root@slave1's password:
remote: Permission denied, please try again.
remote: Permission denied, please try again.
remote: Permission denied (publickey,password).
abort: no suitable response from remote hg!
Connection closed by slave2
warning: commit.propagate hook exited with status 255

Blast! The script failed because it wanted us to type in a password, but it was not in interactive mode. Let's fix that with a little preshared key magic. I won't go into the details about how this works, but the following commands on your master server should get us rolling:

ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
scp -r ~/.ssh root@slave1:~
scp -r ~/.ssh root@slave2:~

Warning

Keep in mind this is not secure and should probably not be how your production machines are configured, especially with the root user.

For simplicity's sake, just accept all of the defaults and don't set a passphrase. These commands enable us to SSH into our slave servers without using a password. If you get an error such as:

remote: Host key verification failed.
abort: no suitable response from remote hg!

...it just means you need to manually log into your master server from the slave machine that threw that error. When doing so, you will have to answer "yes" to a question about the authenticity of the host you're logging into.
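A quick way to prime the host key from each slave is to run a throwaway SSH command and answer "yes" when prompted (master being the hostname from earlier):

$ ssh root@master true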

Testing It Out

It is now time to see if we can make a configuration change on one slave server and have it show up on the other slave server. Let's update the hosts file a bit by adding the following line on the second slave server:

10.0.0.5        nonexistanthost

Now let's commit the change and push it off to the master server:

hg ci -m "Added a dumb line to the hosts file"
hg push

My system actually told me that it had copied the change out to another host. I know because I saw these lines:

remote: pulling from ssh://root@master//etc
remote: searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

Now when I look at the first slave server, I should see that new line in my /etc/hosts file. Also, the log on each server should have the same entry that I just made about adding "a dumb line to the hosts file."

Seem Like A Lot of Work?

A lot of what we just did probably seemed like more work than it is worth, right? Well, being a nerd typically comes with a few qualities. One quality I have observed many a time in my most geeky of friends is that they will spend hours and hours up front on a program or script just so they can save two minutes in the future. They work hard to be lazy.

There is a lot of boilerplate configuration that takes place in this particular scenario. I realize that. What I haven't shared with you, though, is how I automated the boilerplate configuration as well as the propagation of configuration. I'm tired of putting this article off, so I will have to leave those details for another article. Sorry!

Why?! There's a Better Way (tm)

There is always a better way. Always. Go ahead and use whatever you feel is the most efficient method for keeping configuration files in sync across several computers. This is just one more option to add to your toolkit. Don't worry, I won't be offended if you don't like it or don't use it. It works perfectly for me, it's free, and I just wanted to share!

Recent Events

This entry is more of a personal nature than technical. I owe my visitors an explanation for the lack of activity on my part for the past few months. I plan on describing what has been occupying so much of my time, and also addressing what I plan to be doing to occupy my time in the near future.

First of all, our little baby boy was born last month, on the 8th of August. It has been a treat to have him finally join us, and I spend quite a bit of time with him. I currently work from home, so just about any chance I get I run over and play with Logan for a few minutes. It's great!

Much of the past month and a half or more has been spent adjusting to the new lifestyle with an infant and working from home. It's very easy to get distracted. Another part of the past several months has been spent searching for new employment.

I graduated from Brigham Young University-Idaho in April 2009 with a BS in information systems and a minor in accounting. My wife and I had been looking at jobs since before I graduated. With the arrival of our son, the quest became a bit more urgent (we were uninsured). It got to the point where it wasn't unusual for me to apply to between 5 and 10 jobs a night. After more than 6 months of searching and applying to probably close to 100 jobs, things finally got a little more interesting.

A couple of weeks ago, I began doing my traditional job search/application routine. I saw a posting for a Python developer position, and the description sounded right up my alley. My first thought was, "Oh, I'll never get that--it's too good to be true. It sure can't hurt to apply though!" So I did. I shot my resume off to the specified e-mail address and went on looking for other new positions.

I was looking at jobs until about 5am that night (starting at about 8pm the previous day). My wife finally convinced me that I needed to get some sleep. I woke up again at 9am, and I couldn't go back to sleep for some reason. I had an unsolicited email from some contract position in Arizona in my inbox when I woke up. I kindly wrote back, informing the individual that my wife and I were more interested in permanent options. Then I got out of bed and described that exciting part of my morning to my wife.

A few minutes later, I looked over at my computer, and I noticed that I had a new e-mail in my inbox. I decided to take a closer look. To my surprise, it was a response from the Python developer position I had applied to a mere 13 hours earlier. The message was from the company's CTO, and he wanted to make sure I understood that the job would require relocation. My response informed the CTO that my wife grew up in the area, and that we were definitely willing to relocate (especially considering that this was the first time anyone had gotten back to me about a full-time, permanent position).

Only a couple minutes after sending off that reply, the CTO wrote back again to set up a phone interview. Having only been awake for just over two hours, after a 4-hour nap, I was very excited about the progress of my day thus far. The phone interview seemed to go very well. It lasted a solid 40 minutes. I actually really enjoyed the interview, which is really saying something because I abhor telephones. But my interviewers were very nice and easy to talk to.

The interview consisted of several questions to gauge my understanding of various things, including Linux, Python, and MySQL. The questions were all very fun for me. Thankfully, I was able to respond relatively clearly, or at least clear enough for those on the other end of the line to get a decent feel for what I know.

Only a few minutes after the interview concluded, I received another phone call from the company. They set me up with a flight for the following Monday (the interview took place on a Friday), and booked a king suite in a fantastic hotel for me to stay in. This was all within about 16 hours of sending my resume to the company.

I flew out of Idaho Falls on Monday afternoon and arrived at the hotel around 10pm. I set three alarms (I'm a deep sleeper) to make sure I'd wake up with plenty of time to make it to the office, chatted with my wife for a bit, and passed out. The bed was amazing after being on an airplane for half a day.

Tuesday morning, I headed out to the office at about 9:45 for my 10am meeting. I arrived a little early and had enough time to meet some of the great people there. It wasn't long before I found myself in the CTO's office, starting the real interview process.

We spent the rest of that morning and several hours of the afternoon in a grill session. The CTO and lead developers grilled me all over the place, asking very interesting questions. I felt like I had wasted their time and money, because I could not formulate very acceptable answers to several of the questions. There were several questions that I had to answer flat-out, "I don't know." It was a very stressful morning.

I think if I were in their shoes, I probably would have booted me out the door. Though, I do have to say, I feel like I learned more about Python and MySQL from those few hours with the developers than I had learned in a very, very long time. These folks are very intelligent.

After all of that, the CTO, the two lead developers who were grilling me, and I all went out to eat. We ate some delicious food, had some fun conversations, and then started to talk about the benefits package that the company offers.

After lunch, we returned to the office, played a little guitar, and then the CTO called me into his office. He handed me an offer. It was far more generous than my wife and I had hoped, so I accepted it on the spot. He and I discussed a few things pertaining to my new job, and he then dropped me off at my hotel. I spent the rest of the evening trying to wrap my mind around what had just happened and sharing the news with all of my friends and family.

I begin work for ScienceLogic, LLC as a Software Architect on the 5th of October. I immediately submitted my official 2-week notice when I arrived back in my hotel room. When I returned home, my wife and I had to scramble to get everything set for how and when we were going to move. We are going to be driving out to Virginia on Tuesday, the 29th of September. It's about a 2,200-mile drive.

So. There you have it. We had a baby, I got a new job, and we're in the process of moving. Oh, and one other exciting bit of news. Packt Publishing has asked me to review one of their Django books! I have received the book, and I will read it and post my review as soon as things settle down a bit more. Stay tuned!

Even More New VIM Fun

It appears that my last article generated a bit of interest! I had one good friend share some enlightening tips with me after he read about my findings. He opened my eyes to the world of abbreviations in VIM. Oh boy, oh boy!!

I'll pretty much just continue today's earlier post, by showing another (super awesome) way to insert blocks into your Django templates. If you were to enter this line into your ~/.vimrc file...

ab bl <Esc>bi{% block <Esc>ea %}{% endblock %}<Esc>h%i

...you would be able to type this...

content bl

...while you're in insert mode. As soon as you hit your spacebar after the bl, VIM would turn it into...

{% block content %} {% endblock %}

...and put you in insert mode between the two tags. Perfect!

For those of you who are interested in understanding the abbreviation, first it will take you out of insert mode (<Esc>). Then it jumps to the beginning of the word before the bl abbreviation and drops back into insert mode (bi). Next it inserts {% block and hops out of insert mode long enough to move the cursor to the end of the word you typed before the bl abbreviation. It finishes inserting {% endblock %}, gets out of insert mode, goes back a character (to account for the space you typed after bl) (h), moves the cursor to the matching opening brace for the } at the end of {% endblock %} (%), and finally puts you back into insert mode (i). Whew! It's all the same old VIM stuff, just packed into one uber-powerful abbreviation!

Thanks to Jonathan Geddes for the guidance on this! I find this method to be superior to the one I posted about earlier because you don't even have to leave insert mode to get the block boilerplate inserted into your templates!

New Fun with VIM

Hey everyone! I apologize for my lack of writing lately. Our baby boy was born on August 8th, and I'm still trying to catch up to everything from April :)

However, today I figured out some neat magic with VIM that I just have to share. Part of it is also so I don't forget it in the future! That's how awesome it is.

As you well know, I'm a big fan of Django. And I'm a relatively recent convert to VIM. One thing I find myself doing all the time when developing a Django site is adding new blocks to the templates. Typing the "boilerplate" code for these blocks is easy, but it takes time.

Today, while tinkering with VIM, I figured out a way to automate the insertion of the boilerplate code:

nnoremap <Leader>b i{% block  %}{% endblock %}<Esc>16hi

Inserting that in your ~/.vimrc will allow you to type \b in normal mode while in VIM, which will insert the text:

{% block  %}{% endblock %}

...and move your cursor to where you would type in the block's name. Finally, it puts you straight into insert mode so you can immediately type the block name and get to work. Alternatively, if you prefer to have the opening and closing block tags on separate lines, you could use:

nnoremap <Leader>b i{% block  %}<CR>{% endblock %}<Esc><Up>$2hi

This would insert:

{% block  %}
{% endblock %}

...and move your cursor to the same place as the other one, so you could start typing the block name. If you'd like to have access to both, you could change the letter that comes after <Leader> to whatever key you'd like. Pretty fancy stuff, eh?

I love it! I'm sure there are other, perhaps better, ways to accomplish the same thing. I am just excited that I figured it out on my own :)

Enjoy!