Big Day in My Career

To prefix this post, I would just like to make sure that you are aware of just how big a nerd I really am. I've probably got 7 computers lying around my apartment. I dream about programming (in fact, I've solved some frustrating programming problems in my sleep). The other day I had a conversation with a good friend of mine about how much faster and more efficient it is to use the keyboard for various things as opposed to moving a mouse around and clicking on things. That's just a taste.

Anyway, for several years, I've wanted to contribute something--a fix, a new feature, etc--to at least one open source project. The problem is that I've never really found anywhere to contribute amongst the programs I actually use. Either I didn't know how to accomplish something or I just didn't see that anything could be improved. It was quite frustrating.

Yesterday I was going along, doing my regular work, when I encountered a problem in an e-commerce framework called Satchmo. This problem made the website I was working on blow up. I couldn't successfully complete an order. At first I had no idea where the problem was. Eventually, I figured out a way to find what part of the framework was causing problems. I took a peek at the code and saw what seemed to be a solution. I made the change and all of a sudden I could complete orders on the website!

I was so stoked! I created a patch from the change that I made to the code. Then I opened a ticket on Satchmo's issue tracking system, described the problem briefly, attached my patch, and went on working. A few hours later, I got an email from the issue tracking system, saying that my patch had been accepted and applied to the codebase!

Finally!!! After all these years! I am an official contributor to an open source project. It feels good. Hopefully this is the first of many contributions to come.

Here It Is

Alright, alright... the idea of maintaining a blog has always seemed somewhat cool and somewhat pointless to me at the same time. Most of the time, I have no idea why people think blogs are so important. Other times, however, I find them to be an invaluable resource. Being a nerd, I love to learn. There are oh-so-many day-to-day situations, though, that force me to stunt my own opportunities to grow my knowledge. Thanks to deadlines and lack of funding, my jobs almost always seem to require "just the basics." This provides absolutely no opportunity for me to learn and grow.

I think that a lot of people are put in the same sort of situation all the time. That's when it seems like a good idea for certain creative individuals to keep a blog. It's a way for people to discuss the progress and findings they make on their own time. We can learn about a lot of interesting (albeit often useless) ideas from blogs. I hope that my ramblings on this blog will be of some use to various individuals around the world.

How To Compile and Install a 2.6.x Series Linux Kernel

The Linux kernel is the core component in any Linux distribution. Without a kernel, your computer would be essentially useless. It is the piece of software which allows interaction between you, your computer's applications, and your computer's hardware. With such a powerful role in your computing experience, it is important to keep your kernel up-to-date. Each new release provides more hardware support and many performance enhancements. It is also important to keep your kernel up-to-date for security purposes.

Let's upgrade our Linux kernels together. I will walk you through each of the steps I take, from beginning to end, to upgrade my kernel. Just as a warning, I prefer to do the whole process on the command line, so you might want to pull up a terminal, konsole, xterm or whatever you prefer to use for your command line operations.

First you need to download the kernel source code. Many Linux distributions provide specialized editions of the Linux kernel. Typically, you don't want to manually compile and install a custom kernel for these distributions. This does not mean that you can't; it simply means that you might be better off using the "official" kernels for your distribution, which can usually be obtained through your distribution's package manager. You can get the official, 100% free, and complete Linux kernel source code from http://www.kernel.org/. Look for "The latest stable version of the Linux kernel is:" and click the F link on the same line. Currently, the latest stable version is 2.6.20, and that's what I'll be using for this tutorial. Please note that commands which begin with a dollar sign ($) are executed as a regular user and commands beginning with a pound sign (#) are executed as a superuser.

$ cd /home/user/download
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.20.tar.bz2

Now log in as the superuser and navigate to the /usr/src directory. Then extract the kernel source into that directory.

$ su -
# cd /usr/src
# tar jxf /home/user/download/linux-2.6.20.tar.bz2

You probably already have a symlink or shortcut called linux which points to your most recent kernel. If you do, delete the link and create another link to the new source tree. Then go into your kernel source tree.

# rm /usr/src/linux
# ln -s /usr/src/linux-2.6.20 /usr/src/linux
# cd /usr/src/linux

I like to identify each compile of my kernel uniquely, to make sure that I'm using the right one. To do that, you have to modify your Makefile:

# vi Makefile

You will see the following lines, or something similar, at the very top of the file:

VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 20
EXTRAVERSION =
NAME = Homicidal Dwarf Hamster

Change the EXTRAVERSION property to something you want to use to identify this kernel. I will use -jcv1:

EXTRAVERSION = -jcv1

The rest of the Makefile should be fine. In fact, I discourage editing Makefiles unless you know what you're doing. This next step is totally optional, but I like to do it to save some time. You can copy your existing kernel's configuration file in order to have a very similar kernel configuration. My previous kernel version was 2.6.19.1, so this is the command I use:

# cp /usr/src/linux-2.6.19.1/.config /usr/src/linux/

Then I run make oldconfig or make silentoldconfig to update the old configuration file so it can handle the newer kernel's features. If you use oldconfig, you are asked whether or not you want each new feature included in your kernel, whereas silentoldconfig will use the defaults chosen by the kernel developers (they usually know best) and ask for minimal input. Let's update our configuration file and then customize it by running make menuconfig (there are several options here, such as make xconfig and make gconfig, but I prefer the text-based menuconfig; there is another, make config, which walks through each and every option available--it's scary).

# make silentoldconfig
# make menuconfig

menuconfig is a menu-driven, text-based interface which lets you navigate the features offered by the kernel. Each computer is considerably different from the next, so it really does no good to provide a list of things that I tweak. However, it is important to note what some of the symbols in the menuconfig utility mean:

  • M = Module. Modules are loaded when they are required and can contribute to the speed of your system.
  • * = Built into the kernel. These are typically things which are necessary for your machine to function properly, such as support for your root file system.
  • X = Exclusively selected. You'll see this when you select what type of processor you have, for example.

One thing to note before we go further: MAKE SURE YOUR KERNEL HAS BUILT-IN SUPPORT FOR YOUR ROOT FILE SYSTEM!!!! My root file system is reiserfs. In my configuration, I made sure that reiserfs was marked with a star. If you don't do this, your kernel won't boot and you will be very frustrated. Trust me.
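
If you want to double-check before you compile, you can search the configuration file directly. For a reiserfs root like mine, the line below should end in =y (built in); =m means it was only built as a module, which will not do for the root file system:

# grep 'CONFIG_REISERFS_FS=' /usr/src/linux/.config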

Your computer is probably quite different from mine, so you might want to just poke around and see if you recognize things that deal with your computer's hardware. Once you are done tweaking your kernel configuration, exit the configuration utility and make sure the configuration is stored in /usr/src/linux/.config.

Next we get to build and install the kernel. After that, we have to add an entry to our boot manager so that we can try out our new kernel. The compilation part usually takes just about half an hour on my 2.2GHz Turion 64 processor with 1.25GB of RAM. It takes about 6 hours on my 300MHz Pentium II with 32MB of RAM. Let's find out how long it takes for you to compile your kernel!

# time make
...
real    27m29.663s
user    23m34.476s
sys     2m56.575s

Now let's install the modules and install the appropriate files in the boot area:

# make modules_install
# make install

This is the part that always used to mess me up. I use Slackware Linux, which is more UNIX-ish than most distributions. It's actually the oldest surviving Linux distribution to date, but that's another story. For some reason, the make install command doesn't always work with Slackware. There is a process I use to set up my boot directory when I compile a new kernel. I wrote a simple shell script called fixkernelinstall to take care of it for me:

#!/bin/bash
# Configure my computer for a new kernel
# Author: Josh VanderLinden
# Assisted By: Dan Purcell

# if the user didn't supply a kernel number, ask for it
if [ $# -eq 0 ]; then
    echo -n "Kernel: "
    read kernel
else
    kernel=$1
fi

# determine root partition
echo "Determining root partition..."
rootpart=`mount -l | grep ' / ' | cut -f 1 -d\ `
echo "Root partition is $rootpart"

# copy kernel configuration file
cp /usr/src/linux/.config ./config-$kernel

# now rename everything
echo "Renaming files..."
mv System.map System.map-$kernel
mv vmlinuz vmlinuz-$kernel

# if the config file exists and it's a symlink, remove it
if [ -L 'config' ]; then
    echo "Removing link to configuration file"
    rm config
else
    # otherwise it might be important
    echo "Renaming configuration file"
    mv config config.bak
fi

# Link files
echo "Creating symlinks..."
ln -s System.map-$kernel System.map
ln -s config-$kernel config
ln -s vmlinuz-$kernel vmlinuz

# Update lilo
echo "Adding entry to /etc/lilo.conf for $kernel"
echo "image = /boot/vmlinuz-$kernel" >> /etc/lilo.conf
echo "  root = $rootpart" >> /etc/lilo.conf
echo "  label = $kernel" >> /etc/lilo.conf
echo "  read-only" >> /etc/lilo.conf
echo "Linux kernel $kernel has been configured."
echo "Please check your lilo configuration and run lilo before rebooting"

I'm not an expert on shell scripts, so please feel free to offer suggestions for doing things better if you know how. This script uses the kernel version (given by the user) to set up my /boot directory properly. In my case, I run the script like so:

# cd /boot
# fixkernelinstall 2.6.20-jcv1

And the output is something like:

Determining root partition...
Root partition is /dev/hda5
Renaming files...
Renaming configuration file
Creating symlinks...
Adding entry to /etc/lilo.conf for 2.6.20-jcv1
Linux kernel 2.6.20-jcv1 has been configured.
Please check your lilo configuration and run lilo before rebooting

As you can see from the script, I use LILO instead of the arguably more popular GRUB. Either one works for me, but LILO is sufficient for my needs. If you want to use the same kind of script for a GRUB installation, just change the LILO part at the end to something like:

echo "Adding entry to /boot/grub/menu.lst for $kernel"
echo "  title Linux on ($rootpart)" >> /boot/grub/menu.lst
echo "  root (hd0,4)" >> /boot/grub/menu.lst
echo "  kernel /boot/vmlinuz-$kernel root=$rootpart ro vga=normal" >> /boot/grub/menu.lst

Make sure you change the line with root (hd0,4) to fit your setup. With GRUB, you don't have to worry about applying changes to see the menu entry at boot. It's automatically there. With LILO, however, you have to actually apply changes each time you make them. You do this by running the lilo command as the superuser:

# lilo
Added Windows
Added Linux
Added 2.6.20-jcv1 *

The star (*) signifies the default kernel to boot. Make sure that your root partition is correctly specified in your boot loader configuration. My root partition is on /dev/hda5, but yours may be (and probably is) on a different partition. If you fail to specify the correct root partition, your system will not boot that kernel until the configuration is fixed. GRUB makes this a lot easier than LILO.

And this is the point where you start to cross your fingers and hope that your computer doesn't blow up... We get to reboot and hope that our new configuration plays well with our hardware. So, let's do that! See you in a few minutes (hopefully).

# shutdown -r now

So here I am, back on Linux on my freshly-rolled kernel. I hope you are as successful as I have been this time around. Keep in mind that you have to reinstall custom kernel modules if you installed others while you were on your other kernel. For example, I use ndiswrapper to access wireless Internet. I have to recompile and reinstall the ndiswrapper module and device drivers before I can use wireless. Likewise, I have VMWare Server on my laptop, which installed special modules. I have to run vmware-config.pl to reconfigure VMWare Server for my new kernel before I can run any virtual machines.
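
For ndiswrapper, the rebuild boils down to something like the following. This is only a sketch; it assumes the ndiswrapper source tree I originally built from is still unpacked in /home/user/download/ndiswrapper-1.x (the path and version are placeholders, so adjust them to whatever you actually have):

# cd /home/user/download/ndiswrapper-1.x
# make
# make install
# depmod -a
# modprobe ndiswrapper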

To summarize, here are the commands that I used in this tutorial. Remember that lines beginning with a dollar sign ($) are executed as a non-privileged user, while lines beginning with the pound sign (#) are executed as the superuser (root).

$ cd /home/user/download
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.20.tar.bz2
$ su -
# cd /usr/src
# tar jxf /home/user/download/linux-2.6.20.tar.bz2
# rm /usr/src/linux
# ln -s /usr/src/linux-2.6.20 /usr/src/linux
# cd /usr/src/linux
# make clean
# vi Makefile (to change EXTRAVERSION to -jcv1)
# cp ../linux-2.6.19.1/.config .
# make silentoldconfig
# make menuconfig (just to ensure settings were good)
# time make
# make modules_install
# make install
# cd /boot
# fixkernelinstall 2.6.20-jcv1
# vi /etc/lilo.conf (to make sure things were good)
# lilo
# shutdown -r now

I hope that you are able to use this tutorial to successfully install or upgrade your kernel. Good luck! Any comments or suggestions are welcome!

Use Your Linux System With A Broken Bootloader

Here are some simple steps that will let you make small changes to a system whose bootloader is broken, which could end up saving you a lot of time and possibly a reinstall of your Linux system.

  1. Acquire a Linux liveCD of your choice. I prefer SLAX, as it is very small and quick. There are countless others such as KNOPPIX, ZenWalk, Ubuntu, Fedora Live, and DSL. Each of these would easily be up to the task.
  2. Boot the live CD and get to a command line somehow. If you are in a GUI environment, this means starting a Terminal or Konsole session. If you are booted into a classic Linux login prompt, you're good to go.
  3. Make sure you're the root user (you can do this by typing whoami). If you're not root, type su - and enter the root password when prompted.
  4. Mount your linux root partition. These steps may vary depending on preference, but they should be fairly similar:
    1. cd /mnt
    2. mkdir myroot
    3. mount /dev/hda5 /mnt/myroot
    4. cd /mnt/myroot
  5. Depending on the complexity of the tasks you wish to perform, you may need to make the devices and kernel interfaces your live CD detected available inside the new root:
    1. mount --bind /dev /mnt/myroot/dev
    2. mount -t proc /proc /mnt/myroot/proc
  6. Finally, switch into your installed system, specifying where your root partition is mounted and the shell you wish to use:
    1. chroot /mnt/myroot /bin/bash

At this point you should be able to do several useful things, such as reinstall your bootloader into the MBR. You can also edit hosed configuration files this way.
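
For example, reinstalling the bootloader from inside the chroot usually comes down to a single command. A quick sketch, assuming your boot drive is /dev/hda (change it to match your machine):

# lilo                    (if you use LILO; it reads /etc/lilo.conf)
# grub-install /dev/hda   (if you use GRUB)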

If you need to add Windows into your boot configuration, and you use LILO, you could try adding something like this to your /etc/lilo.conf:

other = /dev/hda1
    label = Windows
    table = /dev/hda

(be sure to run lilo again to save the change)

When you are all done with your chroot environment, simply type exit or logout or what have you. Then be sure to unmount anything you may have mounted in order to use the system.

Format Large Drives as FAT32 in Windows

Have any of you ever purchased a drive larger than 32GB that you wanted to be able to plug into a Mac, Windows, and Linux machine at any time, being able to both read and write on all of them? Did you ever try to format such a drive in Windows, only to find out that Windows will only let you format large drives as NTFS, which is not always writable outside of Windows?

I recently experienced this problem, so I decided to find a way to make Windows format my 120GB drive as FAT32, which basically any platform can read from and write to. A bit of searching pointed me to Ridgecrop Consultants.

They have provided a simple, fast, and efficient application which will format any size drive as FAT32 in Windows. You can download it from the source link at the bottom of this post.

Once you download the file, extract the fat32format.exe file from the archive and put it someplace easy to remember, like C:\. Then open up a command prompt (Click Start > Run... and type cmd in the box). Navigate to the location where you extracted the program and type something like:

fat32format d:

Where d: is the Windows drive letter of the drive you want to format. A couple of seconds later, you should have a fully FAT32-formatted drive!

Source: http://www.ridgecrop.demon.co.uk/index.htm?fat32format.htm

How to Upgrade a NetBeans Project to JDK 6

NOTE: I am currently using NetBeans 5.5.

Here are the simple steps:

  1. Download and install JDK 6 if you haven't already
  2. Add the JDK 6 platform to NetBeans
    1. Tools > Java Platform Manager
    2. Add Platform
    3. Navigate to JDK 6's installation root directory. It will probably have a fancy overlay icon over it.
    4. Click Next
    5. Give the platform a name and continue onward
  3. Open a project in NetBeans
  4. Right-click on the project and open its properties
  5. Open the Libraries item on the left
  6. Change the Java Platform to your new 1.6
  7. Open the Sources item on the left
  8. Change the source level at the bottom to 1.6
  9. Enjoy!

There really are quite a few nice features in this release. So far I've found time to play with the built-in JTable sorting methods and the system tray icon stuff. It's so much easier than it used to be!!

Disable Page Navigation with Horizontal Scroll

The following information was snagged from http://gentoo-wiki.com/HARDWARE_Synaptics_Touchpad#Horizontal_Scroll_Issues_with_Firefox

I was looking for this information for a long time and couldn't find it. I finally found a reference to it in a cached version of the Gentoo wiki page linked above.

The goal is to keep Firefox from misinterpreting the horizontal scroll as 'back' and 'forward'. For many people like me this is irritating, because you are reading a webpage and by moving the mouse you accidentally go to another page.

Some forums suggest disabling horizontal scroll entirely (editing xorg.conf to set Option "HorizScrollDelta" "0"), but the correct way is to configure Firefox so that it doesn't misinterpret the horizontal scroll. In Firefox, type the following into the address bar:

about:config

and double-click on the line:

mousewheel.horizscroll.withnokey.action

to set it to 0. And then also set

mousewheel.horizscroll.withnokey.sysnumlines

to true.
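
If you would rather keep these settings in a file than click through about:config, the same two preferences can be appended to the user.js file in your Firefox profile directory. This is just a sketch; the xxxxxxxx.default profile directory name is a placeholder for whatever yours is actually called:

$ cd ~/.mozilla/firefox/xxxxxxxx.default
$ echo 'user_pref("mousewheel.horizscroll.withnokey.action", 0);' >> user.js
$ echo 'user_pref("mousewheel.horizscroll.withnokey.sysnumlines", true);' >> user.js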

Disable Restore Session in Firefox 2.0

If any of you get annoyed by this feature, there actually is a way to disable it.

  1. Enter your Firefox configuration mode by entering about:config into your address bar.
  2. Right click anywhere in the list of properties and select New > Boolean
  3. Make the name of the new boolean property browser.sessionstore.enabled
  4. Set the value to false

Once you do that, you should be set! Enjoy!
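
If you prefer a file-based approach, the same preference can go into user.js in your Firefox profile directory (again, the profile directory name below is only a placeholder):

$ echo 'user_pref("browser.sessionstore.enabled", false);' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js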

MySQL on Slackware

Many of us who have installed Slackware on our machines in the past few years have noticed something annoying on the first boot: the MySQL service fails to start!!

In response to this, I would like to offer this simple tutorial. Right now I am doing this blindly, meaning I don't have a fresh install to work with. Please bear with me if you notice some errors in the tutorial, and please tell me about them!

  1. Log in as root: # su -
  2. Make sure the mysql package is installed. You can do this by running pkgtool and selecting the view option. Hit n and it will take you to the packages in the list that begin with n. MySQL should be right above the n packages. If you don't have mysql already installed, you can download it from http://www.linuxpackages.net/ or wherever you'd like. Once you have a copy of it, install it by using the installpkg mysql......tgz command
  3. Setup the databases: # mysql_install_db
  4. Apply the proper permissions: # chown -R mysql.mysql /var/lib/mysql
  5. Start the database: # mysqld_safe &
  6. Setup the root user:
# mysqladmin -u root password 'newpassword'
# mysqladmin -p -u root -h localhost password 'newpassword'

Finally, test your installation:

# mysql -p
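
If you want to be extra sure the server is usable, you can also run a quick throwaway query; the database name here is made up:

# mysql -p -e "CREATE DATABASE scratch_test; SHOW DATABASES; DROP DATABASE scratch_test;"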

That should do it! Please comment if you notice any errors with this posting.

Linux Basics

Filesystem

  • /bin - This is where basic Linux commands reside (ls, du, dd, cp, etc).
  • /boot - Your boot images are stored here.
  • /dev - Links to access your machine's devices.
  • /etc - Configuration files and boot scripts.
  • /home - User directories, equivalent to "Documents and Settings" in Windows XP.
  • /lib - System libraries, codecs, etc., similar to Windows/System and Windows/System32.
  • /mnt, /media - Mount points. A mount point is a directory through which the contents of your hard drives, CD/DVD drives, floppy drives, or jump drives are made accessible.
  • /opt - Optional packages and programs. Could be thought of as a "Program Files" directory.
  • /proc - Special dynamic information about your system.
  • /root - System administrator home. Could be thought of as a "Documents and Settings/Administrator" directory.
  • /sbin - Super-user binaries. These programs need super-user (root) privileges to execute.
  • /tmp - Temporary files. Every user usually has read, write, and execute permissions here.
  • /usr - The main place for programs to be installed. Most like "Program Files" in Windows.
  • /var - System logs, mail spools, default web server directory, databases, etc...

Basic Commands

  • cd - Change Directory: moves to a different directory.

    Usage: cd directory, cd .., cd /directory

  • cp - CoPy: Copy a file or directory. If you wish to copy recursively and retain all attributes associated with the file or directory, use the -a option.

    Usage: cp original original.backup, cp -a /home/user/directory /home/user/backup

  • df - Disk usage on Filesystem: Display an overall summary of disk usage on mounted mountpoints. If you want human-readable sizes, use the -h option.

    Usage: df, df -h, df /mnt/mountpoint

  • du - Disk Usage: Display the disk usage of each file (recursively, by default) in the current directory. If you want human-readable sizes (1024 bytes = 1KB, 1024KB = 1MB, etc), use the -h option. If you want a summary of the total disk usage by a directory and everything inside, use the -s option.

    Usage: du, du -s, du -h, du -sh, du -s /directory

  • ln - LiNk: Create a link, or shortcut, to a file or directory. I prefer to do symlinks by using the -s option.

    Usage: ln original link, ln original /directory, ln original /directory/link, ln -s original /directory/link

  • ls - LiSt: lists the contents of a directory.

    Usage: ls, ls .., ls /directory/subdirectory

  • man - View the MANual page for a program or other file. Probably the most useful program ever.

    Usage: man program, man xorg.conf

  • mkdir - MaKe DIRectory: create a new directory/folder.

    Usage: mkdir dirname, mkdir /directory/newdirname

  • mv - MoVe: Move a file or directory to a new location, or rename a file or directory.

    Usage: mv file /directory/newhome, mv file newfilename

  • pwd - Print Working Directory: returns the full path of the directory in which you are working.

    Usage: pwd

  • rm - ReMove: Remove a file or directory. If you want to get rid of a directory and all of its contents, use rm -R or rm -Rf for recursive deletion.

    Usage: rm filename, rm /directory/filename, rm -Rf /directory/dirname

  • rmdir - ReMove DIRectory: remove a directory. The directory must be empty.

    Usage: rmdir dirname, rmdir /directory/dirname

  • whereis - Determine where a certain file exists (if it's in your path)

    Usage: whereis filename

  • whoami - Determine which user you are currently logged in as

    Usage: whoami

Linux Permissions

Linux has a great permission scheme. Since its inception, three basic levels of security have existed: user, group, and everyone. A simple way to change the permissions on a file or directory is to use the chmod, or CHange MODe, command. Changes to the permissions can be either a symbolic representation or an octal number representing the bit pattern for the new permissions. I prefer the symbolic method, myself, but many others prefer to see the octal pattern.

When working with permissions in Linux, always remember the following orders: User, Group, All; Read, Write, Execute. Those are the orders you will put the permissions in. Let's say that we want to make a file readable and writable only to the owner, while no one else will even be able to read the file. Here are some examples:

NOTE: Commands that begin with $ are executed as a regular user. Commands that begin with # are executed by a superuser (root). These two symbols (when they are the very first character in the command) are not entered by the user.

Symbolic:

$ echo "Hi" >> testing
$ chmod a-rwx,u+rw testing

Octal:

$ echo "Hi" >> testing
$ chmod 600 testing

Let's now examine the commands individually.

$ echo "Hi" >> testing

This command will append "Hi" (without the quotes) to the end of the file called testing. The file will be created if it does not already exist, assuming that the user has write permissions in the current directory. If you didn't want to append, you could overwrite anything that may be in the file by using a single > rather than >>.

$ chmod a-rwx,u+rw testing

This command removes (the - in a-rwx) read (the r in a-rwx), write (the w in a-rwx), and execute (the x in a-rwx) permissions from all (the a in a-rwx) users on the file called testing. Next we add (the + in u+rw) permissions for the owner (the u in u+rw) of the file: read (the r in u+rw) and write (the w in u+rw) on the file called testing.

$ chmod 600 testing

This command sets the permissions for everyone in one shot. I think of each digit as three binary bits: read (4), write (2), and execute (1), added together:

  • 1 = execute only;
  • 2 = write only;
  • 3 = write and execute, but no read;
  • 4 = read, but no write or execute;
  • 5 = read and execute, but no write;
  • 6 = read and write, but no execute;
  • 7 = read, write, and execute.

A digit is required for each level of permissions (user, group, and all). It is also possible to put a fourth digit before the three permission digits: it sets the special setuid (4), setgid (2), and sticky (1) bits, which is why a little bit of testing shows an s or t appearing in place of the execute permissions.
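
Here is a quick way to see that fourth digit in action, using a throwaway file name:

$ chmod 4755 somescript
$ ls -l somescript    (the owner's execute bit now shows up as s: -rwsr-xr-x)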

A couple more things about chmod: directories must also be executable in order for you to list their contents, and chmod is very powerful. Finally, you can recursively apply permissions to a directory and everything underneath it with the -R option.

$ chmod -R a+rx /home/user/share

A couple of commands closely associated with chmod are chgrp (CHange GRouP) and chown (CHange OWNer).

chown will change the user ownership of files or directories. This can be done recursively with the -R option. It can also change the group ownership at the same time. The syntax is: chown [options] user[:group] file1 [file...]

chgrp will change the group ownership of files or directories. You can do this recursively with the -R option. The syntax is: chgrp [options] groupname file1 [file...]
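
To make the syntax concrete, here are a couple of example invocations; the user name, group name, and paths are just placeholders:

# chown -R someuser:users /home/someuser/share
# chgrp users /home/someuser/share/somefile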

Cronjobs

Cronjobs are similar to scheduled tasks in the Windows world. Scheduled tasks, or cronjobs, are simply programs that you want to run regularly, without having to type in the command every time you want them to run. Most distributions come with a cron daemon of some sort installed by default. Generally speaking, you can edit your cronjobs by typing crontab -e. This will bring up an editor like vi (it usually is vi by default) in which you edit your cronjob file. Each user can have their own cronjobs (unless it's been disabled by the administrator, I would assume). Here is an example of a cronjob entry:

47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null

You'll notice the 47 with four *'s after it. This is how the daemon knows when to execute a job. Here is what each of the five fields represents, in order:

  1. Minute: 0-59
  2. Hour: 0-23 (0 = midnight)
  3. Day of month: 1-31
  4. Month: 1-12
  5. Day of week: 0-6 (0 = Sunday)

So the example above will run at 47 minutes past every hour, every day of every month. You can also do some fancy things, like having a job run every 5 minutes, or at 15 and 45 minutes past the hour, and so on. Let's say that we want to grab our mail every 5 minutes. The cronjob entry would look something like:

*/5 * * * * /usr/bin/fetchmail

If we wanted to grab our mail every 2 hours but only on Mondays, we would use the following:

0 */2 * * 1 /usr/bin/fetchmail

To have a job run at 15 and 45 minutes past each hour, we could do this:

15,45 * * * * /usr/bin/fetchmail

Pretty nifty, eh?
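
To double-check what the cron daemon will actually run for your account, you can list your current entries:

$ crontab -l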

Make

This fancy utility is usually the means used for compiling programs from source. The usual sequence of commands for compiling and installing a program from the source in Linux is as follows:

$ ./configure
$ make
# make install

Most packages will follow this convention, but some require special procedures. Sometimes you can even get away with skipping the make and jumping straight from ./configure to make install. It is always a good idea to read the README and INSTALL files included in source packages. They will generally tell you about anything out of the ordinary when compiling the source. Obviously, there is a lot more to this utility, but I'm not the person to explain it to you.
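
As a concrete sketch, here is what the whole sequence might look like for a hypothetical source package called foo-1.0 that uses the standard autoconf setup (the --prefix option simply controls where make install puts the files):

$ tar zxf foo-1.0.tar.gz
$ cd foo-1.0
$ ./configure --prefix=/usr/local
$ make
# make install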

Package Management

There are several different types of package managers. The most popular these days are .rpm (RedHat Package Manager) and .deb (Debian). There are some other kinds of packages, but they aren't as popular as RPM and DEB. Slackware uses a straight .tgz (gzipped tarball) as its package system. Frugalware uses .fpm, which are bzipped tarballs. In the end, packages are almost always gzipped or bzipped tarballs.

Each package system has its ups and downs. I've personally found RPM-based distributions to be overly slow, especially with the package management. DEB-based distributions seem to be a lot speedier when put up against RPM-based distros. However, I have found Slackware's TGZ-based system to be the most efficient and the fastest. Both RPMs and DEBs have dependency checking. In other words, the package manager will attempt to locate all entities upon which a program may depend in order to function properly before installing or upgrading that program.

A lot of people claim that .tgz packages are inferior to RPM and DEB because of the lack of dependency checking. By default, Slackware does not have dependency checking, but if you know what you're doing, you can get your dependencies a lot easier than you can with RPM or DEB (in my opinion).

RPM packages usually seem quite large compared to other package systems like DEB and TGZ. As far as I have seen, TGZ packages are smaller than both RPM and DEB packages. Here are a few options to help you use the RPM and TGZ package managers. I am not sure about Debian packages, so I won't attempt those:

  • RPM:
    • rpm -q or rpm --query: look for a package on your system
    • rpm -i or rpm --install: install a new package on your system
    • rpm -U or rpm --upgrade: upgrade a package which is already installed on your system
    • rpm -e or rpm --erase: remove a package from your system
  • TGZ:
    • pkgtool: a text-based package manager
    • installpkg: install a package onto your system
    • upgradepkg: upgrade a package which is currently installed on your system
    • removepkg: remove a package from your system

You can also get some other programs that are VERY useful for package management. I think the latest craze for RPMs is yum. I have not had great luck with this utility, but a lot of people really like it. Debian packages have used apt-get for ages now. My favorite add-on for Slackware packages is called swaret. Other distributions use the pacman utility, which is very efficient. Each one of these applications has several options and operation procedures.
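
For reference, here is roughly what the common invocations look like; the package and file names are placeholders, and the exact options vary a bit between versions, so check the man pages:

# rpm -Uvh somepackage-1.0-1.i386.rpm
# apt-get install somepackage
# yum install somepackage
# pacman -S somepackage
# installpkg somepackage-1.0-i486-1.tgz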

Secure Shell and Secure Copy

One of my favorite aspects of Linux and other UNIX-derived systems is their secure shell capability (which is usually installed by default). Secure shell, or SSH, is a way for users to log into a remote computer and work on the remote computer as though it were right in front of them. Granted, it's all text-based unless you have X11 forwarding set up properly on both machines. But the command line interface (CLI) is extremely powerful--you should not be afraid to learn and use it. If you need to SSH from a Windows machine, you can use PuTTY.

In order to ssh into another computer, you simply type:

$ ssh hostname

or use the computer's IP address:

$ ssh xxx.xxx.xxx.xxx

By default, ssh on Linux machines will use the username of the account that you are running ssh from. Sometimes you need to log in as a different user than the one you're currently using. To do that, you use the -l (lowercase L) option or make the hostname look like an e-mail address:

$ ssh hostname -l differentuser
$ ssh differentuser@hostname

Once your ssh session begins with the remote host, you will be asked to enter the password associated with the account you are attempting to log in as. If you do a lot of ssh'ing between machines, typing in your password several times is not only annoying but it could also pose a security risk--some wandering eyes might be watching you each time you enter your password. A great way to get around this is to generate a public and private key for your account. Once you do this, you can use the private key file on the machine you're ssh'ing out of and the public key on the remote machine.

To generate a public/private key, you can use ssh-keygen:

$ ssh-keygen -t rsa

You will be asked to enter and verify a passphrase for your private key. If your aim is to avoid typing your password, just hit enter twice for this part. It's still not secure, but it is a lot less hassle if you're only working on machines that no one else has "access" to. Usually your keys will be stored in ~/.ssh/ (~ refers to your home directory, /home/yourusername).

The next step is to create your identification:

$ cd ~/.ssh
$ echo "IdKey private_key_file" > identification

Now you have to copy your public key (usually the one that ends in .pub) to the remote host:

$ scp public_key_file.pub username@xxx.xxx.xxx.xxx:/home/username/.ssh

And finally you should add your public key to the list of authorized users on the remote host by adding a line like the following to the ~/.ssh/authorization file:

Key public_key_file.pub

At this point you should be able to log into your remote host without your password (assuming you skipped the passphrase part of the key generation above).
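
One note: the identification and authorization files described above come from the commercial SSH2 implementation. If your remote host runs OpenSSH (which most Linux distributions ship), the equivalent step is simply appending your public key to ~/.ssh/authorized_keys on the remote machine, for example:

$ cat public_key_file.pub | ssh username@xxx.xxx.xxx.xxx 'cat >> ~/.ssh/authorized_keys'
$ ssh username@xxx.xxx.xxx.xxx 'chmod 600 ~/.ssh/authorized_keys'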

As for the secure copy utility, you can get an idea of how to use it from the scp command above. This program uses the SSH system to securely copy files between two computers. This is how I use the scp command:

$ scp [-r] user@remote:/path/to/remote/file /local/destination/path/
$ scp [-r] /path/to/local/file user@remote:/remote/destination/path/

If you have set up public key authorization, you will not have to enter your password each time you use scp. Otherwise, you are asked for a password each time you run scp.

Archiving and Backup

There are many different kinds of compression and archiving tools in Linux. The most common types are tarballs, gzipped files, and bzipped files. Below is a list of purposes for each of the three and some of the options:

  • tar - multiple files, little or no compression
    • c, --create - create a tarball
    • f, --file - specify the tarball's filename
    • x, --extract, --get - extract the contents of a tarball
    • j, --bzip2 - use bzip2 compression/decompression
    • v, --verbose - show verbose output
    • z, --gzip, --ungzip - use gzip compression/decompression
    • to create a tarball called filename.tar which contains all of the files in /dir/to/archive: $ tar cf filename.tar /dir/to/archive
    • to create a tarball called filename.tar.gz which contains all of the files in /dir/to/archive and gzip it: $ tar zcf filename.tar.gz /dir/to/archive
    • to create a tarball called filename.tar.bz2 which contains all of the files in /dir/to/archive and bzip it: $ tar jcf filename.tar.bz2 /dir/to/archive
    • to extract the contents of a tarball called filename.tar.gz to the current directory: $ tar zxf filename.tar.gz
    • to extract the contents of a tarball called filename.tar.bz2 to the current directory: $ tar jxf filename.tar.bz2
  • gzip - single file compression
    • To gzip a file called filename to make it filename.gz: $ gzip filename
    • To gunzip a file called filename.gz to make it filename: $ gunzip filename.gz
  • bzip2 - single file compression
    • To bzip a file called filename to make it filename.bz2: $ bzip2 filename
    • To bunzip a file called filename.bz2 to make it filename: $ bunzip2 filename.bz2