Why I Like Python

For the past 8 years or so, I've been heavily involved with programming in PHP. It is a powerful scripting language that suits building websites very well. PHP has a huge set of useful built-in functions, and more recent versions support object-oriented programming. I first started teaching myself PHP when I got tired of having to build each and every web page on my site manually. I hated having to change dozens of web pages just because I added a new link to my navigation. All sorts of reasons like this prompted me to investigate PHP. Little did I know then that this language would occupy so much of my time in the future.

I rapidly learned that PHP offered much more than just allowing me to update one part of my website to change all pages. I started tinkering with all aspects of what PHP offered, and I'm still learning about it. After many years of searching, I finally found a programming language that was easy, fast, and efficient for my needs.

Through the years, I continued to develop various applications using PHP. I attempted to write my own forum/bulletin board software while I was still in high school. If I may say so myself, the forum really had some awesome concepts behind it. My problem was that I lost interest too quickly. I also built a very large application that reduced a 1.2GB MS Access database down to less than 15MB using PHP and MySQL. The new application offered many enhancements over the previous system. First, it was much faster. Second, it allowed multiple simultaneous users to modify the database. Third, it has so far lasted more than 3 years, compared to the 1 year maximum that the MS Access solution always seemed to hit before it crashed.

Using PHP, I helped revolutionize the way one of the companies I work for developed websites. I built a simple in-house web framework that reduced development time by letting us forget about the mundane details involved in virtually every website and get straight to the actual development. In a matter of two weeks (with a full class load and another job), I managed to write an e-commerce solution for the same company using PHP.

Basically, PHP has treated me well over the years. But this post is not supposed to be about PHP. If that's the case, why have I rambled about PHP this whole time, you ask? Well, it's mostly to demonstrate that I have a lot of experience with the language. I have a pretty good feel for what it's capable of and how I can accomplish most anything I need.

With all of that in mind, I've encountered my share of frustrations with PHP. They may seem petty and trivial to most people, but they have turned out to be the determining factor in which scripting language I prefer. Here is a short list of things I now despise about PHP:

  • dollar signs ($) to signify variables -- while this is a useful feature, it becomes quite bothersome when you're programming all day long (at least it does for me). I'll get to why later.
  • using an actual arrow (->) to access attributes -- most other modern programming languages simply use a period (.) for this functionality. I'll comment more on this and why it frustrates me later as well.
  • lack of true object-oriented constructs -- in other object-oriented languages, like Java, if you have a string and you want to determine its length, you call the length() method of that string. In PHP, you call a function such as strlen($var). This sort of behavior plagues the language.
  • too many unnecessary keystrokes -- as I mentioned before, all variables are preceded by a dollar sign ($). That is 2 keystrokes (shift and 4) every time you want to refer to a variable, whereas most languages nowadays require none. Likewise, accessing attributes of objects in PHP uses an arrow (->), which is three keystrokes (minus, shift, and period). Most other object-oriented languages only require a period (one keystroke) for the same functionality. The main reason I make such a big deal out of the number of keystrokes is simple: the more keystrokes a program requires, the more opportunities there are for bugs to creep in, and the harder the code becomes to maintain. Also associated with the number of keystrokes is the pure laziness within me and most other programmers.

These frustrations have been bothering me for several years now. I continued using PHP mostly because it's so widely supported, but also because I could not find a suitable replacement for it. I investigated a few other languages, but they apparently didn't leave much of an impression on me, because I can't even remember their names now.

When the whole Ruby on Rails bandwagon was rolling through town, I decided to hop on to see what all of the hubbub was about. I started studying the Ruby language, and I found that it had some really neat things about it. It takes a more solid approach to object-oriented programming, which I really liked. I also noticed that it employs some intriguing constructs for accomplishing things in ways I'd never seen before. Despite all that, Ruby still didn't seem like a viable replacement for my PHP. Its performance didn't come up to snuff in many cases, so I essentially abandoned it.

For at least a year now, I've been interested in learning Python. I've heard a lot about it over the years, but I just never seemed to make the time to actually sit down and study it. That is, not until about the beginning of August of 2007. After I made my decision that Ruby and Ruby on Rails weren't quite up to par for my needs, I stumbled upon the Django Project, which is a web framework similar to Ruby on Rails, only built using Python.

I decided this was my chance to actually sit down and learn a little about this "Python" so I could see what it had to offer. I mostly used Django as my portal to Python. As I started learning Django, I became more and more familiar with the way Python works and how I work with Python.

At some point in time, I decided that I actually liked Python, and my wife let me buy some really cool books to help me learn it. By the beginning of October 2007, I had convinced my supervisor at work to let me start building websites using Django instead of our home-grown PHP framework.

And here comes a story. This is the main reason I blabbered about my experience with PHP so much at the start of this article. Again, after all these years, I feel very confident that I can do just about anything I want efficiently and elegantly with PHP.

Back in October of 2006 (after using PHP for some 7 years), I was asked to write a PHP script to parse some log files and output various bits of information in a certain format. After maybe a week, I had a script that did the job fairly well. Most of the time it worked, but there were occasions when it didn't and I had to fix it. The script turned out to be 365 lines of code with very few comments scattered throughout. It was also a maintenance nightmare, even for me.

In October of 2007, I rewrote that same script in Python. After only a couple days, the script seemed to be perfect. It did its job, and it did it well. With comments for just about every single line of code, the Python version of the script took up a mere 118 lines of code. Take out the comments and it is 56 lines of code. The script is several times more understandable and maintainable than its PHP counterpart. I also believe that it is much more efficient at doing its task. Keep in mind that I had only been using Python for about 2 months at this point in time.

It's through various experiences like the log parser that I've decided I prefer Python over PHP. Obviously, I'm not quite as comfortable with it as I am with PHP, but I don't feel too far behind. Now, less than 6 months after deciding that we'd use Django at work, I don't think my supervisor could be happier. Building a typical website with our PHP framework takes between 1 week and a couple of months. Thanks to Python and Django, most of our websites can be "ready" within just a few hours. That time assumes that the website's design itself is ready for content to be put into it and also that the client does not require custom-designed applications.

Python and Django have helped revolutionize the way we do things at work, and I can hardly stop thinking about it. Python fixes nearly all of the frustrations I had with PHP. The frustrations it doesn't take care of are worth the sacrifice. Python is capable of object-oriented programming. It uses a period (.) to access object attributes. Variables are not preceded by some arbitrary symbol.

Also, the fact that Python code can be compiled to bytecode (like Java) is enormously beneficial. Each and every time a PHP script is executed, the PHP interpreter must parse the code. With Python, the first time a script is executed after an edit, it is compiled to bytecode, and subsequent executions are faster because the Python Virtual Machine can run that cached bytecode directly instead of parsing and compiling the source each time. Python also offers a vast standard library that I would really appreciate having in PHP. From now on (at least for the foreseeable future), I will try to do all of my scripting in Python and leave PHP for the special cases.
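To illustrate the bytecode caching I mentioned above (utils.py here is just a hypothetical module sitting in the current directory): importing it once leaves a compiled .pyc file behind, and Python reuses that file on later imports as long as the source hasn't changed.

$ ls
utils.py
$ python -c "import utils"
$ ls
utils.py  utils.pyc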

Big Day in My Career

To preface this post, I would just like to make sure that you are aware of just how big a nerd I really am. I've probably got 7 computers lying around my apartment. I dream about programming (in fact, I've solved some frustrating programming problems in my sleep). The other day I had a conversation with a good friend of mine about how much faster and more efficient it is to use the keyboard for various things as opposed to moving a mouse around and clicking on things. That's just a taste.

Anyway, for several years, I've wanted to contribute something--a fix, a new feature, etc--to at least one open source project. The problem is that I've never really found anywhere to contribute amongst the programs I actually use. Either I didn't know how to accomplish something or I just didn't see that anything could be improved. It was quite frustrating.

Yesterday I was going along, doing my regular work, when I encountered a problem in an e-commerce framework called Satchmo. This problem made the website I was working on blow up; I couldn't successfully complete an order. At first I had no idea where the problem was. Eventually, I figured out a way to find what part of the framework was causing problems. I took a peek at the code and saw what seemed to be a solution. I made the change, and all of a sudden I could complete orders on the website!

I was so stoked! I created a patch from the change that I made to the code. Then I opened a ticket on Satchmo's issue tracking system, described the problem briefly, attached my patch, and went on working. A few hours later, I got an email from the issue tracking system saying that my patch had been accepted and applied to the codebase!

Finally!!! After all these years! I am an official contributor to an open source project. It feels good. Hopefully this is the first of many contributions to come.

How To Compile and Install a 2.6.x Series Linux Kernel

The Linux kernel is the core component in any Linux distribution. Without a kernel, your computer would be essentially useless. It is the piece of software which allows interaction between you, your computer's applications, and your computer's hardware. With such a powerful role in your computing experience, it is important to keep your kernel up-to-date. Each new release provides more hardware support and many performance enhancements. It is also important to keep your kernel up-to-date for security purposes.

Let's upgrade our Linux kernels together. I will walk you through each of the steps I take, from beginning to end, to upgrade my kernel. Just as a warning, I prefer to do the whole process on the command line, so you might want to pull up a terminal, konsole, xterm or whatever you prefer to use for your command line operations.

First you need to download the kernel source code. Many Linux distributions provide specialized editions of the Linux kernel. Typically, you don't want to manually compile and install a custom kernel for these distributions. This does not mean that you can't; it simply means that you might be better off using the "official" kernels for your distribution, which can usually be obtained through your distribution's package manager. You can get the official, 100% free, and complete Linux kernel source code from http://www.kernel.org/. Look for "The latest stable version of the Linux kernel is:" and click the F (full source) link on the same line. Currently, the latest stable version is 2.6.20, and that's what I'll be using for this tutorial. Please note that commands which begin with a dollar sign ($) are executed as a regular user and commands beginning with a pound sign (#) are executed as a superuser.

$ cd /home/user/download
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.20.tar.bz2

Now log in as the superuser, and navigate to the /usr/src directory. Then extract the kernel source into that directory.

$ su -
# cd /usr/src
# tar jxf /home/user/download/linux-2.6.20.tar.bz2

You probably already have a symlink or shortcut called linux which points to your most recent kernel. If you do, delete the link and create another link to the new source tree. Then go into your kernel source tree.

# rm /usr/src/linux
# ln -s /usr/src/linux-2.6.20 /usr/src/linux
# cd /usr/src/linux

I like to identify each compile of my kernel uniquely, to make sure that I'm using the right one. To do that, you have to modify your Makefile:

# vi Makefile

You will see the following lines, or something similar, at the very top of the file:

VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 20
EXTRAVERSION =
NAME = Homicidal Dwarf Hamster

Change the EXTRAVERSION property to something you want to use to identify this kernel. I will use -jcv1:

EXTRAVERSION = -jcv1

The rest of the Makefile should be fine. In fact, I discourage editing Makefiles unless you know what you're doing. This next step is totally optional, but I like to do it to save some time. You can copy your existing kernel's configuration file in order to have a very similar kernel configuration. My previous kernel version was 2.6.19.1, so this is the command I use:

# cp /usr/src/linux-2.6.19.1/.config /usr/src/linux/

Then I run make oldconfig or make silentoldconfig to update my older kernel configuration file to be able to handle newer features. If you use oldconfig you are required to specify whether or not you want the new features included in your kernel, whereas silentoldconfig will use the defaults determined by kernel developers (they usually know best), asking for minimal input. Let's update our configuration file and then customize it by running make menuconfig (there are several options here, such as make xconfig and make gconfig, but I prefer the text-based menuconfig; there is another you can run by using make config, which runs through each and every option available--it's scary).

# make silentoldconfig
# make menuconfig

menuconfig is a menu-driven, text-based application which lets you navigate the features offered by the kernel. Each computer is considerably different from the next, so it really does no good to provide a list of things that I tweak. However, it is important to note what some of the symbols in the menuconfig utility mean:

  • M = Module. Modules are compiled separately from the kernel and are loaded only when they are required, which helps keep the kernel itself lean.
  • * = built into the kernel. These are typically things which are necessary for your machine to function properly, such as support for your root file system.
  • X = exclusively selected. You'll see this when you select what type of processor you have, for example.

One thing to note before we go further: MAKE SURE YOUR KERNEL HAS BUILT-IN SUPPORT FOR YOUR ROOT FILE SYSTEM!!!! My root file system is reiserfs. In my configuration, I made sure that reiserfs was marked with a star. If you don't do this, your kernel won't boot and you will be very frustrated. Trust me.
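Before you build anything, you can double-check this straight from the kernel source directory. The grep below assumes a reiserfs root like mine; if your root file system is something else, search for its configuration symbol instead (ext3 would be CONFIG_EXT3_FS, for example). You want to see =y (built in), not =m (module):

# grep CONFIG_REISERFS_FS= /usr/src/linux/.config
CONFIG_REISERFS_FS=y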

Your computer is probably quite different than mine, so you might want to just poke around and see if you recognize things that deal with your computer's hardware. Once you are done tweaking your kernel configuration, exit the configuration utility and make sure the configuration is stored in /usr/src/linux/.config

Next we get to build and install the kernel. After that, we have to add an entry to our boot manager so that we can try out our new kernel. The compilation part usually takes just about half an hour on my 2.2GHz Turion64 processor with 1.25GB of RAM. It takes about 6 hours on my 300MHz Pentium II with 32MB of RAM. Let's find out how long it takes for you to compile your kernel!

# time make
...
real    27m29.663s
user    23m34.476s
sys     2m56.575s

Now let's install the modules and install the appropriate files in the boot area:

# make modules_install
# make install

This is the part that always used to mess me up. I use Slackware Linux, which is more UNIX-ish than most distributions. It's actually the oldest surviving Linux distribution to date, but that's another story. For some reason, the make install command doesn't always work with Slackware. There is a process I use to set up my /boot directory when I compile a new kernel. I wrote a simple shell script called fixkernelinstall to take care of it for me:

#!/bin/bash
# Configure my computer for a new kernel
# Author: Josh VanderLinden
# Assisted By: Dan Purcell

# if the user didn't supply a kernel number, ask for it
if [ $# -eq 0 ]; then
    echo -n "Kernel: "
    read kernel
else
    kernel=$1
fi

# determine root partition
echo "Determining root partition..."
rootpart=`mount -l | grep ' / ' | cut -f 1 -d\ `
echo "Root partition is $rootpart"

# copy kernel configuration file
cp /usr/src/linux/.config ./config-$kernel

# now rename everything
echo "Renaming files..."
mv System.map System.map-$kernel
mv vmlinuz vmlinuz-$kernel

# if the config file is a symlink, remove it
if [ -h config ]; then
    echo "Removing link to configuration file"
    rm config
elif [ -f config ]; then
    # otherwise it might be important, so keep a copy
    echo "Renaming configuration file"
    mv config config.bak
fi

# Link files
echo "Creating symlinks..."
ln -s System.map-$kernel System.map
ln -s config-$kernel config
ln -s vmlinuz-$kernel vmlinuz

# Update lilo
echo "Adding entry to /etc/lilo.conf for $kernel"
echo "image = /boot/vmlinuz-$kernel" >> /etc/lilo.conf
echo "  root = $rootpart" >> /etc/lilo.conf
echo "  label = $kernel" >> /etc/lilo.conf
echo "  read-only" >> /etc/lilo.conf
echo "Linux kernel $kernel has been configured."
echo "Please check your lilo configuration and run lilo before rebooting"

I'm not an expert on shell scripts, so please feel free to offer suggestions for doing things better if you know how. This script uses the kernel version (given by the user) to set up my /boot directory properly. In my case, I run the script like so:

# cd /boot
# fixkernelinstall 2.6.20-jcv1

And the output is something like:

Determining root partition...
Root partition is /dev/hda5
Renaming files...
Renaming configuration file
Creating symlinks...
Adding entry to /etc/lilo.conf for 2.6.20-jcv1
Linux kernel 2.6.20-jcv1 has been configured.
Please check your lilo configuration and run lilo before rebooting

As you can see from the script, I use LILO instead of the arguably more popular GRUB. Either one works for me, but LILO is sufficient for my needs. If you want to use the same kind of script for a GRUB installation, just change the LILO part at the end to something like:

echo "Adding entry to /boot/grub/menu.lst for $kernel"
echo "title Linux on ($rootpart)" >> /boot/grub/menu.lst
echo "  root (hd0,4)" >> /boot/grub/menu.lst
echo "  kernel /boot/vmlinuz-$kernel root=$rootpart ro vga=normal" >> /boot/grub/menu.lst

Make sure you change the line with root (hd0,4) to fit your setup. With GRUB, you don't have to worry about applying changes to see the menu entry at boot. It's automatically there. With LILO, however, you have to actually apply changes each time you make them. You do this by running the lilo command as the superuser:

# lilo
Added Windows
Added Linux
Added 2.6.20-jcv1 *

The star (*) signifies the default kernel to boot. Make sure that your root partition is correctly specified in your boot loader configuration. My root partition is on /dev/hda5, but yours may be (and probably is) on a different partition. If you fail to specify the correct root partition, your system will not boot that kernel until the configuration is fixed. GRUB makes this a lot easier than LILO.

And this is the point when you start to cross your fingers and hope that your computer doesn't blow up... We get to reboot our computer and hope that our new configuration plays well with our hardware. So, let's do that! See you in a few minutes (hopefully).

# shutdown -r now

So here I am, back on Linux on my freshly-rolled kernel. I hope you are as successful as I have been this time around. Keep in mind that you have to reinstall custom kernel modules if you installed others while you were on your other kernel. For example, I use ndiswrapper to access wireless Internet. I have to recompile and reinstall the ndiswrapper module and device drivers before I can use wireless. Likewise, I have VMWare Server on my laptop, which installed special modules. I have to run vmware-config.pl to reconfigure VMWare Server for my new kernel before I can run any virtual machines.
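For example, here is roughly what that looks like for ndiswrapper (the directory below is just wherever you happened to unpack the ndiswrapper source; the Windows driver files installed with ndiswrapper -i live under /etc/ndiswrapper and should not need to be reinstalled):

# cd /usr/src/ndiswrapper-source    (or wherever your ndiswrapper source lives)
# make
# make install
# modprobe ndiswrapper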

To summarize, here are the commands that I used in this tutorial. Remember that lines beginning with a dollar sign ($) are executed as a non-privileged user, while lines beginning with the pound sign (#) are executed as the superuser (root).

$ cd /home/user/download
$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.20.tar.bz2
$ su -
# cd /usr/src
# tar jxf /home/user/download/linux-2.6.20.tar.bz2
# rm /usr/src/linux
# ln -s /usr/src/linux-2.6.20 /usr/src/linux
# cd /usr/src/linux
# make clean
# vi Makefile (to change EXTRAVERSION to -jcv1)
# cp ../linux-2.6.19.1/.config .
# make silentoldconfig
# make menuconfig (just to ensure settings were good)
# time make
# make modules_install
# make install
# cd /boot
# fixkernelinstall 2.6.20-jcv1
# vi /etc/lilo.conf (to make sure things were good)
# lilo
# shutdown -r now

I hope that you are able to use this tutorial to successfully install or upgrade your kernel. Good luck! Any comments or suggestions are welcome!

MySQL on Slackware

Many of us who have installed Slackware on our machines in the past few years have noticed something annoying on the first boot: the MySQL service fails to start!!

In response to this, I would like to offer this simple tutorial. Right now I am doing this blindly, meaning I don't have a fresh install to work with. Please bear with me if you notice some errors in the tutorial, and please tell me about them!

  1. Log in as root: # su -
  2. Make sure the mysql package is installed. You can do this by running pkgtool and selecting the view option. Hit n and it will take you to the packages in the list that begin with n. MySQL should be right above the n packages. If you don't have mysql already installed, you can download it from http://www.linuxpackages.net/ or wherever you'd like. Once you have a copy of it, install it by using the installpkg mysql......tgz command
  3. Set up the databases: # mysql_install_db
  4. Apply the proper permissions: # chown -R mysql.mysql /var/lib/mysql
  5. Start the database: # mysqld_safe &
  6. Set up the root user:
# mysqladmin -u root password 'newpassword'
# mysqladmin -p -u root -h localhost password 'newpassword'

Finally, test your installation:

# mysql -p
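One more Slackware-specific note: the stock Slackware MySQL package ships its init script as /etc/rc.d/rc.mysqld, and if you want MySQL to start automatically at boot, that script also needs to be executable:

# chmod 755 /etc/rc.d/rc.mysqld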

That should do it! Please comment if you notice any errors with this posting.

Linux Basics

Filesystem

  • /bin - This is where basic Linux commands reside (ls, du, dd, cp, etc).
  • /boot - Your boot images are stored here.
  • /dev - Links to access your machine's devices.
  • /etc - Configuration files and boot scripts.
  • /home - User directories, equivalent to "Documents and Settings" in Windows XP.
  • /lib - System libraries, codecs, etc., similar to Windows/System and Windows/System32.
  • /mnt, /media - Mount points. A mount point is a directory through which the contents of your hard drives, CD/DVD drives, floppy drives, or jump drives are made accessible.
  • /opt - Optional packages and programs. Could be thought of as a "Program Files" directory.
  • /proc - Special dynamic information about your system.
  • /root - System administrator home. Could be thought of as a "Documents and Settings/Administrator" directory.
  • /sbin - Super-user binaries. These programs need super-user (root) privileges to execute.
  • /tmp - Temporary files. Every user usually has read, write, and execute permissions here.
  • /usr - The main place for programs to be installed. Most like "Program Files" in Windows.
  • /var - System logs, mail spools, default web server directory, databases, etc...

Basic Commands

  • cd - Change Directory: moves to a different directory.

    Usage: cd directory, cd .., cd /directory

  • cp - CoPy: Copy a file or directory. If you wish to copy recursively and retain all attributes associated with the file or directory, use the -a option.

    Usage: cp original original.backup, cp -a /home/user/directory /home/user/backup

  • df - Disk Free: Display an overall summary of disk usage for mounted filesystems. If you want human-readable sizes, use the -h option.

    Usage: df, df -h, df /mnt/mountpoint

  • du - Disk Usage: Display the disk usage of each file (recursively, by default) in the current directory. If you want human-readable sizes (1024 bytes = 1KB, 1024KB = 1MB, etc), use the -h option. If you want a summary of the total disk usage by a directory and everything inside, use the -s option.

    Usage: du, du -s, du -h, du -sh, du -s /directory

  • ln - LiNk: Create a link, or shortcut, to a file or directory. I prefer to do symlinks by using the -s option.

    Usage: ln original link, ln original /directory, ln original /directory/link, ln -s original /directory/link

  • ls - LiSt: lists the contents of a directory.

    Usage: ls, ls .., ls /directory/subdirectory

  • man - View the MANual page for a program or other file. Probably the most useful program ever.

    Usage: man program, man xorg.conf

  • mkdir - MaKe DIRectory: create a new directory/folder.

    Usage: mkdir dirname, mkdir /directory/newdirname

  • mv - MoVe: Move a file or directory to a new location, or rename a file or directory.

    Usage: mv file /directory/newhome, mv file newfilename

  • pwd - Print Working Directory: returns the full path of the directory in which you are working.

    Usage: pwd

  • rm - ReMove: Remove a file or directory. If you want to get rid of a directory and all of its contents, use rm -R or rm -Rf for recursive deletion.

    Usage: rm filename, rm /directory/filename, rm -Rf /directory/dirname

  • rmdir - ReMove DIRectory: remove a directory. The directory must be empty.

    Usage: rmdir dirname, rmdir /directory/dirname

  • whereis - Locate the binary, source, and man page for a command (if it's in your path)

    Usage: whereis filename

  • whoami - Determine which user you are currently logged in as

    Usage: whoami

Linux Permissions

Linux has a great permission scheme. Since its inception, three basic levels of security have existed: user, group, and everyone. A simple way to change the permissions on a file or directory is to use the chmod, or CHange MODe, command. Changes to the permissions can be either a symbolic representation or an octal number representing the bit pattern for the new permissions. I prefer the symbolic method, myself, but many others prefer to see the octal pattern.

When working with permissions in Linux, always remember the following orders: User, Group, All; Read, Write, Execute. Those are the orders you will put the permissions in. Let's say that we want to make a file readable and writable only to the owner, while no one else will even be able to read the file. Here are some examples:

NOTE: Commands that begin with $ are executed as a regular user. Commands that begin with # are executed by a superuser (root). These two symbols (when they are the very first character in the command) are not entered by the user.

Symbolic:

$ echo "Hi" >> testing
$ chmod a-rwx,u+rw testing

Octal:

$ echo "Hi" >> testing
$ chmod 600 testing

Let's now examine the commands individually.

$ echo "Hi" >> testing

This command will append "Hi" (without the quotes) to the end of the file called testing. The file will be created if it does not already exist, assuming that the user has write permissions in the current directory. If you didn't want to append, you could overwrite anything that may be in the file by using a single > rather than >>.

$ chmod a-rwx,u+rw testing

This command removes (the - in a-rwx) read (the r in a-rwx), write (the w in a-rwx), and execute (the x in a-rwx) permissions from all (the a in a-rwx) users on the file called testing. Next we add (the + in u+rw) permissions for the owner (the u in u+rw) of the file: read (the r in u+rw) and write (the w in u+rw) on the file called testing.

$ chmod 600 testing

This command sets the permissions for all three levels in one shot. I think of each digit in binary: its three bits correspond to read (4), write (2), and execute (1), so each digit is the sum of the permissions it grants:

  • 1 = execute only;
  • 2 = write only;
  • 3 = write and execute, but no read;
  • 4 = read, but no write or execute;
  • 5 = read and execute, but no write;
  • 6 = read and write, but no execute;
  • 7 = read, write, and execute.

A digit is required for each level of permissions (user, group, and all). It is also possible to put a fourth digit in front of those three: it sets the special setuid (4), setgid (2), and sticky (1) bits, which is why testing shows an s or t (or S or T when the underlying execute bit is off) in place of the execute permission.
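For example, here is how those special bits show up (program and shared are just placeholder names):

$ chmod 4755 program     (setuid: ls -l shows -rwsr-xr-x)
$ chmod 2755 program     (setgid: ls -l shows -rwxr-sr-x)
$ chmod 1777 shared      (sticky bit on a world-writable directory, like /tmp: drwxrwxrwt)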

A couple more things about chmod, which is a very powerful command: a directory must also be executable in order for its contents to be listed, and you can recursively apply permissions to a directory and everything underneath it with the -R option:

$ chmod -R a+rx /home/user/share

A couple of commands closely associated with chmod are chgrp (CHange GRouP) and chown (CHange OWNer).

chown will change the user ownership of files or directories. This can be done recursively with the -R option. It also has the capability to change the group ownership built into it. The syntax is: chown [options] user[:group] file1 [file...]

chgrp will change the group ownership of files or directories. You can do this recursively with the -R option. The syntax is: chgrp [options] groupname file1 [file...]

Cronjobs

Cronjobs are similar to scheduled tasks in the Windows world. Scheduled tasks, or cronjobs, are simply programs that you want to run regularly, without having to type in the command every time you want them to run. Most distributions come with a cron daemon of some sort installed by default. Generally speaking, you can edit your cronjobs by typing crontab -e. This will bring up an editor like vi (it usually is vi by default) in which you edit your cronjob file. Each user can have their own cronjobs (unless the administrator has disabled that, I would assume). Here is an example of a cronjob entry:

47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null

You'll notice the 47 with four *'s after it. These five fields are how the daemon knows when to execute a job. Here is what each field represents, in order:

  1. Minute: 0-59
  2. Hour: 0-23 (0 = midnight)
  3. Day of month: 1-31
  4. Month: 1-12
  5. Day of week: 0-6 (0 = Sunday)

So the example above will run at 47 minutes past the hour, every hour of every day of every month. You can also do some fancy things, like having a job run every 5 minutes, or at 15 and 45 minutes past the hour. Let's say that we want to grab our mail every 5 minutes. The cronjob entry would look something like:

*/5 * * * * /usr/bin/fetchmail

If we wanted to grab our mail every 2 hours but only on Mondays, we would use the following:

0 */2 * * 1 /usr/bin/fetchmail

To have a job run after 15 and 45 minutes, we could do this:

15,45 * * * * /usr/bin/fetchmail

Pretty nifty, eh?
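A couple of other crontab options come in handy here (they work the same for any user): -l prints your current cronjob entries, and -r removes them all, so use that one with care.

$ crontab -l
$ crontab -r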

Make

This fancy utility is usually the means used for compiling programs from source. The usual sequence of commands for compiling and installing a program from the source in Linux is as follows:

$ ./configure
$ make
# make install

Most packages will follow this convention, but some require special procedures. Sometimes you can even get away with skipping the make and jump straight from ./configure to make install. It is always a good idea to read the README and INSTALL files included in source packages. They will generally tell you about anything out of the ordinary when compiling the source. Obviously, there is a lot more to this utility, but I'm not the person to explain it to you.

Package Management

There are several different types of package managers. The most popular these days are .rpm (RedHat Package Manager) and .deb (Debian). There are some other kinds of packages, but they aren't as popular as RPM and DEB. Slackware uses a straight .tgz (gzipped tarball) as its package system. Frugalware uses .fpm, which are bzipped tarballs. In the end, packages are almost always gzipped or bzipped tarballs.

Each package system has its ups and downs. I've personally found RPM-based distributions to be overly slow, especially with the package management. DEB-based distributions seem to be a lot more speedy when put up against RPM-based distros. However, I have found Slackware's TGZ-based system the most efficient and the fastest. Both RPM's and DEB's have dependency checking. In other words, the package manager will attempt to locate all entities upon which a program may depend in order to function properly before installing or upgrading that program.

A lot of people claim that .tgz packages are inferior to RPM and DEB because of the lack of dependency checking. By default, Slackware does not have dependency checking, but if you know what you're doing, you can get your dependencies a lot easier than you can with RPM or DEB (in my opinion).

RPM packages usually seem quite large compared to other package systems like DEB and TGZ. As far as I have seen, TGZ packages are smaller than both RPM and DEB packages. Here are a few options to help you use the RPM and TGZ package managers, with sample invocations after the list. I am not sure about Debian packages, so I won't attempt those:

  • RPM:
    • rpm -q or rpm --query: look for a package on your system
    • rpm -i or rpm --install: install a new package on your system
    • rpm -U or rpm --upgrade: upgrade a package which is already installed on your system
    • rpm -e or rpm --erase: remove a package from your system
  • TGZ:
    • pkgtool: a text-based package manager
    • installpkg: install a package onto your system
    • upgradepkg: upgrade a package which is currently installed on your system
    • removepkg: remove a package from your system
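For example (the package file names below are made up; substitute whatever you actually downloaded), installing and then upgrading a package looks like this with each system:

# rpm -i somepackage-1.0-1.i386.rpm
# rpm -U somepackage-1.1-1.i386.rpm
# installpkg somepackage-1.0-i486-1.tgz
# upgradepkg somepackage-1.1-i486-1.tgz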

You can also get some other programs that are VERY useful for package management. I think the latest craze for RPM's is YUM. I have not had great luck with this utility, but a lot of people really like it. Debian packages have used apt-get for ages now. My favorite add-on for Slackware packages is called swaret. Other distributions use the pacman utility, which is very efficient. Each one of these applications has several options and operation procedures.

Secure Shell and Secure Copy

One of my favorite aspects of Linux and other UNIX-derived systems is their secure shell capability (which is usually installed by default). Secure shell, or SSH, is a way for users to log into a remote computer and work on the remote computer as though it were right in front of them. Granted, it's all text-based unless you have X11 forwarding setup properly on both machines. But the command line interface (CLI) is extremely powerful--you should not be afraid to learn and use it. If you need to SSH from a Windows machine, you can use PuTTY.

In order to ssh into another computer, you simply type:

$ ssh hostname

or use the computer's IP address:

$ ssh xxx.xxx.xxx.xxx

By default, ssh on Linux machines will use the username of the account that you are running ssh from. Sometimes you need to log in as a different user than the one you're currently using. To do that, you use the -l (lowercase L) option or make the host look like an e-mail address:

$ ssh hostname -l differentuser
$ ssh differentuser@hostname

Once your ssh session begins with the remote host, you will be asked to enter the password associated with the account with which you are attempting to login. If you do a lot of ssh'ing between machines, typing in your password several times is not only annoying but it could also pose a security risk--some wandering eyes might be watching you each time you enter your password. A great way to get around this is to generate a public and private key for your account. Once you do this, you can use the private key file on the machine you're ssh'ing out of and the public key on the remote machine.

To generate a public/private key, you can use ssh-keygen:

$ ssh-keygen -t rsa

You will be asked to enter and verify a passphrase for your private key. If your goal was to avoid typing in your password, just hit enter twice for this part. It's still not secure, but it is a lot less hassle if you're only working on machines that no one else has "access" to. Usually your keys will be stored in ~/.ssh/ (~ refers to your home directory, /home/yourusername).

The next step is to create your identification:

$ cd ~/.ssh
$ echo "IdKey private_key_file" > identification

Now you have to copy your public key (usually the one that ends in .pub) to the remote host:

$ scp public_key_file.pub username@xxx.xxx.xxx.xxx:/home/username/.ssh

And finally you should add your public key to the list of authorized users on the remote host by adding a line like the following to the ~/.ssh/authorization file:

Key public_key_file.pub
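Note that the identification and authorization files above follow the older commercial SSH2 convention. If the remote host runs OpenSSH instead (the ssh that ships with most Linux distributions), the equivalent step is to append the contents of your public key to ~/.ssh/authorized_keys on the remote machine:

$ cat public_key_file.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

(Run those two commands on the remote host, in the ~/.ssh directory where you copied the key.)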

At this point you should be able to log into your remote host without your password (assuming you skipped the passphrase part of the key generation above).

As for the secure copy utility, you can get an idea of how to use it from the scp command above. This program uses the SSH system to securely copy files between two computers. This is how I use the scp command:

$ scp [-r] user@remote:/path/to/remote/file /local/destination/path/
$ scp [-r] /path/to/local/file user@remote:/remote/destination/path/

If you have setup public key authorization, you will not have to enter your password each time you use scp. Otherwise, you are asked for a password each time you run scp.

Archiving and Backup

There are many different kinds of compression and archiving tools in Linux. The most common types are tarballs, gzipped files, and bzipped files. Below is a list of purposes for each of the three and some of their options, followed by a simple backup example:

  • tar - multiple files, little or no compression
    • c, --create - create a tarball
    • f, --file - specify the tarball's filename
    • x, --extract, --get - extract the contents of a tarball
    • j, --bzip2 - use bzip2 compression/decompression
    • v, --verbose - show verbose output
    • z, --gzip, --ungzip - use gzip compression/decompression
    • to create a tarball called filename.tar which contains all of the files in /dir/to/archive: $ tar cf filename.tar /dir/to/archive
    • to create a tarball called filename.tar.gz which contains all of the files in /dir/to/archive and gzip it: $ tar zcf filename.tar.gz /dir/to/archive
    • to create a tarball called filename.tar.bz2 which contains all of the files in /dir/to/archive and bzip it: $ tar jcf filename.tar.bz2 /dir/to/archive
    • to extract the contents of a tarball called filename.tar.gz to the current directory: $ tar zxf filename.tar.gz
    • to extract the contents of a tarball called filename.tar.bz2 to the current directory: $ tar jxf filename.tar.bz2
  • gzip - single file compression
    • To gzip a file called filename to make it filename.gz: $ gzip filename
    • To gunzip a file called filename.gz to make it filename: $ gunzip filename.gz
  • bzip2 - single file compression
    • To bzip a file called filename to make it filename.bz2: $ bzip2 filename
    • To bunzip a file called filename.bz2 to make it filename: $ bunzip2 filename.bz2
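Putting the tar options above together, here is a simple backup of a home directory (the archive name and path are just examples; the t option, which I did not list above, shows an archive's contents without extracting anything):

$ tar jcvf home-backup-20070101.tar.bz2 /home/user
$ tar jtf home-backup-20070101.tar.bz2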

Wireless Networking With SuSE Linux Enterprise Desktop 10

Note: This tutorial is a continuation of yesterday's tutorial about installing SuSE Linux Enterprise Desktop 10 on my HP Pavilion dv8000. I may or may not refer to steps that I took during installation, so if you are confused, you might want to check out the previous article.

The process of installing and enabling a wireless adapter will vary greatly from machine to machine. Some lucky folks have wireless adapters that come with official Linux drivers. The rest of us usually have a Broadcom-compatible adapter. In order to use a Broadcom device, I use a program called ndiswrapper, which basically takes the drivers that let the device function under Windows and wraps them so that Linux can use them. Since I have the 64-bit version of SuSE Linux Enterprise Desktop (SLED) 10, I need to get a 64-bit driver in order for my wireless to function properly. These 64-bit drivers took me a while to get ahold of the first time I got my wireless working (on SuSE Linux 10.1), but I still have them in my archives, so I should be fully prepared this time around. In this article I assume that you are going to use ndiswrapper to install drivers for a Broadcom device. So let's get started.

Install Ndiswrapper

First, make sure that you have ndiswrapper installed on your system. You can install it by entering YaST. In KDE, click the K menu (the little green chameleon in the bottom left), go to System > YaST (Administrator Settings). You will be asked to enter the root password, which you set during installation. Once you've done that, you will see the YaST Control Center, which is a very powerful set of tools and utilities that greatly ease the configuration and management of SLED. Click on the Software category on the left to show a list of software management options (if it's not already displayed). Click on the Software Management module.

Once loaded, you will see an interface which is very similar to what you would see during the expert package selection while installing SLED. Make sure your Filter (in the top left) is set to Search, and enter ndiswrapper in the search box. The search will return a few different results for ndiswrapper. The first result, ndiswrapper by itself, should be sufficient for most of us. When you check the box by ndiswrapper, you will see a warning informing you that ndiswrapper-based network devices are not officially supported by Novell. Just click OK to dismiss this warning.

Now you should be ready to install ndiswrapper. Click the Accept button in the bottom right. You will be asked to confirm the installation of ndiswrapper; click Continue. If your installation media is not still inserted, YaST will request the disc which contains the ndiswrapper packages. Insert the disc and click OK. In my case, two packages were installed. It may or may not differ for you.

As soon as the packages are done installing, your configuration settings are saved once again, and you will be asked if you want to install or remove more packages. Click No. At this point, ndiswrapper should be installed on your system, and you may dismiss the YaST Control Center.

Determine Your Wireless Adapter Make/Model

This step is absolutely necessary because if you install the wrong drivers, there is a chance (small as it may be) that your wireless adapter will be damaged. So let's ask Linux how our wireless adapter identifies itself. To do this, log into your SLED and open a Terminal or Konsole. On KDE, you can use the third button (a monitor with a black screen and > on it) on the menu panel at the bottom of the screen, or you can also click the "K" menu (same place as a regular start menu in Windows), go to System > Terminal > Konsole (Terminal Program). I am not exactly sure where this item is located with GNOME, but it might be under the System menu.

Once you have opened a terminal window of some sort, you must switch to a root user environment:

$ su -

You will then be asked for the root password, which you set during installation. Enter that password and type

# lspci

This command lists all of your PCI devices, according to the man pages, but you will see most if not all of your devices, PCI or otherwise, listed here. You'll notice that there is probably quite a list of devices. You may be interested in what your computer has in it, but since you're looking specifically for your wireless adapter, try one of the following commands

# lspci | grep Broadcom
# lspci | grep Wireless

The | after lspci will pipe the output of lspci to a useful and powerful program called grep. In this case, grep just looks for any lines that contain either the word Broadcom or Wireless. If you don't get any results from either of the two commands above, try to think of other keywords that might be used to identify a wireless adapter. My laptop returns the following:

# lspci | grep Broadcom
06:02.0 Network controller: Broadcom Corporation Dell Wireless 1470 DualBand WLAN (rev 02)

When you find the wireless adapter, pay attention to the numbers in front of it (06:02.0 on my laptop). With those numbers, you can get the information you need to find the right drivers for your particular wireless adapter. Enter the following command, substituting my device numbers with yours:

# lspci -n | grep 06:02.0
06:02.0 Class 0280: 14e4:4319 (rev 02)

This command gives you the wireless adapter's numeric ID; mine is 14e4:4319.

Download Your Device Drivers

Now that you know your device's numeric ID, you can go to the ndiswrapper wiki, which has a list of numeric IDs and the drivers that are known to work with that device. Look for your wireless adapter on the list of devices. I would recommend using your browser's search or find on page function to locate your device by the numeric ID that you just found.

I'll leave the retrieval of your device drivers up to you.

Install The Wireless Drivers

Most device drivers will come in an archive of some sort. Mine came in a RAR file. Extract your drivers to the directory of your choice--maybe something like ~/wireless. You can use the archive utility provided by SLED to extract your files. It functions very similar to WinZip, WinRAR, and other popular archive clients. By the way, the ~ in a directory listing refers to the current user's home directory (/home/user, for example).

Now, go back to the root terminal that you used to determine what kind of adapter you have. Navigate to the directory where you extracted your drivers and list the contents of the directory, looking for any *.inf files:

# cd ~/wireless
# ls

Ndiswrapper will use an INF file to know how it is supposed to install the driver. My INF file is called bcmwl5.inf. Now for the actual installation of the drivers:

# ndiswrapper -i bcmwl5.inf
Installing bcmwl5
Forcing parameter IBSSGMode|0 to IBSSGMode|2

Now check to make sure that the driver is there and that it recognizes your hardware:

# ndiswrapper -l
Installed drivers:
bcmwl5          driver installed

Ooops!!! It doesn't recognize that my hardware is actually there. If you see 'driver installed, hardware present' then you should be good to go. You may proceed to the next step. However, if you have the same problem as me, you either have the wrong drivers or ndiswrapper installed the drivers improperly. This problem took forever to track down when I was first trying to get my wireless to work. Remember the numeric ID that you found earlier? Check this out:

# cd /etc/ndiswrapper/bcmwl5
# ls
14E4:4318.5.conf  bcmwl5.inf  bcmwl564.sys

Wait a second! Remember how my numeric ID was 14E4:4319? Why is there a listing for 14E4:4318.5? To solve this problem, I am just going to make a symlink (a shortcut) to 14E4:4318.5.conf and call it 14E4:4319.5.conf:

# ln -s 14E4:4318.5.conf 14E4:4319.5.conf

Now when I run the command to see if my hardware is recognized, I get this:

# ndiswrapper -l
Installed drivers:
bcmwl5          driver installed, hardware present

Hurray!! It says 'hardware present' in there!!! That means that the drivers are working and that my device can be used!

Enable Your Wireless Device

With ndiswrapper recognizing your wireless adapter, you can now enable it and start wirelessing your life away:

# modprobe ndiswrapper

There have been times when this particular step will lock up my machine and I have to do a hard reset, but most times it will work fine.
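If the module loads without locking things up and you want it to come up automatically on future boots, ndiswrapper can write the modprobe alias for you with its -m option (depending on your setup, YaST's network card module may take care of this for you as well):

# ndiswrapper -m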

Connect to a Wireless Network

This part also gave me issues for a long time when I first installed my wireless drivers on SuSE Linux 10.1. I was able to connect to the wireless access points provided by my apartment complex, but I could not for the life of me connect to my own wireless router. Hopefully you don't encounter the same problem.

To see what access points you have available to you, check out the KNetworkManager applet in your system tray (next to the clock). I have 7 possible access points listed in the menu, including my encrypted router. When I clicked on my network, it asked me for my passphrase and connected immediately. Nice! That's definitely one plus for SLED over SuSE Linux 10.1!!

I am actually amazed at how easy it was to get my wireless working the second time around. Hopefully your wireless adapter installation was as painless as mine with the help of this guide.

Roadtrip!!

Ported From Blogger

The following post was ported from my old blogger account.

Yeah... this is what usually happens when I start a new blog. I just get into a certain routine after having the blog for a while. Things don't seem interesting enough to write about them after a while.

Anyway, this past weekend I took Mindy with me down to Provo, UT to see my sister and her little guy friend. We left Rexburg around 11AM and headed down the roads in my non-air-conditioned Civic. Mindy wanted to stop by one of her friends' pad in Ogden for a bit, so we did. After we left there, the traffic down to Provo was horrible. I think we rolled up to my sister's apartment around 5:30 or 6. As we were waiting for Paul, my sister's guy friend, to get home from work, we watched Nanny McPhee or something like that. Interesting movie, I must say.

Once Paul returned home, we had a nice little barbeque and then went inside to watch a movie. I only slept 4 hours the night before, so I was pretty tired (especially after driving for that long in that heat and in that traffic). I was trying to stay awake, but Mindy gave me permission to go to sleep. About 10 seconds later I was out for the count. I woke up when the movie finished. Brandi and Paul left to go on a walk (it was about midnight at this time) whilst Mindy and I just cuddled.

Eventually, Paul and Brandi came back and we all walked over to my sister's pad to drop off Mindy and my sister. I went back to Paul's place and crashed on his couch...this was about 2AM. Around 8 o'clock, I woke up, took a shower, and headed on over to my car. I cleaned it out a bit, listened to a little music, and played some guitar. I think it was about 9:30 or so when my sister came outside. I called her over and she said that Mindy was awake. My sister told Mindy that I was in the car, so she came out. We drove around for a while, trying to find some place that sounded appetizing to get some grub. In the end, we settled on Denny's. We had a fabulous breakfast of pancakes, eggs, and hashbrowns. mmmmmm....

When we hopped back into the car, Mindy called her mom to chat for a bit. Apparently the folks we visited in Ogden called Mindy's mom right after we left. It appears that they like me. hehe.. Then Mindy's mom asked if Mindy had met my sister yet. That's when things got interesting. Mindy said something like, "Actually, no (I didn't meet her). You love me, right mom? You're not going to get mad at me, are you?" "Mindy... What did you do?" "We went down to Vegas instead." "Mindy! Did you elope?!" hehe.. Mindy played around a bit more, but eventually gave in. It was absolutely hilarious. Keep in mind that all of this happened AFTER Mindy's mom had just talked about how much the folks in Ogden raved about me. hah... fantastic. Later on, Mindy told me that her mom played the same trick on her grandpa. I think he was prepared to come out after me and kick my trash. At least before Mindy's mom told him that it was just a joke, that is.

After breakfast we all went to a little river thingy and dinked around a bit. I thought it was fun. We also went to the mall before coming back. I was looking into getting some sandals because I am sick and tired of wearing shoes all the time. I left my most favorite sandals in Romania. I had been using them for like 7 or 8 years, it seems. Maybe even longer. Anyway, we headed back up to Rexburg after hitting up the mall. The drive back up was much more pleasant--it only took 4 hours to get back.

And life as usual resumed.

Super Computer

Ported From Blogger

The following post was ported from my old blogger account.

One of my good friends recently purchased a super nice computer system which should last him quite some time. He and I go back a couple years, and we devised a plan as to how I would set up his supercomputer when he got it a year and a half or so ago. This past weekend we carried out our plans. Saturday morning I woke up early and took my road trip up to Montana to take care of business.

I arrived at my friend's house around 10AM and work immediately commenced. Before I get too far into the details of the weekend, let me share a few of the vital specs of his computer:

  • Processor: AMD Athlon64 x2 4200+ (2.2Ghz) with liquid cooling
  • System Memory: 2GB DDR
  • Video: nVidia GeForce 7600 GS (512MB RAM)
  • Hard Drive: 2x 160GB SATA-II (320GB total)
  • Optical Media: 2x DVD+/-RW drives
  • Network: Wireless RaLink 2500 series
  • Monitor: 2x 19" Viewsonic LCD
  • Speakers: 5.1 Creative Surround Sound

Yeah, it's pretty sweet. I thoroughly enjoyed being able to work on it. I'll have to post some pictures of my friend's computer sometime. Ok, now on to the details of setting up his system.

My friend wanted to have both Windows and Linux on his system. We spent quite a bit of time around each other, and being the Linux nut that I am, I made sure he heard plenty about Linux, so he wanted to get his fix. But he also wanted Windows for games and whatnot. Understandable. So we began the day trying to install Windows XP SP2 on his box. That was thoroughly painful, as usual. Installing a single driver, rebooting, installing another driver, rebooting, installing yet another driver, and once again rebooting. You'd think that the actual installation of Windows XP on a system such as his would be quite speedy. No, no... Microsoft never ceases to amaze me with the speed of Windows--or the lack thereof. It took at least an hour to get through all of the initial booting, setting up the partitions in a fashion that Windows could handle, installing drivers, and finally minimal essential software. Very ridiculous. One of the best parts was that Windows somehow installed itself on the second hard drive... because of this (or some other unknown cause) Windows could not boot itself up. We had to use another boot CD in order to boot Windows. I assumed that GRUB (a Linux bootloader) would be able to circumvent this problem.

Once the first installation of Windows was complete and we had a backup of the installation, we proceeded to install SuSE 10.1 x86_64. This installation was painless. Everything worked extremely well. The hardest part about getting Linux to function properly was figuring out why his wireless adapter wouldn't connect to his router. It took a bit of time, but eventually we found the solution online (this solution also applied to my laptop, so I have great wireless in Linux now). As I was getting certain multimedia applications installed on Linux (since they're not included with SuSE for copyright reasons), we watched Hitch on the second monitor. It was great. Eventually Linux appeared to be set up and running perfectly. That was about the time we rebooted Linux for the second time (the first was during installation, if I remember correctly).

Come to find out, not even GRUB could boot Windows. We were greatly frustrated, and my friend began to understand a little better why I like Linux more than Windows. We decided to swap the hard drives around so that the drive with Windows already on it would be the primary master. I warned him that it would mean reinstalling Windows, because it wouldn't know where to find itself after the drives were swapped. He was down with that, so that's what we did. We ran through the whole bloody process again. At least this time we knew what to expect when Windows complained about drivers--we'd already experienced it only hours before.
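
For the curious, this is roughly the kind of GRUB (legacy) menu entry we were hoping would do the trick--a minimal sketch, assuming Windows sits on the first partition of the second drive and that the drives have to be remapped so Windows believes it's booting from the primary master; the actual device numbers on his machine may well have been different:

    title Windows XP
        # swap the BIOS drive order so Windows thinks it lives on the first disk
        map (hd0) (hd1)
        map (hd1) (hd0)
        # boot the first partition of the (remapped) second drive
        rootnoverify (hd1,0)
        # hand control to the Windows boot sector
        chainloader +1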

This installation of Windows went a bit more smoothly, but it also meant that we'd have to do something to get Linux back to an operational state (first of all, getting the GRUB boot menu back). I believe it was on the first reboot after the second Windows installation that we got an "NTLDR is missing" error. Blasted Windows. I solved that problem, but then it started complaining about an invalid boot.ini file. Rubbish.
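
For context, boot.ini is just a tiny text file in the root of the Windows system partition that tells NTLDR where Windows lives, and its drive and partition numbers are exactly the kind of thing a drive swap breaks. A typical Windows XP boot.ini looks something like this (the edition string and switches are only an example, not necessarily what was on his machine):

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect

The rdisk() value is the drive number and partition() is the partition number, so shuffling the drives around is a pretty reliable way to make that file point at the wrong place.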

All we needed to do for Linux was pop the first install CD back in and run a rescue utility. It examined the existing installation and adjusted the configuration files to account for the drive swap. Linux was back up and running within 5 or 10 minutes. Windows, on the other hand, continues to complain at boot about the invalid boot.ini file.
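
For anyone who ends up doing that by hand instead of through the SuSE rescue utility, the fix essentially amounts to putting GRUB back on the master boot record from the GRUB shell--a rough sketch, assuming /boot lives on the second partition of the first drive (the actual partition on his box may have been different). The root command points GRUB at the partition holding /boot/grub, and setup writes GRUB to the MBR of the first drive:

    grub> root (hd0,1)
    grub> setup (hd0)
    grub> quit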

And once again, my contempt for Windows has been reaffirmed. The only reasons I keep Windows around are Adobe Creative Suite 2 and a game here or there. Even Google Earth runs natively on Linux now (as of yesterday).

Is It Real?!?

Ported From Blogger

The following post was ported from my old blogger account.

Ok, I have to say that I had suspicions of what I last posted about being a dream. Honestly! I was not quite sure that it had actually happened, since it's not exactly an everyday occurrence. Somehow it just seemed like one of those dreams that are so pleasant when you experience them but that are also just temporary. I'm pretty sure it wasn't a dream now, but there are still suspicions lingering (it all just seems too good to be true!)...

So last night, I had the opportunity to see Mindy again. At first, it appeared as though it was all my imagination driving me crazy. Nothing seemed to really be different than before the previous night's dream. Then we started talking a bit and we talked about a few things we had discussed the night before (or in the previous dream). That's what comforted me--that she was conscious of the evening before (perhaps it wasn't just in my head after all).

We went for a nice stroll again, and we ended up at the campus stadium. We hiked up to the very top and just chilled for a while. It was nice to be able to just chat as we did. She started to get shivers, so I started thinking of things that we could do to warm up a bit. Mindy loves to dance. It is her most favorite thing to do, according to another conversation that we had. With that in mind, I asked if she wanted to dance a bit. Heh...you should have seen the confused look on her face. It was classic! Anyway, when she saw that I was serious, she agreed to dance a bit. Now, you all have to understand that I do not know how to dance worth beans. Lucky for me, Mindy knows how and actually likes to teach people how to dance. I'm afraid I may be a lost cause though. Mindy tried to teach me how to do a simple waltz on top of the stadium...aahh brotha. Let me tell you... As simple as it may seem, it sure did confuse me. Perhaps I just need a little more practice (I hope).

Eventually the chilliness of the evening breeze became a bit too much, so we walked back to Mindy's place. Once there, we happened upon our good buddies Matt and Kara. They seemed to be having a jolly time together, but Mindy and I thought it more appropriate not to interrupt their bliss for too long, so we took off again. This time we went to a nice park thinger away from all of the city lights and looked at the stars until the cold became too much again. We got to see a fabulous shooting star or possibly even a meteor. That was Mindy's first time to see a shooting star, and it was probably the most amazing one I've ever seen. It was rather large and not exactly fast-moving. There wasn't a tail on it until after we both saw it. Then the tail grew to be pretty long until it all disappeared. It was fantastic!

That is about the time that we walked back to Mindy's pad and said good night. I also seem to remember setting up a time and place for us to meet a bit later today. I suppose that if Mindy is actually there then all of my suspicions of these experiences will be put to rest. I will then accept the idea that it's not just my whacky imagination playing sick and drawn-out tricks on my mind. For all I know I'm actually just dozing off at work right now and dreaming that I am writing all of this. Weird.

20 Things You Won't Like About Windows Vista

Ported From Blogger

The following post was ported from my old blogger account.

20 Things You Won't Like About Windows Vista

I happened upon this article whilst glancing through my daily Slashdot update. From what I've read so far, I agree 100% with the bloke who wrote it.

I had the opportunity to play with a Vista beta a couple of months ago, and I was pretty impressed by certain aspects of the new OS, but in the end I went back to my Linux. One example is the new hardware rating system that's integrated into Vista. It's an excellent idea--Windows rates your system on an arbitrary PC standards scale from 1 to 10, and the higher the rating, the better your computer is. Armed with that rating, a PC user can go to the store to buy a new piece of software, compare their PC's rating against the application's requirements, and be a happy camper when the program actually runs once they get home. Absolutely wonderful concept. The problem is this: my laptop was rated at a 2.0, if I remember correctly. Here are the related specs on my laptop (HP Pavilion dv8000 series):

  • Processor: AMD Turion 64 ML-40 (2.20GHz)
  • RAM: 1.1GB DDR PC2700
  • Video: ATI Radeon Xpress 200M (128MB dedicated RAM, along with 128MB shared RAM)

Now, to put things into perspective, I have done a few benchmarks pitting my laptop against my computer at work. Please note that these benchmarks are very limited in scope and are mostly for my personal satisfaction. My computer at work is a HyperThreaded Pentium 4 at 3.4GHz with 1GB of RAM. As for the flavor of that RAM, I'm not sure, but I'd imagine it runs at 400MHz or better, compared to my 333MHz. On several occasions, I've booted up both systems simultaneously. They both booted into Windows XP SP2, though my laptop has Home and the one at work has Professional. After logging in and letting everything settle down for a minute or two, I started up the NetBeans IDE, in which I spend oh so much of my time. My laptop had the IDE up and ready to use (classpaths scanned and everything) 50 seconds before my work machine was to the same point. And this all happened before I upgraded my laptop from 512MB of RAM to 1GB. I haven't tested since that upgrade.

So if my laptop, which beats out a HyperThreaded Pentium 4 running at 3.4GHz with a gig of RAM under certain uncontrolled conditions, is rated as a 2 on the scale, what does that say for my 3-year-old desktop? How would that fare with Vista installed? I'm not really prepared to drop another grand or so on a new computer to meet Microsoft's anticipated hardware requirements once Vista is released, which is one reason I'm glad I love Linux.