I’d been wanting to take some computer science courses for a while now because, after being out of school for five years, I miss it. Besides, my employer pays for part of the education costs. But it’s hard to find graduate-level courses at convenient times and locations. I stumbled across the OMSE program (Oregon Master of Software Engineering) through someone’s blog post linked from an Ignite Portland web page, and saw that they offer online classes for most of the courses, plus convenient evening times for the face-to-face classes. Bingo.

I’m taking the first course, OMSE 500, Principles of Software Engineering, right now. The course is an overview of the rest of the program, and it’s helping me realize that software engineering is quite a different topic from computer science. The course is entirely discussion based, covering topics such as project management, system architecture, development methodologies and other high-level subjects, but we never actually look at code. I’m not sure yet whether I like that. I sometimes wonder if I would get burned out on code if I coded a normal work week and then had classes where all I did was code on top of that. On the other hand, I feel like the OMSE courses will prepare me to be a project manager more than they will make me a better programmer.

What I like best about the course so far is getting a lot of perspective and stories from my fellow students. A prerequisite for taking the classes is that you’ve been working in software development for a few years, so it’s interesting to hear about the real-world problems people face, as opposed to the fellow students I had when I was finishing my undergrad, where almost nobody had any real experience outside of homework assignments. It really drives home the point that, with software being as ubiquitous as it is now, some of the biggest challenges in developing it lie in managing how programmers work with each other, since there’s not much that is done by a single person anymore.

I plan to at least get the certificate, which is 5 courses and should take me a little over a year. The full master’s program is 13 or so courses, but I’m not sure yet that I wouldn’t rather focus on more computer science courses. The biggest downside to the program is that it’s expensive – over $1500 for 3 credits. I can’t imagine paying that if work didn’t chip in for most of it. I suppose they charge a little extra since most people enrolled have their employers paying for it.

Keeping track of passwords has been a pain – until I found Passpack.  Passpack is a free, online password manager that I’ve been using for over a year now to keep track of most of my passwords.

Security

Storing passwords online might set off some security warning bells in your head, since you’ll have all your passwords in one place that anyone could try to get at, but I’ve convinced myself that using Passpack is safe and that they take security very seriously. Your password data is never sent anywhere unencrypted, meaning not even Passpack programmers can access it. You do have a login that gets sent to Passpack to access your account, but to ‘unpack’ your data you have to type in another password they call your Packing Key.
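The general pattern here is sometimes called host-proof hosting, and it’s easy to sketch in a few lines of Ruby. To be clear, this is only an illustration of the idea, not Passpack’s actual scheme – the cipher and the toy key derivation below are my own assumptions:

```ruby
require 'openssl'

# Toy sketch of client-side encryption: the server only ever stores the
# output of encrypt_entry, never the Packing Key or the plaintext.
# (Illustrative only -- not Passpack's real algorithm or key derivation.)
def encrypt_entry(plaintext, packing_key)
  cipher = OpenSSL::Cipher.new('aes-256-cbc').encrypt
  cipher.key = OpenSSL::Digest::SHA256.digest(packing_key) # toy KDF
  iv = cipher.random_iv
  iv + cipher.update(plaintext) + cipher.final             # prepend IV to ciphertext
end

def decrypt_entry(blob, packing_key)
  cipher = OpenSSL::Cipher.new('aes-256-cbc').decrypt
  cipher.key = OpenSSL::Digest::SHA256.digest(packing_key)
  cipher.iv = blob[0, 16]                                  # first 16 bytes are the IV
  cipher.update(blob[16..-1]) + cipher.final
end
```

The point is that only the encrypted blob ever reaches the server; without the Packing Key, what’s stored is just noise.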

Another security benefit is that now I don’t reuse passwords like I used to.  Trying to remember all the logins and passwords for all the websites I go to used to be such a hassle that I just used the same 2 or 3 passwords for everything. Passpack even has a nifty password generator that I used to create stronger passwords.

Access

I was a bit worried initially about what might happen if Passpack were down (I’ve never seen it happen) or if I lost my internet connection, but they have all sorts of offline options, including a simple export (which you’ll want to encrypt if you’re storing it locally) and Google Gears. I haven’t had a problem reaching Passpack from anywhere yet, but it’s nice to know that if I did, I would have a backup.

Ease of Use

Besides storing your passwords online, they make it ridiculously easy to log in places. They have a button you can add to your browser toolbar that automatically logs you into websites Passpack knows your password for. That saves you from having to copy and paste stuff all over the place, though they make that easier too, with one-click copy to the clipboard that never shows your username or password on the screen, so you don’t have to worry about anyone shoulder surfing your info.

Besides all this, there are some newer features they offer that I don’t even take advantage of, like secure message sending and the ability to share passwords between accounts.

Passpack is now one of the first sites that I open when I start a browsing session. Perhaps one day OpenID or something like it will be ubiquitous and I won’t need so many passwords, but until then some sort of tool like this to help is essential.

I’ve been working on more and more computers lately, and I was getting tired of my favorite bash and editor shortcuts not being available across the different machines. I finally took some good advice I heard a while back and put my config files under source control, and it’s been one of the best tips I’ve followed in some time.

The way I’ve done it is to use GitHub to store my config files, so anyone else is free to take a look if they want to see how I’ve got vim, bash, screen, readline, ruby’s irb or other things configured. However, the biggest benefit is that I can quickly get a new machine customized with all my favorite settings just by doing a checkout (clone in git) into my home directory on the new machine. From there I’ve got a little script I run called ‘create_symlinks’ that backs up the old config files before overwriting them with symlinks that point to the files in my checkout. That way, whenever I update my repository, the files are automatically current.
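The script itself is nothing fancy – something along these lines works (a simplified sketch, not the exact script; the backup suffix and the skip list are just my choices):

```ruby
require 'fileutils'

# Simplified sketch of a create_symlinks script: for each dotfile in the
# checkout, back up any real file already sitting in the home directory,
# then point a symlink at the repo copy.
def create_symlinks(repo_dir, home_dir)
  Dir.glob(File.join(repo_dir, '.*')).each do |src|
    name = File.basename(src)
    next if %w[. .. .git].include?(name)   # skip directory entries and git metadata
    dest = File.join(home_dir, name)
    if File.exist?(dest) && !File.symlink?(dest)
      FileUtils.mv(dest, "#{dest}.bak")    # back up the old config file first
    end
    FileUtils.ln_sf(src, dest)             # symlink into the checkout
  end
end
```

After cloning into the home directory you run this once to wire everything up; from then on, pulling the repo updates the live config files automatically, since the symlinks point into the checkout.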

This has been immensely helpful for taking the tricks I learn at work and easily incorporating them at home or on any remote server I have to work on. If I add something new and cool to my vimrc at work, I just have to remember to commit it and push the changes to GitHub before I head home, and then I can keep working from home without having to remember whatever command I just automated.

I’ve even heard of people who go so far as to put their whole home directories under version control, not only as a way of moving files around but as a way of doing backups. That seems like overkill to me, but it’s worth thinking about which of the files we move around a lot could be easily synced and backed up using a source control system like git or SVN.

If you’re wondering what happened to this site, I decided I was sick of trying to use Mephisto for blogging software. I love Rails development, but Mephisto was a pain in the ass and a major memory hog compared to the much more stable WordPress. It was taking down my slice pretty regularly and discouraging me from posting because it was comparatively painful to use. I haven’t moved the old posts over yet, but they will be coming in the next few weeks. I’d move them sooner, but they’re mostly about stuff that is long since out of date. I’d say I hope to post more frequently, but I’m not entirely sure that’s true. We’ll see if having an easier-to-use blogging system installed precipitates more frequent posting.

I’ve got a new job, and it’s almost completely in Perl. Who’d have thought? Fortunately, it really is true that if you know one programming language fairly well, you’ll pick up others fairly easily. Modifying Perl code is easy as long as I have other code to refer to, and I often forget it’s a new language. Learning all the names, faces and practices at my new job is much harder than learning a bit of new syntax. However, if I had to sit down and write something in Perl from scratch, it would not go well.

There are definitely a few things I’ve noticed about Perl that I don’t like compared to Ruby:

  • Sigils. I keep forgetting to prepend my variables with those $@% sigils. That’s not me cursing there – those are the sigils for scalars, arrays and hashes. The worst part is that when you reference the whole array (@foo) you use a different sigil than when you reference a scalar element of the array ($foo[0]).
  • Objects feel like a hacked-up add-on to the language. You have to “bless” things to make them objects, and you’re always passing around references to $self in object methods. You can’t make methods private. Weird stuff.
  • No interactive shell built in. I loved having irb to try things out when I was learning Ruby – heck, I still do. Fortunately, after a bit of searching and bad advice on just using the Perl debugger I found Perl Console.
  • Always needing to say “my variablename” instead of just “variablename” if I want local variables. It seems like variables should be local by default, and you should have to say something if you want to make them global.

There are some things about Perl I’m not sure whether I like or dislike yet:

  • Default variables. Strangely named variables like $_ and @_ pop up everywhere. The idea is that if you don’t want to come up with a name for variables while you’re iterating or passing parameters, you don’t have to, thus saving all that typing and thinking. However, it certainly makes the code harder to read for those who don’t know Perl, and I suspect it might remain harder to read even once I get used to it.
  • Regular expressions everywhere. This will be good for me to get better with this powerful feature, but damned if regular expressions aren’t ugly.
  • There’s always more than one way to do something. This is true in most every language, but Perl seems to take a lot of pride in it, and often the idiomatic Perl way of doing something is quite a bit harder to read than the non-idiomatic way.

The main thing I like about Perl so far is that the Perl community seems to have a sense of humor. Perl books are always making jokes and variables in Perl code are given entertaining names.

Aside from the new language, the new job is great for getting me used to coding in a large codebase and on a team. We use quite a few Extreme Programming practices like test-driven development and, even more impressive, pair programming. These practices more than make up for anything I don’t like about Perl.

Since the old Rails breakpointer quit working in versions of Ruby after 1.8.4, the preferred method for debugging Rails has been ruby-debug. It’s a separate gem, so to install it you just run

gem install ruby-debug

However, the current version of ruby-debug (0.10.1) runs into problems on Windows – namely, it doesn’t work, and it spews out an error message when you try to run it directly:

c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require': no such file to load -- c:/ruby/
lib/ruby/gems/1.8/gems/linecache-0.42-x86-mswin32/lib/../ext/trace_nums (LoadError)
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/linecache-0.42-x86-mswin32/lib/tracelines.rb:8
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/linecache-0.42-x86-mswin32/lib/linecache.rb:63
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/ruby-debug-base-0.10.1-mswin32/lib/ruby-debug-base.rb:3
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/ruby-debug-0.10.1/cli/ruby-debug.rb:5
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
from c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from c:/ruby/lib/ruby/gems/1.8/gems/ruby-debug-0.10.1/bin/rdebug:7
from c:/ruby/bin/rdebug:19:in `load'
from c:/ruby/bin/rdebug:19

I’ve found two possible ways to fix this. The first is simply to revert to ruby-debug version 0.10.0:

gem uninstall ruby-debug
gem install ruby-debug --version "=0.10.0"

The other way to fix the problem is one I found on a ticket in the Rails Trac.

In the file ruby\lib\ruby\gems\1.8\gems\linecache-0.42-x86-mswin32\lib\tracelines.rb I changed the line from

require File.join(@@SRC_DIR, '..', 'ext', 'trace_nums')

to

require File.join(@@SRC_DIR, '..', 'ext', 'extconf.rb')

I suppose the advantage of this method is that you keep the newer version of ruby-debug with all its fixes, although I don’t really know what those are.

Now that I’m finally back in the USA, it’s time to start blogging about technical things again. I doubt anyone really noticed, but this site was down for the last few months: my free Rails hosting expired back in February and I didn’t bother to renew it. I was also too busy to set Mephisto up again on my other server, especially since I haven’t liked Mephisto very much. I decided I’d continue with it and try to change the things I didn’t like, if for no other reason than that it’s good practice deploying Rails apps. I’ll have the old articles back up in the near future.

update: I finally noticed that some people still visit this post. scope_out was great, but it’s been integrated into Rails now. It’s called named_scope and it works right out of the box with will_paginate.

Round 1: scope_out

A plugin I have found immensely useful is the scope_out plugin by John Andrews. The rationale for using the plugin is very nicely summed up by Dan Manges:

  • Your conditions are defined in ONE place.
  • You can extend associations if you want caching.
  • The ‘raw’ with_scope is available, making applying multiple conditions easy.
  • The find and calculate methods are both available.

Use it like so:

class Project < ActiveRecord::Base
  has_many :tasks, :dependent => :delete_all
end

class Task < ActiveRecord::Base
  belongs_to :project
  scope_out :active, :conditions => "status = 'active'"
end

Now you can do the following:

# All active tasks
@tasks = Task.find_active(:all, :order => 'due_on')

# The active tasks for the project with memoization preserved
@tasks = @project.tasks.active

If you don’t see why that’s useful or understand what that means, read the whole blog post and contrast it with Jamis Buck’s take on associations, which is what originally led me to scope_out. I don’t like Jamis’ idea of defining methods on the associations, because there are some methods I need to use as part of the association OR as just a method on the original class. Also, what if a user also has_many tasks? You’d have to redefine what ‘active’ is for that association too.

To install it, it appears you have to go to the svn repository, since it’s not found in the default Rails plugin repositories:

ruby script/plugin install http://scope-out-rails.googlecode.com/svn/trunk/

and then rename /vendor/plugins/trunk to /vendor/plugins/scope_out.

Round 2: will_paginate

Using scope_out, I’d been paginating my conditions with Rails’ classic pagination:

def index
  @task_pages = Paginator.new self, Task.find_active(:all).size, 10, params[:page]
  @tasks = Task.find_active(:all, :order => 'due_on',
    :limit  => @task_pages.items_per_page,
    :offset => @task_pages.current.offset)
end

But now that pagination isn’t going to be part of the Rails core, I went looking for what people were going to do instead and found Err’s will_paginate plugin, which is slicker than what I’d been doing anyway.

@tasks = Task.paginate :page => params[:page]

But can I still use my scope_out definitions?

@tasks = Task.paginate_active :page => params[:page]
NoMethodError: undefined method `find_all_active' for Task:Class

Not yet. will_paginate implements a bit of method_missing code so you can do find_by’s with an implicit all:

# These are equivalent
@tasks = Task.paginate_by_priority "high", :page => params[:page]
@tasks = Task.paginate_all_by_priority "high", :page => params[:page]

However, the way the regular expression is written messes up scope_out. This is easily fixed by replacing one line of code in vendor/plugins/will_paginate/lib/will_paginate/finder.rb. Replace

finder.sub! /^find/, 'find_all'

on line 91 with

finder.sub! /^find_by/, 'find_all_by'

Now paginate_by will still have an implicit all, and scope_out definitions will work with the pagination. I’m pretty sure this change doesn’t break anything in the will_paginate plugin, but if anyone finds problems with it, let me know.
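The renaming trick itself is easy to see in plain Ruby. This is just a toy version (not will_paginate’s actual code, and the TaskFinder class and its canned results are made up for illustration): method_missing catches the paginate_* call, rewrites it into the corresponding finder name, and dispatches it.

```ruby
# Toy illustration of will_paginate-style finder renaming (not the
# plugin's real code): paginate_by_priority gets translated into a call
# to find_all_by_priority via method_missing.
class TaskFinder
  def find_all_by_priority(value)
    ["first #{value} task", "second #{value} task"]   # stand-in for a DB query
  end

  def find_active
    ["active task"]                                   # what a scope_out macro would define
  end

  def method_missing(name, *args)
    finder = name.to_s
    return super unless finder =~ /^paginate/
    finder.sub!(/^paginate/, 'find')
    finder.sub!(/^find_by/, 'find_all_by')   # the anchored substitution from above
    send(finder, *args)                      # the real plugin would also apply :limit/:offset
  end
end
```

With the anchored /^find_by/, paginate_active turns into find_active – which scope_out actually defines – instead of being mangled into find_all_active, which doesn’t exist.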

I just signed up for a Slicehost account today and am trying to set up my new server with the deprec Capistrano recipes, following the free bonus PeepCode video: Building a Full Rails Stack on Ubuntu 6.06.

The first Capistrano task you have to run is really hard to read, since the free video download is poor enough quality that you can’t make out the underscores at all.

cap setup ssh keys

should be

cap setup_ssh_keys

Running it the wrong way got me the error:

command "sudo  chgrp -R deploy /var/www/apps/rails_app" failed on www.servername.com

If you haven’t been doing test-driven development – and you should be – you may not be sure how well tested your Rails application is. You can use rake stats to get your code-to-test ratio, but that only gives you a general sense of how you’re doing; just knowing that your ratio is 1.0 to 0.7 doesn’t really guarantee anything. You can write some tests in one line that exercise dozens of lines of code fairly well.

This is where rcov comes in to save the day. Thankfully, Coda Hale has written a Rails plugin to make running rcov on your projects super easy. The instructions on his page tell you to install rcov by downloading it, but there’s a prepackaged gem that makes installation even easier. Let’s install it:

gem install rcov

Now, Coda Hale recommends installing the plugin using -x to add an svn:externals entry, but I can’t stand having svn:externals, since every time I want to update or commit I become dependent on someone else’s svn repository. You’re better off using Piston if you need to keep your plugins up to date. So go ahead and run:

ruby script/plugin install http://svn.codahale.com/rails_rcov

That’s pretty much it for installation. Just run rake with a little change:

rake test:units:rcov

You’ll now find a folder in your Rails project called coverage. Look in there, find the index file, and open it up. You can now see how well covered all your models are. If you open an individual model’s coverage file, you can see exactly which lines aren’t covered and begin writing tests for them. There are other options you can pass to rcov too, so take a look at the documentation if you want to get fancy. The only other command I run is

rake test:functionals:rcov RCOV_PARAMS="--sort=coverage"

The difference there is that I’m testing the controllers with functionals instead of the models with units, and sorting the output by code coverage. This way I can easily see which models and controllers need the most work.