I Went In a Boy, I Came Out a Man

[Image: Apple Store logo sign]

Not really; it just seemed like the sort of over-the-top thing a rabid Mac fan might say.

But I did replace my main Windows PC with a MacBook Pro. I’ve used Apple products occasionally over the decades, going all the way back to the Apple II, IIe, IIgs, and original 1984 Macintosh. I’m not “switching”, but rather adding; our client projects at Oasis Digital continue to run primarily on Windows or Linux. Our Java work runs with little extra effort on all three platforms.

Here are some thoughts from my first days on this machine and OSX:

  • The MacBook Pro case is very nice. I didn’t see any Windows-equipped hardware with anything similar. The high-tech metal construction is an expensive (and thus meaningful) signal that Apple sends: Apple equipment is high end. The case also has the great practical benefit of acting as a very large heat sink.
  • The MBP keyboard is a bit disappointing; I miss a real Delete key (in addition to Backspace), as well as Home, End, PageUp, and PageDown. At my desk I continue to use a Microsoft Natural Keyboard, so this is only a nuisance on the road.
  • I bought a Magic Mouse for the full Apple experience; but I’ll stick with a more normal mouse (and its clickable middle wheel-button) for most use. I find wireless mice too heavy, because of their batteries.
  • Apple’s offerings comprise a fairly complete solution for common end user computing needs; for example, Apple computers, running Time Machine for backup, storing on a Time Capsule. I didn’t go this route, but it is great to see it offered.
  • Printing is very easy to set up, particularly compared to other Unix variants.
  • VMware Fusion is fantastic, and amply sufficient to use this machine for my Windows work. Oddly, my old Windows software running inside the VM seems slightly more responsive than the native Mac GUI outside (!).
  • I need something like UltraMon; the built-in multi-monitor support is trivial to get working, but the user experience is not as seamless as Windows+UltraMon. For example, where is my hotkey to move windows between screens, resizing automatically to account for their different sizes?
  • Windows has a notion of Cut and Paste of files in Explorer. It is conceptually a bit ugly (the files stay there when you Cut them, until Pasted), but extremely convenient. OSX Finder doesn’t do this, as discussed at length on many web pages.
  • I would like to configure the Apple Remote to launch iTunes instead of Front Row, but haven’t found a way to do so yet. No, Mr. Jobs, I do not wish to use my multi-thousand-dollar computer in a dedicated mode as an overgrown iPod. Ever.
  • The 85W MagSafe power adapter, while stylish and effective, is heavy. I’d much prefer a lighter aftermarket one, even if it were inferior in a dozen ways, but apparently Apple’s patent on the connector prevents this. I’d actually be happy to pay Apple an extra $50 for a lightweight power adapter, if they made such a thing.
  • This MBP is much larger, heavier, and more expensive than the tiny Toshiba notebook PC it replaces; yet it is not necessarily any better for web browsing, by far the most common end user computer activity in 2009. This is not a commentary on Apple, it merely points out why low-spec, small, cheap netbooks are so enormously popular.

Missing your .svn\tmp directories? One-line fix.

You may find that “svn cleanup” (or its TortoiseSVN equivalent) fails with an error message about “system cannot find the path specified”. If you research this error, you may find that the SVN dev team knows that svn cleanup does not clean up this particular problem, and as of SVN version 1.6.5, considers that OK.

There is an easy fix, though. The tools are already present on nearly any Linux system, and are available in Cygwin or MSYS on Windows. Navigate to the top of your SVN working directory, and run this:

find . -type d -iname '.svn' -exec mkdir -p {}/tmp \;

If all you were missing was some empty tmp directories, svn cleanup will now work, as will svn update and friends. Of course you may have other, additional problems with your .svn directories.

A mystery, for me and others, is how the missing .svn\tmp directory situation comes about. The best guess I’ve seen, but not yet reproduced here, is that a helpful piece of software (perhaps a backup tool?) deletes empty directories.

The great majority of all software I’ve used does not depend on empty directories, and I likewise heartily recommend not designing software in such a way that it requires empty directories to be preserved. If you need a directory, keep something in it. If you don’t need anything in it, be willing to recreate it when you have something to put in it. Make it Just Work.
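In shell terms, the defensive pattern is a single line; the file and directory names here are made up for illustration:

# recreate the directory if needed (no error if it already exists), then write into it
mkdir -p output/tmp && cp results.dat output/tmp/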

Finally, massive storage done right

Last year, I wrote about my efforts to find a storage server with lots of storage at a low cost-per-byte. What was obvious to me at the time, but apparently not obvious to many vendors, is that the key to cost effective storage is to buy mostly hard drives and as little else as possible. I built on Linux and commodity hardware, but the principle applies regardless of OS or hardware vendor.

The team at BackBlaze went much farther down the same path. They ended up with a custom-made 4U case (a bit expensive) while the rest of the parts are few in number, inexpensive, and off the shelf. Their cost overhead is stunningly low, as seen in this chart (which I copied from their article):

[Chart: cost of a petabyte, from the BackBlaze article]

Is this right for everyone? Of course not. Enterprise buyers, for example, may need the extra functionality offered by the enterprise class solutions (at many times the cost). Cloud providers and web-scale data storage users, though, simply cannot beat BackBlaze’s approach. What about performance? Clearly this low-overhead approach is optimized for size and cost, not performance. Yet the effective performance can be very high, because this approach makes it possible to use a very large number of disk spindles, and thus achieve a very high aggregate IO capacity (for a rough sense of scale: 45 drives at a modest ~80 random IOPS each is on the order of 3,600 IOPS per chassis, before any RAID overhead).

Predictably, the response to BackBlaze’s design has been notably mixed, with numerous complaints about performance and reliability. For a very thoughtful (though unavoidably biased) response, read this Sun engineer’s thoughts.

The key thing to keep in mind is the problem being solved. BackBlaze’s design is ideal for use as backup, bulk storage. That is a very common need; the solution I set up (described at the link above) had a typical use case of a given file being written once, then never read again, i.e. kept “just in case”. Reliability, likewise, is obtained at the system level, by having multiple independent servers, preferably spread across multiple physical sites. Once you’re paying the complexity cost to achieve this, there isn’t much additional benefit to paying the cost a second time in the form of more expensive storage.

gitosis on Ubuntu 9.04 Jaunty

As of April 2009, the gitosis package in Ubuntu 9.04 Jaunty is broken; it fails with an error like so:

pkg_resources.DistributionNotFound: gitosis==0.2

There are quite a few pages and mailing list messages that mention this. I only found one with a good hint toward a solution, which was that it is also a known issue on Debian. Following that lead, I got it working by grabbing newer packages from Debian Unstable:

wget http://ftp.us.debian.org/debian/pool/main/p/python-support/python-support_1.0.2_all.deb
wget http://ftp.us.debian.org/debian/pool/main/g/gitosis/gitosis_0.2+20080825-14_all.deb
sudo dpkg -i python-support_1.0.2_all.deb
sudo dpkg -i gitosis_0.2+20080825-14_all.deb

Use this at your own risk; your mileage may vary.
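To double-check what actually ended up installed after forcing the packages in:

dpkg -l gitosis python-support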

Sometimes You Need to Compile It Yourself

I posted an earlier version of this on the Puppet mailing list recently; it seemed worth expanding here.

An Ideal World

In an ideal world, for each piece of Linux software I use, a very recent version would be “in the box” in the distribution package repositories, for every distro and distro release I use. The package versions would be updated promptly, and backported to the most recent N distro releases. This would be done in such a way as to avoid unexpected breakage, offering a combination of original (as of the distro release), bug-fix-only, and all-updates.

We don’t live in such a world.

The Real World

I’ve found that, with at least Debian, Ubuntu, Red Hat, and Mandrake, typically only the most popular packages receive prompt updates for new upstream versions. Backports to not-the-current-distro are even more rare, for various good reasons.

Therefore, when adopting something less widely used, especially if I need the same (current) package version for various distro versions, I’m resigned to having to either package it myself, or find someone “out there” offering updated packages.

Example #1: Puppet

As I write this, the current Puppet version is 0.24.8. It contains a lot of bug fixes and enhancements relative to even the earlier 0.24.x versions. It would probably be only a slight stretch to say that in the Puppet community, versions before 0.24.x really aren’t recommended for current use at all. Yet the most recent versions offered in-the-box for the last few releases of Debian / Ubuntu are all old enough to be in that category.
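To see at a glance how far behind your distro is, compare the candidate package version against the current upstream release:

apt-cache policy puppet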

Example #2: udpcast

Udpcast is an extremely useful tool, both for cloning systems en masse, and for totally unrelated uses like the one I describe here; yet the version in the very latest Ubuntu is from 2004, and the same is true for Debian.
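As a rough sketch of the mass-cloning use (udp-sender and udp-receiver are the two halves of udpcast; the file and device names here are illustrative):

# on the machine holding the master image
udp-sender --file disk-image.img

# on each target machine, all fed by the same multicast stream
udp-receiver --file /dev/sda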

Example #3: Zabbix

To get good results with Zabbix, it’s necessary to have approximately the same, approximately current versions, on all machines. The versions in various current and past Ubuntu and Debian releases / backports are not even close.

But Please, Use Your Package System

Please, I ask you… and if you work for me on one of my projects, I require of you… do not take any of this to mean you should ever type “sudo make install”. It’s a nightmare to untangle a system with a mix of packaged and ad hoc compiled code in /usr/bin and friends. Always install using the package system: for a one-off (one machine, ever) install, simply use checkinstall; it takes only an extra minute or two and makes it trivial to back the install out later. For a set of production systems, dpkg and RPM are your friends. They won’t bite. Get to know them.
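For the one-off case, the checkinstall flow is just the usual build with the last step swapped; a sketch for a typical autotools-style project:

./configure
make
sudo checkinstall   # builds and installs a real .deb/.rpm instead of a raw "make install"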

DRBD on Ubuntu 8.04

A while back I wrote about setting up DRBD on Ubuntu.

It is considerably simpler now: you don’t need to build the module anymore; a reasonably recent version is already in the box. You should still read the full directions at the DRBD web site, but to get it going, you only need to configure it and start it; don’t download anything, don’t compile any modules or kernels, etc.

Rather, the module is already there; you could load it manually like this:

modprobe drbd

But you really want the utils package:

apt-get install drbd8-utils

… which sets things up the rest of the way. Simply proceed with the configuration file, /etc/drbd.conf. If you want a nice clean config file, remove all the existing resource sections in the drbd.conf installed by the Ubuntu package, and put in something like this:

resource kylesdrbd {
  protocol      C;

  startup { wfc-timeout 60; degr-wfc-timeout  120; }
  disk { on-io-error detach; }
  syncer {
  }
  on x1.kylecordes.com {
    device      /dev/drbd4;
    disk        /dev/sdb5;
    address     192.168.17.61:7791;
    meta-disk   /dev/sdb1[0];
  }
  on x2.kylecordes.com {
    device      /dev/drbd4;
    disk        /dev/sda5;
    address     192.168.17.62:7791;
    meta-disk   /dev/sda3[0];
  }
}

The hostnames here need to match the full, actual hostnames. I show my example disk configuration; you’ll need to do something locally appropriate, based on your understanding of the DRBD docs.

Also adjust the syncer section:

syncer {
  rate 40M;  # the rate is in BYTES per second
}

If there was already a filesystem on the metadata partitions you’re trying to put under DRBD, you may need to clear it out:

dd if=/dev/zero of=/dev/sdb1 bs=1M count=1

Now you’re ready to fire it up, using the resource name defined in drbd.conf; on both machines:

drbdadm create-md kylesdrbd
drbdadm attach kylesdrbd
drbdadm connect kylesdrbd

# see status:
cat /proc/drbd

You are now ready to make one node primary; do this on only one machine, of course:

drbdadm -- --overwrite-data-of-peer primary all

or

drbdadm -- --overwrite-data-of-peer primary kylesdrbd

In my case, I now see a resync starting in the /proc/drbd output.

You don’t need to wait for that to finish; go ahead and create a filesystem on the DRBD device (/dev/drbd4 in the configuration above).
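For example, on the primary node (ext3 here, but any filesystem works; the mount point is arbitrary):

mkfs.ext3 /dev/drbd4
mkdir /mnt/drbd
mount /dev/drbd4 /mnt/drbd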

It’s best to have a dedicated Gigabit Ethernet connection between the nodes, or for busy systems, a pair of bonded GigE links. For testing, though, running over your existing network is fine.

I found that adjusting the startup timeout settings (wfc-timeout and degr-wfc-timeout, shown in the startup section above) helps to avoid long boot delays if one of the two systems is down.

Of course I only covered DRBD here; typically in production you’d use it in conjunction with a failover/heartbeat mechanism, so that whatever resource you serve on the DRBD volume (database, NFS, etc.) cuts over to the other machine without intervention; there is plenty to read online about Linux high availability.