Cloudy Data Storage, circa 2001

Around 2000-2001, Oasis Digital built a system for a client which (in retrospect) took a “cloudy” approach to data storage. 2001 is a few years before that approach gained popularity, so it’s interesting to look back and see how our solution stacks up.

The problem domain was the storage of check images for banks; the images came out of a check-imaging device, a very specialized camera/scanner capable of photographing many checks per second, front and back. For example, to scan 1000 checks (a smallish run), it generated 2000 images. All of the images from a run were stored in a single archive file, accompanied by index data. OCR/mag-type data was also stored.

I don’t recall the exact numbers (and probably wouldn’t be able to talk about them anyway), so the numbers here are estimates to convey a sense of the scale of the problem in its larger installations:

  • Many thousands of images per day.
  • Archive files generally between 100 MB and 2 GB.
  • Hundreds, then thousands, of these archive files.
  • In an era when hard drives were much smaller than they are today.

Our client considered various off-the-shelf high-capacity storage systems, but instead worked with us to construct a solution roughly as follows.

Hardware and Networking

  • Multiple servers were purchased and installed, over time.
  • Servers were distributed across sites, connected by a WAN.
  • Multiple hard drives (of capacity C) were installed in each server, without RAID.
  • Each storage drive on each server was made accessible remotely via Windows networking.

Software

  • To keep the file count manageable, the files were kept in the many-image archives.
  • A database stored metadata about each image, including what file to find it in.
  • The offset of the image data within its archive file was also stored, so that it could be read directly without processing the whole archive.
  • Each archive file was written to N different drives, all on different servers, and some at different physical sites.
  • To pick where to store a new file, the software could simply look through the list of possible locations and check for sufficient free space.
  • A database kept track of where (all) each archive file was stored.
  • An archive file could be read from any of its locations: client software would connect to the database, learn all the locations for a file, then read from whichever copy was reachable. (A rough sketch of this flow follows this list.)
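
The original software is not reproduced here; what follows is a rough sketch in present-day Python (not what we used then) of the core placement and read logic, with invented names (pick_storage_drives, read_image, N_COPIES) and in-memory dicts standing in for the metadata database:

import os
import random
import shutil

# Hypothetical metadata, sketched as in-memory dicts; the real system kept
# this in a relational database.
# archive_locations: archive file name -> list of UNC paths holding a copy
archive_locations = {}
# image_index: image id -> (archive file name, byte offset, length)
image_index = {}

N_COPIES = 3  # each archive file is written to N drives on different servers

def free_space(drive):
    # shutil.disk_usage accepts UNC paths on Windows (Python 3.3+)
    return shutil.disk_usage(drive).free

def pick_storage_drives(candidate_drives, archive_size, n=N_COPIES):
    """Pick n drives with enough free space, each on a different server."""
    usable = [d for d in candidate_drives if free_space(d) > archive_size]
    random.shuffle(usable)
    chosen, servers_used = [], set()
    for drive in usable:                  # drive looks like \\server\share
        server = drive.split('\\')[2]
        if server not in servers_used:
            chosen.append(drive)
            servers_used.add(server)
        if len(chosen) == n:
            return chosen
    raise RuntimeError('not enough drives available; try again later')

def read_image(image_id):
    """Read one image directly from any reachable copy of its archive."""
    archive, offset, length = image_index[image_id]
    for location in archive_locations[archive]:
        try:
            with open(os.path.join(location, archive), 'rb') as f:
                f.seek(offset)           # jump straight to the image data
                return f.read(length)    # no need to parse the whole archive
        except OSError:
            continue                     # that drive/server is down; try the next copy
    raise RuntimeError('no reachable copy of ' + archive)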

This system was read-mostly, and writes were not urgent. For writes, if N storage drives weren’t available, the operator (of the check-scanning system) would try again later. CAP and other concerns weren’t important for this application.

Helpful Properties

  • Even if some servers, sites, or links were down, files remained generally accessible.
  • Offline media storage could be added, though I don’t recall if we got very far down that path.
  • The system was very insensitive to details like OSs, OS versions, etc. New storage servers and drives could be added with newer OS versions and bigger drive sizes, without upgrading old storage.
  • Drives could be made read-only once full, to avoid whole classes of possible corruption.
  • By increasing the number of servers, and number of hard drives over time, this basic design could scale quite far (for the era, anyway).

This approach delivered to our client many of the benefits of an expensive scalable storage system, at a fraction of the cost, using only commodity equipment.

Why do I describe this as cloud-like? Because from things I’ve read, this is similar (but much less sophisticated, of course) to the approach taken inside of Amazon S3 and other cloud data storage systems/services.

Key Lesson

Assume you are willing to pay to store each piece of data on N disks. You get much better overall uptime (given the right software) if those N disks are in N different machines spread across sites than you do by putting those N disks in a RAID array in a single machine. Likewise, you can read a file much faster from an old slow hard drive in the same building than from a RAID-6 SAN across a 2000-era WAN. The tradeoff is software complexity.
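
A back-of-the-envelope calculation, with invented availability figures, shows why; the point is only how quickly (1 - p)^N shrinks when the N copies can fail independently:

# Invented figure: each storage server (drives, OS, power, network link) is
# reachable, say, 98% of the time, and servers fail independently.
p_server_up = 0.98
N = 3  # copies of each archive file, each on a different server

# Replicated across N servers: a file is unreadable only when all N are down.
p_unavailable = (1 - p_server_up) ** N            # 0.02 ** 3 = 0.000008
print('N separate servers:', 1 - p_unavailable)   # ~0.999992

# The same N disks in a RAID set inside one server: availability is capped by
# that single server, however good the RAID is.
print('one RAID server:   ', p_server_up)         # 0.98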

Fix timestamps after a mass file transfer

I recently transferred a few thousand files, totalling gigabytes, from one computer to another over a slowish internet connection. At the end of the transfer, I realized the process I used had lost all the original file timestamps; every file on the destination machine had a create/modify date of when the transfer occurred. In this particular case I had uploaded the files to Amazon S3 from one machine and then downloaded them onto another, but there are numerous other ways to transfer files that lose the timestamps; for example, many FTP clients do so by default.

This file transfer took many hours, so I wasn’t inclined to delete everything and try again with a better (timestamp-preserving) transfer process. Instead, I figured it shouldn’t be very hard to fix the timestamps in place.

Both machines were Windows servers; neither had a broad set of Unix tools installed. If I had those present, the most obvious solution would be a simple rsync command, which would fix the timestamps without retransferring the data. But without those tools present, and with an unrelated desire to keep these machines as “clean” as possible, plus a firewall obstacle to SSH, I looked elsewhere for a fix.

I did, however, happen to have a partial set of Unix tools (in the form of the MSYS tools that come with MSYSGIT) on the source machine. After a few minutes of puzzling, I came up with this approach:

  1. Run a command on the source machine
  2. … which looks up the timestamp of each file
  3. … and stores those in the form of a batch file
  4. Then copy this batch file to the destination machine and run it.

Here is the source machine command, executed at the top of the file tree to be fixed:

find . -print0 | xargs -0 stat -t "%d-%m-%Y %T"
 -f 'nircmd.exe setfilefoldertime "%N" "%Sc" "%Sm"'
 | tr '/' '\\' >~/fix_dates.bat

I broke it up into several lines here, but it’s intended as one long command.

  • “find” gets the names of every file and directory in the file tree
  • xargs feeds these to the stat command
  • stat gets the create and modify dates of each file/directory, and formats the results in a very configurable way
  • tr converts the Unix-style “/” paths to Windows-style “\” paths.
  • The results are redirected to (stored in) a batch file.

As far as I can tell, the traditional set of built-in Windows command line tools does not include a way to set a file or directory’s timestamps. I haven’t spent much time with PowerShell yet, so I used the (very helpful) NIRCMD command line utilities, specifically the setfilefoldertime subcommand. The batch file generated by the above process is simply a very long list of lines like this:

nircmd.exe setfilefoldertime "path\filename" "19-01-2000 04:50:26" "19-01-2000 04:50:26"

I copied this batch file to the destination machine and executed it; it corrected the timestamps, and the problem was solved.
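
For what it’s worth, the same capture-and-replay idea could also be sketched in Python, avoiding both NIRCMD and the MSYS tools, if Python happened to be installed on both machines. This is an untested sketch, not what I actually ran; note that os.utime restores only the access/modification times, so fixing the Windows creation time would take extra work (pywin32 or similar):

import json
import os
import sys

def capture(root, out_file):
    """On the source machine: record each file's modification time."""
    times = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames + dirnames:
            path = os.path.join(dirpath, name)
            times[os.path.relpath(path, root)] = os.stat(path).st_mtime
    with open(out_file, 'w') as f:
        json.dump(times, f)

def apply(root, in_file):
    """On the destination machine: re-apply the recorded times."""
    with open(in_file) as f:
        times = json.load(f)
    for rel, mtime in times.items():
        path = os.path.join(root, rel)
        if os.path.exists(path):
            os.utime(path, (mtime, mtime))  # (atime, mtime)

if __name__ == '__main__':
    # usage: python fix_times.py capture|apply <root-dir> <times.json>
    {'capture': capture, 'apply': apply}[sys.argv[1]](sys.argv[2], sys.argv[3])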

Data Center (Cloud) Cost Efficiency

A few months ago I mentioned James Hamilton’s comments on the micro-server trend. Today I came across a talk he gave at MIX10 in which he presented excellent real-world large-scale data, with insightful analysis, about the cost efficiency of data centers. (Here is a direct MP4 download, suitable for viewing across more platforms.)

I had an intuitive feel for many of his conclusions already, and had numbers to back that up on a small scale (as a customer of cloud services, and provider of SaaS services, and employer of people who operate systems). But I am very pleased whenever an opportunity comes along to replace intuition with data.

I won’t attempt to repeat his ideas here. I will simply recommend that you watch this (and other similar analyses) and get a decent understanding, before purchasing or deploying any in-house, self-hosted, or self-managed servers. The latter still makes sense in some situations, but in 2010 the cloud is the default right answer.

A number of the ideas he presents are iconoclastic; some popular trends, especially in enterprise data centers, turn out to be misguided.

Upcoming talk: Cloud Computing User Group

The St. Louis Cloud Computing User Group launches on Jan. 21st at Appistry. Sam Charrington over there kicked it off, but I suspect it will shortly grow far past its Appistry roots.

I’m giving a talk (one of two) at the first meeting. Contrary to the initial description floating around, I won’t be speaking (in detail) about “Amazon Web Services from a Developer Perspective”. Rather, my talk will be broader, and from a developer+business perspective:

To the Cloud(s) and Back

Over the last few years, I’ve been to the Amazon cloud and back: on a real project I started with in-house file storage, moved to Amazon S3, then moved back. I’ve likewise used EC2 and tried a couple of competitors. I think this qualifies me to raise key questions:

  • Should you use (public) cloud storage? Why and why not?
  • Should you use (public) cloud CPUs? Why and why not?
  • How do you manage an elastic set of servers?
  • Can you trust someone else’s servers? Can you trust your own?
  • Can you trust someone else’s sysadmins? Can you trust your own?
  • What about backups?

This talk will mostly raise the questions, then offer some insights on some of the answers.

Update: Slides are online here.

Massive Parallelism and Microslices

I just read James Hamilton’s comments on “Microslice” servers: very low-power servers with a high CPU-to-wattage ratio. As he explains in detail, at scale the economics of this design are compelling. In some ways, of course, this is the opposite of another big trend going on, which is consolidation through virtualization. I reconcile these forces like so:

  1. For enterprises with a high ratio of employees-per-server-CPU, costs tend to scale with the number of boxes, racks, etc. This makes virtualization onto a few big servers a win.
  2. But for enterprises with a low ratio (lots of computing work, a small team), the pure economics of the microserver approach makes it the winner.

The microserver approach demands:

  • better automated system administration: you must get to essentially zero marginal sysadmin hours per box.
  • better decomposition of the computing work into parallelizable chunks
  • very low software cost per server (you’re going to run a lot of them), favoring zero-incremental-cost operating systems (Linux)

My advice to companies that make software to harness a cloud of tiny machines: find a way to price it so your customer pays you roughly the same amount to divide their work among 1000 microservers as they would to divide it among 250 heavier servers; otherwise, if they move to microservers, they may find a reason to leave you behind.
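
To make the pricing point concrete, with invented numbers:

# Invented numbers: the same workload spread over heavier servers vs. microservers.
heavy_servers, micro_servers = 250, 1000
per_server_license = 1000     # dollars per server, a common pricing model

print(per_server_license * heavy_servers)   # 250000 for the heavier layout
print(per_server_license * micro_servers)   # 1000000 for the same work on microservers

# Per-server pricing quadruples the software bill for choosing microservers;
# pricing per core, per unit of work, or per node-hour keeps the two layouts
# roughly equal and removes the incentive to drop the vendor.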

On a personal note, I find this broadening trend toward parallelization to be a very good thing – because my firm offers services to help companies through these challenges!