Around 2000-2001, Oasis Digital built a system for a client which (in retrospect) took a “cloudy” approach to data storage. 2001 was a few years before that approach gained popularity, so it’s interesting to look back and see how our solution stacks up.
The problem domain was the storage of check images for banks; the images came out of a check-imaging device, a very specialized camera/scanner capable of photographing many checks per second, front and back. For example, to scan 1000 checks (a smallish run), it generated 2000 images. All of the images from a run were stored in a single archive file, accompanied by index data. OCR/mag-type data was also stored.
I don’t recall the exact numbers (and probably wouldn’t be able to talk about them anyway), so the numbers here are estimates to convey a sense of the scale of the problem in its larger installations:
- Many thousands of images per day.
- Archive files generally between 100 MB and 2 GB.
- Hundreds, then thousands, of these archive files.
- In an era when hard drives were much smaller than they are today.
Our client considered various off-the-shelf high-capacity storage systems, but instead worked with us to construct a solution roughly as follows.
Hardware and Networking
- Multiple servers were purchased and installed, over time.
- Servers were distributed across sites, connected by a WAN.
- Multiple hard drives (of capacity C) were installed in each server, without RAID.
- Each storage drive on each server was made accessible remotely via Windows networking.
Software
- To keep the file count manageable, images were kept in the multi-image archive files rather than stored individually.
- A database stored metadata about each image, including what file to find it in.
- The offset of the image data within its archive file was also stored, so that it could be read directly without processing the whole archive.
- Each archive file was written to N different drives, all on different servers, and some at different physical sites.
- To pick where to store a new file, the software could simply look through the list of possible drives and check each for sufficient free space (sketched below).
- A database kept track of where (all) each archive file was stored.
- An archive file could be read from any of its locations: client software would connect to the database, learn all of the locations for a file, and read from whichever copy was reachable (see the sketch after this list).
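To make the read path concrete, here is a minimal modern sketch of roughly how a client could look up and fetch a single image. This is illustrative only, not the original code: the table names (images, archive_locations), the column names, and the DB-API-style connection object are all my own assumptions.

```python
def read_image(db, image_id):
    """Fetch one image's bytes by seeking directly into an archive replica."""
    # Metadata lookup: which archive holds the image, and where inside it.
    archive_file, offset, length = db.execute(
        "SELECT archive_file, offset, length FROM images WHERE image_id = ?",
        (image_id,),
    ).fetchone()

    # All known replica locations, e.g. r"\\server7\images3\batch_0042.arc"
    locations = [row[0] for row in db.execute(
        "SELECT unc_path FROM archive_locations WHERE archive_file = ?",
        (archive_file,),
    )]

    # Try replicas in turn; any reachable copy will do.
    for unc_path in locations:
        try:
            with open(unc_path, "rb") as f:
                f.seek(offset)         # jump straight to the stored offset
                return f.read(length)  # no need to parse the whole archive
        except OSError:
            continue                   # server, share, or link down -- try the next copy

    raise OSError(f"no replica of {archive_file} is currently reachable")
```

In practice a client would presumably prefer the nearest replica first; ordering the locations by proximity is an easy refinement once they are all known.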
This system was read-mostly, and writes were not urgent. If N storage drives weren’t available for a write, the operator of the check-scanning system would simply try again later. CAP-style trade-offs and similar concerns weren’t important for this application.
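The placement logic mentioned in the list above was similarly simple. Here is a hedged sketch, again with assumed names (REPLICAS, candidate_drives, archive_locations) rather than anything from the original system; the rule about spreading some copies across physical sites is omitted for brevity.

```python
import shutil

REPLICAS = 3  # "N": how many copies of each archive file to keep

def place_archive(db, local_path, archive_name, candidate_drives, size):
    """Copy a new archive to N drives on N different servers and record them.

    candidate_drives: list of (server, unc_share) pairs, one per storage drive.
    Raises if N suitable drives aren't reachable; the operator retries later.
    """
    chosen, servers_used = [], set()
    for server, unc_share in candidate_drives:
        if server in servers_used:
            continue  # keep copies on distinct servers
        try:
            if shutil.disk_usage(unc_share).free > size:
                chosen.append((server, unc_share))
                servers_used.add(server)
        except OSError:
            continue  # drive or server unreachable right now
        if len(chosen) == REPLICAS:
            break

    if len(chosen) < REPLICAS:
        raise RuntimeError("not enough storage drives available -- try again later")

    for server, unc_share in chosen:
        destination = f"{unc_share}\\{archive_name}"
        shutil.copy(local_path, destination)
        db.execute(
            "INSERT INTO archive_locations (archive_file, unc_path) VALUES (?, ?)",
            (archive_name, destination),
        )
    db.commit()
```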
Helpful Properties
- Even if some servers, sites, or links were down, files remained generally accessible.
- Offline media storage could be added, though I don’t recall if we got very far down that path.
- The system was largely indifferent to operating systems and OS versions. New storage servers and drives could be added with newer OS versions and bigger drive sizes, without upgrading the old storage.
- Drives could be made read-only once full, to avoid whole classes of possible corruption.
- By adding servers and hard drives over time, this basic design could scale quite far (for the era, anyway).
This approach delivered many of the benefits of an expensive scalable storage system to our client, at a fraction of the cost and using only commodity equipment.
Why do I describe this as cloud-like? Because from things I’ve read, this is similar (but much less sophisticated, of course) to the approach taken inside of Amazon S3 and other cloud data storage systems/services.
Key Lesson
Assume you are willing to pay to store each piece of data on N disks. You get much better overall uptime (given the right software) if those N disks are in N different machines spread across sites than you do by putting those N disks in a RAID on the same machine. Likewise, you can read a file much faster from an old, slow hard drive in the same building than from a RAID-6 SAN across a 2000-era WAN. The tradeoff is software complexity.
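A quick back-of-the-envelope calculation (with made-up availability figures, purely for illustration) shows why the spread-across-machines arrangement wins on uptime:

```python
# Illustrative only: assume each storage server (machine + site + link)
# is independently reachable 98% of the time.
a = 0.98
N = 3  # replicas

# N copies on N independent servers: a file is unreachable only when
# every server holding it is down at the same moment.
spread_across_servers = 1 - (1 - a) ** N   # ~0.999992

# N disks in a RAID inside one machine: availability is capped by that
# one machine, however reliable the array itself is.
raid_in_one_machine = a                    # 0.98

print(f"spread across servers: {spread_across_servers:.6f}")
print(f"RAID in one machine:   {raid_in_one_machine:.6f}")
```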