Growing a Language, by Guy Steele

This is an oldie-but-goodie: Guy Steele’s “Growing a Language” talk from OOPSLA 1998.

It is amazing to me that Guy, who is something of a legend in language design, and who thinks so clearly about what makes a good language, was also key in designing Java. Java has been extremely slow to grow in the sense described in this talk, because for many years Sun resisted such growth. Only the rise of C# and the growing popularity of dynamic languages generated enough pressure to get Java unstuck… and in the last few years Java has become somewhat growable in the sense Guy describes.

YouTube Scalability Talk

Cuong Do of YouTube / Google recently gave a Google Tech Talk on scalability.

I found it interesting in light of my own comments on YouTube’s 45 TB a while back.

Here are my notes from his talk, a mix of what he said and my commentary:

In the summer of 2006, they grew from 30 million pages per day to 100 million pages per day, in a 4 month period. (Wow! In most organizations, it takes nearly 4 months to pick out, order, install, and set up a few servers.)

YouTube uses Apache for FastCGI serving. (I wonder if things would have been easier for them had they chosen nginx, which is apparently wonderful for FastCGI and less problematic than Lighttpd.)

YouTube is coded mostly in Python. Why? “Development speed critical”.

They use psyco, a Python-to-C compiler, and also C extensions, for performance-critical work.
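
(For reference, psyco takes very little code to enable; here is a minimal sketch, assuming Python 2 with psyco installed. The hot_loop function is just a stand-in for whatever work is performance critical.)

    import psyco

    def hot_loop(n):
        # stand-in for performance-critical work
        total = 0
        for i in xrange(n):
            total += i * i
        return total

    psyco.bind(hot_loop)  # specialize just this function...
    # psyco.full()        # ...or let psyco specialize everything it can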

They use Lighttpd for serving the video itself, for a big improvement over Apache.

Each video is hosted by a “mini cluster”, which is a set of machines with the same content. This is a simple way to provide headroom (slack), so that a machine can be taken down for maintenance (or can fail) without affecting users. It also provides a form of backup.
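
(To make that concrete: serving from a mini cluster can be as simple as the sketch below. This is entirely my illustration of the idea, not YouTube’s code; is_up() is a hypothetical health check.)

    import random

    def pick_server(mini_cluster):
        # every machine in a mini cluster has the same content, so a machine
        # that is down (or drained for maintenance) is simply skipped
        healthy = [m for m in mini_cluster if m.is_up()]
        return random.choice(healthy)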

The most popular videos are on a CDN (Content Distribution Network) – they use external CDNs as well as Google’s CDN. Requests to their own machines are therefore tail-heavy (in the “Long Tail” sense), because the head goes to the CDN instead.

Because of the tail-heavy load, random disk seeks are especially important (perhaps more important than caching?).

YouTube uses simple, cheap, commodity hardware. The more expensive the hardware, the more expensive everything else gets (support, etc.). Maintenance is mostly done with rsync, SSH, and other simple, common tools.

The fun is not over: Cuong showed a recent email titled “3 days of video storage left”. There is constant work to keep up with the growth.

Thumbnails turn out to be surprisingly hard to serve efficiently. Because there are, on average, 4 thumbnails per video and many thumbnails per page, the overall number of thumbnails per second is enormous. They use a separate group of machines to serve thumbnails, with extensive caching and OS tuning specific to this load.

YouTube was bitten by a “too many files in one dir” limit: at one point they could accept no more uploads (!!) because of this. The first fix was the usual one: split the files across many directories, and switch to another file system better suited to many small files.
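
(The usual directory-splitting fix looks something like this sketch; the two-level fan-out and the path layout are my assumptions for illustration, not YouTube’s actual scheme.)

    import hashlib
    import os

    def thumb_path(root, video_id):
        # spread files across a two-level directory fan-out (256 x 256 here),
        # so no single directory accumulates millions of entries
        digest = hashlib.md5(video_id).hexdigest()
        return os.path.join(root, digest[:2], digest[2:4], video_id + ".jpg")

    path = thumb_path("/var/thumbs", "some-video-id")
    directory = os.path.dirname(path)
    if not os.path.isdir(directory):
        os.makedirs(directory)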

Cuong joked about “The Windows approach of scaling: restart everything”.

Lighttpd turned out to be poor for serving the thumbnails, because its main loop is a bottleneck for loading files from disk; they addressed this by modifying Lighttpd to add worker threads to read from disk. This was good but still not good enough, with one thumbnail per file, because the enormous number of files was terribly slow to work with (imagine tarring up many millions of files).

Their new solution for thumbnails is to use Google’s BigTable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition.

YouTube uses MySQL to store metadata. Early on they hit a Linux kernel issue which prioritized the page cache higher than app data; it swapped out the app data, totally overwhelming the system. They recovered from this by removing the swap partition (while live!). This worked.

YouTube uses Memcached.

To scale out the database, they first used MySQL replication. Like everyone else who goes down this path, they eventually reached a point where replicating the writes to all the DBs used up all the capacity of the slaves. They also hit an issue with threading and replication, which they worked around with a very clever “cache primer thread” working a second or so ahead of the replication thread, prefetching the data it would need.
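
(As I understood it, the primer thread works something like the sketch below. This is my reconstruction of the idea, with hypothetical helpers; the real implementation lives inside their replication setup.)

    def cache_primer(relay_log, db):
        # Run slightly ahead of the (single) replication SQL thread, issuing
        # cheap reads so the rows each upcoming write will touch are already
        # in memory when the replication thread reaches them.
        # relay_log.upcoming_statements() and stmt.rows_touched() are
        # hypothetical stand-ins for parsing the replication relay log.
        for stmt in relay_log.upcoming_statements():
            for table, key in stmt.rows_touched():
                db.execute("SELECT 1 FROM " + table + " WHERE id = %s", (key,))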

As the replicate-one-DB approach faltered, they resorted to various desperate measures, such as splitting the video watching into a separate set of replicas, intentionally allowing the non-video-serving parts of YouTube to perform badly so as to focus on serving videos.

Their initial MySQL DB server configuration had 10 disks in a RAID10. This does not work very well, because the DB/OS can’t take advantage of the multiple disks in parallel. They moved to a set of RAID1s, appended together. In my experience, this is better, but still not great. An approach that usually works even better is to intentionally split different data onto different RAIDs: for example, a RAID for the OS / application, a RAID for the DB logs, one or more RAIDs for the DB tables (use “tablespaces” to get your #1 busiest table on separate spindles from your #2 busiest table), one or more RAIDs for indexes, etc. Big-iron Oracle installations sometimes take this approach to extremes; the same thing can be done with free DBs on free OSs also.

In spite of all these efforts, they reached a point where replication of one large DB was no longer able to keep up. Like everyone else, they figured out that the solution is database partitioning into “shards”. This spreads reads and writes across many different databases (on different servers) that are not all running each other’s writes. The result is a large performance boost, better cache locality, etc. YouTube reduced their total DB hardware by 30% in the process.

It is important to divide users across shards by a controllable lookup mechanism, not only by a hash of the username/ID/whatever, so that you can rebalance shards incrementally.
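
(In code, the difference is roughly the sketch below; the in-memory dictionary is a stand-in for what would really be a small, heavily cached lookup table.)

    # Hash-only placement: simple, but rebalancing means changing the hash,
    # which reshuffles every user at once.
    def shard_by_hash(user_id, num_shards):
        return hash(user_id) % num_shards

    # Controllable placement: a lookup table maps each user to a shard, so
    # individual users can be moved to rebalance load, one at a time.
    shard_directory = {}  # user_id -> shard number

    def shard_by_lookup(user_id, num_shards):
        if user_id not in shard_directory:
            # default placement for new users; can be reassigned later
            shard_directory[user_id] = hash(user_id) % num_shards
        return shard_directory[user_id]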

An interesting DMCA issue: YouTube complies with takedown requests; but sometimes the videos are cached way out on the “edge” of the network (their caches, and other people’s caches), so it’s hard to get a video to disappear globally right away. This sometimes angers content owners.

Early on, YouTube leased their hardware.

Pipe RGB data to ffmpeg

A while back I asked on the ffmpeg mailing list how to pipe RGB data into ffmpeg. I described it as follows:

in my code I am building video frames, 720x480x24bit. I have in mind generating a large number of these, as long as a full DVD worth at 30fps, then using ffmpeg (followed by dvdauthor) to encode them into MPEG2 for DVD usage.

There were a few replies, but no definitive answer. With considerable experimentation, I got it to work. It turns out that (as far as I can tell) ffmpeg does not have the ability to accept piped-in RGB frames. It will, however, accept piped-in data in its “yuv4mpegpipe” format. With some searching and reading I found that this is roughly akin to the format of raw DV video; the stream begins with a header something like this:

YUV4MPEG2 W%d H%d F%d:%d Ip A0:0 C420mpeg2 XYSCSS=420MPEG2

… then an LF character; each frame then begins with a short “FRAME” marker line, followed by the data for the Y, U, and V “planes”. The Y data is full resolution, while the U and V are half-resolution (this is called “4:2:0” in the video world). These planes are uncompressed, one byte per pixel. All of my past work with computer video (going back to Commodore 64s and Apple IIs) has arranged all of the bits for each pixel within a few bytes of each other; this format (with all the Y data for the whole frame, then all the U data, then all the V data) is starkly different.

The essential problem remaining was how to convert RGB to YUV. Happily there are plenty of online references for this. Unhappily there are few fast implementations, and a naive implementation will be very slow. I solved this problem by finding and hiring an expert in low-level data processing with MMX, SSE2, etc. instructions. (I am not in a position to publish that code here.)
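
(For reference, the underlying math is the standard ITU-R BT.601 conversion. Here is a naive per-pixel Python sketch of the full-range variant, useful as a correctness reference only; as noted above, anything this naive is far too slow for real-time video, and producing 4:2:0 output additionally requires averaging the chroma over each 2x2 block of pixels.)

    def rgb_to_ycbcr(r, g, b):
        # ITU-R BT.601, full-range YCbCr; inputs and outputs are 0..255
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return int(y), int(cb), int(cr)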

In retrospect, though, there are routines included in Intel’s “Integrated Performance Primitives” library which perform this transformation in a highly optimized way. IPP is a bargain: for only a few hundred dollars you get a wealth of highly optimized, ready-to-use library routines for signal processing.

The ffmpeg piping solution consists, therefore, of:

  1. A module which generates frames in RGB format, containing whatever contents your application requires.
  2. A module to very quickly convert these to YUV in yuv4mpegpipe format (write your own, or use routines in IPP, for the RGB->YUV420 part).
  3. Pipe this data stream to ffmpeg via stdin; ffmpeg is invoked something like this: ffmpeg -y -f yuv4mpegpipe -i - -i audio.mp3 -target ntsc-dvd -aspect 4:3 foo.mpg
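
Wired together in Python, the pipe looks roughly like this sketch, assuming 720x480 at NTSC’s 30000/1001 fps (generate_yuv420_frames is a placeholder for your own frame source, yielding the Y, U, and V planes as byte strings):

    import subprocess

    cmd = ["ffmpeg", "-y",
           "-f", "yuv4mpegpipe", "-i", "-",  # video frames arrive on stdin
           "-i", "audio.mp3",
           "-target", "ntsc-dvd", "-aspect", "4:3",
           "foo.mpg"]
    ffmpeg = subprocess.Popen(cmd, stdin=subprocess.PIPE)

    # one stream header, then a short FRAME marker before each frame's planes
    ffmpeg.stdin.write("YUV4MPEG2 W720 H480 F30000:1001 Ip A0:0 "
                       "C420mpeg2 XYSCSS=420MPEG2\n")
    for y, u, v in generate_yuv420_frames():  # placeholder frame source
        ffmpeg.stdin.write("FRAME\n")
        ffmpeg.stdin.write(y + u + v)
    ffmpeg.stdin.close()
    ffmpeg.wait()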

By using a multicore CPU and threads, this whole process can be made to happen in real time or better (i.e., one second of “wall clock” processing time, for one second of finished MPEG2 video). The resulting MPEG2 file can be used with a DVD authoring application to produce a ready-to-burn DVD ISO image.

Update: the data format above is published here as part of the mjpegtools man pages.

TEDTalks – Ideas Worth Spreading – Video Worth Watching

TED is an annual conference at which a bunch of (hopefully?) remarkable people say remarkable things. I’m using the word in a Seth Godin sort of way: remarkable things are those which inspire people to literally remark about them.

It appears to be an “A-list” event, meaning that I’m not likely to make the cut anytime soon.

Fortunately, many TED sessions are available to the rest of us online: TEDTalks. Here are some that I bookmarked to watch again; the download links are for QuickTime videos. Many of these are also on Google Video or elsewhere.

Burt Rutan

Barry Schwartz

Malcolm Gladwell

Steven Levitt

Nicholas Negroponte

Hans Rosling (One word summary: Wow.)

Ken Robinson – Do schools today kill creativity?

Dan Gilbert – The (misguided) pursuit of happiness

The Secret of Happiness, according to Dan Gilbert:

  1. Accrue wealth, power, and prestige. Then lose it.
  2. Spend as much of your life in prison as you possibly can.
  3. Make someone else really, really rich.
  4. Never ever join the Beatles.

(watch the video for the story behind this)

Linus Torvalds explains distributed source control

On several occasions over the last year, I’ve pointed out that distributed source control tools are dramatically better than centralized tools. It’s quite hard for me to explain why. This is probably because of sloppy and incomplete thinking on my part, but it doesn’t help that most of the audiences / people I’ve said this to have never used a distributed tool. (I’ve been trying out SVK, bzr, git, etc.) Fortunately, I no longer need to stumble with attempts at explaining this; instead, the answer is to watch as:

Linus Torvalds explains distributed source control in general, and git in particular, at Google

Here are some of his points, paraphrased. They might not be clear without watching the video.

  • He hates CVS
  • He hates SVN too, since it’s “CVS done right”, because you can’t get anywhere good from there.
  • If you need a commercial tool, BitKeeper is the one you should use.
  • Distributed source control is much more important than which tool you choose
  • He looked at a lot of alternatives, and immediately tossed anything that was not distributed, was slow, or did not guarantee that what goes in, comes out [using secure hashes]
  • He liked Monotone, but it was too slow
  • With a distributed tool, no single place is vital to your data
  • Centralized does not scale to Linux-kernel sized projects
  • Distributed tools work offline, with full history
  • Branching is not an esoteric, rare event – everyone branches all the time every time they write a line of code – but most tools don’t understand this, they only understand branching as a Very Big Deal.
  • Distributed tools serve everyone; everyone has “commit” access to their branches. Everyone has all of the tool’s features, rather than only a handful of those with special access.
  • Of course no one else will necessarily adopt your changes; using a tool in which every developer has the full feature set does not imply anarchy.
  • Merging works like security: as a network of trust
  • Two systems worth looking at: git and Mercurial. The others with good features are too slow [for large projects].
  • Lots of people are using git for their own work (to handle merges, for example), inside companies where the main tool is SVN. git can pull changes from (and thus stay in sync with) SVN.
  • Git is now much easier to use than CVS or other common tools
  • Git makes it easier to merge than other tools. So doing more merging is not much of a problem, not something you need to fear.
  • Distributed tools are much faster because they don’t have to go over the network very often. Not talking to a server is tremendously faster than talking to even a high-end server over a fast network.
  • SVN working directories and repositories are quite large; git equivalents are much smaller
  • The repository (= project) is the unit of checkout / commit / etc.; don’t put them all into one repository. The separate repositories can share underlying storage.
  • Performance is not secondary. It affects everything you do, it affects how you use an application. Faster operations = merging not a big deal = more, smaller changes.
  • SVN “makes branching really cheap”. Unfortunately, “merging in SVN is a complete disaster”. “It is incredible how stupid these people are”
  • Distributed source control is mostly inherently safer: no single point of failure, secure hashes to protect against even intentionally malicious users.
  • “I would never trust Google to maintain my source code for me” – with git (and other distributed systems) the whole history is in many places, nearly impossible to lose it.

My own observations:

There are important differences between source control tools. I have heard it said that these are all “just tools” which don’t matter; you simply use whatever the local management felt like buying. That is wrong: making better tool choices will make your project better (cheaper, faster, more fun, etc.), and making worse tool choices will make your project worse (more expensive, slower, painful, higher turnover, etc.).

Distributed tools make the “network of trust” more explicit, and thus easier to reason about.

I think there is an impression out there that distributed tools don’t accommodate the level of control that “enterprise” shops often specify in their processes, over how code is managed. This is a needless worry; it is still quite possible to set up any desired process and controls for the official repositories. The difference is that you can do that without the collateral damage of taking away features from individual developers.

Git sounds like the leading choice on Linux, but at the moment it is not well supported on Windows, and I don’t see any signs of that changing anytime soon. I will continue working with bzr, SVK, etc.

There is widespread, but mostly invisible, demand for better source control. Over the next few years, the hegemony of the legacy (centralized) design will be lessened greatly. One early adopter is Mozilla, which is switching (or has switched?) to Mercurial. Of course, many projects and companies (especially large companies) will hang on for years to come, because of inertia and widespread tool integration.

Several of the distributed tools (SVK, bzr, git, probably more) have the ability to pull changes from other systems, including traditional centralized systems like SVN. These make it possible to use a modern, powerful tool for your own work, even when you are working on a project where someone else has chosen such a traditional system for the master repository.

Update: Mark commented that he doesn’t feel like he has really committed a changeset until it is on a remote repository. I agree. However, this is trivial: with any of the tools you can push your change from your local repository to a remote one (perhaps one for you alone) with one command. With some of them (bzr, at least) you can configure this to happen automatically, so it is zero extra commands. This does not negate the benefits of a local repository, because you can still work offline, and all read operations still happen locally and far more quickly.

Update: I didn’t mention Mercurial initially, but I should have (thanks, Alex). It is another strong contender, and Mozilla recently chose it. Its feature set is similar to git’s, but with more eager support for Windows.

Update: There is another discussion of Linus’s talk over on Codice Software’s blog, which was linked on Slashdot. Since Codice sells a source control product, the Slashdot coverage is a great piece of free publicity.

Update: Mark Shuttleworth pointed out below that bzr is much faster than it used to be, and that it has top-notch renaming support, which he explains in more detail in a post. Bzr was already on my own short-list; I am using it for a small project here at Oasis Digital.

Update: I’ve started gathering notes about git, bzr, hg, etc., for several upcoming posts (and possibly a talk or two). If you’re interested in these, subscribe to the feed using the links in the upper right of my home page.