Jul 12 2007

YouTube Scalability Talk

Published at 8:39 am under Business

Cuong Do of YouTube / Google recently gave a Google Tech Talk on scalability.

I found it interesting in light of my own comments on YouTube’s 45 TB a while back.

Here are my notes from his talk, a mix of what he said and my commentary:

In the summer of 2006, they grew from 30 million to 100 million pages per day, over a 4-month period. (Wow! In most organizations, it takes nearly 4 months to pick out, order, install, and set up a few servers.)

YouTube uses Apache for FastCGI serving. (I wonder if things would have been easier for them had they chosen nginx, which is apparently wonderful for FastCGI and less problematic than Lighttpd.)

YouTube is coded mostly in Python. Why? “Development speed critical”.

They use psyco, a Python -> C compiler, and also C extensions, for performance-critical work.
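The talk doesn't show code, but the usual psyco idiom was a guarded import, so the same file ran with or without the accelerator. A minimal sketch (the `adler_ish` function is my own toy hot loop, not YouTube's code):

```python
def adler_ish(data):
    """Toy hot loop: a simple running checksum over a byte sequence."""
    total = 0
    for b in data:
        total = (total + b) % 65521
    return total

# Optional acceleration: psyco was typically enabled like this on
# Python 2. On interpreters without psyco, the import fails and the
# plain bytecode runs instead; behavior is identical either way.
try:
    import psyco
    psyco.full()
except ImportError:
    pass
```

The nice property of this pattern is that correctness never depends on the accelerator being present.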

They use Lighttpd for serving the video itself, for a big improvement over Apache.

Each video is hosted by a “mini cluster”, which is a set of machines with the same content. This is a simple way to provide headroom (slack), so that a machine can be taken down for maintenance (or can fail) without affecting users. It also provides a form of backup.
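The routing this implies can be sketched in a few lines. This is an illustrative sketch, not YouTube's actual code; the function names and the CRC32-based cluster assignment are my assumptions:

```python
import random
import zlib

def pick_server(mini_clusters, video_id, healthy):
    """Route a request to a machine that holds this video's content."""
    # Stable mapping from video to the mini cluster that replicates it.
    cluster = mini_clusters[zlib.crc32(video_id.encode()) % len(mini_clusters)]
    # Every machine in the cluster has the same content, so any healthy
    # one will do; a down machine just drops out of the candidate list.
    candidates = [m for m in cluster if healthy(m)]
    return random.choice(candidates)
```

Because all machines in a mini cluster are interchangeable, taking one down for maintenance only shrinks the candidate list.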

The most popular videos are on a CDN (Content Distribution Network) – they use external CDNs as well as Google’s CDN. Requests to their own machines are therefore tail-heavy (in the “Long Tail” sense), because the head goes to the CDN instead.

Because of the tail-heavy load, random disk seeks are especially important (perhaps more important than caching?).

YouTube uses simple, cheap, commodity hardware. The more expensive the hardware, the more expensive everything else gets (support, etc.). Maintenance is mostly done with rsync, SSH, and other simple, common tools.
The fun is not over: Cuong showed a recent email titled “3 days of video storage left”. There is constant work to keep up with the growth.

Thumbnails turn out to be surprisingly hard to serve efficiently. Because there are, on average, 4 thumbnails per video and many thumbnails per page, the overall number of thumbnails per second is enormous. They use a separate group of machines to serve thumbnails, with extensive caching and OS tuning specific to this load.

YouTube was bit by a “too many files in one dir” limit: at one point they could accept no more uploads (!!) because of this. The first fix was the usual one: split the files across many directories, and switch to another file system better suited for many small files.
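The standard shape of that first fix is a hashed directory fanout, capping the number of entries in any one directory. A sketch of the idea (the layout and function name are illustrative, not YouTube's actual scheme):

```python
import hashlib
import os.path

def thumb_path(base, video_id):
    """Spread files across two hash-derived directory levels, so no
    single directory accumulates millions of entries."""
    h = hashlib.md5(video_id.encode()).hexdigest()
    # e.g. base/90/01/abc.jpg -- at most 256 entries per fanout level
    return os.path.join(base, h[:2], h[2:4], video_id + ".jpg")
```

Two hex-pair levels give 65,536 leaf directories, which keeps per-directory file counts manageable even at hundreds of millions of files.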

Cuong joked about “the Windows approach of scaling: restart everything”.

Lighttpd turned out to be poor for serving the thumbnails, because its main loop is a bottleneck to load files from disk; they addressed this by modifying Lighttpd to add worker threads to read from disk. This was good but still not good enough, with one thumbnail per file, because the enormous number of files was terribly slow to work with (imagine tarring up many million files).
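The worker-thread fix can be illustrated in Python rather than Lighttpd's C (an analogy, not their patch): blocking disk reads move off the single event loop onto a small pool of threads.

```python
from concurrent.futures import ThreadPoolExecutor

def read_many(paths, read_file, workers=4):
    """Fan blocking reads out to worker threads so no single read
    stalls the dispatching loop."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() overlaps the reads but returns results in request order
        return list(pool.map(read_file, paths))
```

The event loop stays responsive because it only dispatches and collects; the threads absorb the seek latency.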

Their new solution for thumbnails is to use Google’s BigTable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition.

YouTube uses MySQL to store metadata. Early on they hit a Linux kernel issue which prioritized the page cache higher than app data; it swapped out the app data, totally overwhelming the system. They recovered from this by removing the swap partition (while live!). This worked.

YouTube uses Memcached.

To scale out the database, they first used MySQL replication. Like everyone else who goes down this path, they eventually reached a point where replicating the writes to all the DBs uses up all the capacity of the slaves. They also hit an issue with threading and replication, which they worked around with a very clever “cache primer thread” working a second or so ahead of the replication thread, prefetching the data it would need.
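The primer idea is worth sketching, since it's such a neat workaround for a single-threaded replication loop. This is a heavily simplified model (the function names and the queue handoff are my assumptions, not their implementation):

```python
import queue
import threading

def replicate(statements, fetch_row):
    """Apply statements in order, with a primer thread running ahead
    to pull the rows each statement will touch into a warm cache."""
    cache = {}
    ready = queue.Queue()

    def primer():
        for key in statements:           # look ahead along the log
            cache[key] = fetch_row(key)  # warm the cache before it's needed
            ready.put(key)               # signal: this statement is ready

    threading.Thread(target=primer, daemon=True).start()

    applied = []
    for _ in statements:
        key = ready.get()                # replication thread finds data warm
        applied.append((key, cache[key]))
    return applied
```

The single replication thread still applies writes strictly in order (preserving correctness), but it almost never blocks on disk, because the primer has already paid the I/O cost.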

As the replicate-one-DB approach faltered, they resorted to various desperate measures, such as splitting the video watching into a separate set of replicas, intentionally allowing the non-video-serving parts of YouTube to perform badly so as to focus on serving videos.

Their initial MySQL DB server configuration had 10 disks in a RAID10. This does not work very well, because the DB/OS can’t take advantage of the multiple disks in parallel. They moved to a set of RAID1s, appended together. In my experience, this is better, but still not great. An approach that usually works even better is to intentionally split different data on to different RAIDs: for example, a RAID for the OS / application, a RAID for the DB logs, one or more RAIDs for the DB tables (use “tablespaces” to get your #1 busiest table on separate spindles from your #2 busiest table), one or more RAIDs for indexes, etc. Big-iron Oracle installations sometimes take this approach to extremes; the same thing can be done with free DBs on free OSs also.
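On MySQL, the log/data split I describe can be expressed directly in the option file. A hedged illustration (the mount points are hypothetical; the option names are standard MySQL options):

```ini
# Illustrative my.cnf fragment: point logs and data at different RAID1
# mounts so the busiest I/O streams land on separate spindles.
[mysqld]
datadir                   = /raid-data/mysql      # table data
innodb_data_home_dir      = /raid-data/mysql
innodb_log_group_home_dir = /raid-logs/mysql      # sequential redo-log writes
log-bin                   = /raid-logs/mysql/bin  # binary log for replication
```

The point is that the redo and binary logs are almost purely sequential writes, so giving them their own spindles keeps them from competing with random data-page I/O.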

In spite of all these efforts, they reached a point where replication of one large DB was no longer able to keep up. Like everyone else, they figured out that the solution is database partitioning into “shards”. This spreads reads and writes into many different databases (on different servers) that are not all running each other’s writes. The result is a large performance boost, better cache locality, etc. YouTube reduced their total DB hardware by 30% in the process.

It is important to divide users across shards by a controllable lookup mechanism, not only by a hash of the username/ID/whatever, so that you can rebalance shards incrementally.
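One way to get that controllability is a directory of per-user overrides layered over a default hash. This is an illustrative sketch under my own assumptions, not YouTube's schema:

```python
import zlib

class ShardDirectory:
    """Default hash assignment plus an explicit override table, so
    individual users can be moved between shards incrementally."""

    def __init__(self, num_shards):
        self.num_shards = num_shards
        self.overrides = {}  # user_id -> shard, for rebalanced users

    def shard_for(self, user_id):
        if user_id in self.overrides:
            return self.overrides[user_id]
        # default placement: stable hash of the user id
        return zlib.crc32(user_id.encode()) % self.num_shards

    def move(self, user_id, shard):
        # rebalance one user at a time, without rehashing everyone
        self.overrides[user_id] = shard
```

With a pure hash, adding a shard reshuffles nearly every user at once; with the override table, you migrate users a few at a time and the lookup stays authoritative throughout.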

An interesting DMCA issue: YouTube complies with takedown requests; but sometimes the videos are cached way out on the “edge” of the network (their caches, and other people’s caches), so it’s hard to get a video to disappear globally right away. This sometimes angers content owners.

Early on, YouTube leased their hardware.




11 Responses to “YouTube Scalability Talk”

  1. M Easter says:

    Fantastic overview… thanks Kyle!

    It is always interesting to hear about architecture and tools used by the titans

  2. Chris says:

    This is awesome. Thanks for taking the time to expose the underpinnings! :D

    Very interesting to see that they’re using Lighttpd. I have to agree with your comment on nginx … perhaps they’ll move to it in the future.

  3. Seo Sanghyeon says:

    psyco is not a Python->C compiler. It’s Python JIT (although not exactly).

  4. Adarsh says:

    Excellent Discovery !

  5. spect says:

    its a rare pleasure to gain this kind of architecture and engineering insight from some of the larger web based companies. used to be easier, but i dont live in the bay area anymore, wish i could find more info like this.

    thanks

  6. geekmaster grok says:

    such an excellent read. thanks.

  7. metale says:

    Simply Superb…!!

  8. A lot of this sounds like the scalability techniques we used to build an ad network… although we were using PHP and not Python.

    We were doing about 1000 hits per second on average… although it could be very spiky traffic… so it could shoot way up, beyond that, on a spike. (It’s almost like getting DDoS’ed all the time.)

    Believe it or not… there’s scalability problems (and solutions) beyond those listed here. Interesting stuff to work on, IMO.

    The more and more you have to scale… things like database accesses become too “expensive”, and you have to remove them. Table JOIN’s need to be taken out too (… too “expensive”).

    In-memory databases sometime become a solution.

    And logging things in a file… instead of INSERT’ing into or UPDATE’ing a database also becomes necessary.

    ReiserFS was a help.

    Also… sometimes you need data centers spread around the world… with slightly different versions of your system optimized for those locations.

    – Charles

  9. nsfun says:

    and youtube uses netscaler … a small detail :)

  10. nelix says:

    A good read. psyco makes multiple versions of each block for different types (Python is dynamically typed, which is slow), so the type bottlenecks are removed at run time, to some extent anyway. At least that’s how I understand it.

  11. Frank Malina says:

    Very concise reading.
    I’ve just written a similar article on video streaming applications, although it’s from the top of my head.