Multicast your DB backups with UDPCast

At work we have a set of database machines set up so that one is the primary machine, making backups once per day, and several other machines restore this backup once per day, for development, ad hoc reporting, and other secondary purposes. We started out with an obvious approach:

  • back up on server1, to a file on server1
  • SCP or rsync the file from server1 to server2
  • restore the DB on server2

… but over time, as the data has grown, the inefficiency of this approach has become equally obvious: the backup data goes back and forth across the network and to/from disk repeatedly. These steps count only the backup data, not the live storage in the DBMS:

  1. On to the disk on server1  (putting additional load on the primary DB machine)
  2. Off the disk on server1  (putting additional load on the primary DB machine)
  3. On to the disk on server2
  4. Off the disk on server2

This is also wasteful from a failure-recovery point of view, since the place we are least likely to need the backup is on the machine whose failure would lead us to need a backup.

Pipe it over the network instead

The project at hand uses PostgreSQL on Linux, so I’ll show example PG commands here. The principles apply equally well to other DBs and platforms of course, though some DBMSs or platforms might not offer backup and restore commands that stream data. (I’m looking at you, MS SQL Server!)

What we need is a pipe that goes over the network.  One way to get such a pipe is with ssh (or rsh), something like so, run from server1:

pg_dump -Fc dbnameonserver1 | ssh server2 pg_restore -Fc -v -O -x -d dbnameonserver2

This variation will simultaneously store the backup in a file on server1:

pg_dump -Fc dbnameonserver1 | tee dbname.dump | ssh server2 pg_restore -Fc -v -O -x -d dbnameonserver2

This variation (or something close; I last ran it several days ago) will store the backup in a file on server2 instead:

pg_dump -Fc dbnameonserver1 | ssh server2 "tee dbname.dump | pg_restore -Fc -v -O -x -d dbnameonserver2"

To reduce the CPU load of this, configure ssh to use a less CPU-intensive cipher, or avoid encryption entirely with rsh (but only if you have a trusted / local network).
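For example, something roughly like this; the available ciphers depend on your OpenSSH version (arcfour was a common lightweight choice at the time), so check your ssh documentation for what yours offers:

# Same streaming restore, with a cheaper cipher to cut ssh's CPU cost.
# "arcfour" is only an example; pick a cipher your OpenSSH actually supports.
pg_dump -Fc dbnameonserver1 | ssh -c arcfour server2 pg_restore -Fc -v -O -x -d dbnameonserver2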

Multicast / Broadcast it over the network instead

The above commands are good for point-to-point streaming backup / restore, but the scenario I have in mind has one primary machine and several (3, 4, or more) secondary machines. One answer is to run the above process repeatedly, once for each secondary machine, but that sends the whole backup over the network N times. Inefficiency! (== Blasphemy?)

To avoid that, simply use UDPCast. It’s a trivial install on Debian / Ubuntu:

apt-get install udpcast

(Be warned though: there is at least one annoying bug in the old (2004) UDPCast offered off-the-shelf in Debian / Ubuntu as of 2008. You might need to build the latest UDPCast source from its web site.)

Run this on server1:

pg_dump -Fc dbnameonserver1 | udp-sender --min-wait 5 --nokbd

Run this on server2 .. serverN:

udp-receiver --nokbd | pg_restore -Fc -v -O -x -d dbnameonserverN

With this approach, the backup data will be multicast (or broadcast, if multicast does not work and all the machines are on the same segment), traversing the network only once no matter how many receiving machines are set up. udp-receiver has a --pipe option, but I found that I occasionally got corruption with huge (50 GB+) transfers when using --file or --pipe. So I recommend this instead, to save a copy on the receiving end:

udp-receiver --nokbd | tee mydatabase.dump | pg_restore -Fc -v -O -x -d dbnameonserverN

Or perhaps you want to just receive and store the backup on a file server, with this:

udp-receiver --nokbd >mydatabase.dump

To make all this happen automatically, schedule the sender and the receivers to start at the same time in cron on the relevant machines. Use NTP to keep their clocks in sync, and adjust the udp-sender and udp-receiver options as needed to get the whole process to start smoothly in spite of minor timing variations (--min-wait t, --max-wait t).
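Here is a minimal sketch of what the cron entries might look like; the times and the backup path are made up, so adjust them to your environment:

# server1 (sender) crontab: start the dump at 02:30
30 2 * * *  pg_dump -Fc dbnameonserver1 | udp-sender --min-wait 5 --nokbd
# server2 .. serverN (receiver) crontabs: start a few minutes earlier,
# so they are already listening when the sender comes up
25 2 * * *  udp-receiver --nokbd | tee /backups/mydatabase.dump | pg_restore -Fc -v -O -x -d dbnameonserverN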

As with the previous suggestion for rsh, the data will travel unencrypted over your network, so do this only if you trust your network (such as a LAN segment between your database servers).

Multicast / broadcast is very useful technology, and with UDPcast it is quite easy to use. UDPcast also implements a checksum/retransmit mechanism; it is not a “bare”, loss-prone UDP transmission.

 

Network / System Monitoring Smorgasbord

At one of my firms (a Software as a Service provider), we have a Zabbix installation in place to monitor our piles of mostly Linux servers. Recently we took a closer look at it and found ample opportunities to monitor more aspects of more machines and devices, more thoroughly. The prospect of increased investment in monitoring led me to look around at the various tools available.

The striking thing about network monitoring tools is that there are so many from which to choose. Wikipedia offers a good list, and the comments on a Rich Lafferty blog post include a short introduction from several of the players. (Update – Jane Curry offers a long and detailed analysis of network / system monitoring and some of these tools (PDF).)

For OS-level monitoring (CPU load, disk wait time, number of processes waiting for disk, etc.), Linux exposes extensive information with “top”, “vmstat”, “iostat”, and so on. I was disappointed not to find any of these monitoring tools conveniently presenting, aggregating, and graphing that data. From my short look, some of the tools offer small subsets of it; beyond that, they offer the ability for me to go in and figure out for myself what data I want and how to get it. Thanks.
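To be concrete, the raw data is all there at the command line; what is missing is a tool that gathers and graphs it for you:

# the kind of OS-level data I mean, from the standard tools:
vmstat 5        # CPU, memory, swap, and run/blocked process counts, every 5 seconds
iostat -x 5     # extended per-device disk statistics, including average wait times
top -b -n 1     # one batch-mode snapshot of load and per-process resource usage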

Network monitoring is a strange marketplace; many of the players have a very similar open source business model, something close to this:

  • core app is open source
  • low tier commercial offering with just a few closed source addons, and support
  • high tier commercial offering with more closed source addons, and more support

I wonder if any of them are making any money.

Some of these tools are agent-based, others are agent-less. I have not worked with network monitoring in enough depth to offer an informed opinion on which design is better; however, I have worked with network equipment enough to know that it’s silly not to leverage SNMP.
I spent yesterday looking around at some of the products on the Wikipedia list, in varying levels of depth. Here I offer first impressions and comments; please don’t expect this to be comprehensive, nor in any particular order.

Zabbix

Our old installation is Zabbix 1.4; I test-drove Zabbix 1.6 (advertised on the Zabbix site as “New look, New touch, New features”). The look seemed very similar to 1.4, but the new feature list is nice.

We mostly run Ubuntu 8.04, which offers a package for Zabbix 1.4. Happily, 8.04 packages for Zabbix 1.6 are available at http://oss.travelping.com/trac.

The Zabbix agent is delightfully small and lightweight, easily installed with an Ubuntu package. In its one configuration file, you can tell it how to retrieve additional kinds of data. It also offers a “sender”, a very small executable that transmits a piece of application-provided data to your Zabbix server.
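For example, a minimal hedged sketch (the server name, host name, and item key here are made up; the key must be defined as a trapper-type item on the Zabbix server):

# push one application-provided value to the Zabbix server
zabbix_sender -z zabbix.example.com -s appserver3 -k app.queue_depth -o 42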

I am reasonably happy with Zabbix’s capabilities, but I find the GUI design pretty weak, with lots of clicking to get through each bit of configuration. I built far better GUIs in the mid-90s with far inferior tools to what we have today.  Don’t take this as an attack on Zabbix in particular though; I have the same complaint about most of the other tools here.

We run PostgreSQL; Zabbix doesn’t offer any PG monitoring in the box, but I was able to follow the tips at http://www.zabbix.com/wiki/doku.php?id=howto:postgresql and get it running. The monitoring described there is quite high-level and unimpressive, though.
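For the curious, the approach there boils down to UserParameter lines in the agent configuration that shell out to psql; a hedged sketch, with an illustrative key name and query (the agent’s user needs rights to run it):

# in zabbix_agentd.conf: expose the current PostgreSQL connection count as an item
UserParameter=pgsql.connections,psql -U postgres -At -c "select count(*) from pg_stat_activity"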

Hyperic

I was favorably impressed by the Hyperic server installation, which got two very important things right:

  1. It included its own PostgreSQL 8.2, in its own directory, which it used in a way that did not interfere with my existing PG on the machine.
  2. It needed a setting changed (shmmax), which can only be adjusted by root. Most companies faced with this need would simply insist the installer run as root. Hyperic instead emitted a short script file to make the change, and asked me to run that script as root (a sketch of such a script follows this list). This greatly increased my inclination to trust Hyperic.
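A script like that amounts to the standard sysctl adjustment; roughly like this (the value is illustrative; use whatever the installer asks for):

#!/bin/sh
# raise the kernel's maximum shared memory segment size (run as root)
sysctl -w kernel.shmmax=268435456
# make the change persist across reboots
echo "kernel.shmmax=268435456" >> /etc/sysctl.conf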

Compared to Zabbix, the Hyperic agent is very large: a 50 MB tar file, which expands out to 100 MB and includes a JRE. Hyperic’s web site says “The agent’s implementation is designed to have a compact memory and CPU utilization footprint”, a description so silly that it undoes the trust built up above. It would be more honest and useful of them to describe their agent as very featureful and therefore relatively large, while providing some statistics to (hopefully) show that even its largish footprint is not significant on most modern servers.

Setting all that aside, I found Hyperic effective out-of-the-box, with useful auto-discovery of services (such as specific disk volumes and software packages) worth monitoring; it is far ahead of Zabbix in this regard.

For PostgreSQL, Hyperic shows limited data. It offers table- and index-level data for PG up through 8.3, though I was unable to get this to work and had to rely on the documentation for evaluation. This is more impressive at first glance than what Zabbix offers, but is still nowhere near good enough for a substantial production database system.

Ganglia

Unlike the other tools here, Ganglia comes from the world of high-performance cluster computing. It is nonetheless apparently quite suitable nowadays for a typical pile of servers. Ganglia aims to efficiently gather extensive, high-rate data from many PCs, using efficient on-the-wire data representation (XDR) and networking (UDP, including multicast). While the other tools typically gather data at increments of once per minute, per 5 minutes, or per 10 minutes, Ganglia is comfortable gathering many data points, for many servers, every second.

The Ganglia packages available in Ubuntu 8.04 are quite obsolete, but there are useful instructions here to help with a manual install.

Nagios

I used Nagios briefly a long time ago, but I wasn’t involved in the configuration. As I read about all these tools, I see many comments about the complexity of configuring Nagios, and I get the general impression that it is drifting into history. However, I also get the impression that its community is vast, with Nagios-compatible data gathering tools for any imaginable purpose.

Others

  • Zenoss
  • Groundwork
  • Munin
  • Cacti

How Many Monitoring Systems Does One Company Need?

It is tempting to use more than one monitoring system, to quickly get the logical union of their good features. I don’t recommend this, though; it takes a lot of work and discipline to set up and operate a monitoring system well, and dividing your energy across more than one system will likely lead to poor use of all of them.

On the contrary, there is enormous benefit to integrated, comprehensive monitoring, so much so that it makes sense to me to replace application-specific monitors with data feeds into an integrated system. For example, in our project we might discard some code that populates RRD files with history information and publishes graphs, and instead feed this data into a central monitoring system, using its off-the-shelf features for storage and graphing.

A flip side of the above is that as far as I can tell, none of these systems offers detailed DBA-grade database performance monitoring. For our PostgreSQL systems, something like pgFouine is worth a look.

Conclusion

I plan to keep looking and learning, especially about Zenoss and Ganglia. For the moment though, our existing Zabbix, upgraded to the current version, seems like a reasonable choice.

Comments are welcome, in particular from anyone who can offer comparative information based on substantial experience with more than one of these tools.

RocketModem Driver Source Package for Debian / Ubuntu

A couple of months ago I posted about using the current model Comtrol RocketModem IV with Debian / Ubuntu Linux. Ubuntu/Debian includes an older “rocket” module driver in-the-box, which works well for older RocketModem IV cards. But for the newest cards it does not work at all: the current RocketModem IV is not recognized by the in-the-box rocket module and requires an updated driver from Comtrol.

With some work (mostly outsourced to a Linux guru) I now present a source driver package for the 3.08 “beta” driver version (from the Comtrol FTP site):

comtrol-source_3.08_all.deb

Comtrol ships the driver source code under a GPL license, so unless I badly misunderstand, it’s totally OK for me to redistribute it here.

To install this, you follow the usual Debian driver-build-install process. The most obvious place to do so is on the hardware where you want to install it, but you can also use another machine with the same distro version and Linux kernel version as your production hardware. Some of these commands must be run as root.

dpkg -i comtrol-source_3.08_all.deb
module-assistant build comtrol

This builds a .deb specific to the running kernel. When I ran it, the binary .deb landed here:

/usr/src/comtrol-module-2.6.22-14-server_3.08+2.6.22-14.52_amd64.deb

Copy to production hardware (if you are not already on it), then install:

dpkg -i /usr/src/comtrol-module-2.6.22-14-server_3.08+2.6.22-14.52_amd64.deb

and verify the module loads:

modprobe rocket

and finds the hardware:

ls /dev/ttyR*

To verify those devices really work (that they talk to the modems on your RocketModem card), Minicom is a convenient tool:

apt-get install minicom
minicom -s

Kernel Versions

Linux kernel module binaries are specific to the kernel version they are built for; this is an annoyance, but is not considered a bug (by Linus). Because of this, when you upgrade your kernel, you need to:

  • Uninstall the binary kernel module .deb you created above
  • Put in the new kernel
  • Build and install a new binary module package as above (a command sketch follows)
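In command form the cycle looks roughly like this; the package names below just echo the example above and will differ for your kernel:

# remove the module package built for the old kernel
dpkg -r comtrol-module-2.6.22-14-server
# after booting the new kernel, rebuild and install against it
module-assistant build comtrol
# install the .deb module-assistant just built (its name carries the new kernel version)
dpkg -i /usr/src/comtrol-module-*_amd64.deb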

Rebuilding the source .deb

Lastly, if you care to recreate the source .deb, you can do so by downloading the “source” for the source deb, comtrol-source_3.08.tar.gz, then following a process roughly like so:

apt-get install module-assistant debhelper fakeroot
m-a prepare
tar xvf comtrol-source_3.08.tar.gz
cd comtrol-source
dpkg-buildpackage -rfakeroot

The comtrol subdirectory therein contains simply the content of Comtrol’s source driver download, and this is likely to work trivially with newer driver versions (3.09, etc.) when they appear.

A/B Technique for Web Application Deployment

This description of my “A/B technique for web application deployment” was transcribed from audio, so it is less tight and more verbose than my normal prose. I chose to post it in rough form, rather than leave it on the “back burner” until an unknown future date when I have time to rewrite it. I first explained this to a colleague around 1999; 8 years is long enough for an idea to wait.

The Problem / Context

At least a dozen times over the last decade, this scenario has come up at consulting client sites: you have a web application and you want to upgrade it with a new version. You could do so with a brute force cutover (stop the app, swap the code), but that’s not the scenario that I’m talking about. I’m talking about upgrading to a new version, not quite compatible with the old one, without dropping current active users. For example, in the new version, you might have some different data that goes in the session. You might have some different pages, so you have some different URLs, you may be adding a new field so that once you put in this new version, there’ll be an additional field, and it stores that additional field in the session and in the database and the caching and so on. Yet you want the current users to keep working without interruption.

Solution

This technique is not language-specific – it applies equally in PHP, ASP, ASP.NET, Java servlets, JSP, CGI, etc.; with nearly any application or infrastructure.

Have the URL of your application, which I’ll call “/contact” here, be the URL of a proxy application (or “launching pad”). Then have two additional URLs for two specific instances of the actual application. For example, you might have “/contact” as your overall application URL and then “/contact/contactA” or “/contact/A” as one instance where you have that application installed.

At the “/contact” URL, install a simple proxy application, whose job is to take a newly arriving user, present them an intro/login screen, then redirect them to one of the specific instances of the application.

As a user I will point my browser to the “/contact” URL, and I bookmark that. I launch that, I see a login screen, I type in my name and password, I press the button to log in. The “/contact” launching pad redirects me to “/contact/A”, an instance of the real application. I’ll call the second instance “B”, perhaps at the URL “/contact/B”. In the normal steady state of the system, the user will be using A: they log in at “/contact” and they end up in the “A” instance.

Then you want to do an upgrade and install a new version. Install the new version as “/contact/B”, and leave the existing application in place and working. (By the way, I’ve assumed you are using a technology where you can deploy and undeploy applications without bringing your web server down, but with mod_proxy you could make this work even without that capability.) Deploy the new application version in a new and different path than the one your current running users are on. Adjust a setting in your proxy/launchpad application (perhaps as simple as a single line in a single config file). So, for example, you might install the new version as “/contact/B” and then flip a switch (edit the config file) so that new users who come to the “/contact” page don’t land in “/contact/A” anymore; they land in “/contact/B” as they log in.
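To make the idea concrete, that “switch” can be as humble as a one-line file the launchpad reads on each fresh login; the file name here is entirely hypothetical:

# the launchpad redirects each fresh login to /contact/<contents of this file>
echo A > /etc/contact/current_instance    # steady state: new logins land in A
# deploy and verify the new version at /contact/B, then flip the switch:
echo B > /etc/contact/current_instance    # from now on, new logins land in B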

The current users already using the application in “/contact/A” stay there – they don’t know or care that you’ve deployed a new version. New users come in to the new version. So you want to have some sort of mechanism (likely provided by your application server if you’re using one, and not hard to build otherwise) for monitoring how many users are using each of these applications. So you might for example notice that you have 1000 users active on /contact/A. You deploy a new version as /contact/B and flip the switch. Then, depending on the usage characteristics of your application – over the next few minutes, next few hours, however it works out, the users, as they log out and log in and such, gradually all make it into the /B application. Some kind of maximum-login-time mechanism will ensure that this cutover happens in finite time.

Once the users have moved to /contact/B, you then declare it as your “current” version, and you take down /A, because no one’s using it and no one can get into it. So that next time you need to do an upgrade, you just do it in reverse – you deploy that new version as /contact/A, flip the switch back to make all new users’ logins land in the A… and again, after however many hours or minutes or whatever, you have all your users on the new version, and you can take down the old version.

You can easily implement this with just the tools that already come with your Web application development system. You don’t need any kind of special hardware or special application server or HTTP server support. You don’t need any sort of special way of doing session affinity; you’re doing session affinity by simply handing out the URL of one of these other Web applications.

Bookmarks

Someone might bookmark a page of your application. So let’s say that you had directed them to /contact/A, and they were on the page /contact/A/lists.jsp. When they return to this bookmark later (you do want to support bookmarking, right?), you don’t want them to land there; you don’t want them to end up in the A application if you’re currently using the B application. This is actually pretty easy to handle also. You simply use some settings on your Web server to do a redirect, with a few lines of configuration in .htaccess or analogous mechanisms. So based on your setting of which one is current, you make it so that if someone comes into the application without having a referrer from inside the application, you just redirect them over to whatever the current instance is. That takes a little bit of thought, but only a little, and it seamlessly solves the problem of users’ bookmarks continuing to work in spite of you switching back and forth between two instances.
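As a hedged sketch, assuming Apache with mod_rewrite enabled (and AllowOverride permitting it), the old instance’s .htaccess could bounce anyone who arrives without an internal referrer back to the launchpad; the paths are illustrative:

# written into the old instance's directory, e.g. /var/www/contact/A/.htaccess
RewriteEngine On
# if the request did not come from inside this instance (stale bookmark, external link)...
RewriteCond %{HTTP_REFERER} !/contact/A [NC]
# ...send the user to the launchpad, which routes them to the current instance
RewriteRule ^ /contact/ [R=302,L]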

Clustering

You might be deployed on a cluster. Perhaps you are using Websphere with 37 web servers. It turns out that this A/B approach works orthogonally to the clustering features of your Web application server. You could have the A application deployed across all 37 servers; you could deploy the B application, with a few clicks, across all 37 servers; you could flip that switch in some global way to kick people onto the B, and so on.

Override the launchpad for testing

You can permit users to enter a special URL to get to the “other side”. You could have some way of entering a URL that takes you past the launch application straight into the B side, so that you can click around and manually verify that the new B application works in the production environment before you flip the switch to make it the deployed production system. This is a very wise and useful type of testing to do, a great final stage of testing, because the new code is actually in production. It’s obviously not a replacement for testing in a separate test environment; it’s an adjunct for even greater safety in deployment.

Performance

When a system is running in a steady state, its caches are fully populated with relevant data, so many requests can be answered with data from the cache (RAM). But when a system is freshly started, its caches are empty, so more requests require (slower) disk access, during the first few minutes of operations. This is sometimes called the “empty cache” problem, and is responsible for the poor performance sometimes seen in the first few minutes after a busy system is restarted.

The technique described here prevents this problem, because with it you avoid ever shutting down and restarting your whole Web application with your full user population on it. Instead, since the switch only brings newly-logging-in users to the new version – the new instance – you gradually have people start using it, so you never take a big hit all at once in terms of cache population.

Schema Changes

Hani asked, in a comment, about schema changes. A simple answer is that you won’t be able to make a transition like the one described here (where both the old and new code versions run in parallel for a while), if you make schema changes such that the old code no longer works. A more complex answer, which I have used with great results, is that this is a programmable computer and you are a programmer – with some effort, you can make the software tolerate both the old and new schemas. So the process works like this:

  1. Decide on the schema change, but don’t deploy it
  2. Modify your software to tolerate the old or new schema, whichever is present
  3. Deploy the new software, transition all users to it (as described above)
  4. Make the schema change; you may need to momentarily quiesce the software, but hopefully not kill user sessions

(There are a few tools out there to help with the schema-change-in-a-live-app problem. One of them is ChronicDB, who wrote me to point this out.)

Of course this is a lot more work than just stopping the server, making your change, and restarting. Whether it’s worth it depends on your situation. If you have an overnight non-usage window, consider using it instead of the long path described here.

I hope this is helpful for someone out there. Comments are welcome.