Sunday, May 20, 2012

Base backup compression options

I've been looking at my PostgreSQL base backups. They are run using the traditional

tar -c -z -f basebackup.tar.gz $PGDATA/...

way (many details omitted). I haven't gotten heavily into using pg_basebackup yet, but the following could apply there just as well.

I had found some of the base backups to be pretty slow, so I dug a little deeper. I was surprised to find that the job was completely CPU bound, and the bottleneck was the gzip process. So it was worth thinking about other compression options. (The alternative is of course no compression, but that would waste a lot of space.)

There are two ways to approach this:

  • Use a faster compression method.

  • Parallelize the compression.

For a faster compression method, there is lzop, for example. GNU tar supports it: use --lzop instead of -z. It gives a pretty good speed improvement, but the compression ratio is of course worse.
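
A minimal sketch, mirroring the gzip command from above (the .lzo suffix is just a conventional choice, and lzop needs to be installed):

tar -c --lzop -f basebackup.tar.lzo $PGDATA/...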

For parallelizing compression, there are parallel (multithreaded) implementations of the well-known gzip and bzip2 compression methods, called pigz and pbzip2, respectively. You can hook these into GNU tar by using the -I option, something like -I pigz. Alternatively, put them into a pipe after tar so that you can pass them some options, because otherwise they will bring your system to a screeching halt! If you've never seen a system at a constant 1600% CPU for 10 minutes, try these.
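
For illustration, a sketch assuming pigz is installed and using the same paths as above; either form should do:

tar -c -I pigz -f basebackup.tar.gz $PGDATA/...      # -I hands compression off to pigz
tar -c -f - $PGDATA/... | pigz > basebackup.tar.gz   # pipe form, so pigz can take options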

If you have a regular service window or a natural slow time at night or on weekends, these tools can be quite useful, because you might be able to cut down the time for your base backup from, say, 2 hours to 10 minutes. But if you need to be always on, you will probably want to qualify this a little by reducing the number of CPUs used for this job. It can still be pretty effective if you have many CPUs and want to dedicate a couple to the compression task for a while.
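
With the pipe form, capping the job is just one more option; a sketch with pigz (pbzip2 accepts a similar -p option):

tar -c -f - $PGDATA/... | pigz -p 2 > basebackup.tar.gz   # limit pigz to two CPUs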

Personally, I have settled on pigz as my weapon of choice now. It's much faster than pbzip2 and can easily beat single-threaded lzop. Also, it produces standard gzip output, of course, so you don't need to install special tools everywhere, and you can access the file with standard tools in a bind.

Also, consider all of this in the context of restoring. No matter how you take the backup, wouldn't it be nice to be able to restore a backup almost 8 or 16 or 32 times faster?
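
For restoring, the same tool slots into a pipe going the other way; a sketch assuming the archive name from above:

pigz -d -c basebackup.tar.gz | tar -x -f -   # decompress with pigz, unpack with tar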

I have intentionally not included any benchmark numbers here, because it will obviously be pretty site-specific. But it should be easy to test for everyone, and the results should speak for themselves.

Tuesday, May 15, 2012

My (anti-)take on database schema version management

There were a number of posts recently about managing schema versions and schema deployment in PostgreSQL. I have analyzed these with great interest, but I have concluded that they are all flawed in more or less significant ways. (Of course, most of these solutions do in fact work for someone, but they are not general enough to become canonical go-to solutions for this problem class, in my opinion.) I have developed a list of elimination criteria by which I can evaluate future developments in this area. So here are some of the things that I don't want in my schema version management system:

  • Using schemas for distinguishing multiple versions (like this, but that's actually more about API versioning). That simply won't work for deploying objects that are not in schemas, such as casts, languages, extensions, and, well, schemas.

  • Using extensions (like this). Well, this could work. But extensions by themselves do nothing about the core problem. They are just an SQL wrapper interface around upgrade scripts. You still need to write the upgrade scripts, order them, test them, package them. The extension mechanism might replace the, say, shell script that would otherwise run the upgrade files in a suitable order. Another issue is that extensions require access to the server file system. Changing this is being discussed as "inline extensions", but there is no consensus. This is a smaller problem, but it needs to be thought about. Also, I do need to support PostgreSQL 9.0 and earlier for a little while more.

  • Requiring that each change be named (patch names, like this). Naming things is hard. Numbering things is easy. And how many good names are you still going to be able to come up with after 100 or so changes?

    Take a lesson from file version control systems: versions are numbers or, if it must be, hashes or the like (UUIDs have been suggested).

  • Using a version control tool for tracking upgrade paths (like this). Sqitch, unlike the initial draft of this concept, doesn't actually require a version control tool for deployment; that requirement wouldn't have worked for me, because what we ship is a tarball or a deb/rpm-type package. But it still requires you to maintain some kind of sanity in your version control history so that the tool can make sense of it. That sounds fragile and inconvenient. The other choice appears to be writing the plan files manually without any VCS involvement, but then this loses much of the apparent appeal of this tool, and it's really no better than the "naming each change" approach mentioned above.

  • Taking snapshots or the like of a production or staging or central development system. Production systems and staging systems should be off limits for this sort of thing. Central development systems don't exist, because with distributed version control, every developer has their own setups, branches, deployments, and world views.

    You could set it up so that every developer gets their own test database, sets up the schema there, takes a dump, and checks that in. There are going to be problems with that, including that dumps produced by pg_dump are ugly and optimized for restoring, not for developing with, and they don't have a deterministic output order.

  • Storing the database source code in a different place or in a different manner than the rest of the source code. This includes using a version control system as mentioned above (meaning storing part of the information in the version control meta information rather than in the files that are checked into the version control system in the normal way), using a separate repository like Post Facto, or using something like the staging server mentioned above.

    The source is the source, and it must be possible to check out, edit, build, test, and deploy everything in a uniform and customary manner.

  • Allowing lattice-like dependencies between change sets (like most examples cited above). This sounds great on paper, especially if you want to support distributed development in branches. But then you can have conflicts, for example where two upgrades add a column to the same table. Depending on the upgrade path, you end up with different results. As your change graph grows, you will have an exploding number of possible upgrade paths that will need to be tested.

    There needs to be an unambiguous, canonical state of the database schema for a given source checkout.

  • Requiring running through all the upgrade scripts for a fresh deployment (like this). There are two problems with this. First, it's probably going to be very slow. Fresh deployments need to be reasonably fast, because they will be needed for automated tests, including unit tests, where you don't want to wait for minutes to set up the basic schema. Second, it's inefficient. Over time, you might drop columns, add new columns, delete rows, migrate them to different tables, etc. If you run through all those upgrade scripts, then a supposedly fresh database will already contain a bunch of rubble, dropped attributes, dead rows, and the like.

    Therefore, the current version needs to be deployable from a script that will not end up replaying history.

  • Using metalanguages or abstraction layers (like Pyrseas or Liquibase or any of the metaformats included in various web frameworks). It'd probably be a good idea to check some of these out for simple applications. But my concern is whether using an abstraction layer would prevent me from using certain features. For example, look at the Pyrseas feature matrix. It's pretty impressive. But it doesn't support extensions, roles, or grants. So (going by that list), I can't use it. (It's being worked on.) And in a previous version, when I looked at it for a previous project, it didn't support foreign-data functionality, so I couldn't use it then either. And those are just the top-level things the author thought of. Actually, the Pyrseas author has gone through some effort to achieve almost complete coverage of PostgreSQL DDL features, so give this tool a try. But it won't be for everyone.

So, I'm going to keep looking.

Monday, May 14, 2012

Time to retrain the fingers

For years, no, decades, I've typed tar tzf something, tar xzf something. Except when someone annoying sent an uncompressed tar file and I then had to go and take out the z in the middle.

Then came bzip2, and we learned tar tjf, tar xjf. OK, I could live with that. One emerging problem was that the tab completion now worked the wrong way around conceptually, because you had to pick and type the right letter first in order to see the appropriate set of files to unpack offered for completion.

Then came lzma, which was (quick, guess?), tar tJf, tar xJf. And then there was lzop, which was too boring to get its own letter, so you had to type out tar -x --lzop -f.

But lzma was short-lived, because then came xz, which was also J, because lzma was now too boring as well to get its own letter.

Oh, and there is also the old compress, which is Z, and lzip, which I'd never heard of.

But stop that. Now there is

 -a, --auto-compress
            use archive suffix to determine the compression program

This handles all the above compression programs, and no compression. So from now on, I always use tar taf and tar xaf. Awesome.
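
For example, a quick sketch with made-up archive names:

tar caf something.tar.xz somedir/   # the .xz suffix selects the compression program
tar xaf something.tar.gz            # extraction handles the compression automatically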

The finger movements will be almost the same on QWERTY and AZERTY, and easier than before on QWERTZ.

Actually, this option is already four years old in GNU tar. Funny I'd never heard of it until recently.