
Like many people,
we are interested in deploying solid-state drives (SSDs) for our database systems.
Jignesh posted some performance test results a while ago, but as I had commented there, that test ran with the write cache on, which concerned me.
The Write Cache
Interlude: The disk write cache is the feature that causes you to lose your data when the server machine crashes or loses power. Just like the kernel may lie to the user-land application about writing stuff to disk unless the user-land application calls fsync(), the disk may lie to the kernel about writing stuff to the metal unless the write cache is turned off. (There is, as far as I know, no easy way to explicitly flush the cache. So write cache off is kind of like open_sync, if you know what that means.) As PostgreSQL pundits know, PostgreSQL does fsyncs at the right places unless you explicitly turn this off and ignore all the warning signs on the way there. By contrast, the write cache is on by default on consumer-grade ATA disks, including SATA disks and, as it turns out, also including "enterprise" SSD SATA devices.
To query the state of the write cache on a Linux system, use something like hdparm -W /dev/sda. To turn it off, use hdparm -W0 /dev/sda; to turn it back on, hdparm -W1 /dev/sda. If this command fails, you probably have a higher-grade RAID controller that does its own cache management (and doesn't tell you about it), or you might not have a disk at all. ;-) Note to self: None of this appears to be described in the PostgreSQL documentation.
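Collected in one place (assuming the disk is /dev/sda; substitute your own device):

    # show the current write cache setting
    hdparm -W /dev/sda
    # turn the write cache off
    hdparm -W0 /dev/sda
    # turn the write cache back on
    hdparm -W1 /dev/sda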
It has been mentioned to me, however, that SSDs require the write cache for write wear leveling, and turning it off may significantly reduce the lifetime of the device. I haven't seen anything authoritative on this, but it sounds unattractive. Anyone know?
The Tests
Anyway, we have now gotten our hands on an SSD ourselves and given it a try. It's an Intel X25-E from the local electronics shop, because the standard big-name vendor can't deliver it. The X25-E appears to be the most common "enterprise" SSD today.
I started with the sequential read and write tests that
Greg Smith has described. (Presumably, an SSD is much better at random access than at sequential access, so this is a good worst-case baseline.) Then I collected some bonnie++ numbers for random seeks, which is where SSDs should excel.
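Sketched out, the tests look roughly like this; the file name, target directory, and block counts here are illustrative, and the file needs to be at least twice the size of RAM (hence 16 GB here):

    # sequential write test: write a 16 GB file and make sure it reaches the disk
    time sh -c "dd if=/dev/zero of=bigfile bs=8k count=2000000 && sync"
    # sequential read test: read the same file back
    time dd if=bigfile of=/dev/null bs=8k
    # random seeks (and more) with bonnie++; -s is the test file size in MB
    bonnie++ -d /mnt/testdisk -s 16384

So to the numbers ...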
Desktop machine with a single hard disk with LVM and LUKS over it:
- Write 16 GB file, write caching on: 46.3 MB/s
- Write 16 GB file, write caching off: 27.5 MB/s
- Read 16 GB file: 59.8 MB/s (same with write cache on and off)
The hard disk that came with the server we put the SSD into:
- Write 16 GB file, write caching on: 49.3 MB/s
- Write 16 GB file, write caching off: 14.8 MB/s
- Read 16 GB file: 54.8 MB/s (same with write cache on and off)
- Random seeks: 210.2/s
This is pretty standard stuff. (Yes, the file size is at least twice the RAM size.)
SSD Intel X25-E:
- Write 16 GB file, write caching on: 220 MB/s
- Write 16 GB file, write caching off: 114 MB/s
- Read 16 GB file: 260 MB/s (same with write cache on and off)
- Random seeks: 441.4/s
So I take it that sequential speed isn't a problem for SSDs. I also repeated this test with the disk half full to see if the performance would then suffer because of the write wear leveling, but I didn't see any difference in these numbers.
A 10-disk RAID 10 of the kind that we currently use:
- Write 64 GB: 274 MB/s
- Read 64 GB: 498 MB/s
- Random seeks: 765.1/s
(This device didn't expose the write cache configuration, as explained above.)
So a good disk array still beats a single SSD. In a few weeks, we are expecting an SSD RAID setup (yes, RAID from the big-name vendor, SSDs from the shop down the street), and I plan to revisit this test then.
Check the approximate prices of these configurations:
- plain-old hard disk: < 100 €
- X25-E 64 GB: 816.90 € retail, 2-5 weeks delivery
- RAID 10: 5-10k €
For production database use, you probably want at least four X25-E's in a RAID 10, to have some space and reliability. At that point you are approaching the price of the big disk array (four drives at the retail price above come to roughly 3,270 € for 128 GB of usable space), but probably surpass it in performance (to be tested later, see above). Depending on whether you more desperately need space or speed, SSDs can be cost-reasonable.
There are of course other factors to consider when comparing storage solutions, including space and energy consumption, ease of management, availability of the hardware, and reliability of the devices. It looks like it's still a tie there overall.
Next up are some pgbench tests. Thanks Greg for all the
performance testing instructions.
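The rough plan for those is something like the following; the database name, scale factor, client count, and transaction count are placeholders that will need to be adjusted to the hardware:

    # initialize a pgbench database at scale factor 100 (roughly 1.5 GB of data)
    pgbench -i -s 100 testdb
    # run the standard TPC-B-like workload with 10 clients, 10000 transactions each
    pgbench -c 10 -t 10000 testdb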
(picture by XaYaNa CC-BY)