Category Archives: Storage

Storage

Lifecycle of Linux IO


I’ve been playing with blktrace recently and wanted to better understand the flow of an IO and the order in which events take place, so that when I see output like the following, I can actually make sense of it:

8,48   0       50     0.966969101   163  A   W 1370345506 + 8 <- (8,49) 1370345472
8,49   0       51     0.966969659   163  Q   W 1370345506 + 8 [flush-8:48]
8,49   0       52     0.966970358   163  M   W 1370345506 + 8 [flush-8:48]
8,48   0       53     0.966972523   163  A   W 1370345538 + 8 <- (8,49) 1370345504
8,49   0       54     0.966973082   163  Q   W 1370345538 + 8 [flush-8:48]
8,49   0       55     0.966974199   163  G   W 1370345538 + 8 [flush-8:48]
8,49   0       56     0.966974967   163  I   W 1370345538 + 8 [flush-8:48]
8,49   0       60     0.966985444   163  D   W 1370345538 + 8 [flush-8:48]
8,49   0      150     0.967601527     0  C   W 1370345538 + 8 [0]
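For context, a trace like the one above can be captured by piping blktrace into blkparse; the single-letter codes in the sixth column are the event actions documented in the blkparse man page. The device name below is just a placeholder for whichever disk you want to watch.

# trace a block device live and decode events as they arrive
blktrace -d /dev/sdd -o - | blkparse -i -

# the event codes you'll see most often, per blkparse(1):
#   A = remapped to another device (e.g. partition to whole disk)
#   Q = queued (intent to queue IO; no request exists yet)
#   G = get request (a request descriptor is allocated)
#   M = back merge with an existing request
#   P = plug (queue plugged)
#   I = inserted into the request queue / IO scheduler
#   U = unplug
#   D = issued to the driver
#   C = completed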

I couldn’t find any sort of detailed, step-by-step flow of blktrace events, so I put this flowchart together.  The information is based largely on what is described in the book “Understanding the Linux Kernel”, as well as man pages and other bits I’ve gleaned from the internet.

I believe it to be largely correct, although I’m not exactly the king of flowcharts, so there may be some errors in the layout. There is one discrepancy between when the book says a queue is plugged and what I see in blktrace: the book says the queue is checked for emptiness, then the queue is plugged, then a request descriptor is allocated, but blktrace reliably shows the queue being plugged after the request descriptor is allocated. If anyone has better information, or perhaps a better flowchart, I’d appreciate seeing it.

Storage Virtualization

VM benchmarking, trickier than one might expect


I’ve always had a focus on storage throughout my career. I’ve managed large enterprise vSANs with FC switches and commercial NAS filers, deployed iSCSI over Ethernet, and managed ESX with both FC and NFS backends. I’ve been entrusted to build very large storage servers, up to 32U, with Linux and off-the-shelf components. Needless to say, I feel comfortable claiming that I know a little more than the average systems guy about storage, and particularly about how Linux handles I/O. So when I turned my attention to benchmarking virtual machine disk performance, I found some interesting behaviors that anyone who wants to measure such things should be aware of, if only to interpret the results when they can’t otherwise be compensated for.

One of the primary things is how the Linux caching mechanisms can throw a wrench in things if you don’t think through what you’re doing. You need to be aware of which caches are in effect during each test. For example, it’s common to test with datasets larger than the system’s memory in order to push the system beyond its ability to cache. But consider a 4GB virtual guest on a physical host with 32GB of RAM. Guests are usually run with at least write-through caching from the host’s perspective (speaking in general terms; this can be controlled by the end user on at least some virtualization platforms). So while the experimenter might think that an 8GB dataset is sufficient on the guest, or that issuing a drop_caches request between tests on the guest will suffice, that dataset is likely to be held in its entirety in the host’s read cache on its way to the underlying storage, artificially boosting the results. Similarly, performing a write test on the guest and comparing it to the same write test on the host is almost certainly going to give the host an unfair advantage unless the experimenter accounts for the much larger pool of dirty memory available on the host, which is usually specified as a percentage of physical memory.

On top of that, there’s the complexity of testing X number of virtual machines and summarizing how they all perform simultaneously on one physical host. There are some fairly standard methods for doing this, such as putting some sort of load on each guest and then benchmarking one while the others run their dummy loads. But again, one must be careful, particularly with the dummy loads, that they aren’t just looping over tests small enough to be cached entirely, unless, of course, that’s the real-world behavior of the application, which brings me to my point.

It’s kind of a complex beast, trying to get meaningful results, and especially trying to share them with others who may have different expectations. You have to settle on a goal for disk benchmarking, and it’s usually one of two things: measuring raw disk performance, or measuring the real-world performance of an application or a given I/O pattern. The former involves disabling any and all caches, while the latter strives to use the caches as they normally would be used. The challenge, as mentioned, is that some people will value one set of numbers while others will value the other. Raw disk performance tells you a lot about how good the underlying setup is, for example whether one should go with that raid6 layout or do raid50 instead. On the other hand, does it really matter how well the disks perform without caches? Don’t we want to know how it’s actually going to run?

No matter how it’s done, the most important thing of all is to frame your data properly: “this was the goal or purpose, these are the tests, this is the setup, here are the results.” I’ve been running some tests that I’ll share shortly, but I wanted to get some of these considerations down first, because I’ve rarely seen anyone address them in published benchmarks, which frankly has made much of the data I’ve seen on VM performance largely useless.

Finally, lest this post be all rambling and nothing of concrete use, here are some mechanisms for controlling Linux caching.

Flush caches (page cache, dentries, inodes): echo 3 > /proc/sys/vm/drop_caches

The above won’t do anything for dirty memory, which can be flushed with a ‘sync’. That alone won’t have much bearing on the write test you run afterward, though; you’ll need to know a little more about how dirty memory works. It would be naive to compare a system with 32G of memory, 3.2G of which can absorb pending writes, against a 4G system that only has 400M with which to cache writes.
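As a quick sanity check between runs, you can see how much dirty data is outstanding directly in /proc/meminfo; a minimal sketch:

# how much dirty/writeback data is pending right now
grep -E 'Dirty|Writeback' /proc/meminfo

# flush dirty pages, then drop the page cache, dentries, and inodes
sync
echo 3 > /proc/sys/vm/drop_caches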

In particular, two values are of importance: /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio. Both are specified as percentages of memory. dirty_background_ratio is how large dirty memory can grow before pdflush kicks in and starts writing it out to disk. dirty_ratio is always the higher of the two (the kernel effectively treats dirty_background_ratio as half of dirty_ratio if you try to set it at or above dirty_ratio), and it is the point at which applications can no longer simply dirty more memory and are forced into synchronous writeback themselves. Usually hitting it means that pdflush isn’t keeping up with your writes and the system is potentially in trouble, but it could also just mean that you’ve set it very low because you don’t want to cache writes. For example, you may want to do that if you know you’ll be doing monster writes for extended periods; there’s no sense bloating up a huge amount of dirty memory only to have the processes forced into synchronous writes AND contending with pdflush threads trying to do writeback. On the flip side, increasing these values gives you a nice cache to absorb large, intermittent writes.

Both of these have time-based counterparts: dirty_expire_centisecs controls how old dirty data can get before it becomes eligible for writeback, and dirty_writeback_centisecs controls how often pdflush wakes up to check, so writeback happens by age regardless of how much data is there. For example, writeback might kick in at 500MB OR when data in dirty memory has been around for longer than 15 seconds. Newer kernels also allow the thresholds to be specified as absolute byte counts rather than percentages, via dirty_bytes and dirty_background_bytes.
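A minimal sketch of inspecting and temporarily adjusting these knobs with sysctl (the values below are purely illustrative, not recommendations):

# current settings
sysctl vm.dirty_ratio vm.dirty_background_ratio
sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

# shrink the write cache for a "raw disk" style test
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2

# or, on newer kernels, pin the thresholds to absolute sizes in bytes
sysctl -w vm.dirty_background_bytes=67108864
sysctl -w vm.dirty_bytes=134217728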

There are quite a few more things I could share, but I’ll leave off with just one more: /proc/sys/vm/vfs_cache_pressure. This defaults to 100. Increasing the number makes the system more eager to reclaim the dentry and inode caches (the stuff that drop_caches clears), while decreasing it makes the system hoard them.
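If you want to see what that cache actually looks like, the slab statistics show it directly; for example (reading /proc/slabinfo generally requires root):

# dentry and inode caches live in the kernel slab allocator
grep -E 'dentry|inode_cache' /proc/slabinfo

# make the kernel more aggressive about reclaiming them (100 is the default)
sysctl -w vm.vfs_cache_pressure=200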

Stay tuned for some benchmarks of KVM virtio and IDE with no cache, writethrough, and writeback, compared to VMware ESX paravirtualized disks.

Linux Storage

migrating partition to mdadm raid


So you want to migrate your existing Linux partitions to software raid1… I’ve read recently about folks migrating to software raid by actually copying data around. I’ve been doing this on-the-fly (sort of), without copying the data, by initializing the partition as an md device and mounting it as such with the data intact. Keep in mind that this relies on the md superblock living at the end of the device, as it does with the old default metadata version 0.90 (newer 1.1/1.2 metadata lives at the start of the device, so with a recent mdadm you’d pass --metadata=0.90 or 1.0 to mdadm --create). The filesystem has to be shrunk slightly to make room for that superblock, which is why resize2fs is used. Now, if you want to rearrange how your data is mounted then you’re out of luck, but if you just want to convert existing partitions to raid partitions in place, here’s an example with an ext3 filesystem.

##starting point: data on mounted /dev/sdm1; goal is to add /dev/sdn1 in raid1

umount /dev/sdm1

##see how many blocks we currently have

tune2fs -l /dev/sdm1 | grep "Block count"

##subtract 64 blocks from the current block count, making space for the md superblock.

resize2fs /dev/sdm1 <blocks>

mdadm --create /dev/md0 --raid-devices=2 --level=raid1 /dev/sdm1 missing

##see that initial data is still there…
mount /dev/md0 /mnt
ls -la /mnt

##add mirror device
mdadm --manage /dev/md0 --add /dev/sdn1

##check
cat /proc/mdstat
mdadm --detail /dev/md0

##use parted to set the partition type to 'fd' (Linux raid autodetect) so the kernel can find the array on reboot

parted /dev/sdm set 1 raid on

parted /dev/sdn set 1 raid on
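One follow-up I’d suggest, beyond the original recipe (the mdadm.conf location varies by distro, so treat these paths as examples): record the array in mdadm.conf and point /etc/fstab at /dev/md0 so everything comes back cleanly after a reboot.

##record the array so it's assembled by name at boot
##(may be /etc/mdadm.conf or /etc/mdadm/mdadm.conf depending on distro)
mdadm --detail --scan >> /etc/mdadm.conf

##update /etc/fstab to mount /dev/md0 where /dev/sdm1 used to be, then verify
mdadm --detail /dev/md0
mount /dev/md0 /mnt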

Linux Storage

Using a Ramdisk For High I/O Applications


I recently provided this solution for a system with high iowait. It’s a monitoring system with highly transient data, yet not entirely temporary data. The server we happened to put it on had plenty of memory and CPU, but only a mirrored set of drives (a blade), and the application isn’t really important enough to eat expensive SAN space.  The solution was to utilize the memory in the system as a ramdisk.

It’s a simple procedure. You create the ramdisk, then make an LVM logical volume from it (which is why we don’t use tmpfs). Occasionally you snapshot it and back up the contents, and you also back it up during a controlled shutdown. On startup, you simply re-create the ramdisk and restore the backup.

This solution should work for a variety of applications, however, you have to pay attention to certain circumstances, such as applications that might keep changes in memory rather than flushing to disk regularly (a.k.a. writeback cache, in which case you’ll want to shut down the app before performing the backup), as well as the obvious chance of losing the data collected after the last backup in the event of an unforeseen interruption such as power loss.

First off, you have to have ramdisk support in your kernel. If support is compiled into the kernel (e.g., Red Hat-based installations), you’ll need to add the option "ramdisk_size" to the kernel line in /boot/grub/grub.conf (or menu.lst), with the size specified in KB. For example, "ramdisk_size=10485760" would give you 10GB ramdisks. Afterward, simply reboot and you should have /dev/ram0 through /dev/ram15. Yes, it creates 16 devices by default, but they don’t consume memory until you actually use them.
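For example, the kernel line might end up looking something like this (the kernel version and root device here are placeholders for whatever your grub.conf already contains):

# /boot/grub/grub.conf -- append ramdisk_size (in KB) to the existing kernel line
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 ramdisk_size=10485760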

I prefer to have ramdisk support built as a module (such as opensuse does), because you can just reload the module in order to add/resize ramdisks. To do this, you have to know the filename of the module, usually rd.ko or brd.ko.

server:/lib # find . -name brd.ko
./modules/2.6.25.18-0.2-default/kernel/drivers/block/brd.ko
./modules/2.6.25.5-1.1-debug/kernel/drivers/block/brd.ko

Then load the module:

server:/lib # modprobe brd rd_size=1048576

server:/lib # fdisk -l /dev/ram0

Disk /dev/ram0: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Of course you can add this module to the appropriate module configuration file (depending on your distribution) for future reboots.
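As a sketch of what that might look like (file locations vary by distribution, so treat these as examples):

# pin the module option so every load gets the same ramdisk size
echo "options brd rd_size=1048576" > /etc/modprobe.d/brd.conf

# then arrange for the module to load at boot, e.g.:
#   openSUSE: add brd to MODULES_LOADED_ON_BOOT in /etc/sysconfig/kernel
#   Debian/Ubuntu: echo brd >> /etc/modules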

Now that we’ve covered how the ramdisks work, here’s the script I’ve included in /etc/rc.local (Red Hat) to create/restore the ramdisk LVM volume at boot.

#create pv :
pvcreate /dev/ram0 /dev/ram1 /dev/ram2

#create vg :
vgcreate vg1 /dev/ram0 /dev/ram1 /dev/ram2

#create lv :
lvcreate -L 18G -n ramlv vg1

#create fs:
mkfs.ext2 /dev/vg1/ramlv

#mount lv :
mount -o noatime /dev/vg1/ramlv /mnt/ramlv

#restore data
cd /mnt/ramlv && tar -zxf /opt/ramlv.tar

#start the service that relies on the high performance ramdisk

/etc/init.d/zenoss start

The backup is run at shutdown and from cron (a sample crontab entry follows the script below). The main reason I’m using tar with gzip is that it makes for very fast restores, as my data gets a 5:1 compression ratio. With the disk being the major bottleneck in this particular server, I get roughly a 4-5x speed boost when copying the data back to RAM from the compressed archive compared to copying from a file-level disk backup. YMMV; another option is to simply rsync the data to disk on a schedule. With tar, the backups work harder, but they don’t really impact the performance of the ramdisk while they’re running. Here’s the script:

#!/bin/bash

#logging
echo
echo "#############################################"
date
echo "#############################################"

#create snapshot:
/usr/sbin/lvcreate -L 5G -s -n ramsnap /dev/vg1/ramlv

#mount snapshot:
/bin/mount -o noatime /dev/vg1/ramsnap /mnt/snapshot

#logging
df -h /mnt/snapshot
echo TIME

#backup the only directory in /mnt/snapshot (named 'perf'):
mv -f /opt/ramlv.tar /opt/ramlv-old.tar
cd /mnt/snapshot && time /bin/tar zcf /opt/ramlv.tar perf

#logging
ls -la /opt/ramlv.tar
/usr/sbin/lvdisplay /dev/vg1/ramsnap
/usr/sbin/lvs

#remove snapshot:
cd && /bin/umount /mnt/snapshot &&  /usr/sbin/lvremove -f /dev/vg1/ramsnap
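The cron side is just a standard crontab entry pointing at this script; a hypothetical example (the path, schedule, and log file are placeholders):

# run the ramdisk backup hourly and keep a log of its output
0 * * * * /usr/local/sbin/ramlv-backup.sh >> /var/log/ramlv-backup.log 2>&1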

And there you have it: insane disk performance at your fingertips. A few caveats: you’ll want to ensure that your snapshot volume is large enough to hold any changes that occur while you’re performing the backup (which is why I’ve got the logging commands in there), and you’ll probably want to keep a few backups around in case the system dies while one is being taken, hence the 'mv -f' in the script. Other than that, have fun, and feel free to share your experiences and improvements.

Storage

A Few SSD Comparisons


So, I’m interested in SSDs, or Solid State Disks. We’re seeing them come to the enterprise storage market with claims that they can do the work of ten or more Fibre Channel platter-based drives, albeit without the capacity. I presume this works from a marketing perspective because many applications need performance more than they need capacity; I know of several setups where we size by the number of spindles and only use a fraction of the storage space on each drive. At any rate, now that major vendors are marketing them to the enterprise, it’s only a matter of time until the good stuff trickles down to the common folk.

Most enterprise flash storage is SLC NAND, which is inherently faster and more robust than the cheaper MLC commonly used in USB thumb drives, memory cards, and the like. Both technologies have undergone improvements over the years, with vendors recently marketing a ten-fold increase in write cycles for both. Even so, MLC remains the more consumer-oriented technology, and will be for the foreseeable future, because it’s cheaper to make at a given bit density. While that might seem to relegate consumer SSDs to the lackluster performance seen on USB thumb drives, vendors get around it with their flash controllers, finding creative ways to write across arrays of flash chips and boosting performance enough to make MLC a viable option in the consumer storage space, which brings me to the main point of today’s article.

There are a lot of claims and counter-claims thrown about when it comes to SSDs. Some say they use more power; others say they’re more power-friendly. One camp points to great read times, while another claims poor write performance. So, being the curious individual that I am, I decided to run some of my own tests. What follows is my own analysis of the products I could get my hands on. Be forewarned: these aren’t exhaustive tests; rather, I focused primarily on my own usage patterns and real-world situations.

Now to introduce the contenders:

  • Western Digital 2.5″ 5400RPM Scorpio, 160GB
  • Samsung 64GB SSD
  • OCZ “Solid series”, 60GB SSD

The comparison between the two SSDs is particularly of interest to me, because the Samsung was a $500 upgrade last February (of course I got a better deal than that), and the OCZ Solid SSD was recently purchased for $135. It’s their value line product and supposedly the lowest performing of their current line-up.

The two main areas I’m going to focus on are performance and power consumption. I’m using two platforms: a Dell m1330 laptop and a Lenovo IdeaPad S10, which is an Atom-based netbook. One special thing to note regarding setup: instructions from OCZ say to turn off AHCI or risk time-outs and pauses in your system. Apparently these drives don’t handle (or need, for that matter) some of the features of AHCI, such as Native Command Queuing. I did not notice any difference on the m1330 with Vista SP1, but XP on the S10 definitely had long “WTF!?” pauses that were fixed by simply disabling AHCI in the BIOS. On to the benchmarks…

I’m beginning with the most relevant data captured, from iozone. I’ll offer links to the full data, but one must be careful interpreting the results, because iozone gives a complete overview of your entire platform, meaning that you see the performance of processor caches, buffer caches, and disk. This can make it difficult to draw meaningful conclusions if one doesn’t understand all of the data. Another thing that will help you get meaningful information out of the data below: you can use the Process Monitor tool from Sysinternals to view the transaction sizes your various applications use. For example, my antivirus scanner reads files in 4k requests at a time; large files copied with Explorer in Windows XP seem to be read and written 64k at a time, while in Vista files are read 1024k at a time and written 64k at a time. The behavior of the application, along with file sizes and their location on disk, is key to understanding the effects of the data below.
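For reference, iozone’s automatic mode is what produces tables like the ones that follow; a typical invocation looks something like this (an illustrative command line, not necessarily the exact one used for the data here):

# sweep record and file sizes for write, read, and random I/O; dump results to a spreadsheet
iozone -a -g 4G -i 0 -i 1 -i 2 -b iozone-results.xls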

Many people have seen the phenomenal boot times of SSDs, and these tables highlight the reason. Comparing the older Samsung SSD to the Scorpio spindle, we see that random reads for the most common transaction sizes (4k-64k) are about 10 to 13 times faster. The OCZ SSD also shows this trend, and adds a big bump to sequential reads as well.  In exchange, however, we get slower writes, on the order of about 2 to 5 times, with random writes taking a big hit.  Still, it should be noted that random write performance isn’t particularly great even for the Scorpio at common transaction sizes.  All in all it seems to be a good tradeoff, especially considering that most data is write once, read many.

Another thing this highlights is the benefit of defragmentation. Many have asked “should you defragment an SSD?”, and the common wisdom is that defragmentation isn’t necessary with SSDs. While defragmentation passes should indeed be limited in order to preserve the (currently unknown) longevity of the flash, one only needs to look at the gap between random and sequential performance to see that even SSDs benefit from defragmentation. Some people are concerned about the write lifetime of flash, and while manufacturers try to put people at ease with their various wear-leveling techniques, the reality is that most of these devices are too new to have a proven track record either way. For the record, I’ve had my Samsung for about a year now, have beaten the hell out of it as far as writes go, and haven’t had any issues yet ;-).

full iozone data –  xls csv

Here’s another look, this time from a simpler benchmark, ATTO, on the XP netbook. I won’t go into too much detail here as it’s more of the same, but you can view the results by clicking on the thumbnails below. Note that if one were to use only this tool, one might not see why the Samsung SSD is subjectively much faster in day-to-day use than the Scorpio drive.

ATTO screenshots: WD Scorpio 160GB, Samsung SSD 64GB, and OCZ Solid series 60GB

Next up, a real world scenario: virus scan. This should show a huge improvement when moving to SSD, according to the iozone results. Some of the information will be sequential, but most will be random.  On top of that, as I mentioned, the virus scanner I’m using seems to read files 4k at a time. The setup is Avast! Antivirus, running a standard scan on Vista SP1.

The results speak for themselves. The iozone data seems to translate into real-world performance.

Now for battery life. I performed two tests: one was watching an Xvid-encoded 480p movie from the hard disk; the other was essentially idle, with a script writing a small amount of data to the hard drive every 30 seconds. The movie was chosen because it generates a constant stream of I/O (64k at a time) without being absurdly taxing on the disk the way a benchmark would be, making it a good real-world scenario. Actual results should, in theory, land somewhere between the two benchmarks.

The m1330 loses a bit of life when switching to SSD, about 8 minutes. However, with the S10 it seems to be a wash. There are too many differences between the two platforms to pinpoint the cause. It could be due to the more aggressive performance settings I have in the m1330’s power options, could be a hardware difference, or even XP vs Vista.  All we can really say is that the mileage isn’t due to the disk alone, but how the platform reacts to it. Your mileage will depend on your platform, but the difference isn’t much.

Again, we see conflicting results: the S10 likes SSDs, while the m1330 doesn’t. I searched through the power options, and there were a few differences on the m1330 with regard to processor frequencies, but the hard disk settings were the same between platforms. I have a hunch that the S10, being a smaller, lower-wattage platform overall, is more sensitive to the actual power consumption of the drive. Make of it what you will; the differences don’t seem to be all that large either way.

In summary, it seems that SSDs, in their current incarnation, offer a large boost to read performance in exchange for a modest cut in write performance. There are differences in battery life, but they are relatively small and vary between platforms. I would like to get my hands on some of the higher-end SSDs such as the Intel X25, but until the prices come down, I think comparing these (now) budget SSDs has been a useful exercise.