
Another interesting result is that XFS seems to have improved on SSDs between kernels 3.1.10 and 3.3.6. Taking the silver medal, ext3 impresses in the IOzone benchmark. In contrast, this filesystem is not so good at handling atomic filesystem operations.

If you simply assign points on an equal footing for placement in each of the many individual disciplines, you receive interesting results: XFS scores 56 points, ahead of ReiserFS and ext4, followed by Btrfs and ext2. If you consider only the more atomic tests from Bash scripts, XFS is still in the lead, ahead of Btrfs and ReiserFS. In the IOzone benchmark results, which are based exclusively on data throughput, however, ext3 wins by a large margin (32 points), with ReiserFS (26), XFS (25), and ext4 (24) lagging a bit behind.

As Figures 5 and 6 show, migrating to the latest kernel doesn't always mean benefits.

Figure 6: Running on classical disks, the candidates failed to perform better on the new kernel than on its predecessor.

In a test with kernel 3.3.6 (current when this issue went to press) on SUSE Tumbleweed, performance declined for almost all filesystems compared with kernel 3.1.10. Only in the random read discipline did the ext filesystems benefit; in all other disciplines, the candidates performed more poorly with kernel 3.3.6 than with kernel 3.1.10. XFS is obviously still a good choice despite its age. On SSDs and HDDs, it delivers fast atomic actions and stable values in the IOzone benchmark.
This makes ZFS potentially dangerous, in spite of all the advantages and its good reputation, because version 0.6 of the code, for example, still cannot offer format assurances.

If you are looking to compare filesystem speeds, you should not set too much store by absolute figures – that is, the net megabytes per second (MBps) in the graphics or the run times in the benchmarks – because the values depend too greatly on the specific hardware in use. A direct comparison between the candidates is much more definitive, especially for sequential or random read and write operations. Making these measurements is the task of IOzone; Figures 1-4 show the most important results. The complete set of raw data and the scripts are available online.

Figure 4: Random write combines the weaknesses of traditional hard disks. SSDs take the lead; Btrfs has major weaknesses in the case of small blocks, and ReiserFS surprises with good values.

To ensure that no unwanted optimizations by buffers or caches on the test system distorted the results, adding different levels of variance depending on the filesystem, I disabled both the RAID controller cache and all the HDD caches. Additionally, before each test, I ensured that the Linux kernel itself had no chance to perform any optimizations through buffers and caches by serializing I/O operations. To do this, I ran sync and deleted the page cache, inodes, and dentries twice (with echo 3 > /proc/sys/vm/drop_caches). The benchmark script prepared each filesystem for the benchmark run with the appropriate mkfs tool (on ZFS, this was zpool) without further options. The countless possibilities for optimization were also ruled out in the tests (e.g., btrfs -o SSD can increase throughput in individual cases by up to 30 percent). In addition to the IOzone tests, I performed a long series of my own benchmarks, including mkdir, touch, echo, cat, dd, rm, and rmdir in long loops with meaningful values.
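A minimal sketch of such a run – flushing buffers, dropping the page cache, dentries, and inodes, then timing a loop of small atomic operations – could look like the following. The directory path, iteration count, and root check are illustrative assumptions, not the author's actual script:

```shell
#!/bin/sh
# Sketch of the measurement loop (illustrative; not the original benchmark script).

TESTDIR=${TESTDIR:-/tmp/fs-bench.$$}  # assumed mount point of the filesystem under test
COUNT=${COUNT:-100}                   # iterations; the article uses much longer loops

drop_caches() {
    sync
    # Writing 3 frees the page cache plus dentries and inodes;
    # the article drops the caches twice before each test. Requires root.
    if [ "$(id -u)" -eq 0 ]; then
        echo 3 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches
    fi
}

mkdir -p "$TESTDIR"
drop_caches

start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    d="$TESTDIR/dir$i"
    mkdir "$d"                   # atomic directory creation
    touch "$d/file"              # atomic file creation
    echo "some data" > "$d/file" # small write
    cat "$d/file" > /dev/null    # small read
    rm "$d/file"                 # atomic unlink
    rmdir "$d"                   # atomic directory removal
    i=$((i + 1))
done
end=$(date +%s)

echo "$COUNT iterations in $((end - start)) s"
rmdir "$TESTDIR"
```

Timing many small operations in a loop like this measures metadata and journal behavior rather than raw throughput, which is why such results can diverge sharply from the IOzone numbers.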
Nevertheless, long-term historic data, which is very important for admins, still doesn’t exist.
ZFS: A filesystem from the Sun universe that is considered by many experts to be the most advanced. Note that ZFS, however, is not a competitor here, because it only recently became natively available on Linux – thanks to the "ZFS on Linux" project. Developer Paolo Pantò recently added it to the software index of the SUSE Build System.


The operating system used in the test was SUSE 12.1 with the latest updates and kernel 3.1.10.
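A preparation step along the lines the article describes – each candidate created with its mkfs tool and no further options, zpool for ZFS – could be sketched as a dry run like this. The device path, pool name, and helper function are assumptions for illustration; nothing is executed against a disk:

```shell
#!/bin/sh
# Dry-run sketch: print the command that would (re)create each candidate
# filesystem before a benchmark run. DEVICE and the pool name "bench"
# are hypothetical.
DEVICE=${DEVICE:-/dev/sdb1}

prepare_cmd() {
    case "$1" in
        zfs) echo "zpool create bench $DEVICE" ;;  # ZFS is set up via zpool, not mkfs
        *)   echo "mkfs.$1 $DEVICE" ;;             # no further options, per the article
    esac
}

for fs in ext2 ext3 ext4 xfs btrfs reiserfs zfs; do
    prepare_cmd "$fs"
done
```

Recreating the filesystem from scratch before every run keeps leftover allocation state from one candidate from influencing the next.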
