Building a terabyte NFS server

One of our current projects is a digital film restoration system. It requires moving enormous amounts of data quickly: digitizing a 200-meter reel, for example, produces some 5.5 TiB of uncompressed images. Moreover, film scanners work at a constant speed, so there is no way to slow down the image source if a throughput problem occurs somewhere in the data path. The expected total data rate is 133 MiB/s (not including network and protocol overhead).
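As a back-of-the-envelope check on these figures (simple arithmetic, not stated in the text itself), moving one reel's 5.5 TiB at a sustained 133 MiB/s takes roughly twelve hours:

```shell
# time to move one reel (5.5 TiB) at the expected 133 MiB/s
reel_mib=$((11 * 1024 * 1024 / 2))                   # 5.5 TiB in MiB
echo "MiB per reel:         $reel_mib"               # 5767168
echo "seconds at 133 MiB/s: $((reel_mib / 133))"     # 43362
echo "hours:                $((reel_mib / 133 / 3600))"  # ~12
```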

We need two or three NFS servers with 5.5-6.0 TiB total capacity. Capacity and speed are much more important than reliability: this server (group) is not long-term storage. It keeps scanned and digitally enhanced images for a few days only, and in case of a disk error the data can be reproduced easily (though slowly) by rescanning the reel.
We have therefore chosen a pile of IDE disks organized into a RAID0 (stripe).
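For reference, if the stripe were assembled with the Linux software-RAID (md) driver, the setup might look like the sketch below. The device names, disk count and 64 KiB chunk size are assumptions for illustration only; the /dev/sda1 in the df listing further down suggests the pilot may in fact use a hardware RAID controller for the striping.

```shell
# Hypothetical software-RAID0 setup (names and sizes are assumptions,
# not the pilot machine's actual configuration):
mdadm --create /dev/md0 --level=0 --chunk=64 \
      --raid-devices=8 /dev/hd[e-l]1        # stripe eight IDE disks
mkfs.xfs /dev/md0                           # XFS, as on the pilot machine
mount -t xfs /dev/md0 /mnt/xfs
```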

The pilot machine has the following configuration:

Kernel: 2.4.20 + ACPI + XFS + lm_sensors patch.

The formatted RAID capacity (with XFS) is some 1870 GiB.

# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda9               139986     47203     85556  36% /
/dev/hda1                46636      9231     34997  21% /boot
/dev/hda5               964500    171416    744088  19% /usr
/dev/hda6               964500     59764    855740   7% /var
/dev/hda7              1930988    790264   1062332  43% /usr/local
/dev/hda10              964500     16428    899076   2% /tmp
/dev/hda8              4996728     32936   4709968   1% /home
/dev/sda1            1960794924      7504 1960787420   1% /mnt/xfs

After assembling the hardware I started to test and tune the disk subsystem. The following pages show the results of these measurements:

  1. Comparing filesystems
  2. Testing input speed at various parameters
  3. Comparing input and output speed
  4. RAID-5 vs. RAID-0
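Before turning to the detailed measurements above, a quick sequential throughput check can be done with plain dd; the sketch below is one way to do it (the file path and 4 GiB size are assumptions, and the linked pages use their own tools and parameters). The file should be large enough to defeat the page cache.

```shell
# Rough sequential write/read check (path and sizes are illustrative):
TESTFILE=/mnt/xfs/testfile
time dd if=/dev/zero of=$TESTFILE bs=1024k count=4096   # write 4 GiB
sync                                                    # flush dirty pages
time dd if=$TESTFILE of=/dev/null bs=1024k              # read it back
rm -f $TESTFILE
```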

Comments to: kissg@sztaki.hu