A do-it-yourself NAS is a lot more resilient than the prebuilt ones you can buy: it will never stop receiving updates or fail in some un-googleable way.
These notes describe setting up a NAS from scratch. I decided on a RaidZ2 array (equivalent to old-school RAID6) because I have 5 SSDs and wanted some extra resiliency - RaidZ2 survives two simultaneous drive failures. I’ll also configure an NFS share to make the files accessible over the network.
Set up the hardware
Anything modern should work, as long as it has enough SATA ports. I’ve listed my hardware below for reference. There are warnings about running ZFS with small amounts of RAM, but if you don’t care about performance you can get away with less than the usual recommendations.
- Motherboard: Supermicro A1SRI-2358F
- SATA PCIe Card: Some unbranded 4-port SATA card
- RAM: 16GB DDR3 ECC
- Drives:
  - Boot Drive: 120GB SATA SSD
  - Storage Drives: 5x 2TB SATA SSDs
Install Debian
- Download the Debian netinst ISO
- Write the ISO to a USB drive and boot from it
- Follow the installer. I chose mostly defaults, except for:
  - Partitioning: I chose to use the entire disk and set up LVM
  - Software: I chose SSH server and standard system utilities, no desktop environment.
- Boot the system and log in via SSH
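For that last step, youruser and SERVERIP below are placeholders for your own values; bringing the fresh install up to date at this point is optional but convenient:
# from another machine on the LAN; substitute your own user and address
ssh youruser@SERVERIP
# update the fresh install while you're at it
sudo apt update && sudo apt upgrade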
Set up ZFS storage pool
Install the ZFS packages
sudo apt install linux-headers-amd64 zfsutils-linux zfs-dkms zfs-zed
zfs version
# should output something like: `zfs-2.1.11-1` and `zfs-kmod-2.1.11-1`
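One caveat worth flagging: on Debian the ZFS packages live in the contrib section, so if apt reports that zfsutils-linux has no installation candidate you likely need to enable contrib first. A rough sketch, assuming a stock sources.list (edit the file by hand if the sed pattern doesn’t match yours):
# append "contrib" to the main entries in /etc/apt/sources.list (keeps a .bak copy)
sudo sed -i.bak 's/ main$/ main contrib/' /etc/apt/sources.list
sudo apt update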
Run `sudo fdisk -l` to determine the names of the drives you want to add to ZFS, then compare those to the output of `ls -l /dev/disk/by-id/`. Record the by-id names - we want the names derived from each drive’s serial number, not the /dev/sdX names, since the sdX assignments can change between boots.
sudo fdisk -l
ls -l /dev/disk/by-id/
# in my case, I got these outputs from sudo fdisk -l
# /dev/sdd: 1.82TB
# /dev/sda: 1.82TB
# /dev/sdb: 1.86TB
# /dev/sde: 1.86TB
# /dev/sdf: 1.82TB
# and these outputs from ls -l /dev/disk/by-id/
# /dev/disk/by-id/ata-CT2000BX500SSD1_2317E6CE16B0 -> ../../sdd
# /dev/disk/by-id/ata-SanDisk_SSD_PLUS_2000GB_232920801032 -> ../../sda
# /dev/disk/by-id/ata-T-FORCE_T253TY002T_TPBF2306120030602859 -> ../../sdb
# /dev/disk/by-id/ata-Inland_SATA_SSD_IB23AG0002S00625 -> ../../sde
# /dev/disk/by-id/ata-SPCC_Solid_State_Disk_AA230711S302KG01479 -> ../../sdf
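If you prefer a single view, lsblk can print each disk’s size and serial number side by side, which makes matching against the by-id names quicker (column availability may vary slightly between lsblk versions):
# list whole disks with their sizes and serial numbers
lsblk -d -o NAME,SIZE,SERIAL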
Create the RaidZ2 pool
sudo zpool create ssd-pool raidz2 \
  /dev/disk/by-id/ata-CT2000BX500SSD1_2317E6CE16B0 \
  /dev/disk/by-id/ata-SanDisk_SSD_PLUS_2000GB_232920801032 \
  /dev/disk/by-id/ata-T-FORCE_T253TY002T_TPBF2306120030602859 \
  /dev/disk/by-id/ata-Inland_SATA_SSD_IB23AG0002S00625 \
  /dev/disk/by-id/ata-SPCC_Solid_State_Disk_AA230711S302KG01479
# note: if you get the error "raidz contains devices of different sizes"
# (in my case I did, because they vary by 1% or so) you can use the -f flag
# to force the pool to be created
Double check the pool was created successfully
sudo zpool status
# should show the pool status as ONLINE
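As a sanity check on capacity: raidz2 spends two drives’ worth of space on parity, so five 2TB drives should leave roughly three drives’ worth usable. The two list commands report this differently (the figures in the comments are rough expectations for my drives, not exact numbers):
zpool list ssd-pool   # SIZE counts all five drives, parity included (~9TiB here)
zfs list ssd-pool     # AVAIL shows usable space after parity (~5.5TiB here)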
Optionally, set the pool to expand automatically when its drives are swapped for larger ones, and enable LZ4 compression
sudo zpool set autoexpand=on ssd-pool
sudo zfs set compression=lz4 ssd-pool
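To confirm both settings took effect:
zpool get autoexpand ssd-pool   # should report autoexpand on
zfs get compression ssd-pool    # should report compression lz4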
Enable weekly automatic scrubs. This is a good idea to catch any errors early on.
sudo systemctl enable --now zfs-scrub-weekly@ssd-pool.timer
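You can also kick off a scrub by hand to confirm everything is wired up, and check that the weekly timer is actually scheduled:
# start a manual scrub and watch its progress
sudo zpool scrub ssd-pool
zpool status ssd-pool
# confirm the weekly timer is registered
systemctl list-timers 'zfs-scrub*'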
Change the pool’s mount point to /mnt/ssd-pool (by default it will be mounted at /ssd-pool)
sudo zfs set mountpoint=/mnt/ssd-pool ssd-pool
Create a dataset within the pool and give it a 1TB quota. I named mine “myfiles”
sudo zfs create ssd-pool/myfiles
sudo zfs set quota=1T ssd-pool/myfiles
Double check the dataset was created successfully
zfs get mountpoint ssd-pool/myfiles
# should output /mnt/ssd-pool/myfiles
sudo zfs list
# should show the ssd-pool and ssd-pool/myfiles datasets
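One optional step before sharing the dataset in the next section: it is owned by root at this point, and NFS maps a client’s root user to nobody by default (root squash), so clients may not be able to write. Handing ownership to a regular user is one simple way around this (youruser is a placeholder):
# let a regular user own the shared dataset so NFS clients can write to it
sudo chown -R youruser:youruser /mnt/ssd-pool/myfiles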
Set up NFS share
Install the NFS server
sudo apt update
sudo apt install nfs-kernel-server
Export the data folder by adding the following line to /etc/exports (the * means any host may mount it):
/mnt/ssd-pool/myfiles *(rw,no_subtree_check)
- rw: Allows both read and write access to the shared directory.
- no_subtree_check: Disables subtree checking, which is unnecessary in most cases and can cause issues. This has been the default behavior since nfs-utils 1.1.0.
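If you’d rather not export to every host on the network, you can scope the entry to your LAN instead, and after any later edit to /etc/exports you can re-read it without restarting the server. The subnet below is a hypothetical example; substitute your own:
# example /etc/exports entry limited to a 192.168.1.0/24 LAN
/mnt/ssd-pool/myfiles 192.168.1.0/24(rw,no_subtree_check)
# apply changes to /etc/exports without a restart
sudo exportfs -ra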
Start and enable the NFS server
sudo systemctl start nfs-kernel-server
sudo systemctl enable nfs-kernel-server
Verify the exports are working
sudo showmount -e localhost
Connect to the share from a client PC
To test mounting the share on another machine, run the following commands on a client
# install nfs client
sudo apt install nfs-common
# create a directory to mount the shares
sudo mkdir -p /mnt/nfs
# mount the shares
sudo mount SERVERIP:/mnt/ssd-pool/myfiles /mnt/nfs
# check the mount
df -h
If the mount is successful, you can add this to /etc/fstab to make it auto-mount on boot
SERVERIP:/mnt/ssd-pool/myfiles /mnt/nfs nfs defaults 0 0
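If the server might be down or slow to come up when the client boots, a plain nfs line in fstab can hang or delay startup. Adding nofail and systemd’s automount option is a common mitigation (same placeholders as above):
SERVERIP:/mnt/ssd-pool/myfiles /mnt/nfs nfs defaults,nofail,x-systemd.automount 0 0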
If you want to run a speed test, try this from the client PC:
dd if=/dev/zero of=/mnt/nfs/testfile bs=1G count=1 oflag=direct
- oflag=direct bypasses the client’s page cache, so the result reflects the actual transfer speed rather than writes buffered in RAM.
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 42.5806 s, 25.2 MB/s
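For the read direction, you can pull the same test file back. Dropping the client’s page cache first keeps a cached copy in RAM from masking the network speed:
# clear the client's page cache so the read actually goes over the network
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
# read the test file back and report throughput
dd if=/mnt/nfs/testfile of=/dev/null bs=1M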
That’s it! You now have a DIY NAS that you can expand as needed. It is also easy to add features, like automatic alerts on disk failures, or a Time Machine backup server. I’ll cover those in future posts.