Building a new Home File Server (Part 3 – Funnies and Foibles)

In this final article on the new home server build I'll be highlighting some of the issues and problems I've had to deal with, in both hardware and software. You can read the first two articles here: Part 1 and Part 2.

Boot Drive

My goal was to have all the RAID drives in the front bays of the ML110, with the supplied HP 250GB drive as a boot drive in one of the optical drive bays.

I ran into an issue here though. The BIOS of the ML110 has several modes for its SATA controllers; the standard configuration presents the drive bays as a (fake) RAID device, but since I don't want (and never would!) use this pseudo-RAID setup, I needed the front bays presented to the system through a plain host bus adaptor.

Changing the BIOS SATA configuration to AHCI (which gives the best performance for the attached drives) results in a single SATA controller (with six ports) being presented to the system.
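Once Linux is up it's easy to sanity-check that the controller really is running in AHCI mode rather than the fake RAID mode (the exact output will vary from system to system): –

lspci | grep -i sata
dmesg | grep -i ahci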

The four front drive bays present as the first four drives, with the two optical bays presenting as the last two.

The problem with this is that the HP will only boot from the first drive. It is possible to install just the bootloader to the first disk and boot the rest of the system from the optical bay, but this isn't a good solution: if that RAID drive fails you can't boot, even once the drive has been replaced, without re-installing the bootloader.

Therefore I had to put the boot drive in front bay 1, with the fourth RAID drive in the optical drive bay. Since the integral SATA controller isn't hot-swappable anyway this isn't really an issue; drives can still be swapped easily in the event of a failure.

Western Digital Green Drive Issues

The drives used for the RAIDZ array are Western Digital WD20EARX drives, with very low power consumption.

There is an issue with these drives in that, as standard, they are configured to park their heads and power down parts of their circuitry after just 8 seconds of inactivity. An analysis of the drives' SMART data showed that after 500 hours of uptime the number of load/unload cycles had reached in excess of 12,000.
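If you want to check your own drives, the figure is reported by smartctl from the smartmontools package (the device name below is just an example): –

smartctl -A /dev/sda | grep -i load_cycle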

There are a couple of things to look at here. Firstly, the idle timer can be adjusted using the wdidle3 tool from Western Digital or, in my case, the third-party Linux equivalent, idle3-tools.
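With idle3-tools the timer can be read and adjusted along these lines (a sketch only; the device name and timer value are examples, and the drive needs to be power-cycled before a new setting takes effect): –

idle3ctl -g /dev/sda       # show the current idle3 timer
idle3ctl -s 138 /dev/sda   # set it to roughly five minutes
idle3ctl -d /dev/sda       # or disable head parking altogether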

This can result in the drives never sleeping at all though, which is at odds with our green goals. In my case the culprit was the SMART daemon polling the drives and waking them; this can be resolved by editing smartd.conf so that it checks the power state of a drive first and, if the drive is idle, leaves it alone.

An additional parameter can be set so that the drive is still checked after every N skipped polls, ensuring SMART data is eventually collected.

See the -n POWERMODE[,N][,q] section of the smartd.conf manpage.
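For example, a line like the following in /etc/smartd.conf will skip checks while the drive is in standby, but still force a check after 24 skipped polls, and do so quietly (the device name is illustrative): –

/dev/sda -a -n standby,24,q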

ZFS Issues

The initial ZFS setup was really simple, but I did experience a problem with the ZFS filesystems not being mounted after a reboot.

In Ubuntu and Debian there are supposed to be packages that handle this and ensure a clean system startup, but they don't appear to be available in the PPA. The solution was to edit /etc/default/zfs and change the following two settings: –

ZFS_MOUNT='yes'
ZFS_UNMOUNT='yes'
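After a reboot the filesystems should now come back by themselves; they can be checked, or mounted by hand if anything still fails, with: –

zfs mount      # list the currently mounted ZFS filesystems
zfs mount -a   # mount any that are missing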

NFS sharing

NFS sharing originally gave an error: –

root@proxmox:~# zfs share -a
cannot share 'zfspool/music': share(1M) failed NFS server not installed

To fix this, install the kernel NFS server: –

apt-get install nfs-kernel-server
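With the NFS server in place, sharing is controlled by the sharenfs property on each dataset, along these lines (the options here are just a minimal example): –

zfs set sharenfs=on zfspool/music
zfs share -a
showmount -e localhost   # confirm the export is now visible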


Webmin

I installed Webmin on top of Proxmox for ease of setting up Samba shares and for other monitoring functions.
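Under the hood Webmin is just maintaining /etc/samba/smb.conf, so a share created through it ends up looking something like this (the share name and path are only examples): –

[music]
    path = /zfspool/music
    read only = no
    guest ok = no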

Final Results

I’m really pleased with the result. The machine is currently acting as a file server to our Windows and Linux PCs, and Proxmox is running the following virtual machines without even breaking a sweat: –

  • Three Minecraft servers (with ramdisks)
  • One Tekkit Minecraft Server
  • Vortexbox

The responsiveness of the VMs is impressive, and notably better than the old server. Peak power consumption is around 90W, dropping to circa 40W when idle, which is great, especially if I can eliminate even more PCs from the network through virtualisation.

To this end, the next stage is to look at virtualising my MythTV backend, which looks to be possible by passing the TV tuner PCI cards straight through to the virtual machine. This will eliminate another PC that runs 24/7 and currently consumes up to 90W when working hard, which will lower my electricity bills!
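In Proxmox this should boil down to passing the tuner card through in the VM's configuration, something along these lines (the VM ID and PCI address are placeholders, and the host needs IOMMU/VT-d support enabled): –

qm set 105 -hostpci0 02:00.0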
