Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box

Five years ago, I wrote about the build of the server hosting sandeen.net, which ran using only 18W of power.  After 5 years, it was time for an upgrade, and I’m pleased to say that I put together a much more capable system which now runs at only 10W!  This system handles external email and web serving, as well as media serving, energy datalogging, and backup storage for household purposes.  It’s the server that dished up this blog for you today.

sandeen.net wattage, as measured by a Kill-a-Watt meter

If you run a server 24/7 at home, that always-on power consumption can really add up.  If you have an old beast running at 250W, that’s about 2.2MWh of power per year, and will cost you over $200/year in electricity at $0.10/kWh.  A 10W build uses only 4% of that!  Here’s how I put it together:
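The arithmetic is easy to sanity-check in the shell (integer math, so figures are approximate):

```shell
# Annual energy and cost of an always-on 250W machine:
watts=250
hours=$((24 * 365))               # 8760 hours in a year
kwh=$((watts * hours / 1000))     # 2190 kWh/year, i.e. about 2.2 MWh
cents=$((kwh * 10))               # at $0.10/kWh
echo "${kwh} kWh/year, ~\$$((cents / 100))/year"
```

At 10W the same math works out to about 88 kWh and under $9/year.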

I chose this motherboard because it has 4 SATA ports; each pair of drives is in an MD RAID1 mirror configuration with XFS on all filesystems, and the large 3T media drives are spun down most of the time; the system does draw over 10W when they’re spinning.  And I have to say, the Samsung drives seem really nice; they are warranted for 150 terabytes written, or 10 years, whichever comes first.
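As a capacity aside: the 6.5 terabytes in the title is raw space, and with both pairs mirrored, usable space is half that. A quick check (the 3TB media drives are from the post; the SSD size here is an illustrative assumption):

```shell
# Raw vs. usable capacity with two RAID1 pairs.
# 3TB media drives per the post; SSD size is an assumed placeholder.
hdd_gb=3000
ssd_gb=256
raw=$((2 * hdd_gb + 2 * ssd_gb))   # 6512 GB raw, the "6.5 terabyte"
usable=$((hdd_gb + ssd_gb))        # mirroring halves it: 3256 GB usable
echo "raw=${raw}GB usable=${usable}GB"
```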

I took most of the suggestions from the powertop tool, and I run this script at startup to put most devices into power-saving mode; this saves about 2W from the boot-up state. When boot-up power consumption is 12W, 2W is a significant fraction!  ;)

#!/bin/bash

# Set spindown timeout (-S 3 = 15 seconds of idle); see
# https://sandeen.net/wordpress/computers/spinning-down-a-wd20ears/
hdparm -S 3 /dev/sdc
hdparm -S 3 /dev/sdd

# Idle these disks right now.
hdparm -y /dev/sdc
hdparm -y /dev/sdd

# Bias the CPU energy/performance policy toward power saving
x86_energy_perf_policy -v powersave

# Stuff from powertop:
echo 'min_power' > '/sys/class/scsi_host/host0/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host1/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host2/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host3/link_power_management_policy';
# Disable the NMI watchdog's periodic timer interrupt
echo '0' > '/proc/sys/kernel/nmi_watchdog';
# I'm not willing to mess with data integrity...
# echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs';
# Put most pci devices into low power mode
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:13.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1a.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.1/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.3/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:01:00.0/power/control';
# realtek chip doesn't like being in low power, see
# https://lkml.org/lkml/2016/2/24/88
# echo 'auto' > '/sys/bus/pci/devices/0000:02:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:02.0/power/control';

The old server took the network link down to 100Mbps, but that just feels too pokey most days, so I left this one at full gigabit speed.

I’ve been very happy with this setup; it runs quiet, cool, fast, and cheap, while using less power than an old-fashioned night-light.  Jetway says the board will have a lifecycle through Q1 2019, so hopefully anyone reading this post well into the future can still pick one up if desired.

If you’re curious, here’s the dmesg, lspci, and cpuinfo output from the box.  Note that the CPU can even do virtualization!

26 thoughts on “Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box”

  1. I’ve got a similar setup (all SSD though) and I considered moving to that power supply, but with only one 12V rail, I was unsure how to hook up the 4 drives without crazy hackiness. How did you do it?

  2. I built something along these lines several years ago when you posted about your first low power server. I have always wondered (but never asked) how you handle the media drives – especially since you power them down at times. I have a pair of 2T disks, each with a system and a data partition. The system partitions are md mirrors, but I found that the pain associated with using md to mirror the large partitions was huge, so I used normal partitions and set up a script to sync the two data partitions each night with rsync. How do you handle the large data partitions? I can’t spin down those disks since the system partitions are on them, but I wonder if there is a better way to deal with the data. I built one of these for my brother a year or so after I built mine. He had a crash about a month ago and I could only salvage about half of his data. The world population will decrease by one if I ever lose my wife’s photos. Does XFS do anything to reduce the pain with md on large partitions?

      • The time to sync the array – over 12 hours as I recall and performance is down to a crawl the whole time. I originally let it go for a couple of hours on the first build attempt to do some testing. Console interactivity was poor and network access via Samba was worse. The hardware was a Jetway NF99FL-525 with 4G of ram, a pair of 2TB WD blacks, and Centos 5.

        Thanks.

      • Interesting; I first set up the large 2T RAID1 five years ago, and since then have removed & replaced defective drives. That was all done on RHEL6, and I didn’t suffer much during the resyncs. I might have used “assume-clean” on the initial setup, when the drives were known to be blank, but even the drive replacement later was not painful.

        • I always wonder how you
          – detect a faulty drive in RAID1 if there is no LED
          – find which one is the faulty drive
          Do you use hardware RAID from the BIOS, or software?

          (I mostly use single-HD computers and a 2-bay NAS.)
          I had one computer with RAID1 running at the BIOS level, and the computer died, not the drives. I simply pulled one drive and put it inside my desktop to continue working (Windows on those machines). It worked without a hiccup; after boot the drive was ready with all partitions. I only had to change the PC name and set up the network share to get the office running (and then I took my time fixing the server).

          • Well, that it’s possible to see it and find out which is which is clear.
            My concern is that while setting up a system, one should work out which disk has which name and put name stickers on them.

            Mistakes like replacing the good drive instead of the bad one could lead to data loss, especially in RAID 5.

            If I have no name sticker or an LED near the drive, I fear the chance of replacing the wrong drive is 50%.

            Wouldn’t that speak for a hardware RAID with LEDs, or using a NAS?

          • You can sort out what’s what. Here’s part of my /proc/mdstat:
            md121 : active raid1 sdc7[0] sdd7[1]
            16760832 blocks super 1.2 [2/2] [UU]

            So it’s made up of partitions on /dev/sdc and /dev/sdd. (Note that drive assignments can change across kernels, even…) But if I saw sdd dying in the logs, I could check its hardware serial number with smartctl:
            # smartctl -a /dev/sdd | grep -i serial
            Serial Number: S251NXAGC00102E

            and then look for the drive stamped with that serial number.

  3. [ 2.548610] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.551630] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.707745] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 3.176181] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

    Any idea why you don’t get 2 of the SATA links at 6 Gbps?

    • I guess because I have different drives on them? Yup, the WD20EARS is only a 3Gbps drive.
      [ 3.321842] ata3.00: ATA-8: WDC WD20EARS-00MVWB0, 51.0AB51, max UDMA/133
      [ 3.799210] ata4.00: ATA-9: WDC WD30EFRX-68EUZN0, 82.00A82, max UDMA/133
      But looks like I should have connected the system’s SSDs to the faster ports, and I didn’t. :)
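      (For reference, SATA line rates use 8b/10b encoding, i.e. 10 bits on the wire per payload byte, so dividing the line rate by 10 gives a rough MB/s ceiling; either link speed outruns a spinning disk:)

```shell
# SATA payload bandwidth: 8b/10b encoding means 10 line bits per
# data byte, so Gbps * 1000 / 10 approximates MB/s.
for gbps in 3 6; do
  echo "${gbps} Gbps link ~ $((gbps * 1000 / 10)) MB/s payload"
done
```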

    • Sure, but I wasn’t willing to take, e.g., the deferred writeback changes, and some of the power saving changes didn’t play well with a couple of USB devices, so I didn’t blindly accept everything.

  4. Pingback: Use Less Power on Your NAS – always tinkering

  5. Hi, I came across your post and I’d be interested in knowing the transfer speed over the 1Gbps link. I have a backup server based on an old AMD Sempron and I get almost the full 100MBps. That server uses around 70 watts, but it’s not always on. I also have a data server based on a Banana Pi M1, and I only get around 30MBps, which is not surprising for such an SBC. The data server only uses around 6-7 watts, together with a 2TB disk. Since this is an always-on server, I want as low power consumption as possible, but as high MBps as possible over 1Gbps. So your setup looks very interesting, and if the MBps are right I might give this a try. Thanks! :)

    • Depending on the load required, you might consider a PC Engines APU2c4. It uses about 6W of power and is plenty sufficient for a light server, including up to full gigabit throughput. I’m using one as a pfSense firewall with Squid and a few other services running. It runs at about 6-10% CPU load.

  6. How do you get power to the drives with this mobo? It looks like the mobo has a single DC 12V power-out connector. Is this what I should use? Can it power all 4 drives?

    Thanks

    • I believe mine has two 12V SATA power out connectors; IIRC I used splitters on those to power the 4 drives. I’m not sure I ever found specs on how many amps they can provide, but it works OK for my 2xSSD + 2xHDD setup. Since it provides 4 SATA data connectors, I would hope that it is sufficient to power 4 drives… works for me! I did make sure my external 12V power supply was sufficiently large (5A in my case).
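      (As a rough sanity check on a 5A brick: the per-drive draws below are assumptions for illustration, not measurements from my build:)

```shell
# Rough 12V power budget; drive/board draw figures are assumed.
supply_w=$((12 * 5))     # 60W available from a 12V / 5A brick
ssd_w=2                  # a SATA SSD: a couple of watts, roughly
hdd_w=8                  # a spinning 3.5" HDD, roughly (spin-up draws more)
board_w=10               # board + CPU, roughly the idle figure from the post
total_w=$((2 * ssd_w + 2 * hdd_w + board_w))
echo "need ~${total_w}W of ${supply_w}W available"
```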

  7. How did you set up your partitions? I have been having trouble. First I ran into an issue where I needed an EFI partition. I added that, but now the boot loader (grub) fails to install. It looks like grub does not support md devices and must be installed on the actual drive.

    Here are the partitions I tried last:
    12 GB – Linux Swap
    250 MB – EFI System
    1 GB – /boot (ext4) <– Tried putting this in RAID 1, but Grub fails to install
    50 GB – / (ext4)
    175.2GB – /home (ext4)

    Thanks.

    • I remember boot/grub being a bit tricky, but I’m not remembering how I set it up, to be honest. These are the boot partitions:
      /dev/md126 on /boot type xfs
      /dev/md122 on /boot/efi type vfat

      I also made both of those v1.0 md superblocks, which place the md metadata at the end of the partition, so a single component partition can be mounted or treated like a regular partition (though this probably has risks too – I don’t remember if it was required). I may have stopped md for the boot partition and installed grub2 on each of the constituent drives, but … I really don’t remember for sure.
      My fstab only has UUIDs in it (not devices), and my grub.cfg has:
      insmod part_gpt
      insmod part_gpt
      insmod diskfilter
      insmod mdraid1x
      insmod xfs
      set root='mduuid/99d36104205a9926a78f8b4350c2cb30'

      Not sure if that helps … happy to answer more questions about the current setup, sorry I don’t remember the details of how I got here, exactly.

      • Got it working! First I tried installing CentOS as prescribed, but it seemed super unreliable. In fact, I read several reviews of the board where people claimed it was unreliable with Linux. Then I tried Debian, which I am more familiar with. I could not figure out how to get RAID working on the boot partition with Debian. Then I found the OS setting in the BIOS. I had it set to Windows (7, I think). Setting it to Linux did the trick! I reinstalled CentOS, and now it’s working great.

        Thanks so much for your help Eric, awesome build man!

        • Interesting; glad you got it working, I don’t remember having to change the BIOS in this way, but I probably should have taken better notes.
          One other note, I also added
          acpi_enforce_resources=lax
          to boot options, though I don’t remember exactly what prompted that – it may have been a BIOS bug that I needed to work around, maybe related to lm_sensors. I wouldn’t add it by default, but if you get ACPI resource warnings, you might see if that helps.
