Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box

5 years ago, I wrote about the build of the server hosting this site, which ran using only 18W of power.  After 5 years, it was time for an upgrade, and I’m pleased to say that I put together a much more capable system which now runs at only 10W!  This system handles external email and webserving, as well as media serving, energy datalogging, and backup storage for household purposes.  It’s the server which dished up this blog for you today.  (Wattage as measured by a Kill-a-Watt meter.)

If you run a server 24/7 at home, that always-on power consumption can really add up.  If you have an old beast running at 250W, that’s about 2.2 MWh of power per year, which will cost you over $200/year in electricity at $0.10/kWh.  A 10W build uses only 4% of that!  Here’s how I put it together:
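The arithmetic is easy to check; here’s a quick shell sketch using the example figures above (wattage and electricity rate are just the numbers from this paragraph):

```shell
#!/bin/sh
# Annual energy use and cost for an always-on box (integer math).
WATTS=250        # draw of the old beast
RATE_CENTS=10    # electricity price, in cents per kWh

KWH_YEAR=$(( WATTS * 24 * 365 / 1000 ))          # 2190 kWh/year
COST_DOLLARS=$(( KWH_YEAR * RATE_CENTS / 100 ))  # $219/year
echo "${KWH_YEAR} kWh/year, roughly \$${COST_DOLLARS}/year"
```

Plugging 10W into the same formula gives about 88 kWh and under $9 a year.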

I chose this motherboard because it has 4 SATA ports.  The drives are paired up in MD RAID1 mirror configurations, with XFS on all filesystems, and the large 3T media drives are spun down most of the time; the system does draw more than 10W while they’re spinning.  And I have to say, the Samsung drives seem really nice; they are warranted for 150 terabytes written, or 10 years, whichever comes first.
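For anyone building something similar, setting up such a mirror is only a couple of commands.  This is a sketch, not my exact setup: the device names, array name, and mount point are examples, and these commands destroy whatever is on the disks, so only run them on known-blank drives:

```shell
#!/bin/sh
# Sketch: build a RAID1 mirror from two blank disks and put XFS on it.
# Device/array/mount names are examples only -- substitute your own.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.xfs /dev/md0
mount /dev/md0 /srv/media
```

If both drives are known to be blank, adding --assume-clean to the mdadm line skips the long initial resync.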

I took most of the suggestions from the powertop tool, and I run this script at startup to put most devices into power-saving mode; this saves about 2W from the boot-up state. When boot-up power consumption is 12W, 2W is a significant fraction!  ;)


#!/bin/sh
# Set spindown time (-S 3 = 15 seconds; see hdparm(8)).
hdparm -S 3 /dev/sdc
hdparm -S 3 /dev/sdd

# Idle these disks right now.
hdparm -y /dev/sdc
hdparm -y /dev/sdd

x86_energy_perf_policy -v powersave

# Stuff from powertop:
echo 'min_power' > '/sys/class/scsi_host/host0/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host1/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host2/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host3/link_power_management_policy';
echo '0' > '/proc/sys/kernel/nmi_watchdog';
# I'm not willing to mess with data integrity...
# echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs';
# Put most pci devices into low power mode
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:13.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1a.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.1/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.3/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:01:00.0/power/control';
# The Realtek NIC doesn't like being in low power mode, so leave it alone:
# echo 'auto' > '/sys/bus/pci/devices/0000:02:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:02.0/power/control';
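One way to run a script like this at startup, assuming a systemd-based distro (the unit name and script path below are made-up examples), is a oneshot service:

```ini
# /etc/systemd/system/powersave.service  (name and path are examples)
[Unit]
Description=Apply power-saving tweaks at boot

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/powersave.sh

[Install]
WantedBy=multi-user.target
```

Enable it with "systemctl enable powersave.service"; an @reboot cron entry or rc.local works just as well on older systems.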

The old server forced the network down to 100 Mbps, but that just feels too pokey most days, so I left this one at full gigabit speed.
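For reference, dropping a link to 100 Mbps the way the old server did is a single ethtool command; the interface name here is an example:

```shell
# Force the NIC to 100 Mbps full duplex (interface name is an example).
ethtool -s eth0 speed 100 duplex full autoneg off
# ...and back to autonegotiated gigabit:
ethtool -s eth0 autoneg on
```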

I’ve been very happy with this setup; it runs quiet, cool, fast, and cheap, while using less power than an old-fashioned night-light.  Jetway says the board will have a lifecycle through Q1 2019, so hopefully anyone reading this post well into the future can still pick one up if desired.

If you’re curious, here’s dmesg, lspci, and cpuinfo from the box.  Note that the CPU can even do virtualization!

19 thoughts on “Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box”

  1. I’ve got a similar setup (all SSD though) and I considered moving to that power supply, but with only one 12V rail I was unsure how to hook up the 4 drives without crazy hackiness. How did you do it?

  2. I built something along these lines several years ago when you posted about your first low power server. I have always wondered (but never asked) how you handled the media drives – especially since you power them down at times. I have a pair of 2T disks, each with a system and a data partition. The system partitions are md mirrors. But I found that the pain associated with using md to mirror the large partitions was huge, so I used normal partitions and set up a script to sync the two data partitions each night with rsync. How do you handle the large data partitions? I can’t spin down those disks since the system partitions are on them, but I wonder if there is a better way to deal with the data. I built one of these for my brother a year or so after I built mine. He had a crash about a month ago and I could only salvage about half of his data. The world population will decrease by one if I ever lose my wife’s photos. Does XFS do anything to reduce the pain with md on large partitions?

      • The time to sync the array – over 12 hours, as I recall – and performance is down to a crawl the whole time. I originally let it go for a couple of hours on the first build attempt to do some testing. Console interactivity was poor and network access via Samba was worse. The hardware was a Jetway NF99FL-525 with 4GB of RAM, a pair of 2TB WD Blacks, and CentOS 5.


      • Interesting; I first set up the large 2T RAID five years ago, and since then have removed & replaced defective drives. That was all done on RHEL6, and I didn’t suffer much during the resyncs. I might have used --assume-clean on the initial setup, when the drives were known to be blank, but even the drive replacement later was not painful.

        • I always wonder how you
          – detect a faulty drive in a RAID 1 if there is no LED
          – figure out which drive is the faulty one
          Do you use hardware RAID from the BIOS, or software RAID?

          (I mostly use single-drive computers and a 2-bay NAS.)
          I had one computer with RAID 1 running at the BIOS level, and the computer died, not the drives. I simply pulled one drive, put it inside my desktop, and continued working (Windows on those machines). It worked without a hiccup: after booting, the drive was ready with all partitions. I only had to change the PC name and set up the network share to get the office running (and then I took my time fixing the server).

          • Well, that it’s possible to see it and find out which drive is which is clear.
            My concern is that while setting up a system, one should work out which physical disk got which name and put name stickers on them.

            Mistakes like replacing the good drive instead of the bad one could lead to data loss – especially in RAID 5.

            If I have no name sticker or an LED next to the drive, I fear the chance of replacing the wrong drive is 50%.

            Wouldn’t that argue for a hardware RAID with LEDs, or using a NAS?

          • You can sort out what’s what. Here’s part of my /proc/mdstat:
            md121 : active raid1 sdc7[0] sdd7[1]
            16760832 blocks super 1.2 [2/2] [UU]

            So it consists of partitions on /dev/sdc and /dev/sdd. (Note that drive assignments could change under different kernels, even…). But if I saw sdd dying in the logs, I could check for its hardware serial number with smartctl:
            # smartctl -a /dev/sdd | grep -i serial
            Serial Number: S251NXAGC00102E

            and then look for the drive stamped with that serial number.

  3. [ 2.548610] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.551630] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.707745] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 3.176181] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

    Any idea why you don’t get 2 of the SATA links at 6 Gbps?

    • I guess because I have different drives on them? Yup, the WD20EARS is only a 3 Gbps drive.
      [ 3.321842] ata3.00: ATA-8: WDC WD20EARS-00MVWB0, 51.0AB51, max UDMA/133
      [ 3.799210] ata4.00: ATA-9: WDC WD30EFRX-68EUZN0, 82.00A82, max UDMA/133
      But looks like I should have connected the system’s SSDs to the faster ports, and I didn’t. :)

    • Sure, but I wasn’t willing to take e.g. the deferred writeback changes, and some of the power-saving changes didn’t play well with a couple of USB devices, so I didn’t blindly accept everything.

  4. Pingback: Use Less Power on Your NAS – always tinkering

  5. Hi, I came across your post and I’d be interested in knowing the transfer speed over the 1 Gbps link. I have a backup server based on an old AMD Sempron and I get almost the full 100 MB/s. That server uses around 70 watts, however, but it’s not always on. I also have a data server based on a Banana Pi M1, and I only get around 30 MB/s, which is not surprising for such an SBC. The data server only uses around 6-7 watts, however, together with a 2TB disk. Since this is an always-on server, I want power consumption as low as possible, but throughput as high as possible over the gigabit link. So your setup looks very interesting here, and if the numbers are right I might give this a try. Thanks! :)
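For what it’s worth, the ceiling on a gigabit link is easy to work out; this is pure arithmetic, not a measurement of this particular box:

```shell
#!/bin/sh
# Gigabit Ethernet throughput, back of the envelope.
RAW=$(( 1000 / 8 ))   # 125 MB/s raw line rate
# With a standard 1500-byte MTU, TCP payload is roughly 1448 of every
# 1538 bytes on the wire (headers, preamble, inter-frame gap), so the
# practical ceiling is about 94% of raw, i.e. ~117 MB/s.
echo "raw: ${RAW} MB/s, practical TCP ceiling: ~117 MB/s"
```

So a box that sustains 100+ MB/s over gigabit is already close to the practical maximum.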
