Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box

5 years ago, I wrote about the build of the server hosting sandeen.net, which ran using only 18W of power.  After 5 years, it was time for an upgrade, and I’m pleased to say that I put together a much more capable system which now runs at only 10W!  This system handles external email and webserving, as well as handling media serving, energy datalogging, and backup storage for household purposes.  It’s the server which dished up this blog for you today.

sandeen.net wattage, as measured by a Kill-a-Watt meter [amzn]

If you run a server 24/7 at home, that always-on power consumption can really add up.  If you have an old beast running at 250W, that’s about 2.2 MWh of energy per year, and it will cost you over $200/year in electricity at $0.10/kWh.  A 10W build uses only 4% of that!
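
That estimate is just watts times hours; here it is as a quick shell back-of-the-envelope, using the 250W figure and the $0.10/kWh rate from above:

watts=250
hours=$(( 24 * 365 ))                # 8760 hours in a year
kwh=$(( watts * hours / 1000 ))      # 2190 kWh, i.e. about 2.2 MWh
dollars=$(( kwh / 10 ))              # at $0.10/kWh, cost in dollars = kWh / 10
echo "${kwh} kWh/year, roughly \$${dollars}/year at \$0.10/kWh"

Here’s how I put it together: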

I chose this motherboard because it has 4 SATA ports; the pairs of drives are in MD RAID1 mirror configurations using XFS for all filesystems, and the large 3T media drives are spun down most of the time; the system does draw more than 10W when they’re spinning.  And I have to say, the Samsung drives seem really nice; they are warranted for 150 terabytes written, or 10 years, whichever comes first.
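
In case it helps anyone reproduce the storage layout, here’s a minimal sketch of building one of these mirrors with mdadm and XFS.  The device names, array name, and mount point are made up for illustration, not necessarily what this box uses:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mkfs.xfs /dev/md0
mount /dev/md0 /srv/media
# remember the array across reboots
mdadm --detail --scan >> /etc/mdadm.conf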

I took most of the suggestions from the powertop tool, and I run this script at startup to put most devices into power-saving mode; this saves about 2W from the boot-up state. When boot-up power consumption is 12W, 2W is a significant fraction!  ;)

#!/bin/bash

# Set spindown time, see 
# https://sandeen.net/wordpress/computers/spinning-down-a-wd20ears/
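# (-S 3 means 3 * 5 = 15 seconds of idle before the drive spins down)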
hdparm -S 3 /dev/sdc
hdparm -S 3 /dev/sdd

# Idle these disks right now.
hdparm -y /dev/sdc
hdparm -y /dev/sdd

x86_energy_perf_policy -v powersave
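# (the line above sets the CPU energy/performance bias toward power saving)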

# Stuff from powertop:
echo 'min_power' > '/sys/class/scsi_host/host0/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host1/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host2/link_power_management_policy';
echo 'min_power' > '/sys/class/scsi_host/host3/link_power_management_policy';
echo '0' > '/proc/sys/kernel/nmi_watchdog';
# I'm not willing to mess with data integrity...
# echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs';
# Put most pci devices into low power mode
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:13.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1a.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:14.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1c.1/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:1f.3/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:01:00.0/power/control';
# realtek chip doesn't like being in low power, see
# https://lkml.org/lkml/2016/2/24/88
# echo 'auto' > '/sys/bus/pci/devices/0000:02:00.0/power/control';
echo 'auto' > '/sys/bus/pci/devices/0000:00:02.0/power/control';
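
For reference, one simple way to run such a script at boot on CentOS 7 is the old rc.local hook (the script path below is just a placeholder); a small systemd oneshot unit would work just as well:

chmod +x /etc/rc.d/rc.local                                # not executable by default on CentOS 7
echo '/usr/local/sbin/powersave.sh' >> /etc/rc.d/rc.local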

The old server took the network down to 100 Mbps, but that just feels too pokey most days, so I left it at full gigabit speed.
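
If you did want to trade speed for a little power, the link can be capped from userspace with ethtool; a rough sketch, assuming the interface is named eth0:

ethtool -s eth0 advertise 0x008   # advertise only 100baseT/Full; the link renegotiates to 100 Mbps
ethtool -s eth0 advertise 0x02f   # back to the usual 10/100/1000 advertisement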

I’ve been very happy with this setup; it runs quiet, cool, fast, and cheap, while using less power than an old-fashioned night-light.  Jetway says the board will have a lifecycle through Q1 2019, so hopefully anyone reading this post well into the future can still pick one up if desired.

If you’re curious, here’s dmesg, lspci, and cpuinfo from the box.  Note that the cpu can even do virtualization!

36 thoughts on “Return of the low-power server: a 10W, 6.5 terabyte, commodity x86 Linux box”

  1. I’ve got a similar setup (all SSD though) and I considered moving to that power supply, but with only one 12V output I was unsure how to hook up the 4 drives without crazy hackiness. How did you do it?

  2. I built something along these lines several years ago when you posted about your first low power server. I have always wondered (but never asked) how you handled the media drives – especially since you power them down at times. I have a pair of 2T disks, each with a system and a data partition. The system partitions are md mirrors. But I found that the pain associated with using md to mirror the large partitions was huge, so I used normal partitions and set up a script to sync the two data partitions each night with rsync. How do you handle the large data partitions? I can’t spin down those disks since the system partitions are on them, but I wonder if there is a better way to deal with the data. I built one of these for my brother a year or so after I built mine. He had a crash about a month ago and I could only salvage about half of his data. The world population will decrease by one if I ever lose my wife’s photos. Does XFS do anything to reduce the pain with md on large partitions?

      • The time to sync the array – over 12 hours as I recall, and performance was down to a crawl the whole time. I originally let it go for a couple of hours on the first build attempt to do some testing. Console interactivity was poor and network access via Samba was worse. The hardware was a Jetway NF99FL-525 with 4G of RAM, a pair of 2TB WD Blacks, and CentOS 5.

        Thanks.

      • Interesting; I first set up the large 2T RAID 5 years ago, and since then have removed & replaced defective drives. That was all done on RHEL6, and I didn’t suffer much during the resyncs. I might have used “--assume-clean” on the initial setup, when the drives were known to be blank, but even the drive replacement later was not painful.

        • I always wonder how you
          – detect a faulty drive in RAID 1 if there is no LED
          – find which one is the faulty one
          Do you use hardware RAID from the BIOS, or software?

          (I mostly use single-drive computers and a 2-bay NAS.)
          I had one computer with RAID 1 running at the BIOS level, and the computer died, not the drives. I simply pulled one drive and put it inside my desktop to continue working (Windows on those machines). It worked without a hiccup; after boot it had the drive ready with all partitions. I only had to change the PC name and set up the network share to get the office running (and then I took my time to fix the server).

          • Well, that it’s possible to see it and find out which one it is, that much is clear.
            My concern is that while setting up a system, one should figure out which disk has which name and put name stickers on them.

            Mistakes like replacing the good drive instead of the bad one could lead to data loss. Especially in RAID 5.

            If I have no name sticker or an LED near the drive, I fear the chance of replacing the wrong drive is 50%.

            Wouldn’t that speak for a hardware RAID with LEDs, or using a NAS?

          • You can sort out what’s what. Here’s part of my /proc/mdstat:
            md121 : active raid1 sdc7[0] sdd7[1]
            16760832 blocks super 1.2 [2/2] [UU]

            So it’s comprised of partitions on /dev/sdc and /dev/sdd. (Note that drive assignments could change under different kernels, even…). But if I saw sdd dying in the logs, I could check for its hardware serial number with smartctl:
            # smartctl -a /dev/sdd | grep -i serial
            Serial Number: S251NXAGC00102E

            and then look for the drive stamped with that serial number.
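
            If a member did need replacing, the mdadm side is roughly this (using sdd7 as the failed member and sde as the new disk, purely as an example):
            mdadm /dev/md121 --fail /dev/sdd7
            mdadm /dev/md121 --remove /dev/sdd7
            # partition the replacement disk the same way, then:
            mdadm /dev/md121 --add /dev/sde7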

  3. [ 2.548610] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.551630] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 2.707745] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [ 3.176181] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

    Any idea why you don’t get 2 of the SATA links at 6 Gbps?

    • I guess because I have different drives on them? Yup, the WD20EARS is only a 3 Gbps drive.
      [ 3.321842] ata3.00: ATA-8: WDC WD20EARS-00MVWB0, 51.0AB51, max UDMA/133
      [ 3.799210] ata4.00: ATA-9: WDC WD30EFRX-68EUZN0, 82.00A82, max UDMA/133
      But looks like I should have connected the system’s SSDs to the faster ports, and I didn’t. :)

    • Sure, but I wasn’t willing to take e.g. the deferred writeback changes, and some of the power saving changes didn’t play well with a couple of USB devices, so I didn’t blindly accept everything.

  4. Pingback: Use Less Power on Your NAS – always tinkering

  5. Hi, I came across your post and I’d be interested in knowing the transfer speed over the 1 Gbps link. I have a backup server based on an old AMD Sempron and I get almost a full 100 MB/s. That server uses around 70 watts however, but it’s not always on. I also have a data server based on a Banana Pi M1, and I only get around 30 MB/s, which is not surprising with such an SBC. The data server only uses around 6-7 watts however, together with a 2TB disk. Since this is an always-on server I want as low a power consumption as possible, but with as high a transfer rate as possible over gigabit. So your setup looks very interesting here, and if the speeds are right I might give this a try. Thanks! :)

    • Depending on the load required, you might consider a PC Engines APU2c4. It uses about 6W of power and is plenty sufficient for a light server, including up to full gigabit throughput. I’m using one as a pfSense firewall with Squid and a few other services running. It runs at about 6-10% CPU load.

  6. How do you hook up power to the drives with this mobo? It looks like the mobo has a single DC 12V power-out connector. Is this what I should use? Can it power all 4 drives?

    Thanks

    • I believe mine has two 12V SATA power out connectors; IIRC I used splitters on those to power the 4 drives. I’m not sure I ever found specs on how many amps they can provide, but it works OK for my 2xSSD + 2xHDD setup. Since it provides 4 SATA data connectors, I would hope that it is sufficient to power 4 drives… works for me! I did make sure my external 12V power supply was sufficiently large (5A in my case).

  7. How did you set up your partitions? I have been having trouble. First I ran into an issue where I needed an EFI partition. Added that, but now the boot loader (grub) fails to install. It looks like grub does not support md devices and must be installed on the actual drive.

    Here are the partitions I tried last:
    12 GB – Linux Swap
    250 MB – EFI System
    1 GB – /boot (ext4) <– Tried putting this in RAID 1, but Grub fails to install
    50 GB – / (ext4)
    175.2GB – /home (ext4)

    Thanks.

    • I remember boot/grub being a bit tricky, but I’m not remembering how I set it up, to be honest. These are the boot partitions:
      /dev/md126 on /boot type xfs
      /dev/md122 on /boot/efi type vfat

      I also made both of those v1.0 md superblocks, which place the md metadata at the end of the partition, so a single component partition can be mounted or treated like a regular partition (though this probably has risks too – I don’t remember if it was required; there’s a creation sketch at the end of this comment). I may have stopped md for the boot partition and installed grub2 on each of the constituent drives, but … I really don’t remember for sure.
      My fstab only has UUIDs in it (not devices), and my grub.cfg has:
      insmod part_gpt
      insmod part_gpt
      insmod diskfilter
      insmod mdraid1x
      insmod xfs
      set root='mduuid/99d36104205a9926a78f8b4350c2cb30'

      Not sure if that helps … happy to answer more questions about the current setup, sorry I don’t remember the details of how I got here, exactly.
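
      For what it’s worth, a v1.0-superblock mirror can be created explicitly; a sketch with made-up partition names:
      mdadm --create /dev/md122 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda2 /dev/sdb2
      mkfs.vfat /dev/md122
      With the metadata at the end of the members, the EFI firmware can read either one as a plain FAT partition.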

      • Got it working! First I tried installing CentOS as prescribed, but it seemed super unreliable. In fact, I read several reviews of the board where people claimed it was unreliable with Linux. Then I tried Debian, which I am more familiar with. I could not figure out how to get RAID working on the boot partition with Debian. Then I found the OS setting in the BIOS. I had it set to Windows (7, I think). Setting it to Linux did the trick! I reinstalled CentOS, and now it’s working great.

        Thanks so much for your help Eric, awesome build man!

        • Interesting; glad you got it working, I don’t remember having to change the BIOS in this way, but I probably should have taken better notes.
          One other note, I also added
          acpi_enforce_resources=lax
          to boot options, though I don’t remember exactly what prompted that – it may have been a BIOS bug that I needed to work around, maybe related to lm_sensors. I wouldn’t add it by default, but if you get ACPI resource warnings, you might see if that helps.
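          On CentOS 7 that means appending the option to GRUB_CMDLINE_LINUX in /etc/default/grub and then regenerating the config; on an EFI install that’s roughly:
          grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg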

  8. I wish I could get a mini server like this for the house that uses this much power (or close) and can support 128GB RAM for running various VMs… oh, and likely 10+ cores. Oh well.

  9. I had built out this server and ran it for a few years with almost zero issues. It was EXACTLY what I needed. I could store all of my digital music hosted via SMB, I ran a git server on it, ssh-ed into it all the time to do some minor software development. It was great! Thanks so much for putting this site together.

    Since CentOS was killed off, I tried upgrading to Rocky Linux 8.5. Unfortunately, newer Linux kernels REALLY struggle with this hardware. This is “Bay Trail” hardware. You can Google for “Bay Trail c-state issues” to get more information. Supposedly these issues were fixed somewhere in the kernel 5+ range, but in my experience they are not. This board barely functions with Linux anymore. This is really sad to me since this solution fit my need EXACTLY.

    Anyway, thanks again for sharing this.

    • Thanks for sharing your experiences! Mine is still running, still on CentOS 7. Sorry to hear about the problems with newer kernels – I’ll look into that.

      I did another build this year because I wanted a NAS with even more storage, and this time I used a Jetway NF694 with quad pentiums. It seems to work fine, though with max memory of 8G, that’s not ideal for the 18T of storage it has. With only one onboard SATA connector, I added a dual PCIe SATA connector as well as a dual Mini-PCIe SATA connector, in order to add 4 drives in addition to the OS drive connected directly to the motherboard. (Two different SATA adapters got better throughput; a 4xSATA card on a PCIe x1 slot will saturate that slot, I think.)

        • Would you mind providing specifics on what hardware you used with the Jetway NF694? I tried an ASRock J5040 solution and it’s not working at all. Every board I get has network issues (seems like a terrible board).

        I just need to be able to support 4 sata drives. 8GB RAM should be fine for me.

        Thanks.

        • The only things not built into the board are the power supply, memory, and drives. lspci for built-in stuff looks like this:
          00:00.0 Host bridge: Intel Corporation Atom Processor Z36xxx/Z37xxx Series SoC Transaction Register (rev 0e)
          00:02.0 VGA compatible controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Graphics & Display (rev 0e)
          00:13.0 SATA controller: Intel Corporation Atom Processor E3800 Series SATA AHCI Controller (rev 0e)
          00:14.0 USB controller: Intel Corporation Atom Processor Z36xxx/Z37xxx, Celeron N2000 Series USB xHCI (rev 0e)
          00:1a.0 Encryption controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Trusted Execution Engine (rev 0e)
          00:1c.0 PCI bridge: Intel Corporation Atom Processor E3800 Series PCI Express Root Port 1 (rev 0e)
          00:1c.1 PCI bridge: Intel Corporation Atom Processor E3800 Series PCI Express Root Port 2 (rev 0e)
          00:1f.0 ISA bridge: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Power Control Unit (rev 0e)
          00:1f.3 SMBus: Intel Corporation Atom Processor E3800 Series SMBus Controller (rev 0e)
          01:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
          02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)

          • I think you mentioned that you “…added a dual PCIe SATA connector as well as a dual Mini-PCIe SATA connector…”. Do you know which PCIe and Mini-PCIe devices you used?

            Thanks.

          • Oh right, I’m sorry. Forgot that I had added the new build info in a comment. The above lspci is from the old one. The new ones are both IO Crest JMicron cards: SI-PEX40148 (PCIe) and SI-MPE40150 (mini PCIe). They show up as “JMicron Technology Corp. JMB58x AHCI SATA controller” in lspci.

      • Just an update here for anyone who is interested. I did manage to find a mainboard that works for my needs. I am now using an mITX-4125A (https://www.gigabyte.com/Enterprise/Embedded-Computing/mITX-4125A). I have two Samsung 870 EVOs in RAID 1 as my boot drives and two WD Red 4TB drives in RAID 1 as my data drives. I spin down the WD drives after 5 minutes of no use. When the WDs are spun down I am using around 14 watts. This is more than the 10 that the original Jetway used, but is close enough for me.

        This board supports up to 5 SATA drives. It also has an x4 PCIe slot.

          • I doubt that this particular board is available any longer, but Jetway still makes similar embedded low-power products. You might take a look at https://www.mini-itx.com or just see who Jetway lists as resellers.

            The most recent board of this type I have used is the JNF694-4200.
