Mergerfs – another good option to pool your SnapRAID disks

I’m always on the hunt for better options to pool my SnapRAID array at home, and I’ve found a great new companion in MergerFS. Mergerfs is another disk pooling solution (union filesystem). Here’s what the author says about it:

mergerfs is similar to mhddfs, unionfs, and aufs. Like mhddfs in that it too uses FUSE. Like aufs in that it provides multiple policies for how to handle behavior. Why create mergerfs when those exist? mhddfs has not been updated in some time, nor is it very flexible, and there are security issues with running it as root. aufs is more flexible than mhddfs but contains some hard-to-debug inconsistencies in behavior on account of it being a kernel driver. Neither supports file attributes (chattr).

Luckily, mergerfs is super easy to install.

Check the releases page to make sure you are downloading the latest version.
If you would like to run the latest version, you can always compile from the git repository like this.
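As a sketch (the repository is the project’s public GitHub; check its README for the current dependency list, which may differ by release), building from source looks something like this:

```shell
# Sketch: build mergerfs from source; package names may vary by distro/release
sudo apt-get install -y git g++ make libfuse-dev pkg-config
git clone https://github.com/trapexit/mergerfs.git
cd mergerfs
make
sudo make install
```

Alternatively, grab a pre-built .deb from the releases page and install it with dpkg -i.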

That’s it. Mergerfs is ready to pool your data. One nice thing it supports right out of the gate is globbing for mountpoints, which makes it very easy to add to your /etc/fstab.

The nice thing is that mergerfs provides create modes like AUFS. I like the default mode of epmfs, but it’s always nice to have options. I’m also using the minfreespace option so that I don’t overfill my data disks. So, in a nutshell, this has an even easier setup than mhddfs and the benefit of create modes like AUFS. If it were kernel-based, this would be the best combination of all pooling solutions. So far, it seems like it may be the current winner.

Enough talk, let’s see how this works… Let’s say I have my data disks mounted in /mnt/data/. To add a mergerfs mount line to my fstab, it would be as simple as this.

This pools all mounts in /mnt/data and presents them at /storage. I use the eplfs (existing path, least free space) create mode, and with the minfreespace option my disks won’t fill past 20GB remaining. I’m also using the fsname option so that my df -h output is short and usable (otherwise, all of the disks show up there and horrible wrapping occurs, making the view challenging to use).
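For reference, a minimal fstab entry using the options discussed here (the pool name and 20G threshold match my setup; adjust to taste) might look like:

```shell
# /etc/fstab — pool every mount under /mnt/data at /storage
/mnt/data/* /storage fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=eplfs,moveonenospc=true,minfreespace=20G,fsname=mergerfsPool 0 0
```

After adding the line, `sudo mount /storage` brings the pool up without a reboot.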

I ran a couple of tests tonight on my pooled SnapRAID array, and it appears that mergerfs is faster than mhddfs and just as fast as AUFS. Here’s the outcome of writing a 20GB file over Samba to the server and then reading a different 20GB file back. As the graphs show, there is a little “breathing” on the transfers, but reading and writing have no problem saturating a gigabit connection over Samba (an impressive feat).

Note: There were times that this transfer exceeded 120MB/s, but it averaged around 105MB/s. Very impressive for a FUSE based pooling solution.
Note: the 140MB/s shown here is faster than gigabit speeds, but it’s due to caching on my Macbook. The transfer averaged around 118MB/s.




40 Responses

  1. recklessplex says:

    I’m curious: is there a simple step to take for migrating from AUFS to MergerFS with Plex? I’m thinking of upgrading to 16.04, and I remember in your tutorial you instructed us to lock down the kernel. Do you see any complications or things I need to undo?

    • Zack says:

      Those are a couple of good questions. The nice thing with Mergerfs is that you don’t need a custom kernel for NFS exports, etc. The standard 16.04 kernel works great.

      Also, Mergerfs will drop right in place where you had your AUFS pool. Just unmount your pool, set up the new /etc/fstab line, and you are ready to go.
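      As a rough sketch (assuming the pool is mounted at /storage), the swap looks like:

      ```shell
      # Sketch: replace an AUFS pool with mergerfs in place
      sudo umount /storage        # drop the old AUFS mount
      sudoedit /etc/fstab         # swap the aufs line for a fuse.mergerfs line
      sudo mount /storage         # remount using the new entry
      ```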

      • recklessplex says:

        Just shut it down and ran it. Seems to be seamless. Thanks Zack.

        For Windows I always found issues with upgrades and generally did a clean install. For Linux distros in general, do you find them easy to just upgrade, or do you start fresh? For the custom kernel, do I unlock it, or should the upgrade undo everything you had me do when I installed AUFS?

        When you upgrade or re-install, would the UUIDs change, or is that a fixed value once a hard drive is partitioned?

        • Zack says:

          Great! I’m glad to hear that it worked for you.

          There is normally no reason to do a fresh install with Linux, but I typically do one at every Long Term Support (LTS) release. This allows me to re-evaluate what packages I’m using and to start with a perfectly clean system. Using Docker has made starting from scratch super easy.

          Things like UUIDs do not change and are not OS/install dependent. Once you have set up the filesystem, that UUID will always correspond to the same hard drive.
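          To illustrate (the device name and UUID below are made-up examples), blkid prints the UUID stored in the filesystem itself, which you can then use in /etc/fstab:

          ```shell
          # Print the UUID recorded in the filesystem (survives OS reinstalls)
          sudo blkid /dev/sdb1
          # Example /etc/fstab entry mounting by UUID (UUID is fictional):
          # UUID=2f1d57a4-0000-0000-0000-000000000000 /mnt/data/disk1 ext4 defaults 0 0
          ```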

          • recklessplex says:

            I had our head of hardware at our company explain Docker to me. Seems very neat but a bit overboard for my needs at the moment.

            I ran into an issue in the upgrade. I went to 16.04 but the kernel remained as 4.0.4 AUFS. I did a few things and I’d be curious to know which one you think did the trick.

            1) Unheld the packages you have on “hold” in your guide and installed the 4.4.16 kernel from Ubuntu

            2) Changed GRUB from 4.0.4 AUFS to default “0”

            Was only the second really necessary?

  2. Xsabre says:

    I am using your articles as a template for my new build. I just set up Mergerfs but am having problems sharing via NFS exports. If I export using the standard method, I have no problem. Using the mergerfs mount as an export, I get the following error on the client side.

    “mount.nfs: access denied by server”

    My /etc/export setup

    My /etc/fstab

    #NFS Shares

    How is yours setup?

    • Zack says:

      It appears you are doing this backwards. On your fileserver, where you have the Mergerfs pool mounted at /storage, your /etc/exports should look like this (you will want to run exportfs -a or service nfs-kernel-server restart afterwards). For this example, I will assume a placeholder IP address for your fileserver.

      On the end trying to mount the NFS share, you will want to create a mountpoint and mount the NFS share there.

      Then, try to mount the NFS share.

      This should mount the NFS share at /mnt/storage. If that works, you need to umount /mnt/storage and add a line to /etc/fstab to automount the NFS at boot. I hope that helps.
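      Put together, a sketch of both ends (the IP addresses, network range, and export options here are placeholders, not my actual values):

      ```shell
      # On the fileserver — export the mergerfs pool, then reload exports
      # /etc/exports:
      #   /storage 192.168.1.0/24(rw,async,no_subtree_check)
      sudo exportfs -a

      # On the client — make a mountpoint and mount the share
      sudo mkdir -p /mnt/storage
      sudo mount -t nfs 192.168.1.100:/storage /mnt/storage

      # If that works, automount at boot via /etc/fstab:
      #   192.168.1.100:/storage /mnt/storage nfs defaults 0 0
      ```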

  3. Xsabre says:

    Sorry for the late reply was dealing with a hurricane @ work and home. Everything turned out for the best.

    I had to add the fsid option to each export. So, to clarify:

    My /etc/export

    This allowed me to mount on the client side without issue. I am testing the build in a VM so I can hammer out all of the details beforehand. Thanks
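    For anyone else hitting the same error, an export line with an explicit fsid looks something like this (the network range and fsid value are examples):

    ```shell
    # /etc/exports — each export gets its own fsid
    /storage 192.168.1.0/24(rw,async,no_subtree_check,fsid=1)
    ```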

    BTW, is there any way in Mergerfs to selectively mount? i.e. /media/{HD1,HD2,HD3}… say I just want to mount only HD1 & HD2 on mergerfs. What do I place in the fstab? Thanks again.

    • Zack says:

      I’m glad to hear that everything worked out okay with Matthew. Yes, you need to use fsids for NFS; I just forgot to include them when I typed the response on my phone.

      To answer your other question, yes, you can selectively mount via MergerFS. You can either create a different mountpoint for your pooled MergerFS disks in a different location from your other disks, or you can individually define the disks in your /etc/fstab mount line like this.
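      For example (mergerfs accepts a colon-separated list of branches as the source; the options shown are illustrative), pooling only HD1 and HD2 could look like:

      ```shell
      # /etc/fstab — pool just two of the three disks
      /media/HD1:/media/HD2 /storage fuse.mergerfs defaults,allow_other,category.create=eplfs,minfreespace=20G 0 0
      ```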

  4. bjay1404 says:

    So does that mean this sits on top of my existing filesystem? All of my drives are ext4, and there’s no way I would reformat them. Also, does that mean the file structure has to be the same across the disks?
    so /media/disk1/protected/hdmovies and /media/disk2/protected/hdmovies

    • Zack says:

      Yes, it does sit on top of your disks and pools them all together. It combines paths that are the same on multiple disks (it’s okay if they don’t all have the same directory structure) and presents them as one big pool. It also has multiple write policies, so you can determine where data will go when it is written to disk. The eplfs policy I use in the tutorial writes to the disk in the pool with the least free space (filling a whole disk before moving to the next one); if the space isn’t available on that disk, or the path doesn’t exist in the pool yet, it will create the paths and move on to a new disk. I’d encourage you to try it out. It won’t hurt the data on your underlying disks, and there is no need for a re-format 🙂

  5. Joe says:

    I just noticed above you mention “I’m … using the minfreespace option with epmfs,” but the command line below that shows create=eplfs, so I am a little confused. Is that a typo, or is that accurate behavior for MergerFS?

    • Zack says:

      Not a typo. When I say that I don’t want to overfill a disk, I mean fill it up completely. I use the minfreespace option to ensure that my disks are filled up to exactly the level that I desire. I use eplfs so that all of my files are more likely to reside on a couple of disks than to be sprinkled across many. This makes disk management easier for me, as I completely fill up one disk before moving onto the next. Finally, for all of this to work properly, you need the moveonenospc option set to true; that way, if mergerfs starts a write to the disk with the least free space that is still under the quota, but the disk doesn’t have enough room for the write to complete successfully, mergerfs will complete the write on the next available disk. I hope that helps

  6. phuriousgeorge says:

    Going to try this method this time around to attempt better performance, thanks!

    Noted a needed update in the instructions: wget is missing on the line containing:

  7. espied says:

    I’m just trying to get going with this; I wonder if you could offer some help? I’m running Ubuntu Server in a VM, and I have two drives mounted under /storage. I’m running into permissions issues, though, and I can only add files as root.

    What’s the procedure for setting up /storage so that I can access it without root? I’ve tried chown/chmod but I can’t get it working correctly.

    • espied says:

      fixed it. was being dim :~(

      • Zack says:

        What did you do to correct the issue? It may help others in the future.

        • espied says:

          I was setting chmod with permissions suitable for a file not a folder, i.e. 644 not 755
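          In other words (the user and group names here are placeholders), something like:

          ```shell
          # Directories need the execute bit to be traversed: 755, not 644
          sudo chown -R youruser:youruser /storage
          sudo chmod 755 /storage
          ```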

          • espied says:

            I’m having some other trouble now though.

            my storage should be two small drives coming to 28GB; have I mounted things incorrectly?

            my fstab line is:

  8. Zack says:

    I’m suspecting you didn’t actually mount the two 28GB disks (they need to be mounted somewhere before mergerfs can pool them).

    Could you provide the output of these two commands?

    • espied says:

      Your suspicions were correct. I had mounted them originally but forgot to add them to fstab, so they weren’t mounted on reboot. It all seems to be working correctly now. I think I’ll move everything over from the VM onto my actual server. How does MergerFS deal with drives already containing data? I was using a Greyhole storage pool, so I have drives that are pretty full.

  9. oxzhor says:

    Hi Zack, did you also mount the 5th parity disk (SnapRAID), or only the 4 data disks?

  10. Dulanic says:

    Does minfreespace seem to have issues? I have mine also set to 20GB, but it has maxed the drive out, and my system does not like this. The /storage dir has disappeared a few times since this happened, and I need to reboot to correct it, even temporarily.

    Or is something else happening?

    Disk usage: Size 1.82 TB, Free 0 bytes

    Within minutes after rebooting, I lose my mergerfs mount /storage.


    • Dulanic says:

      Some screen shots of what happens since it just happened again after about 5 minutes.

    • Zack says:

      I’d need to see the output of df -h. Also, does the path you are trying to write to exist on any or all of the other drives? Minfreespace does work, but it runs the check prior to writing. You can see a few of my disks as an example (note that these have stopped at 20GB available, and some have less than 20GB available).

      I would suggest manually moving a few large files off the disk that is completely full to a different disk. Also, make sure to update your mergerfs version to the latest. There have been two releases since 2.22.0 🙂

      • Dulanic says:

        Thanks, I did update my version; I hadn’t updated since I originally installed 6+ months ago. I also manually moved around 50+ gigs, and again today it looks like it is below 20GB, at 14GB. I don’t get it.

  11. recklessplex says:

    Hello Zack, always a big thanks for your work. I ran into an issue and wanted your opinion. I have Ubuntu 18.04 (Bionic Beaver), and I can’t seem to find a version that will work. Whenever I try to pick one, it reverts to some version 2.15….dirty bionic. I really don’t want to be stuck on an old version. Do you think mergerfs will send out an update eventually?

    • Zack says:

      There has been a pending issue for an 18.04 build for a while now. For the time being, I’d just build from source. You can always overwrite the .deb later when a new version becomes available.

  12. BigBopper says:

    Just a heads-up that trapexit has changed the way policies work as of version 2.25.0. Path-preserving policies will no longer fall back to non-path-preserving policies.
    So if you’re using the eplfs policy and you run out of space on the devices that have the relative path, adding a file will fail.
    I’m not sure exactly why the change was made, other than his note: “It didn’t work as users expected. You’re using the path preservation policy.” on the issue

    • Zack says:

      Thank you! This is a significant change and will require me to update this writeup. Based on the changes, I believe most users will no longer want to run a path-preserving policy, to avoid getting the out-of-space error.

  13. birdjesus says:

    Hi Zack, are you still using the same mergerfs options?:
    defaults,allow_other,direct_io,use_ino,category.create=eplfs,moveonenospc=true,minfreespace=20G,fsname=mergerfsPool 0 0
