Setting up SnapRAID on Ubuntu to Create a Flexible Home Media Fileserver

Home Media Fileserver in Norco 4224 Case

I use SnapRAID to create a super flexible, reliable bulk media server. I have used SnapRAID for years across numerous versions of Ubuntu and a plethora of hardware. SnapRAID has been so reliable that I have updated hardware four times since I originally set it up, migrated through many versions of SnapRAID, added many data disks, added parity levels, and replaced disks, all without issue. All the while, it’s been super flexible and an awesome way to manage my bulk media. I currently have a ridiculously over-the-top server that you can read more about here. On it, I use three parity disks and 21 data disks.

The first thing I do after any new install is update the system, and install my base packages.

apt-get update && apt-get dist-upgrade -y && reboot

After the reboot, let’s install the packages we’ll need to build SnapRAID.

sudo -i
apt-get install gcc git make -y

Finally, let’s download, build, and install it. I’m using 12.3 here; grab whichever release is current.

wget https://github.com/amadvance/snapraid/releases/download/v12.3/snapraid-12.3.tar.gz
tar xzvf snapraid-12.3.tar.gz
cd snapraid-12.3/
./configure
make
make check
make install
cp snapraid.conf.example /etc/snapraid.conf
cd ..

Next, let’s clean up.

rm -rf snapraid*

Next, I’m going to partition the disks, so I need to grab a couple packages.

apt-get install parted gdisk

Let’s partition one, and copy the structure to the other disks.

parted -a optimal /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 1 -1
(parted) align-check
alignment type(min/opt)  [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) quit
sgdisk --backup=table /dev/sdb
sgdisk --load-backup=table /dev/sdc
sgdisk --load-backup=table /dev/sdd
sgdisk --load-backup=table /dev/sde
sgdisk --load-backup=table /dev/sdf
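
One caveat: restoring a GPT backup clones the disk and partition GUIDs as well, so I’d also randomize those on each copy (a sketch; adjust the device names to match your disks):

```
sgdisk --randomize-guids /dev/sdc
sgdisk --randomize-guids /dev/sdd
sgdisk --randomize-guids /dev/sde
sgdisk --randomize-guids /dev/sdf
```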

Now we need somewhere to mount the disks. I mount them via /etc/fstab, identified by device type and serial number, as seen below. This makes a disk much easier to identify in the event of a failure.

Set up a filesystem on each data disk. (Note: I’m reserving 2% of the disk’s space so that the parity overhead can fit on the parity disk.) You can set the reserved space to 0% if your parity disk(s) are all larger than your data disks (i.e. you have 6TB parity disks and 5TB data disks).
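
To put that 2% in perspective, here’s the quick reserve math for a hypothetical 4TB data disk (the size is just an example):

```shell
# With -m 2, ext4 reserves 2% of the filesystem for root. On a data
# disk this also keeps it slightly smaller than an equal-sized parity
# disk, leaving room for SnapRAID's parity overhead.
disk_bytes=$((4 * 1000 * 1000 * 1000 * 1000))    # 4 TB, decimal
reserved_bytes=$((disk_bytes * 2 / 100))         # the 2% reserve
echo "reserved: $((reserved_bytes / 1000 / 1000 / 1000)) GB"
```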

mkfs.ext4 -m 2 -T largefile /dev/sdb1
mkfs.ext4 -m 2 -T largefile /dev/sdc1
mkfs.ext4 -m 2 -T largefile /dev/sdd1
mkfs.ext4 -m 2 -T largefile /dev/sde1

Put a filesystem on the parity disk (here I’m reserving 0%, letting it use the whole disk for parity).

mkfs.ext4 -m 0 -T largefile /dev/sdf1

Get the device type and serial numbers like this, then add them to your /etc/fstab.

ls -la /dev/disk/by-id/ | grep part1  | cut -d " " -f 11-20

It should give you output like this.

ata-HGST_HDN724040ALE640_PK2334PBHDYY0R-part1 -> ../../sdb1
ata-HGST_HDS5C4040ALE630_PL2331LAG90YYJ-part1 -> ../../sdc1
ata-HGST_HUS726060ALA640_AR31001EG1YE8C-part1 -> ../../sde1
ata-Hitachi_HDS5C3030ALA630_MJ0351YNYYYK9A-part1 -> ../../sdf1
ata-Hitachi_HDS5C3030ALA630_MJ1313YNGYYYJC-part1 -> ../../sdg1

Let’s create some directories to mount our new disks.

mkdir -p /disks/data/disk{1..4}
mkdir -p /disks/parity/1-parity

Use the output above to add the disks to /etc/fstab.

nano /etc/fstab

It should look something like this.

# SnapRAID Disks
/dev/disk/by-id/ata-HGST_HDN724040ALE640_PK2334PBHDYY0R-part1 /disks/data/disk1 ext4 defaults 0 2
/dev/disk/by-id/ata-HGST_HDS5C4040ALE630_PL2331LAG90YYJ-part1 /disks/data/disk2 ext4 defaults 0 2
/dev/disk/by-id/ata-HGST_HUS726060ALA640_AR31001EG1YE8C-part1 /disks/data/disk3 ext4 defaults 0 2
/dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ0351YNYYYK9A-part1 /disks/data/disk4 ext4 defaults 0 2

# Parity Disks
/dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1313YNGYYYJC-part1 /disks/parity/1-parity ext4 defaults 0 2

As you can see, each entry shows the connection type (SATA in this case), the disk’s manufacturer, model number, and serial number, and the partition we are using on the disk. This makes identifying disks in the event of a failure super easy.
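
Breaking the first entry down:

```
ata-HGST_HDN724040ALE640_PK2334PBHDYY0R-part1
^^^                                            bus type (ata = SATA)
    ^^^^                                       manufacturer
         ^^^^^^^^^^^^^^^                       model number
                         ^^^^^^^^^^^^^^        serial number
                                        ^^^^^  partition
```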

Mount the disks after you add them to /etc/fstab

mount -a

Next, you’ll want to configure SnapRAID.

nano /etc/snapraid.conf

This is how I configured mine.

parity /disks/parity/1-parity/snapraid.parity

content /var/snapraid/content
content /disks/data/disk1/content
content /disks/data/disk2/content
content /disks/data/disk3/content
content /disks/data/disk4/content

disk d1 /disks/data/disk1/
disk d2 /disks/data/disk2/
disk d3 /disks/data/disk3/
disk d4 /disks/data/disk4/

exclude *.bak
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/
exclude .AppleDouble
exclude ._AppleDouble
exclude .DS_Store
exclude .Thumbs.db
exclude .fseventsd
exclude .Spotlight-V100
exclude .TemporaryItems
exclude .Trashes
exclude .AppleDB

block_size 256

Next, we need to create the path that we mentioned above for our local content file.

mkdir -p /var/snapraid/

Once that’s complete, you should sync your array.

snapraid sync
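
You’ll want to repeat that sync regularly as files change. I run mine from cron; here’s a minimal sketch (the schedule and log path are just examples, and many people use a wrapper script with email reporting instead):

```
# /etc/cron.d/snapraid (sketch): sync nightly, scrub 5% of the array weekly
0 3 * * * root /usr/local/bin/snapraid sync >> /var/log/snapraid.log 2>&1
0 5 * * 0 root /usr/local/bin/snapraid scrub -p 5 >> /var/log/snapraid.log 2>&1
```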

Note: since moving to SnapRAID 7.x, my original sync script no longer worked. I have revised the script to accommodate dual parity and to integrate the changes in the counters.

Finally, I wanted something to pool these disks together. There are four options here (choose your own adventure). The nice part about any of these is that it’s very easy to change later if you run into something you don’t like.

1. The first option is mhddfs. It is super easy to set up and “just works”, but many people have run into random disconnects while writing to the pool (large rsync jobs were causing this for me). I have since updated my mhddfs tutorial with some new FUSE options that seem to remedy the disconnect issue. mhddfs runs via FUSE rather than through a kernel driver like AUFS, so it’s not as fast as AUFS and it has more system overhead.

2. The second option is to use AUFS instead. The version bundled with Ubuntu has some weirdness around deletions and file moves with both its opaque and whiteout files. It also does not support exporting via NFS.

3. The third option is to use AUFS, but to compile your own versions to support the hnotify option and allow for export via NFS. This is where I landed for a few years after trying both of the above for many months/years.

4. The fourth option, and the one I currently use, is MergerFS. It is FUSE-based like mhddfs, but it’s fast and has create modes like AUFS. It’s also easy to install and, unlike AUFS, requires no compiling. It’s great and actively developed.
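
For reference, a MergerFS pool can be defined right in /etc/fstab. A sketch using the mount points from this guide (the option values are just a starting point; check the mergerfs documentation for what fits your workload):

```
/disks/data/* /storage fuse.mergerfs defaults,allow_other,minfreespace=20G,category.create=mfs,moveonenospc=true 0 0
```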

After choosing one of the options above, you should now have a mount point at /storage that pools all of your disks into one large volume. You’ll still want to set up a UPS and SMART monitoring for your disks. Another thing I did was write a simple BASH script to watch my disk usage and email me if a disk gets over 90% used, so I can add another disk to the array.
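
That disk-usage watcher can be as simple as a few lines. Here’s a minimal sketch; the threshold, mount paths, and mail command are assumptions you’ll want to adapt to your own layout:

```shell
#!/bin/bash
# Warn when any pooled data disk crosses a usage threshold.
THRESHOLD=90

# Read "percent mountpoint" pairs on stdin and report any at/over the threshold.
check_usage() {
  while read -r pcent target; do
    usage=${pcent%\%}
    if [ "$usage" -ge "$THRESHOLD" ]; then
      echo "$target is at ${usage}% used"
    fi
  done
}

# Feed it the real numbers from df; mail the report if anything tripped.
report=$(df --output=pcent,target 2>/dev/null | grep '/disks/data' | check_usage)
if [ -n "$report" ]; then
  echo "$report"   # e.g. echo "$report" | mail -s "Disk space warning" me@example.com
fi
```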

Next, I would strongly suggest you read my other articles on setting up email for monitoring, SMART information monitoring, spinning down disks, setting up a UPS battery backup, and other RAID array actions. Being able to cope with drive failures is useful, but it’s nice to know that one has failed and be able to replace it too.

Updating in the future
You may wonder, “Hmm, I installed this fancy SnapRAID a while back, but a shiny new version just came out, so how do I update?” The nice thing about SnapRAID is that it’s a standalone binary with no dependencies, so you can upgrade it in place. Just grab the latest version, untar, build, and install.

tar xzvf snapraid-12.3.tar.gz
cd snapraid-12.3/
./configure
make
make check
make install

You can check your version like this.

snapraid -V

Other Items:
If you would like to have encrypted SnapRAID disks, I have a separate article that goes through that.



72 Responses

  1. Dave says:

    Awesome been looking for someone todo a more current SNAPRAID follow up. Thanks!!!

    • Zack says:

      I’m glad you found this helpful Dave. I just migrated this content from my old site, though, so this has been an article I have updated for a couple of years now. Please let me know if you have any questions.

  2. Joe says:

    your first set of commands didn’t migrate over well;

    apt-get update && apt-get dist-upgrade -y && reboot

    should be
    apt-get update && apt-get dist-upgrade -y && reboot

  3. Joe says:

    ack it looks like my amp; amp; didn’t get written correctly in my post.

  4. oxzhor says:

    Hi Zack,

    Great post! normally i work with CentOS but you done nice work on Ubuntu 16.x.
    I buy a old Dell R510 and put a perc H310 into it and will follow your post to setup a backupserver.
    I keep you updated over how the project will go :).

    Keep the good work up!

  5. Savage702 says:

    Ok, got my first failing drive, next steps?

    The following warning/error was logged by the smartd daemon:

    Device: /dev/sdf [SAT], 2 Offline uncorrectable sectors

    Device info:
    WDC WD30EFRX-68EUZN0, S/N:WD-WMC4N0455643, WWN:5-0014ee-0ae519cc4, FW:80.00A80, 3.00 TB

    For details see host’s SYSLOG.

    I guess turning off Plex sync/updating and stopping new downloads, etc. Then just as simple as replacing drive, setting new drive up, updating config and running a sync?

    • Zack says:

      Hello. Sorry to hear you have lost a disk. Your first steps are correct, I’d stop plex from updating and stop downloading. I would format the new disk and temporarily mount it somewhere for the time being.

      mkdir /mover
      mount /dev/disk/by-id/new-device-here /mover 

      Since the old disk is still available, I would use rsync in a tmux session to transfer the data to the new disk from the old disk. Here is an example. Note the trailing slash. This copies the whole disk to the other one.

      tmux new -s mover
      rsync -av --progress /mnt/data/olddisk/ /mover/

      Once this is finished, I would unmount the old disk and mount your new disk in its place. Next, you will want to get the replacement disk set up in /etc/fstab so it mounts after a reboot. So, change the device name to match the new disk.

      Finally, run a snapraid diff. At this point it should mention that the UUID for the replaced disk has changed and moves won’t be optimal. As long as the number of deleted and updated files looks good, run a sync.

      Once that’s done, you can remove the failed disk and startup your shutdown services.

      For bonus points, check and see if your failed disk is under warranty. If so, submit an RMA and save the new disk that they send you as a backup disk.

      Let me know if this makes sense.

      • Savage702 says:

        Ok, makes sense, and thanks for that. I’m sure doing the rsync is better/easier/faster than letting it recover from parity. I ran a few status checks and tried a fix, 2 blocks with IO errors. I like these steps though, all the media will remain available for the kids today while this is doing it’s thing.

        I’m going through my stack of drives and making an inventory of which one is where in the stack, something I should have labeled 3 years ago when setting up. Better late than never. In doing this, I saw that I’m about 3 months out of warranty on my 3TB Red Drives… a little longer for 3 of them that I originally started with! (BOOO!!!)

        For 3 years, I’ve had a 3TB Red on standby, so broke that out of the static bag to realize it’s a re certified drive. 😐 Not thrilled about that, but will have a spare on hand by next week. I’ll make sure this time to get the drive all formatted and ready so it will be a much more plug and play scenario.

        • Zack says:

          Sounds like a good idea. One other thing I always like to do with new drives (especially re-certified) before putting them into service is stress test them. I use something like this (this will destroy all data on the disk, so make sure it’s the right one).

          badblocks -wsv /dev/sdX

          In your case where you have a missing disk, I’d just throw in the re-certified disk and get everything working. I’d stress test and format your new replacement disk when you get it prior to putting it into service.

      • Savage702 says:

        This can be safely ignored and just move along, correct?

        root@mediamaster:/media# parted -a optimal /dev/sdh
        GNU Parted 3.2
        Using /dev/sdh
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) mklabel gpt                                                      
        (parted) mkpart primary 1 -1                                              
        Warning: The resulting partition is not properly aligned for best performance.
        Ignore/Cancel? i                                                          
        (parted) align-check
        alignment type(min/opt)  [optimal]/minimal? optimal                       
        Partition number? 1                                                       
        1 not aligned
      • Savage702 says:

        Thank you! I think I’m there, although it took all day to rsync the drive.

        WARNING! UUID is changed for disks: 'd4'. Move operations won't be optimal.

        134391 equal
        2 added
        0 removed
        187 updated
        0 moved
        0 copied
        0 restored
        There are differences!

  6. phuriousgeorge says:

    Found a little boo-boo in the instructions that made me have to double-back a little:

    cp ~/snapraid-10.0/snapraid.conf.example /etc/snapraid.conf

    Thanks for all your useful information you blog about, it’s really easy to follow and has done wonders to help me!

  7. oxzhor says:

    If you want to use two parity disk you can add it like:

    # Parity Disks
    /dev/disk/by-id/ata-hdd1-name-part1 /mnt/parity/1-parity xfs defaults 0 2
    /dev/disk/by-id/ata-hdd2-name-part1 /mnt/parity/2-parity xfs defaults 0 2

    thx for your feedback.

  8. twhitmer34 says:

    HI — First thank you so much for you blog, it helped me setup Snapraid for my media server

    I just got my first drive (soon to be) failure (SMART reporting bad sectors), it was my parity disk, I have requested an RMA and will be getting my new replacement drive soon. I was wondering if you could help me with my specific drive replacement

    my /etc/fstab looks like the following:

    # SnapRAID Disks
    UUID=c220a84e-a9a4-45a0-947d-c05ed2d84b35 /media/disk2 ext4 defaults 0 2
    UUID=d560171e-9d3a-4b88-a253-43baded26777 /media/disk3 ext4 defaults 0 2
    UUID=cd7ee7d8-f2e7-4288-b3f3-801c6ae0a848 /media/disk4 ext4 defaults 0 2
    UUID=0696a3ec-d1e9-4ef0-ae39-4f084504b591 /media/disk5 ext4 defaults 0 2
    UUID=b5a378eb-5864-4d33-a591-8ae0f4bd4b3c /media/disk6 ext4 defaults 0 2
    UUID=0d0ae4f2-c448-4b79-a461-3d343d0bf917 /media/disk7 ext4 defaults 0 2
    # Parity Disks
    UUID=b2ad89ed-5a1c-4fad-955e-85e80183349b /media/disk1 ext4 defaults 0 2

    and my snapraid conf is the following:

    parity /media/disk1/snapraid.parity
    content /var/snapraid/content
    content /media/disk2/content
    content /media/disk3/content
    disk d2 /media/disk2/
    disk d3 /media/disk3/
    disk d4 /media/disk4/
    disk d5 /media/disk5/
    disk d6 /media/disk6/
    disk d7 /media/disk7/

    my issue is that i dont have any more sata connectors to add the new drive without removing a drive. so i was thinking that i would
    1. comment out the parity disk is /etc/fstab with a “#”
    2. shutdown pc
    3. remove old drive parity drive
    4. add new replacement parity drive
    5. startup computer
    6. (the current parity drive is /dev/sda1)

    parted -a optimal /dev/sda
    GNU Parted 2.3
    Using /dev/sda
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) mklabel gpt
    (parted) mkpart primary 1 -1
    (parted) align-check
    alignment type(min/opt)  [optimal]/minimal? optimal
    Partition number? 1
    1 aligned
    (parted) quit

    7. mkdir /media/disk1/ (so i dont have to update my snapraid.conf)
    8. mkfs.ext4 -m 0 -T largefile4 /dev/sda1 (again assuming that the new drive is also going to sda1 like the old drive was — not sure this is true?)
    9.blkid /dev/sda1 (to get new disk UUID)
    10. update /etc/fstab parity line with new uuid and un-comment it out.
    11. restart
    12. run “snapraid -F snyc” (to force the making of a new parity file)
    13. all done and should be all good again.

    • Zack says:

      Hello, thanks for the question. Here is what I would do, if the old parity disk is still working…
      1. Shutdown the server
      2. Temporarily remove one of your data disks and add your replacement disk in it’s spot
      3. Boot from a live Linux CD and run it in “Try Ubuntu” mode.
      4. Clone the old disk to the new disk (MAKE SURE YOU ARE USING THE CORRECT DISKS HERE!)

      sudo -i
      ddrescue -v /dev/OLD_DISK /dev/NEW_DISK

      This will bring over the partition table and as much of the parity file as is recoverable. It will also clone the UUID, so it should just work in place of the old disk.
      5. Once complete, I would shut down the server again and remove the old parity disk.
      6. Replace the data disk that you removed earlier, and boot into the OS.
      7. Run the fix option to check the parity.

      snapraid fix -d parity

      8. Once complete, you should be good to go.

      This way you keep your old disk as a backup the entire time and don’t have to go through creating partitions, changing /etc/fstab, etc.

      • twhitmer34 says:

        Thanks — completed the ddrescue command and it seemed to work fine (3TB in 15hrs and no errors). I ctrl-c’d out of the snapraid fix -d parity (it looked like it was going to take about 6 hrs). I am not sure what it was “fixing” since I ran a snapraid sync command before the ddrescue command.

        Can i just run a snapraid sync again? what does the fix – d parity actually do? thanks again

        • Zack says:

          The fix will just make sure that your parity file is actually complete and properly checksums. I would suggest running it as an insurance policy. The parity is what is protecting your data, so it is important that it’s valid 🙂

  9. Savage702 says:

    Good Morning Zack,

    Question for you, as always. 🙂
    In one of the older versions of this guide (which I used and continue to configure my setup on), in Snapraid.conf, you only listed 2 content disks out of the 4, like so:

    content /var/snapraid/content
    content /media/disk1/content
    content /media/disk2/content
    disk d1 /media/disk1/
    disk d2 /media/disk2/
    disk d3 /media/disk3/
    disk d4 /media/disk4/

    I noticed last night looking over this again as I was adding a new drive to my array, that you list all disks.
    Is this something I should modify/update on my config?

    Also, older instructions again, when you setup the filesystem, you didn’t reserve the 2% space or do the largefile4, instead it was simply:

    mkfs.ext4 /dev/sdb1

    I’d imagine I’d want all my drives to follow the same format vs changing things up, correct? I seem to recall the disk space reservation was handled elsewhere back then, but maybe not. ?

    • twhitmer34 says:

      i have a similar question — (currently “comment is awaiting moderation”). i am about to lose my parity drive — wondering the steps to to take, and like you used a previous version of this tutorial. My issue is that I have no more sata connectors to use (must take out a drive to add one). since my issue is sorta specific i tired the contact tab on the website but it “failed to send”. Any help is much appreciated

    • Zack says:

      Hello 🙂 Yes, I have added content files on all of my data disks at this point. They are not that big and it provides more content files to checksum to make sure everything is okay. To add them to more disks, just add more content lines for each disk. As long as you have the space, it’s worth it.

      In regards to partitioning, if you already have data on the disks, I wouldn’t worry about it too much. Meaning, I wouldn’t format them and start over. The largefile option is totally worth it: it creates fewer inodes, which limits the number of files you can create on the disk, in exchange for more usable space. I also like saving a little space with the reserved-space option, but it won’t be honored if the root user is the one writing the files. If you are using mergerfs with the minfreespace option, that should prevent overfilling anyway.

  10. Dulanic says:

    Love this guide, thank you for posting this for v11. I keep running into a compile error… any ideas as to how I may correct this?

    checking for gcc… gcc
    checking whether the C compiler works… no
    configure: error: in `/root/snapraid-11.0′:
    configure: error: C compiler cannot create executables

    • Dulanic says:

      Update: Found this was the issue. May want to add in case others see this problem. FYI, I am running Linux Mint 18.1 which is based on Ubuntu 16.04.

      Had to run apt-get install gcc-multilib

      • Zack says:

        I’m glad you got it figured out. I’ve never had an issue with compiling this with just gcc on 16.04 (this is what I currently run). I would suggest for most users to just install the build-essential package, as that will give you all of the compilers that are typically needed. Thanks for the post!

  11. Hildebrau says:

    Can this be setup with a bunch of USB drives of various sizes? I see you are copying the partition table from one drive to all the others. I assume that means your drives are all identical sizes.

    I have a Hodgepodge of USB drives. I want to use mergerfs as you documented, but the snapraid underlying sounds like a great idea, too. As it stands today, I lose files when a disk dies. 🙁

    Thank you!

    • Zack says:

      Hello! Yes, this can certainly be set up with a bunch of USB drives of varying sizes. You would just want to manually create the partitions like I did on the first disk rather than copying the partition table. This is still very straightforward.

      SnapRAID will also pair well with this setup and is a good way to prevent data loss. Good luck on your adventure 🙂

  12. Dulanic says:

    OK if I reinstall the OS and need to reinstall Snapraid, yet keep the same config as before, is there a good way to do that? It looks like it goes crazy /w the make check? I kept all of the conf files etc… and have the exact same fstab, but not sure how to reinstall and “restore” the previous snapraid?

    • Zack says:

      You would want to move over your /etc/snapraid.conf, have the exact mountpoints in /etc/fstab, and install snapraid. Once everything is mounted in the same locations with the same snapraid.conf, it should work just fine. I’d just do a snapraid diff to make sure everything looks good after the move.

  13. Savage702 says:

    Question for you again.

    I think I have a Sata controller going bad. It’s a 2 port card, so I moved those drives over to an old RocketRaid card that has extra ports on, and removed the 2 port card.

    The drives showed up, then kinda dropped during sync, and an error mentioned something about one of the disks being “sane”.
    I tried a few things, but no luck, so put the failing card back in, moved all back, and running a sync now, and all seems to be back to normal.

    What am I missing. I have a feeling getting an 8 port card is in my future, and removing the 2 cards I have in there now… but would like to avoid difficulty. In snapraid, you should just be able to move the drives onto another controller and all be good, right?

    • Zack says:

      Great question. And, to answer it, yes, you can put a different SAS/SATA Controller in there and be fine. I would really suggest the Dell H310 used off eBay. I just bought another one used yesterday for $35 with free shipping. I’ll flash it to IT mode like my other ones using these directions.

      • Savage702 says:

        Ok, so really, I should not be having any issues moving everything over, and my array being just fine. So something is def. up.
        The H310 card is half of what I was looking at for a standard 8 port sata card…. I just picked this one up on ebay, looks like it’s already in IT mode, so I don’t have to hassle with that.

        and one of these cables

        Fingers crossed that this is the cause of my server crashes and freezes.

        FYI – Miss reading your updates on the site here. Hope all is going well.

        • Zack says:

          It should work great. You will need some 8087 -> forward breakout cables or 8087 -> 8087 to hook up to backplanes. The H310s don’t have standard SATA plugs on them.

          • Savage702 says:

            Right, pretty sure I purchased the right cable in the amazon link above? One of the people in the comments on Amazon said they also had an H310 card. Cable arrived already, just tapping my fingers on the card. Oddly enough, I think I may have fixed the issue with replacing the sata cords and reseating the cards. Everything seems to be playing nice right now (touch wood). I’ve done a few scrubs and fixes and stuff to clear out some of the errors, and SMART seems to be reassuring me the drives are ok. All turning out to be a bit of a mystery.

          • Zack says:

            I’m glad you got it working. In regards to updates on the site, let me know what you would like to see. I frequently update old posts with new info, but I typically only write a new article when I’m trying something new at home. I haven’t set up anything new lately. My real job and family tend to keep me very busy 🙂

          • Savage702 says:

            Nothing I particularly want to see, i think I found this site when you were pushing out a lot of new info. Maybe it was a time of discovery… not sure. You’re always finding/doing something cool to read about though.

            But yeah, Family and “real” job do keep one busy. That’s why I like my stuff to just… work. LOL

      • Savage702 says:

        Wow, I’m a walking disaster with this stuff. Got the card & cable, but had a lot of issues.
        Something happened in the process, and either I didn’t get the tape on right, or the slot didn’t like the card, and during all that, one of the drives took a slight dirtnap…. which required me having to fsck the drive. Ultimately though, seems the PCIe x16 slot was preferred, and it finally worked. Not after losing the tape in one of the other slots, and possibly it shifting position in the other slots, I honestly have no idea. The picture from your link, not sure how he has that tape on there, but it doesn’t look like it’s going anywhere.

        Oh well… It’s working for now. Had a slight issue with my first sync, the sync went through but status showing a sync at 99%. My nightly scripted sync though ran successfully, and status no longer shows a sync stuck at 99%.

        I do think one of my drives is not playing nice in the array causing my issues, but SMART isn’t complaining like it has before on a flaky drive.

        Ok, I’m off to hit up our Docker section. I have questions there. I want to start getting containerized.

        • Zack says:

          I’ve never had to use the tape method on any of my motherboards before. Were you having issues without the tape before you applied it? Ultimately, I’m glad you got it all working. Docker is great, although it can be confusing when you first start out. I’d just start with a couple of simple containers to better understand how the ports and volumes (storage you pass to the container) work. Good luck!

          • Savage702 says:

            Yes, system would not POST, just a bunch of beep codes that changed each time I shut down and started back up again. Display would not come up on screen.

            I’ve read of issues where using onboard video and these cards might cause issues. That is my case. My mobo is getting pretty aged, but probably not as old as the H310 card? could be wrong. Biostar TZ77B.

            Also, whenever I try to work with SMART to get the overall health of the drives that are attached to that controller, I get an error. I don’t for my other drives running on the Mobo’s controller. I get the following if it’s attached to the H310.

            smartctl -d ata -H /dev/sdi

            smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.10.0-26-generic] (local build)
            Copyright (C) 2002-16, Bruce Allen, Christian Franke,

            Read Device Identity failed: Invalid argument

            A mandatory SMART command failed: exiting. To continue, add one or more ‘-T permissive’ options.

          • Zack says:

            That is weird. I have run these in a few different systems (some with onboard video and some without), but haven’t hit that before. I’m glad the tape solved it. In regards to SMART, have you tried a more simplified command like this?

            smartctl -a /dev/sdi

  14. Savage702 says:

    Hey Zack,

    Wondering what I should be looking for/at here? Suddenly getting an error on my Sync starting last night.
    Ran a quick smartctl on the drive, and it didn’t see any issues.
    Checked the permissions on the drive, and it appears to be the same as the others.
    The drive has not yet come up to be used for data, but will be soon.

    Self test…
    Loading state from /var/snapraid/content…
    Scanning disk d1…
    Scanning disk d2…
    Scanning disk d3…
    Scanning disk d4…
    Scanning disk d5…
    Scanning disk d6…
    Scanning disk d7…
    Using 1092 MiB of memory for the FileSystem.
    Saving state to /var/snapraid/content…
    Saving state to /mnt/data/WD-WMC1T2600294/content…
    Saving state to /mnt/data/WD-WMC1T2561550/content…
    Saving state to /mnt/data/WD-WCC4N0XXNFFK/content…
    Saving state to /mnt/data/WD-WCC4N0ZJ26FY/content…
    Error opening the content file ‘/mnt/data/WD-WCC4N0XXNFFK/content.tmp’. Read-only file system.

    • Zack says:

      Have you run a long smart test on the drive?

      smartctl -t long /dev/sdX

      Then, share your output here.
      smartctl -a /dev/sdX

      But, the first step is to just unmount and remount the drive, or reboot. It should come back up after that. After a reboot, try to create a few files on the disk. Let me know how that goes.

      • Savage702 says:

        No, I had not run the long test… just started that, 399 minutes.

        Since posting, I had got it back online, after trying to unmount and run fsck on the drive…. seems something wouldn’t let me do it, even though all services had stopped and it was properly unmounted, I couldn’t get fsck to run for the life of me. So I rebooted, then I was able to write directly to the drive and ran a successful sync.

        As of this AM though, last nights sync gave the same errors trying to write the content file. So I just rebooted (it works again) and am trying the long test and have ordered a replacement drive since I no longer had a spare sitting around.

        If there was some error or inconsistency with the drive, shouldn’t fsck be running on the reboots?

        • Zack says:

          That sounds like a good diagnostic approach. If there is inconsistency in the filesystem, then fsck “should” run when it tries to mount the disk. But, failing disks can do all sorts of funny things. It was wise to just move to replace it. Once that disk has been replaced, and all of the data transferred, you can run badblocks -wsv on the poorly behaving disk. This will let you know if the disk is actually bad or not. DON’T RUN THIS UNTIL YOUR DATA IS ALL BACKED UP AND SYNCED TO THE NEW DISK 🙂

          • Savage702 says:

            So I posted a reply the other day, and it errored out posting, and when I tried to re-post, it said I’d already said that… so not sure what happened… maybe it’s in moderation? Anyway… it has not gone back to read-only since.

            Here are the results of the long test on Pastebin for ease of viewing.


  15. Savage702 says:

    Ok, well… the good thing is, only the content file and lost+found is writing to this disk at the moment, but it’s next up to be used. I do have some crap shows I’m thinking of deleting, so that’ll hold me over for a while. lol

    Anyway, here are the results, I put them on Pastebin, might be more legible than putting in the comments.

    My new drive arrived today, so there’s a new project for the weekend…. that, along with discovering that when I rebuilt my system last year and moved to Docker, I thought I had set everything up to have all my Docker containers on a spare drive (set up the folder structure, everything), only to find out I’d let them all build on my OS drive.

    I didn’t have the slightest clue (since everything was working so well). That was till Plex grew out of control in size and I ran out of room. LOL

    rsync’ing all my Plex data over to another drive and getting ready to take that on too.

    • Zack says:

      Thanks for the update! The smartctl info looks clean. I would suggest swapping the SATA cable to the disk, except you don’t have any UDMA CRC errors in your log either, so that looks okay. Still, it will be nice to have a spare on hand if you need it in the future.

      Docker is awesome, and Plex is a beast when you have lots of files. I’ve had to migrate to larger SSDs twice now for all my Docker containers. I’m just glad you figured it out.

      Also, if you aren’t running your stuff off an SSD, I would strongly suggest it. Plex is so much faster off a decent disk. I have mine running on a ZFS mirror of two 400GB Intel S3700s (a bit of overkill) 🙂

      Good luck!

      • Savage702 says:

        Swapping the SATA cable isn’t really an option, since this drive is attached to the H310 card. The other drives on the same cable are acting OK. Guess I’ll just keep up with it and see how it goes.

        Thanks for the SSD suggestion. I need to rethink my plan then. It was going on to a WD Green drive, but I do have about 3-4 streams going at any given time, so I’d rather not lose performance. Currently Plex is only taking about 36GB, so maybe some old SSDs will do the trick.

        Thanks again.

        • Savage702 says:

          Ok, so I swapped the cable between that drive and another drive on the same breakout cable, to see if the other drive would start kicking off errors. Now I’m waiting to see if the errors follow the cable; neither drive has wigged out yet.

          I also added the new drive in, which is on the other breakout cable, and added it as another drive to store the content file. I guess it’s just a wait and see thing now. I purged some terrible shows I never watched, and did syncs between each show deletion to try and set it off… but nothing yet.

          A new problem adding the new drive…. I only have 10 bays, so now I have a drive hanging out the side, standing upright on top of a box. /sigh.

          There’s a 4U 15 Bay Rosewill case within my budget that would give me space for 4 more drives and let me dangle my SSDs somewhere. LOL Also need to find a decent 1155 chipset mobo with 2 PCIe x16 slots that will run 2 H310s without POST errors (my current board will only run the H310 in 1 of my available ports).

  16. Savage702 says:

    Ok, I’m sorry… you lost me on the SAS expander. I feel I’m missing something. I can add that card in, and it… connects to the h310, and gives me more connections for breakout cables? How does it connect? If I’m not looking to add THAT many drives, wouldn’t 2 h310’s be better/offer better throughput… or are you talking about this as an option so I don’t have to go looking for another motherboard? (I guess that would be ideal at this time, and cheaper than a Mobo & another h310).

    Funny how searching for these cards you recommend has your name popping up all over the internet. lol

    • Zack says:

      Hello 🙂 Yes, the SAS Expander would be connected to your H310 via 8087 -> 8087 cable. The Intel SAS Expander is great because it can be powered via a PCI express slot OR a 4-pin molex connection (so it doesn’t have to eat up one of your PCI express slots). I would only suggest going this route to save money on needing to track down another HBA (h310) and a motherboard. The good news is that with spinning disks, the SAS expander really isn’t a bottleneck.

      Yes, I do show up a lot in certain internet articles. You’d think with all of my internet nerd-cred I’d be rolling in money from AdSense and donations. Unfortunately, I’m lucky if this site comes anywhere close to breaking even by the time I pay for hosting 🙂

  17. kiwijunglist says:

    Thanks for this, I quite like your guides. I am redoing my media server using your guide and this one; I find your guide better though, as it is more straightforward.

    Question: For my data HDDs that will make up my mergerfs pool, what commands would you recommend to format the drives and create the single partition?
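
    I’m guessing it would follow the parted steps at the top of the guide, something like this (/dev/sdX as a placeholder, assuming ext4; this wipes the disk)?

    parted -a optimal /dev/sdX mklabel gpt
    parted -a optimal /dev/sdX mkpart primary 1 -1
    mkfs.ext4 /dev/sdX1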

  18. crimsondr says:

    Excellent guide! Super easy to set up SnapRAID and mergerfs. Also used your sync script. I’m moving from my large ZFS array to a split between ZFS and SnapRAID: the media collection will be on SnapRAID and the system/critical files on ZFS. Should make migrating to new drives and re-configuring the ZFS pool easier in the future. Thanks for the amazing guides!

  19. Savage702 says:

    I’m back! Always exciting to come back here, read back over everything and ask some advice! 🙂

    Running into serious space issues… so it’s time to start the very slow move to larger drives from my array of 3TB Reds.

    I just purchased an 8TB Red, and want to replace my 2 parity drives with the 1 Red.
    I would imagine that taking 1 parity drive out first and adding the 8TB in as a second parity might be best, and let it do its thing.
    Then once it’s done, I can remove the other 3TB drive out.

    I’d then like to use those 3TB Reds to expand the array a little until I can get another 8TB Red to throw into the mix.
    Is there anything I need to do to the parity drives other than format them before getting them ready to use? There’s nothing special about the way they were set up IIRC… right?

    Does it sound like I’m going about this the best way? Any advice?

    I also saw an update on the mergerfs page? What was the update/change… just version numbers?

  20. Zack says:

    Sorry for the massive delay. I’ve been busy with many other projects and haven’t checked in lately. Your approach to migrating the parity disks should work fine. To use the old 3TB disks you don’t even need to format them; you just need to delete all the files on them (once you’ve moved to the larger parity disks and run a sync). You’d then mount the now-empty 3TB data disks at their new mount points.

    Trapexit adds features and changes things all the time in mergerfs. There have been some performance improvements in the newest versions. I would try to stay current with the releases. But you do need to read his changelogs, as things sometimes change, like the write policies, which may require changes to your mount options.

    I hope that helps.

    • Savage702 says:

      No worries! I actually missed your reply, and had already figured I was on the right track and did it anyway. It all worked out. Fun.
      Hope all is well with you and yours!

      Next, an 8TB data drive, and I’ll want to replace 2x3TB drives already in the system.

      I know if we do a like for like drive swap, then copying the data over from one drive to the next is easy enough.
      What would you suggest for moving the data from 2 drives onto a single drive? Do we just do the same thing and run a sync?

      • Zack says:

        Awesome! I’m glad that you got it all figured out. To answer your question about the two drives to one, I’d just make a temporary mergerfs mountpoint for the two drives and rsync from the merged directory to the new one. So, let’s say you have the two old disks mounted at /mnt/old1 and /mnt/old2 and you’d like to copy all the data from them to your new disk that you have mounted at /mnt/new1. Here are the steps.

        sudo -i
        mkdir /tmpstorage
        mergerfs -o allow_other,use_ino /mnt/old1:/mnt/old2 /tmpstorage
        rsync -av --progress /tmpstorage/ /mnt/new1
        # Once that's completed...
        umount /tmpstorage
        rm -rf /tmpstorage 

        In SnapRAID, you’d need to update the config to remove the two old disks and replace with the one. After that’s done, run a sync 🙂
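
        For example, the snapraid.conf change might look like this (disk names and mount points here are hypothetical, not necessarily what’s in your config):

        # /etc/snapraid.conf - before
        # data d4 /mnt/old1/
        # data d5 /mnt/old2/
        # after: the two entries become one
        data d4 /mnt/new1/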

  21. 4KVidGuy says:

    Hi Zack, I got all the way through this setup and hit a wall: “snapraid: command not found”. My PATH comes out fine, but I’m stuck.

    • Zack says:

      Hello, it sounds like SnapRAID wasn’t properly installed. I would try installing it again. Also, check the output of each command to make sure it completed successfully before you proceed.

  22. Savage702 says:

    Well, I think I got it sorted out. I changed it to category.create=mfs in fstab. Although I’m not entirely sure why it was filling the drives past 10G when that’s what I have minfreespace set to. Wasn’t an issue before. But that’s fixed it, at least for now.
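
    For reference, the fstab line now looks something like this (the paths and the other options here are just illustrative):

    /mnt/data/* /mnt/storage fuse.mergerfs allow_other,use_ino,category.create=mfs,minfreespace=10G 0 0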
