SnapRAID v11 Released

Andrea just released the latest version of SnapRAID. This version introduces split parity, which allows you to spread each parity file across multiple disks. This makes transitioning to larger data disks much easier and can eliminate the need to buy larger disks just for parity. You can read more about its features here. Also, I have updated my SnapRAID tutorial to reflect the update to v11, and I have posted a new sync script that works with the new split parity.
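For reference, split parity is configured in snapraid.conf by listing several files, separated by commas, on a single parity line. A minimal sketch (the mount points below are hypothetical examples, not paths from my tutorial):

```
# One parity level split across two smaller disks:
parity /mnt/parity1/snapraid.parity,/mnt/parity2/snapraid.parity
```

SnapRAID fills the first file until its disk runs out of space, then continues into the next one.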

Zack

I love learning new things and trying out the latest technology.

4 Responses

  1. Pko says:

    Hello, very interesting, but I have a question about this new “split parity” function… Let's suppose I have a SnapRAID system with seven 4TB disks (5 data = 20TB, 2 parity = 8TB) and I add two new 6TB disks. Until now, I could use the 7 old disks as data and the 2 new ones as parity (using just 4TB on each parity disk), so it would be 7×4=28TB of data and 2×6=12TB of parity with only 8TB used. With split parity, would it now be possible to have 4×4+2×6=28TB of data and 3×4=12TB of parity, with no wasted space? Would it be more reliable, having 12TB of parity data? I think there are no functional advantages, since in both cases I could add more 6TB disks (maybe in place of 4TB ones) and use all the new space, but there are a few practical ones; for example, I could “put to sleep” 3 parity disks instead of 2 most of the time. What other (dis)advantages do you see in each option?

    • Zack says:

      Hello. These are great questions. Obviously, the split parity functionality needs to be assessed for each use case. Most of the time, it will probably be easier to just use whole disks for parity disks. Now, to try to answer your questions.

      With split parity, would it now be possible to have 4×4+2×6=28TB of data and 3×4=12TB of parity, with no wasted space?

      This would technically work by having one of the 4TB disks contain two parity files, but you would not have the resilience of a true dual-parity solution, so I don’t view this as a viable option. In your example, the “upgrade” from 4TB disks to 6TB disks is relatively small. So, if you had a couple of old 2TB disks lying around, you could create dual parity by making each parity set a split of one 4TB disk and one 2TB disk. That would allow each complete parity set to properly cover all of the space on your new 6TB data disks. You would end up with (5) 4TB disks + (2) 6TB disks = 32TB of data, and (1) 4TB + (1) 2TB plus another (1) 4TB + (1) 2TB = 12TB of parity space.
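      That layout would look something like this in snapraid.conf (the mount points are hypothetical examples):

```
# First parity level, split across a 4TB and a 2TB disk:
parity /mnt/par1a/snapraid.parity,/mnt/par1b/snapraid.parity
# Second parity level, split the same way:
2-parity /mnt/par2a/snapraid.2-parity,/mnt/par2b/snapraid.2-parity
```

      Each 4TB+2TB pair acts as one 6TB parity level, so either pair failing together still only costs you one parity level.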

      Would it be more reliable, having 12TB of parity data?

      No, dual parity is dual parity, so it’s not more reliable just because more space is dedicated to it. You are now calculating parity for a bigger data set that includes a couple of larger data disks, so your parity disks needed to grow accordingly.

      I think there are no functional advantages, since in both cases I could add more 6TB disks (maybe in place of 4TB ones) and use all the new space, but there are a few practical ones; for example, I could “put to sleep” 3 parity disks instead of 2 most of the time. What other (dis)advantages do you see in each option?

      Without a couple of extra smaller disks, you will have to use the 6TB disks as your parity disks for the time being and waste 2TB of space on each until you start adding more 6TB disks. I like using my newest, most reliable, and most energy-efficient disks as my data disks. They will spin more often than my parity disks, which only spin once a night during a sync. That way, my data lives on a smaller number of new, larger disks, and my older disks serve as parity.

      I hope that answers your questions 🙂

  2. codgedodger says:

    Amazing! I’ve been running mergerfs with SnapRAID for about six months now. Not a single issue! Your guides are amazing and have helped me build my Plex server with everything running in Docker.

    One question, though, and a request for advice: would it be possible to set up a cache for mergerfs with SnapRAID? I just upgraded my boot drives to two 800GB Intel DC S3500s (RAID1) and will have the previous 240GB SSDs just sitting around. I’ve read about using different types of caching for drives, but they seem to require a full format. Would you have any advice for speeding up my spinning disks? I’m running twenty 8TB drives… Yes, very overkill and an expensive hobby, but I consider myself a data hoarder. lol

    • Zack says:

      Thanks for the kind words, and I’m glad to hear things are going well. You are preaching to the choir about storage space. I “only” have 100TB usable in my SnapRAID server, so I completely understand what you mean by overkill 🙂

      To answer your question: there isn’t a reliable caching plugin for mergerfs, or for any FUSE-based filesystem for that matter. There has been one attempt at a true cache (it is super slow, so don’t bother), and more of a sweeping method has been developed that periodically moves files off a fast disk, but I haven’t used either.

      Luckily, your 8TB disks should be much faster than gigabit for reads and writes anyway. But if you have a 10Gb network in your house like I do, you can always fire up rsyncs aimed directly at the underlying disks, so that you can read/write to all 20 disks at once.
