Linux Episodes on mhddfs source not being added reliably during library update
#16
Thank you, looks interesting!
Reply
#17
I'll look into that in the future, if I run into any problems with my current system.

I'm running aufs now rather than mhddfs; as a kernel-level system, it's much faster. It's built into the Ubuntu Server kernel, so there was no screwing around to get it up and running. I still have the fasthash issue with aufs, but otherwise it's been seamless.
Reply
#18
Thanks for the mergerfs tip! I've been testing it, and it worked great with the latest Helix.

However, I'm having trouble with Isengard Beta 1 and the latest nightly as of today. New episodes are not being picked up.

The fasthash advancedsetting doesn't work either. It used to work in Helix, but not in Isengard. I have tried mhddfs and mergerfs. I haven't touched aufs in a while, since I was having some issues with it.

The reason for using Isengard is my favourite skin, so I'm keen to get it working. Is anyone else having this issue in Isengard?

Edit:
This is the output of "xattr -l .mergerfs". I set category.search=newest in fstab, but noticed func.getattr=newest is also set. Would these two conflict with each other?

Code:
user.mergerfs.srcmounts: /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4
user.mergerfs.category.action: all
user.mergerfs.category.create: epmfs
user.mergerfs.category.search: newest
user.mergerfs.func.access: newest
user.mergerfs.func.chmod: all
user.mergerfs.func.chown: all
user.mergerfs.func.create: epmfs
user.mergerfs.func.getattr: newest
user.mergerfs.func.getxattr: newest
user.mergerfs.func.link: all
user.mergerfs.func.listxattr: newest
user.mergerfs.func.mkdir: epmfs
user.mergerfs.func.mknod: epmfs
user.mergerfs.func.open: newest
user.mergerfs.func.readlink: newest
user.mergerfs.func.removexattr: all
user.mergerfs.func.rename: all
user.mergerfs.func.rmdir: all
user.mergerfs.func.setxattr: all
user.mergerfs.func.symlink: epmfs
user.mergerfs.func.truncate: all
user.mergerfs.func.unlink: all
user.mergerfs.func.utimens: all
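
Incidentally, I believe these policies can also be changed on the fly through the .mergerfs control file (run from the pool's mount point), so a different search policy can be tested without remounting. Syntax as I understand it from the mergerfs docs; I haven't verified it on every version:

Code:
xattr -l .mergerfs                                        # list current settings (as above)
xattr -w user.mergerfs.category.search newest .mergerfs   # change the search policy at runtime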
Reply
#19
(2015-05-20, 14:39)eskay993 Wrote: Thanks for the mergerfs tip! I've been testing it, and it worked great with the latest Helix.

However, I'm having trouble with Isengard Beta 1 and the latest nightly as of today. New episodes are not being picked up.

The fasthash advancedsetting doesn't work either. It used to work in Helix, but not in Isengard. I have tried mhddfs and mergerfs. I haven't touched aufs in a while, since I was having some issues with it.

The reason for using Isengard is my favourite skin, so I'm keen to get it working. Is anyone else having this issue in Isengard?

Edit:
This is the output of "xattr -l .mergerfs". I set category.search=newest in fstab, but noticed func.getattr=newest is also set. Would these two conflict with each other?

Code:
user.mergerfs.srcmounts: /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4
user.mergerfs.category.action: all
user.mergerfs.category.create: epmfs
user.mergerfs.category.search: newest
user.mergerfs.func.access: newest
user.mergerfs.func.chmod: all
user.mergerfs.func.chown: all
user.mergerfs.func.create: epmfs
user.mergerfs.func.getattr: newest
user.mergerfs.func.getxattr: newest
user.mergerfs.func.link: all
user.mergerfs.func.listxattr: newest
user.mergerfs.func.mkdir: epmfs
user.mergerfs.func.mknod: epmfs
user.mergerfs.func.open: newest
user.mergerfs.func.readlink: newest
user.mergerfs.func.removexattr: all
user.mergerfs.func.rename: all
user.mergerfs.func.rmdir: all
user.mergerfs.func.setxattr: all
user.mergerfs.func.symlink: epmfs
user.mergerfs.func.truncate: all
user.mergerfs.func.unlink: all
user.mergerfs.func.utimens: all

Hmm. That's strange that it's not working in Isengard.

category.search is a superset of func.getattr, so if you set category.search to something, func.getattr will also be set to that same value. So your settings look correct. However, my settings are slightly different. I also set category.create to "mfs", since I was running into an issue where mergerfs was thinking I was out of disk space when I really wasn't. I don't know if you are also having that issue. Here's my output:

Code:
user.mergerfs.srcmounts: /media/btrfs1:/media/btrfs2:/media/btrfs4:/media/btrfs5:/media/btrfs6
user.mergerfs.category.action: all
user.mergerfs.category.create: mfs
user.mergerfs.category.search: newest
user.mergerfs.func.access: newest
user.mergerfs.func.chmod: all
user.mergerfs.func.chown: all
user.mergerfs.func.create: mfs
user.mergerfs.func.getattr: newest
user.mergerfs.func.getxattr: newest
user.mergerfs.func.link: all
user.mergerfs.func.listxattr: newest
user.mergerfs.func.mkdir: mfs
user.mergerfs.func.mknod: mfs
user.mergerfs.func.open: newest
user.mergerfs.func.readlink: newest
user.mergerfs.func.removexattr: all
user.mergerfs.func.rename: all
user.mergerfs.func.rmdir: all
user.mergerfs.func.setxattr: all
user.mergerfs.func.symlink: mfs
user.mergerfs.func.truncate: all
user.mergerfs.func.unlink: all
user.mergerfs.func.utimens: all
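
For reference, the same categories can be set as mount options; an fstab line along these lines should produce the listing above (the pool mount point here is just a placeholder):

Code:
/media/btrfs1:/media/btrfs2:/media/btrfs4:/media/btrfs5:/media/btrfs6 /media/pool fuse.mergerfs defaults,allow_other,category.create=mfs,category.search=newest 0 0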

If disabling fasthash doesn't work, then it must be another issue with Isengard.
Reply
#20
Thanks jerms415. I matched your setup exactly and still no luck :( I've asked about usefasthash in Isengard in the original thread where this issue was raised (here). I'll stick with Helix for now and use my favourite skin in a functional, albeit slightly broken, state :)
Reply
#21
Quote:I also set category.create to "mfs", since I was running into an issue where mergerfs was thinking I was out of disk space when I really wasn't.

Hi. trapexit here. Author of mergerfs. Can you provide more information about this statement? epmfs only considers drives with the path available. mfs considers all drives and will clone the path appropriately.
Reply
#22
(2015-06-03, 22:48)trapexit Wrote:
Quote:I also set category.create to "mfs", since I was running into an issue where mergerfs was thinking I was out of disk space when I really wasn't.

Hi. trapexit here. Author of mergerfs. Can you provide more information about this statement? epmfs only considers drives with the path available. mfs considers all drives and will clone the path appropriately.

Sure, I can explain it more. The epmfs mode works the way that you say it does, but it might be confusing for those of us coming from mhddfs. On mhddfs, if a drive fills up it starts using the next drive, even if the next drive is completely empty (i.e. it has no directories or files). On mergerfs, by default it will not use the empty drive (unless the files you are writing are at the top level of the shared filesystem, I suppose). Instead, it will report that the shared filesystem is out of space, even though there is a drive that is completely empty.
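
A concrete (hypothetical) example of what I mean, with made-up paths:

Code:
ls /mnt/disk1    # -> TV/ShowX/  (nearly full)
ls /mnt/disk2    # -> (brand new, completely empty)
# With the default create=epmfs, a write to /mnt/pool/TV/ShowX/episode.mkv can
# only land on disk1, because disk2 has no existing TV/ShowX path -- so the
# pool reports "out of space" even though disk2 is empty.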

Switching to mfs makes mergerfs behave more similarly to mhddfs, in that it uses the empty drive. However, it doesn't behave quite the same way. Here's a quote from https://romanrm.net/mhddfs describing how mhddfs works:
Quote:But what if you try to add new files somewhere inside that /mnt/virtual? Well, that is quite tricky issue, and I must say the author of mhddfs solved it very well. When you create a new file in the virtual filesystem, mhddfs will look at the free space, which remains on each of the drives. If the first drive has enough free space, the file will be created on that first drive. Otherwise, if that drive is low on space (has less than specified by “mlimit” option of mhddfs, which defaults to 4 GB), the second drive will be used instead. If that drive is low on space too, the third drive will be used. If each drive individually has less than mlimit free space, the drive with the most free space will be chosen for new files.
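
To make that concrete, here is a rough shell sketch of the selection logic described in that quote (not actual mhddfs code; the mount points are placeholders):

Code:
#!/bin/bash
# Pick the first member drive with at least MLIMIT bytes free; if none
# qualifies, fall back to the drive with the most free space.
MLIMIT=$((4 * 1024 * 1024 * 1024))      # mhddfs default mlimit: 4 GB
DRIVES="/mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4"

best=""; best_free=0
for d in $DRIVES; do
    free=$(df -B1 --output=avail "$d" | awk 'NR==2 {print $1}')
    if [ "$free" -ge "$MLIMIT" ]; then
        echo "$d"; exit 0               # first drive with enough headroom wins
    fi
    if [ "$free" -gt "$best_free" ]; then
        best="$d"; best_free="$free"
    fi
done
echo "$best"                            # otherwise, most free space wins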

Personally I prefer the way mhddfs handles it, because I believe it tends to keep files a little more organized on the underlying disks than mfs does: files written at similar points in time tend to end up on the same disk. However, it's not a big deal. I'm still extremely happy with mergerfs because it can be configured to report the mtime correctly, which solved this scanning bug in Kodi. Now my scans are super fast!

Thanks for all your hard work on it!
Reply
#23
Understood.

I explicitly didn't want to do the "drive A is full now as I write... let me move it to drive B and continue" behavior due to the possible failure conditions in doing so. I'm not opposed to exploring it again but it does complicate things.

An alternative that would give fairly close behavior is one another user recently requested: in effect, least free space (as opposed to most free space), the intent being to fill one drive before moving on to the next. This would probably need a buffer size similar to mhddfs's mlimit, so that when one drive is "full" the next is selected. It could still result in errors, say when writing a 5GB file to a drive with only 4GB of buffer left, but it would largely behave as expected. An existing-path, least-free-space variant is probably a good permutation of the same idea.

If you have any feature requests or catch any bugs, please submit them to the issue tracker on GitHub.

Interesting about the mtime issue. I hadn't even considered that situation. I knew splitting it up so that each function could have a separate policy would be useful for someone :)
Reply
#24
BTW, if you look on GitHub you can download .deb packages for 64-bit x86. No need to compile from source if you don't want to.
Reply
#25
I've added a feature to mergerfs that is much the same as mhddfs's creation policy: it picks the drive with the least free space, or simply the first drive with at least X bytes free.
Reply
#26
Trying mergerfs now, using category.search=newest.

I switched to aufs some time ago, and while it worked very well, I found I was having a lot of headaches with whiteout files everywhere and the resulting persistent empty directories on the source drives. Pool access was great, but there was occasional really bizarre behavior that was incredibly difficult to diagnose.

So I'm going to give mergerfs a try. The upside is that switching pooling solutions is inherently seamless to your installed applications: just mount the new pool where the old one was and everything should Just Work.

The downside, of course, is that this exposes all those empty folders and .wh.* files scattered *everywhere*. If it works well, though, I'll just purge 'em all.
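
When that time comes, something along these lines should do it (listing first, deleting only after checking the output; the disk paths are placeholders for the underlying source drives):

Code:
find /mnt/disk1 /mnt/disk2 -name '.wh.*' -print       # review the leftover aufs whiteout files
# find /mnt/disk1 /mnt/disk2 -name '.wh.*' -delete    # then remove them
# find /mnt/disk1 /mnt/disk2 -type d -empty -delete   # and prune leftover empty directories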


Edit: Erf, I hope this works with Isengard now. Well, I'll find out shortly :)
Reply
#27
For those interested, I've added an mhddfs-like copy-on-out-of-space feature to mergerfs v2.6.0.
Reply
#28
Ooooh, thanks!

I'll add, I've been using mergerfs for a couple of months now, and it's been flawless. It gets around the fasthash issues mhddfs has, and avoids the spammy whiteout files aufs creates.

Really, it's awesome. Thanks, trapexit!
Reply
#29
@Wintersdark, do you mind sharing your fstab entry?

@trapexit (or anyone who knows): is there a policy for actions (specifically mv/rename) that chooses "same drive if there is enough space, fall back to [SOMETHING ELSE]"? Or is that such a basic function that I'm missing why it would need to be anything else? :P

Thanks!
Reply
#30
(2016-03-03, 05:43)yuuzhan Wrote: @Wintersdark, do you mind sharing your fstab entry?

@trapexit (or anyone who knows): is there a policy for actions (specifically mv/rename) that chooses "same drive if there is enough space, fall back to [SOMETHING ELSE]"? Or is that such a basic function that I'm missing why it would need to be anything else? :P

Thanks!

Sure.

Currently:

Code:
server@server:~ $ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/sdb1 during installation

###########################################################################################Drive 3tb.wd.nas.2
# <file system>                           <mount point>         <type>  <options>              <dump>  <pass>
# /dev/sdc1 - Root partition
UUID=cb60f539-3611-4a2f-ae87-5d76160298c8 /                     ext4    errors=remount-ro      0       1
#
# /dev/sdc3 - 3TB WD Red NAS #2
UUID=84d1bfd0-f926-4d10-b6a6-ccda6e34680b /mnt/disk1            ext4    defaults               0       2
#
# /dev/sdc2 - swap partition
UUID=3584caa1-9cbd-4774-a983-c7721bc527ae none                  swap    sw                     0       0

##########################################################################################Drive 3tb.seagate.1
# <file system>                           <mount point>         <type>  <options>              <dump>  <pass>
# /dev/sda1 - parity partition
UUID=a809414f-89f0-4804-a9bc-e4b2ffda6e87 /mnt/parity           ext4    defaults               0       0

###########################################################################################Drive 3tb.wd.nas.1
# <file system>                           <mount point>         <type>  <options>              <dump>  <pass>
# /dev/sdb2 - 3 TB WD Red NAS #1
UUID=3a315131-a7ec-4cfe-ae97-080fc4919b89 /mnt/disk2            ext4    defaults               0       0

##########################################################################################Drive 2tb.wd.blue.1
# <file system>                           <mount point>         <type>  <options>              <dump>  <pass>
# /dev/sdd1 - 2TB WD Blue
UUID=3dc60174-ae35-449d-ad7a-f5b9d177270f /mnt/disk3            ext4    defaults               0       0

#######################################################################################Drive 1.5tb.wd.green.1
# <file system>                           <mount point>         <type>  <options>              <dump>  <pass>
# /dev/sde1 - 1.5TB WD Green
UUID=875f2541-5fb4-41cb-a0fb-f30be34b58e7 /mnt/disk4            ext4    defaults               0      0


#MergerFS pool mount
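# func.getattr=newest makes the pool report the newest mtime across the member drives,
# which is what works around the library-scan issue discussed earlier in the thread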
/mnt/disk*   /mnt/pool         fuse.mergerfs defaults,allow_other,func.getattr=newest    0   0

Then I've got SnapRAID managing the disk1/2/3/4 and parity entries. Tonight, in fact, I'm ditching the old 1.5TB Green (which is starting to act up) in favour of a nice, fresh 3TB Toshiba.

This setup has worked wonderfully for me, with only one hiccup: I ran into some issues with my old 14.04 LTS installation lacking a needed library at one point (I forget exactly what the issue was now), but a distro upgrade fixed that. As long as you're running a more recent Ubuntu release, you should be in the clear.

Otherwise, I've never had data corruption issues, and it has simply worked in the background with no perceptible performance hit; I can still happily saturate my gigE home network moving files to and from it. And, of course, it's totally transparent too: you can access the drives directly without any issue at all, if for whatever reason you need to.
Reply
