README.md

The following open-source projects seem to be able to help reach my goals.
- [SnapRAID](https://www.snapraid.it). Provides data parity, backups, and checksumming of existing backups.
- [Claims to be better than UNRAID's](https://www.snapraid.it/compare) own parity system, with the ability to 'fix silent errors' and 'verify file integrity', among others.
- [BTRFS Filesystem](https://btrfs.wiki.kernel.org/index.php/Main_Page). Similar to ZFS in that it provides the ability to 'send/receive' data streams (a la `zfs send`), with the added benefit that I can run individual `disk scrubs` to detect hardware issues that require me to restore from snapraid parity. **My observed Btrfs performance is poor compared to the XFS filesystem on Linux.** *Since we use Btrfs only for the 'data' disks in the slow mergerfs pool, we are not sensitive to speed.* (See the sketch after this list.)
- **XFS filesystem for the NVMe cache on an mdadm array**. After finding bugs and instability in my ZFS+NFS+mergerfs implementation, my cache disks are now formatted as XFS in RAID1. I did not use native btrfs RAID1 here because btrfs performance was poor (a 50% throughput penalty). XFS was able to match raw ZFS speeds (without ARC) at ~900MB/s.
- [MergerFS](https://github.com/trapexit/mergerfs). FUSE filesystem that allows me to 'stitch together' multiple hard drives with different mountpoints and takes care of directing I/O operations based on a set of rules/criteria/policies.
- [snapraid-btrfs](https://github.com/automorphism88/snapraid-btrfs). Automation and helper script for BTRFS-based snapraid configurations. Using BTRFS snapshots as the data source for running `snapraid sync` allows me to continue using my system 24/7 without data corruption risks or downtime when I want to build my parity/snapraid backups.
- [snapraid-btrfs-runner](https://github.com/fmoledina/snapraid-btrfs-runner). Helper script that runs `snapraid-btrfs`, sending its output to the console, a log file, and email.
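As a rough sketch of the Btrfs features mentioned above (the mountpoints and snapshot paths are placeholders, not the ones used in this setup), a per-disk scrub and a send/receive replication look like this:

```
# Scrub one data disk to detect checksum/hardware errors (placeholder mountpoint)
btrfs scrub start /mnt/disk1
btrfs scrub status /mnt/disk1

# Replicate a read-only snapshot to another disk, similar in spirit to `zfs send`
btrfs subvolume snapshot -r /mnt/disk1/data /mnt/disk1/.snapshots/data-backup
btrfs send /mnt/disk1/.snapshots/data-backup | btrfs receive /mnt/disk2/
```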
mergerfs.md
# MergerFS
**WARNING: Using ZFS + NFS (non-ZFS native export) + mergerfs causes [ZFS mount instability and crashes](https://github.com/trapexit/mergerfs/discussions/1098).**
MergerFS is used to "merge" all of the physically distinct disk partitions (`/mnt/disk*`) into a single logical volume mount.
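As a minimal sketch (not copied from this repo; the mountpoint, branch glob, and option choices are assumptions), an `/etc/fstab` entry for such a pool could look like:

```
# Merge /mnt/disk1, /mnt/disk2, ... into one pool at /mnt/storage (paths assumed)
/mnt/disk* /mnt/storage fuse.mergerfs allow_other,cache.files=off,category.create=mfs,moveonenospc=true,minfreespace=50G 0 0
```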
### Policies
Recall that I chose ZFS with a RAID1 mirror for this purpose, to provide assurance that my data will not be lost before it gets moved onto the parity-protected snapraid slow-storage disks.
## NFS instability
`/mnt/cached` is my mergerfs pool and ZFS mountpoint on my local system. The `mergerfs` process seems to be crashing at some point due to NFS. I haven't yet found the root cause of this issue and have tried everything from upgrading the kernel, ZFS, nfs-kernel-server, and libfuse to upgrading the OS (Ubuntu 20.04 to 20.10).
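For context, the pool is exported over NFS roughly like the line below (a sketch, not the exact export from this setup; the client subnet and `fsid` value are assumptions, though FUSE filesystems such as mergerfs generally need an explicit `fsid` to be exportable):

```
# /etc/exports (assumed client range and fsid)
/mnt/cached 192.168.1.0/24(rw,sync,no_subtree_check,fsid=100)
```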
The crashes seem to be more pronounced when using the NFSv4 protocol. NFSv3 is more stable, but it is a stateless protocol and I would much prefer v4-only NFS shares. I have disabled v4 and forced v3 for the time being to try to make my implementation stable.
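One way to force v3 on the server side (a sketch, assuming a distribution that reads `/etc/nfs.conf`; the exact keys and the restart step may differ on other setups):

```
# /etc/nfs.conf - disable NFSv4 on the server, keep NFSv3
[nfsd]
vers3=y
vers4=n
```

Then restart the server, e.g. `systemctl restart nfs-kernel-server`.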
Attaching strace to the running mergerfs process initially fails because of the kernel's Yama ptrace restrictions:

```
strace: Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf: Operation not permitted
strace: attach: ptrace(PTRACE_SEIZE, 2081428): Operation not permitted
root@nas:/home/gfm# echo "0"|sudo tee /proc/sys/kernel/yama/ptrace_scope
0
```
If that doesn't work, set `kernel.yama.ptrace_scope` to 0 in `/etc/sysctl.d/10-ptrace.conf` and reboot.
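The persistent form of that setting looks like this (the file path comes from the error message above; whether your distro already ships this file is an assumption):

```
# /etc/sysctl.d/10-ptrace.conf
kernel.yama.ptrace_scope = 0
```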
According to the mergerfs developer, strace isn't very helpful here anyway; the proper way to debug mergerfs is with gdb.
### gdb debugging mergerfs
```
# If it's crashing then strace is pretty useless. Need a stack trace from gdb.

gdb path/to/mergerfs

# at the (gdb) prompt, run it with your normal arguments:
run -f -o options branches mountpoint

# when it crashes:
thread apply all bt
```
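If mergerfs is already mounted and misbehaving, attaching to the live process is an alternative (a sketch; it assumes `pidof` finds a single mergerfs process and that ptrace is permitted as above):

```
# Attach gdb to the running mergerfs process, then at the (gdb) prompt:
gdb -p "$(pidof mergerfs)"
thread apply all bt
```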
### Remove ZFS from the equation by using XFS RAID 1
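The rough shape of that change is sketched below (device names and the mountpoint are placeholders, not the ones used in this build): mirror the two NVMe cache drives with mdadm and format the array as XFS.

```
# Create a RAID1 mirror from the two NVMe cache drives (placeholder devices)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Format the array as XFS and mount it as the cache branch
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/cache
```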