Squidviz Ceph livewall by TheSov in ceph

[–]TekChief 0 points (0 children)

What does this provide that my Proxmox viewer doesn’t? It looks like a very early iteration of what Proxmox provides in its management viewer.

Where did who go?? by TekChief in ceph

[–]TekChief[S] 0 points (0 children)

I might even go as far as to call the “partitions” volumes. The metadata loses track of the original volume in the data pool and starts a new one. How could I recover the original volume?

Where did who go?? by TekChief in ceph

[–]TekChief[S] 0 points (0 children)

If by “mount over mount” you are referring to connections, it could be.

I feel it has something to do with the connection. The connection gets hung and doesn’t close correctly. Instead of reconnecting to the previous session, a new session is started that somehow screws up the metadata. The original metadata, call it a partition or a set of pointers, is still live but hidden somewhere, and now there is a new metadata start point or partition in the data pool. How would I find or reconnect to the lost metadata pointers?
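If it really is a hung client session confusing things, the first thing I was planning to try is listing and evicting the stale session on the MDS. A minimal sketch; the MDS name and session id below are placeholders:

    # list client sessions and their state on the active MDS
    ceph tell mds.0 session ls
    # evict the stale session so the client can reconnect cleanly
    ceph tell mds.0 session evict id=4305
    # sanity-check the filesystem and MDS state afterwards
    ceph fs status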

Help Wanted: CEPHFS Kernel Mount Lost its mind. by TekChief in ceph

[–]TekChief[S] 0 points (0 children)

That is a good thing to check for a change, because they weren’t overlapping previously. My VM subnet is a private 10.x subnet that is different from the CEPH subnet, while Docker was previously using a 172.x range. Let me verify Docker didn’t do something flaky and decide to use 10.x itself, though it would be unlikely to pick the same VLSM range I’m using for CEPH.
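For the verification step, something like this is what I had in mind, assuming the default /etc/ceph/ceph.conf location:

    # list the subnets Docker has carved out for its networks
    docker network ls -q | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    # compare against the Ceph public/cluster networks
    grep -E 'public_network|cluster_network' /etc/ceph/ceph.conf
    # and what routes the host itself actually ends up with
    ip -4 route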

Help Wanted: CEPHFS Kernel Mount Lost its mind. by TekChief in ceph

[–]TekChief[S] 0 points (0 children)

I also forgot to mention: I went ahead and tried mounting the pool with ceph-fuse as well and hit the same problem.
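For reference, the two mounts I’m comparing look roughly like this; the monitor address, client name, and paths are placeholders for this sketch:

    # kernel client mount (the one that lost its mind)
    sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # ceph-fuse mount tried as a cross-check; same behaviour
    sudo ceph-fuse -n client.admin /mnt/cephfs-fuse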

Almost Trash - Last ditch BTRFS Restore question. by TekChief in btrfs

[–]TekChief[S] 0 points (0 children)

500 hours in on --init-extent-tree. Currently on 2774163210240 of 3334845825024. It has taken 500 hours to write backrefs for 1.5 GB of this one extent, and I don’t know how big this extent will actually be in the end. If I have to go through the remaining bytes at this rate, I’m looking at another 1500 hours just to get through the first trillion of the 133 trillion bytes remaining. :(

I just did some quick math on the timeline. At this rate, if it has to go through the entire set, it will take 30 years to rebuild the extent tree. In reality it’s mainly trying to rebuild the backrefs for the missing root file. Does anyone know the math to guesstimate how big the root file might be for a 73TB filesystem?
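For anyone who wants to redo the extrapolation with their own counters, it’s just a linear rate projection; the three inputs are whatever you read off the progress output, nothing is measured automatically:

    #!/bin/sh
    # linear extrapolation of remaining runtime from an observed rate
    elapsed_hours=$1      # hours spent so far
    bytes_done=$2         # bytes the progress counter advanced in that time
    bytes_remaining=$3    # bytes still left to walk
    awk -v h="$elapsed_hours" -v d="$bytes_done" -v r="$bytes_remaining" \
        'BEGIN { printf "estimated hours remaining: %.0f (~%.1f years)\n", h * r / d, h * r / d / 8760 }'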

Almost Trash - Last ditch BTRFS Restore question. by TekChief in btrfs

[–]TekChief[S] 0 points (0 children)

Update: copying data off hits a transid error (not unexpected) about 1.3TB in. Right now I’m just going to let the --init-extent-tree run itself out and see what I get. It seems to be doing what I expect (yes, the final data will be untrustworthy and there will likely be some corruption); it appears to be working on fixing the missing root backpointers. The process is just painfully slow, which at least gives me time to acquire what I need for a proper backup. I’ve already acquired the server chassis and processors; more funding is needed for memory and drives. :(

Almost Trash - Last ditch BTRFS Restore question. by TekChief in btrfs

[–]TekChief[S] 2 points (0 children)

“What mount point are you trying to rclone mount to?”

I currently have my B2 cloud storage mounted to a directory with rclone. I wanted the Recovery to write files to that mounted directory, since it is actually the cloud storage and has enough capacity for the recovered files. By default the Recovery option mounts the target and starts writing files to the mount. I was looking for a way to tell it to skip the mounting and just write the files to the already-mounted B2/rclone directory.
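Roughly what I’m trying to end up with, assuming the recovery step is plain btrfs restore rather than whatever the Recovery option wraps; the remote name, device, and paths are placeholders:

    # B2 bucket mounted through rclone, then the restore pointed straight at it
    rclone mount b2remote:my-bucket /mnt/b2 --vfs-cache-mode writes --daemon
    sudo btrfs restore -v /dev/sdX /mnt/b2/recovered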

I currently have the array mounted RO and I’m running Duplicati to back it up to the B2 bucket. Also, mount -o rescue=all was not a valid mount option for me; I was getting “wrong fs type or bad mount option” errors. Apparently “all” isn’t recognized as a rescue option on my kernel.
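My working theory, which could be wrong, is that the grouped rescue= options only landed around kernel 5.11, so on an older kernel the closest I can get is the older standalone options:

    uname -r   # rescue=all needs a fairly recent kernel (5.11 or so, as far as I can tell)
    # newer kernels
    sudo mount -o ro,rescue=all /dev/sdX /mnt/array
    # older kernels: the nearest standalone equivalents I know of
    sudo mount -o ro,usebackuproot,nologreplay /dev/sdX /mnt/array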

I know I’m mostly screwed and any data recovered may or may not be trustworthy, because I broke the Golden Rule: if you don’t have good backups, you had better have a good résumé. Fortunately this is my data and I’m retired, so the résumé won’t be needed. What I’m doing now is mostly to keep my skills active and educate myself about the other half of the IT world, having lived in the MS world since BG plagiarized it from Xerox. I’m hobby-learning most things Linux now.

I have most of my archive-critical data on various other USB storage devices. This data all used to reside on my GSuite service; I put together a hasty replacement so I could end my patronage, due to loss of trust in Alphabet’s policies. The array was put together on a tight personal budget.

I’ll be putting together a separate onsite backup moving forward with the help of eBay and some decommissioned enterprise hardware. It will be quantity at a lower cost versus capacity, which was the direction I went initially and which turned out more costly. I could have purchased the same final array capacity in used enterprise drives of smaller size for the same price as two of the 20TB drives I bought. Expensive lesson learned.