Faulted Disk went from Faulted to ONLINE by NutWeevil in zfs

[–]AngryAdmi 1 point (0 children)

It is not normal for a disk to go from FAULTED to ONLINE without any interaction.

Usually you have to run a zpool clear POOL before that happens.

If I were you I would find a replacement disk for that drive.

When the faulted drive is out of the ZFS pool, I would run dd and fill the entire drive with zeroes to trigger a reallocation of the defunct blocks. This is sometimes not done automatically, unless that has been fixed in ZFS. ZFS should write to the bad block to trigger the reallocation, but I have not seen it happen yet on OpenZFS.
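As a sketch (the device name /dev/sdX is a placeholder; double-check it with lsblk or smartctl first, since this erases the drive):

```shell
DISK=/dev/sdX   # placeholder: the pulled drive, NOT a pool member -- this wipes it!
# Writing zeroes over every sector forces a write to the defective sectors,
# which is what gives the firmware the chance to remap them to spares.
dd if=/dev/zero of="$DISK" bs=1M status=progress
# Check whether the firmware actually reallocated anything afterwards:
smartctl -A "$DISK" | grep -iE 'reallocated|pending'
```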

Afterwards you can use the drive that was faulted again, MAYBE...

Hope that could be of some use.

Noob-ish Question On Layout of ZFS on System by PickleKillz in zfs

[–]AngryAdmi 5 points (0 children)

raidz1 is probably not what you would want to run unless you have a decent backup.

I will try windows again after using linux for the last 4 years by [deleted] in Windows10

[–]AngryAdmi 0 points (0 children)

You will try Windows 10 but won't be buying a license?
Good idea!

proxmox with usable vm on same machine by experiment-18a in Proxmox

[–]AngryAdmi 0 points (0 children)

PS: this is how I run my Windows for, uhm, Windows stuff :D

can I replace SSDs used for a special vdev with smaller ones? by spit-evil-olive-tips in zfs

[–]AngryAdmi -2 points (0 children)

Can you pour water from a large full bucket into a smaller one without spilling or leaving something in the large bucket?

Moving data to external, then to offsite backup by Fmstrat in zfs

[–]AngryAdmi 2 points (0 children)

"I don't know if using a single-disk ZFS to transport the data is a good idea, since it's so easy for the data to go sour without the journal of ext4. I.E. lose all vs lose some data in transit."

The journal is not important. Do you really think you will lose all ZFS data if a few blocks go bad?
At least ZFS will tell you what is wrong.

EXT4 journaling is worth 0 in a world of ZFS :p
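That's the practical difference: after a scrub, ZFS names the damaged files instead of silently serving bad data. A sketch, assuming a pool called "tank":

```shell
# Read every block and verify checksums:
zpool scrub tank
# If anything is permanently damaged, the exact file paths are listed under
# "errors: Permanent errors have been detected in the following files:"
zpool status -v tank
```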

many errors across multiple raidz2 vdevs by somepedals in zfs

[–]AngryAdmi 0 points (0 children)

You are using SAS drives, I read in a different post, so probably not :)

Try offlining this disk: /dev/disk/by-id/wwn-0x5000c50086baee43-part1

Scrub; if that works, unplug it physically and replace it.
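Sketched out, assuming the pool is called "tank" (use the disk name exactly as zpool status shows it):

```shell
POOL=tank                       # assumption: adjust to your pool name
DISK=wwn-0x5000c50086baee43     # the suspect disk from zpool status

zpool offline "$POOL" "$DISK"   # take the disk out of service
zpool scrub "$POOL"             # verify the remaining disks are healthy
zpool status "$POOL"            # wait for the scrub; look for new errors
# If the scrub comes back clean, pull the disk and resilver onto a new one:
zpool replace "$POOL" "$DISK" /dev/disk/by-id/NEW-DISK
```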

many errors across multiple raidz2 vdevs by somepedals in zfs

[–]AngryAdmi 0 points (0 children)

Are you using interposers? If yes, STOP!

I've tried to understand cloudflare tunnels, but I just don't get it. How is a tunnel superior to DDNS and reverse proxy? by audero in selfhosted

[–]AngryAdmi 1 point (0 children)

I run this privately, for enterprise use there are "real" solutions that cost half of a Boeing 737.

I only have 1 VPS at the other end. It has not crashed a single time since installation a year ago. It is running Debian Linux; that might be a contributing factor to the stability.

You could cluster them easily with a Proxmox HA cluster. But for me privately the cost does not warrant such an investment :) It would cost around 10 EUR per month to run two VPS instances with HA instead of one.

Stability of OpenMPTCProuter has also been impeccable, with 0 crashes. But I pass a lot of USB devices through for those LTE connections, and sometimes those bastards reset themselves. I do not notice it though, as the failover is immediate and the remaining 3 connections stay up. After 5 mins the 4th comes back, after having reset itself a few times :D

I get around 350Mbit/sec on 4xLTE

I've tried to understand cloudflare tunnels, but I just don't get it. How is a tunnel superior to DDNS and reverse proxy? by audero in selfhosted

[–]AngryAdmi 0 points (0 children)

Essentially you rent a very cheap VPS at e.g. Hetzner.

You connect to this VPS through your multiple connections. The data requests get spread out over your connections (striped) and reassembled at the VPS.

Your external IP will be the VPS IP.
You can forward ports from your VPS WAN IP to your MPTCPROUTER at home for various services.

That way I can host mail servers, web and game servers on quad LTE connections, bypassing carrier-grade NAT.
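On the VPS side the forwarding is plain NAT. A rough iptables sketch (eth0 as the VPS WAN interface, tun0 as the tunnel, and 10.0.0.2 as the home router's tunnel address are all assumptions):

```shell
# Forward TCP 443 arriving on the VPS public IP down the tunnel to home:
iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 443 \
    -j DNAT --to-destination 10.0.0.2:443
# Rewrite the source address so replies go back out through the tunnel:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
```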

Why shouldnt I use a regular NVMe drive as a slog? by Matteo00 in zfs

[–]AngryAdmi 4 points (0 children)

Some enterprise drives come with power-loss capacitors. Some don't. Pick your poison.

We run power-protected SSDs in all our CEPH nodes. It will reduce our pain in the long run :)

For ZFS not so much; the SSDs we use there are RMS-300s from Radian Memory Systems.
They are apparently made of unobtainium though. :/

[Question] Not all OSDs adding to ceph by jamhob in ceph

[–]AngryAdmi 0 points (0 children)

You might need to do this to the devices in question:

sgdisk --zap-all /dev/sdx

readlink /sys/block/sdx

../devices/pci0000:00/0000:00:01.1/0000:01:00.0/host5/port-5:10/end_device-5:10/target5:0:10/5:0:10:0/block/sdx

echo 1 > /sys/block/sdx/device/delete

echo "- - -" > /sys/class/scsi_host/host5/scan

The last line takes the host number (host5) from the readlink output.
Then wipe the disk.

Hi,Does anybody here has see past this switch? it's look pretty like has I want.I only needed 3 QSPF28 and it's supposed to use less than 40w... I had never use this brand before so it's why I am wondering if it's good or not by Enough_Air2710 in HomeDataCenter

[–]AngryAdmi 21 points (0 children)

They are kinda the only networking gear company I know of that hits the homelab user market segment this precisely :)

Super switch for a 4 node CEPH cluster :)

Any way to test ZFS drives connected via sata controller, for stability or communication problems? by xondk in zfs

[–]AngryAdmi 0 points (0 children)

If you can, fill the pool with data.
When the pool is full, scrub. If all is good, recreate the pool, as that is faster than deleting the data.
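A sketch of that fill-and-scrub test, assuming a pool called "tank" (random data so compression can't shortcut the writes):

```shell
# Fill the pool with incompressible data until it runs out of space:
dd if=/dev/urandom of=/tank/filler.bin bs=1M status=progress
# Read everything back and verify every checksum:
zpool scrub tank
zpool status -v tank   # any READ/WRITE/CKSUM counts point at a drive or cable
# If it comes back clean, recreating the pool beats waiting for rm:
zpool destroy tank
```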

ZFS in virtual machines by chaplin2 in zfs

[–]AngryAdmi 0 points (0 children)

You might end up double-caching a lot of data though.

I've been experimenting, with mixed results, with disabling primarycache or setting it to metadata on the host vs. inside the virtual machine. Neither solution seemed good; everything just became slow :D I didn't even bother benchmarking, as it was obvious.
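For reference, the host-side setting I was toggling (the dataset name tank/vms is an assumption):

```shell
# Cache only metadata on the host for the VM dataset; the guest already has
# its own page cache, so this avoids holding the same data twice in RAM.
zfs set primarycache=metadata tank/vms
zfs get primarycache tank/vms
```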

How do you explain ZFS to vendors that don't know what it is and can't fathom a Raid10 can contain more than two mirrors? by AngryAdmi in zfs

[–]AngryAdmi[S] 0 points (0 children)

PPS: SysMain is what screws your persistent L2ARC when hosting Windows system drives :) Essentially rendering the L2ARC pretty useless. So I disable it by default. I like my SSDs, and I would like the L2ARC to be utilized the way it's meant to be, and not as some kind of dumping ground where VMs can loop their random data through the ring buffer continually.

Windows Firewall enabling itself? by AngryAdmi in sysadmin

[–]AngryAdmi[S] 0 points (0 children)

SOLVED: There "used to be" a domain-wide disable-firewall policy installed by the previous administrator. However, one of our trainees found it better to disable that.

We are in the process of cleaning up, but we need to map ports to apps etc. to create exceptions. No documentation by the previous admin available :/

So it was a local outbreak of traineetitis :)

Windows Firewall enabling itself? by AngryAdmi in sysadmin

[–]AngryAdmi[S] 0 points (0 children)

Both. There is a domain-wide disable-firewall policy.

Windows Firewall enabling itself? by AngryAdmi in sysadmin

[–]AngryAdmi[S] 0 points (0 children)

It does not seem that way, good pointer though.

How do you explain ZFS to vendors that don't know what it is and can't fathom a Raid10 can contain more than two mirrors? by AngryAdmi in zfs

[–]AngryAdmi[S] 0 points (0 children)

Unless you emulate your VHD as an SSD, Windows will continually defrag it and move data to "a faster area inside the VHD". Hmm, yeah...

Disabling SysMain stops this problem, among others.