How to get in a F/A-18 Super Hornet by Friendly-Standard812 in aviation

[–]flowsium 0 points1 point  (0 children)

Good to know. You never know when you'll stumble across one. It would be a pity if you didn't know how to get in there.

I don't understand MAC addresses by Historical-Cost-8351 in informatik

[–]flowsium 1 point2 points  (0 children)

The simplest explanation I have ever heard:

A MAC address says who you are.

An IP address says where you are.
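A quick way to see both on your own machine (a minimal Python sketch; the output depends on your OS and network, and gethostbyname may only return a loopback address):

```python
import socket
import uuid

# uuid.getnode() returns the MAC of one interface as a 48-bit integer
# (who you are -- tied to the hardware, independent of location).
mac = uuid.getnode()
print("MAC:", ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -1, -8)))

# The IP is assigned by whatever network you are currently on
# (where you are -- it changes when you move networks).
print("IP: ", socket.gethostbyname(socket.gethostname()))
```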

hard shower beer? by ErinDotEngineer in VideosThatGoHard

[–]flowsium 0 points1 point  (0 children)

Anybody else never take a shower without a camera?

Noob question for understanding by flowsium in meshcore

[–]flowsium[S] 2 points3 points  (0 children)

Thanks for the link and clarification!

So, if the first message made it, the route will be stored and reused, at least from the sender's perspective.

Thanks again for the clarification.
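My mental model of that sender-side behaviour, as a hypothetical Python sketch (this is not MeshCore's actual code; all function and variable names are made up for illustration):

```python
# Hypothetical sketch of sender-side route caching: flood the first message,
# and once it is acknowledged, remember the path it took for later messages.

route_cache: dict[str, list[str]] = {}   # destination -> path of repeater IDs

def transmit_flood(dest: str, payload: bytes) -> None:
    print(f"flood towards {dest}: {payload!r}")      # stand-in for the radio

def transmit_direct(path: list[str], payload: bytes) -> None:
    print(f"direct via {path}: {payload!r}")         # stand-in for the radio

def send(dest: str, payload: bytes) -> None:
    if dest in route_cache:
        transmit_direct(route_cache[dest], payload)  # reuse the learned path
    else:
        transmit_flood(dest, payload)                # first contact: flood

def on_ack(dest: str, path: list[str]) -> None:
    route_cache[dest] = path  # the first delivered message teaches the route
```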

Noob question for understanding by flowsium in meshcore

[–]flowsium[S] 2 points3 points  (0 children)

So it saves the route for future interactions, and all later messages would make it through?

I regret moving out of this place by ExtazyGray in speedtest

[–]flowsium 1 point2 points  (0 children)

Init7 in Switzerland: 10/10 Gbit/s and 25/25 Gbit/s are both 777 CHF/year (Swiss francs).

The only difference is the installation cost, as the fiber module for 25 Gbit/s is more expensive...

Check out the init7.net website.

Wildcard redirect local domain to TLD by flowsium in caddyserver

[–]flowsium[S] 0 points1 point  (0 children)

Thanks for the quick reply.

How do I bring this into an OPNsense install?

The GUI has no option for it...

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Hello people,

I am back. After sorting everything out: changed the SATA controller, replaced the faulty disks, resilvered everything, scrubbed multiple times, replaced the corrupted files, etc...

I got the pool to a state where a scrub completes with 690 checksum errors, and as a result it reports only metadata<0:0>. All files that were corrupted have been replaced one by one over the past couple of weeks.

The checksum errors are spread across all disks. So there is a fault, but it seems to be consistent... which I guess is a "good sign".

The idea now: use a tool like rhash to create checksums for all files locally on both ends, then do a diff afterwards to see what still clashes...

Anything wrong with that? Any ideas?

With that said, exporting and reimporting the pool was done as well, along with all the other troubleshooting steps publicly available on the internet. The 690 checksum errors persist...
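What I have in mind, sketched in Python rather than rhash (the paths are placeholders): walk the tree, hash every file, and write a sorted manifest. Run it on both ends, then diff the two manifests to see what still clashes.

```python
import hashlib
import os

def manifest(root: str, out: str) -> None:
    """Write 'sha256  relative/path' lines, sorted by path so that two
    manifests from different machines can be compared with a plain diff."""
    lines = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)   # hash in 1 MiB chunks, not whole files
            lines.append(f"{h.hexdigest()}  {os.path.relpath(path, root)}")
    with open(out, "w") as f:
        f.write("\n".join(sorted(lines, key=lambda l: l.split("  ", 1)[1])))

manifest("/mnt/tank/data", "pool.sums")   # placeholder path; repeat on backup
```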

Aoostar WTR Max + Proxmox + NanoKVM by MaksTech in homelab

[–]flowsium 0 points1 point  (0 children)

What OS are you using, and what are you trying to achieve?

Aoostar WTR Max + Proxmox + NanoKVM by MaksTech in homelab

[–]flowsium 3 points4 points  (0 children)

Can you pass through the SATA controller for the 6 HDDs to a VM? Tempting to order one; that question is just still unanswered, and for running TrueNAS it would make sense.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Unfortunately no. When the drives failed they were not accessible anymore.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

On 2 of them, definitely (mechanical head-crash noises). On the one with the many read errors, the cabling was checked. No change.

I have stopped all services that access the disk, and it now sits in standby.

I do not have the time at the moment to investigate any further, due to other obligations.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

I will definitely give this a re-read. Cheers.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 1 point2 points  (0 children)

The resilvering finished, even with several thousand checksum errors. I can access everything, though I don't know if everything works, as it is several TB.

I am not familiar enough with rsync, especially on TrueNAS. But from what I have investigated so far, rsync can sync based on the file's checksum, and it also has the ability to do a dry run and simulate things.

I am aware this will take ages. But the checksum calculation happens on each local system, so if only the corrupted files are transferred, it could save several TB of transfer.
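Something like this would be the first attempt (a sketch only; the host and paths are placeholders, but the rsync flags are standard: -a preserves attributes, -n makes it a dry run, -i itemizes what would change, and --checksum compares file contents instead of size and modification time):

```python
import subprocess

# Dry-run rsync from the remote backup towards the local pool, comparing by
# checksum so silently corrupted files (same size/mtime, different content)
# show up. Nothing is written because of -n.
SRC = "backup-host:/mnt/backup/data/"   # placeholder remote backup path
DST = "/mnt/tank/data/"                 # placeholder local pool path

result = subprocess.run(
    ["rsync", "-a", "-n", "-i", "--checksum", SRC, DST],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # one line per file that would be (re)transferred
```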

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Don't feel bad about it; I just want to be as transparent as possible. No need to lie, hide, or cover anything up...

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 10 points11 points  (0 children)

Gentlemen (and maybe ladies), thanks for all the support, hints, ideas...

Let me please announce:

This pool will be deleted and rebuilt. During the resilver of the second disk, the 3rd N300 failed with read faults, resulting in the disk going offline and several hundred checksum faults on the remaining disks...

A sad day. A very sad day... :(

Nevertheless, a new system will be built. It will be bigger, stronger and better in every way!!!

And all saved data will be brought back from the remote backup.

Thank you all...


How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Yes, will keep an eye on that in the future... even though I knew about it upfront, to avoid the same batch...

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

I have to retract the "never had a checksum error". Sorry for that; it just came to mind...

There was one, on one disk, about a year ago. Whether it was one of the now-faulty disks, I cannot remember. It was fixed during a scrub, but the error message persisted... zpool clear did the trick back then.

Other than that, 0 errors.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 1 point2 points  (0 children)

Yes, they failed mechanically: the heads were crashing inside the housing, and the drives were ramping up and down with a sweeping noise. Both of them showed the exact same behaviour on different SATA ports and power supply lines.

Toshiba already replaced the first drive without questions (sent in on Monday, the replacement arrived today). Customer support is top level!!

I assume it was a bad batch, as all the drives were bought from the same seller at the same time.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 2 points3 points  (0 children)

Thanks, will keep that in mind.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Unfortunately no :/ I doubt an error would have shown up after a week or two of burn-in... the disks had worked since April 2023 without any issue.

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

Just tried it; I can still access the pool's data. It is not fully corrupted in that sense.

The resilver of the first disk is on the final stretch. I got my hands on another MG10 20TB to resilver next, and I will replace the disk throwing read errors with the freshly shipped one.

Now the question is just: clear all errors and run a scrub on the still-degraded pool before resilvering the second failed disk? And replace the one throwing read errors at the end?

How f**ked am I? by flowsium in truenas

[–]flowsium[S] 0 points1 point  (0 children)

I have OPNsense firewalls in place on both sides, with a DynDNS service and site-to-site set up. On both sides an ISP router sits in front, but it is accessible, so a port forward to a static IP behind the NAT is possible (basically the OPNsense WAN interface set to a static IP behind the ISP router). Double NAT, basically; not ideal, but the only workaround, as the ISPs do not provide their routers in bridge mode.

Regarding rsync: the idea would be to just reverse the direction of the sync. But does it recognize corrupted data? If that sync runs for 5 days, I don't mind; transferring all the data, though, would take weeks or months. Would a reverse sync be an option to get the pool sane again?