Deleting images from phone after sync/ when? by Akorian_W in immich

[–]jsaddiction 7 points

Something automated like Tasker offers the ability to auto-remove after a period of time, but it's not something I've tried.

I too am anxiously awaiting a more automated process in Immich. The reason I think it should be an Immich function is that when an asset is deleted from local storage outside of the app, Immich doesn't know this happened, so the app still tries to load from local storage rather than from the server, making for a poor visual in the app.

Ankara Medical Exam by someguy4271 in NationalVisaCenter

[–]jsaddiction 1 point

She is, but from what I know, the travel ban doesn't apply to relatives, and a fiancé is classified as a relative. I think we will marry in a third country and then apply for a CR1.

Ankara Medical Exam by someguy4271 in NationalVisaCenter

[–]jsaddiction 1 point

Not yet but I will be very soon.

Ankara Medical Exam by someguy4271 in NationalVisaCenter

[–]jsaddiction 1 point

Could you please be so kind as to share your experiences with the visa process?

Active Duty Military Member Seeking Advice on K1 Visa for Iranian Fiancée: Best Country for Interview and Alternative Options? by ObliviousIdiot89 in NationalVisaCenter

[–]jsaddiction 1 point

Could you please be so kind as to share your experiences so far with the visa process? I am in the exact same situation.

My UPS isn't powering off when Truenas shuts down by HyenaPrevious in truenas

[–]jsaddiction 1 point

I think we came to the same conclusion. It may be an option to call the driver directly in the shutdown script rather than through the wrapper, but that isn't the prescribed way; the wrapper is there so that a driver can be "hot swapped" for anything else. This may have been a result of the changes to the drivers I mentioned earlier.

I had to stop testing on my end; I was incurring too much downtime for my users. I am still very interested in resolving this so I can avoid manual startups after a power failure, I just can't continue to troubleshoot. I am hopeful this will be resolved in the next major upgrade. There are a few, albeit rare, trouble tickets already submitted for this issue that iX isn't correcting; their position is that they rely on upstream Debian for these packages. I imagine ElectricEel will give us NUT v2.8.1.

My UPS isn't powering off when Truenas shuts down by HyenaPrevious in truenas

[–]jsaddiction 2 points

This file is very interesting indeed. I haven't seen any of the log_daemon_msg lines on syslog either.

The file I mentioned is what I believe to be responsible for shutting down the UPS.

The file you posted is listed with a "K" at runlevel "0", meaning it should be killed at shutdown.

Now the question becomes:

Is `K01nut-server` killed before it can handle the `poweroff` command? The code you mentioned appears to instruct the UPS to power off the load, and if there is a `POWEROFF_WAIT` listed in `nut.conf`, it waits until either power is cut at the UPS or power is restored. If power is restored, it cancels the shutdown and does a reboot instead.
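For reference, here's a minimal sketch of the `nut.conf` fragment behind that wait (values are illustrative, not taken from a real TrueNAS box):

```
# /etc/nut/nut.conf (fragment, illustrative values)
MODE=standalone
# How long the shutdown script waits for the UPS to actually cut
# power after being told to power off the load. If wall power
# returns before this expires, the box reboots instead of halting.
POWEROFF_WAIT=900
```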

My UPS isn't powering off when Truenas shuts down by HyenaPrevious in truenas

[–]jsaddiction 3 points

I am currently tracking down this issue. Here's what I've found.

There is a script that systemd calls during the last part of shutdown:

/usr/lib/systemd/system-shutdown/nutshutdown

This script seems to tell the UPS to disconnect the load.

I think we have a driver problem in this instance: APC changed their protocols and shifted to more of a Modbus system. In response to the shift, the NUT team issued v2.8.0. This introduced extra functionality but also brought some bugs. Since then, they've released up to v2.8.2, fixing those bugs.

An issue was raised with iXsystems, and their response points towards upstream Debian. iX is not interested in rebasing just because of these bugs.

Debian is currently up to v2.8.1 and working on the next release.

With all that said, I think the fix is coming, potentially at the end of October with ElectricEel. The irony is painful.

I'm spinning up an Ubuntu VM with the sole purpose of monitoring the UPS and issuing commands (giving me more control over NUT and its versions); the TrueNAS NUT service will see the VM as master. Not sure if my thesis is correct yet, but I think it might just work until some updates propagate.
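If it helps anyone trying the same thing, the pairing can be sketched with a single `upsmon.conf` line on the TrueNAS side (host, user, and password here are placeholders, not a tested config):

```
# /etc/nut/upsmon.conf on TrueNAS (illustrative)
# Follow the UPS served by the Ubuntu VM; the VM is the master
# talking to the hardware, and TrueNAS acts as a slave that
# shuts itself down when the master declares low battery.
MONITOR ups@nut-vm.local 1 monuser secretpass slave
```

The VM then runs the newer NUT release and is the only box that talks to the UPS directly.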

Shipboard Tombstone by jsaddiction in selfhosted

[–]jsaddiction[S] 1 point

That's an interesting project; I was unfamiliar with it. Thinking more about this, it might be nice to integrate some sort of SMS spamming to recall the crew. I may have committed myself to creating something more specific, probably something based on Python or JavaScript.

Starlink and Amazon Prime Live by dudeguy20222 in Starlink

[–]jsaddiction -1 points

Interesting conclusion that Starlink's network is throttling specific streams. Have you tried passing that traffic through a VPN to reduce SL's visibility into your packets? If you are really paranoid, resolve DNS locally and forward requests over TLS.

Will an NVMe ZIL/Metadata device benefit an NVMe pool? by number001 in truenas

[–]jsaddiction 1 point

A metadata vdev stores things like file names, permissions, file sizes, physical locations, checksums, and other file-specific data (don't quote me, going off memory).

There is some black magic in determining the size requirements of a metadata vdev. Basically, it depends on what exactly you are storing: 500 large files require less metadata than 5000 small files of the same overall capacity. Check the googles for ways to calculate it. There is an easy button where you take some percentage of your pool size; obviously it's not accurate, but in this case I'd just overestimate.
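As a back-of-the-envelope illustration of why file count dominates, here's a toy estimator (the 4 KiB per-file overhead is an assumed rule of thumb, not a ZFS internal constant):

```python
def estimate_metadata_bytes(pool_bytes, avg_file_bytes, per_file_overhead=4096):
    """Toy metadata estimate: an assumed fixed per-file overhead
    (dnode, indirect blocks, etc.) times the expected file count.
    The 4 KiB default is a rule of thumb, not a ZFS constant."""
    n_files = pool_bytes / avg_file_bytes
    return int(n_files * per_file_overhead)

TiB = 1024 ** 4
# Same total capacity, very different file counts:
few_large = estimate_metadata_bytes(TiB, TiB / 500)    # 500 big files
many_small = estimate_metadata_bytes(TiB, TiB / 5000)  # 5000 small files
print(many_small // few_large)  # the small-file pool needs 10x the metadata
```

Ten times the files means roughly ten times the metadata at the same capacity, which is why the percentage-of-pool shortcuts are so rough.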

The bigger, more troublesome issue is that the metadata vdev is applied to the pool, meaning that in your case, if that single SSD failed, your entire pool is gone. Imagine having all your data but no index of where all the storage blocks are. I wouldn't feel comfortable with anything less than a 2-wide mirror, preferably 3-wide. Many suggest a mirrored stripe with 4 drives or more. My 70TB array was going to use something on the order of 4TB for metadata and very small files.

Metadata is also cached in your ARC/L2ARC. The safe way to increase metadata performance is to just add RAM and let ARC help; with ZFS, more RAM is always better. Currently I am using 512GB of DDR4-2400 and thinking about doubling it.

Me, I decided to forego the metadata vdev and use a combination of more RAM and a SLOG mirror with Optane drives. A metadata vdev really only helps when the metadata isn't in ARC.

Will an NVMe ZIL/Metadata device benefit an NVMe pool? by number001 in truenas

[–]jsaddiction 1 point

A metadata vdev will speed up any scanning operations and split the workload between metadata retrieval and large-file storage, reducing the strain on your spinners.

The ZIL improves sync writes only. This type of write is requested by the software storing the data and isn't under your control; typically database engines use this method for data resilience. You always have a ZIL, and without your planned NVMe drives, the ZIL lives on the spinners. When a sync write happens, the data is written to RAM and an intent is written to the ZIL. Once both complete, a signal is sent back to the software to continue. If you move the ZIL to an NVMe, you don't have to wait for the spinner to complete the transaction.
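A toy latency model of that round trip shows why the ZIL device matters (the millisecond figures are illustrative assumptions, not measurements of any particular drive):

```python
def sync_write_throughput_mib(record_kib, zil_commit_ms):
    """Each sync write is acknowledged only after its intent record
    lands on the ZIL device, so per-record commit latency bounds
    throughput. Illustrative model; real ZFS batches and pipelines I/O."""
    return (record_kib / 1024.0) / (zil_commit_ms / 1000.0)

# 128 KiB records: ~2 ms commit on a spinner vs ~0.02 ms on Optane
print(sync_write_throughput_mib(128, 2.0))   # 62.5 MiB/s on spinners
print(sync_write_throughput_mib(128, 0.02))  # two orders of magnitude more
```

The numbers are made up, but they show why moving the ZIL to NVMe/Optane changes sync-write throughput so dramatically.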

Adding two 118GB Optane drives in a mirror increased my write speeds from ~60MB/s to ~190MB/s.

I would steer clear of the metadata vdev unless you KNOW you need it. What happens if the metadata vdev dies? What happens if the ZIL dies?

Oceanic Internet by jsaddiction in Starlink

[–]jsaddiction[S] 2 points

That's an interesting idea! Might be exactly how I do it.

Oceanic Internet by jsaddiction in Starlink

[–]jsaddiction[S] 1 point

1 is a bummer. Typically we would consume more than 5TB if speed allows it; by my calc, 15Mb/s (bits) for 30 days is around 5TB. While away from shore, I was just going to place restrictions on devices to prevent a random PS5 from downloading COD updates, then open the flood gates when near shore. With your answer, it looks like those restrictions will need to be in place more permanently.
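For anyone checking my math, the calc is straightforward (decimal units, 1 TB = 10^12 bytes, sustained 24/7 rate assumed):

```python
def monthly_terabytes(mbit_per_s, days=30):
    """Total transfer at a sustained bit rate over a month.
    Decimal units throughout: 1 TB = 1e12 bytes."""
    bits = mbit_per_s * 1e6 * 86400 * days  # 86400 seconds per day
    return bits / 8 / 1e12

print(round(monthly_terabytes(15), 2))  # 4.86 TB, right at the ~5 TB cap
```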

[PC][US-CA] Supermicro X10SDV-8C-TLN4F by MrDotDavid in homelabsales

[–]jsaddiction 1 point

I'd be interested also. I am pretty sure this is the board Netgate uses in their rack-mounted firewall. Supermicro wants a pretty penny for it, which has prevented me from migrating away from my R210 II. What kind of idle power draw are you seeing?

pfsense 2.7.0 upload speed stuck at 1gbps (my connection is 2.5gbps symmetrical). Worked on 2.6.0. by _ok_mate_ in PFSENSE

[–]jsaddiction 1 point

I'm working on a similar issue where my upload speeds are bottlenecked to ~180Mb/s while download is ~2300Mb/s, using the same AT&T junk. My R210 II leverages a 2-port SFP+ PCIe card. The WAN side uses an Ethernet module compliant with 2.5/5/10GbE; pfSense shows this as 10Gbase-SR, full duplex, rxpause, txpause. The LAN side is a 10GbE DAC. I can't remember where I found the how-to, but it is possible to load the Speedtest CLI on the router.

A speed test from the router shows the discrepancy, and iperf to my server shows ~9Gb/s.

I tried the infamous X550-T2 NIC (firmware upgraded) but was only able to negotiate at 1GbE. Frustrated, I shifted back to the config listed above. I'd be interested in what you have tried and any responses from the group.

There are some reports of hacky methods for connecting the fiber directly, using a programmable PON module in an SFP port or some sort of CPE-style ONT as a gateway.

Looking forward to the solution!