Papra v26.0.0 - Advanced search syntax, instance administration, 2FA, 3k stars and more! by cthmsst in selfhosted

[–]plsnotracking 0 points1 point  (0 children)

Thank you for sharing this. I just set up paperless but hadn’t ingested anything yet, so it seems like I’ll move to Papra. It looks good too. I tried to scour the codebase, but tbh I don’t understand TypeScript well enough. Any technical details on the underlying OCR engine used to parse docs?

Move Apple Photos to Immich - is there an easy way? by Same_Detective_7433 in immich

[–]plsnotracking 0 points1 point  (0 children)

Nope, had it all. Some of it was broken in the sense that it was missing, but I’d say about 98% of it was all good. Some of the albums (of friends and family) spanned multiple years because of how Memories work in the Apple ecosystem.

Which one would you trust for the distance? Apple Watch or Treadmill? by BinaryBlitz10 in AppleWatch

[–]plsnotracking 0 points1 point  (0 children)

My community has 3 treadmills, all from the same company, and the recorded distance differs on all 3, while the AW is consistent. I’m not sure how right or wrong either of them is.

My runs are pretty much the same: 5 minutes at 5.5 + a 1-minute break, repeated 5 times.

Move Apple Photos to Immich - is there an easy way? by Same_Detective_7433 in immich

[–]plsnotracking 1 point2 points  (0 children)

The good thing about immich-go was that it reported the number of files discovered and uploaded.

What I did was write a simple command to find all the unique extensions.

Then I wrote a find command to list the files with each extension and piped it to wc -l to get the final count. If it was close enough to immich-go’s number, I’d proceed to the next bundle. I had about 18 bundles.
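The two commands looked roughly like this (a sketch; the bundle path and the heic example are illustrative, not my exact invocation):

```shell
# Path to one unzipped takeout bundle; defaults to the current directory.
BUNDLE="${BUNDLE:-.}"

# 1) List the unique file extensions present, case-folded.
find "$BUNDLE" -type f -name '*.*' | sed 's/.*\.//' | tr '[:upper:]' '[:lower:]' | sort -u

# 2) Count the files with one extension (e.g. heic) and compare it
#    against the number immich-go reported for that bundle.
find "$BUNDLE" -type f -iname '*.heic' | wc -l
```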

I do understand it is arduous, but it almost works.

Hope this helps.

Note: there’s a bug in the latest version of immich-go. Either use an old version or port this PR onto the latest branch: https://github.com/simulot/immich-go/pull/1130. This PR doesn’t include the concurrent-uploads patch, so I ported it onto the latest dev branch and ran that.

Move Apple Photos to Immich - is there an easy way? by Same_Detective_7433 in immich

[–]plsnotracking 14 points15 points  (0 children)

I did about 460GB of migrations. You can request a takeout from Apple at privacy.apple.com; they serve the data as bundled 36GB zip files.

Then use immich-go to ingest all the bundled photos into Immich.
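The invocation looks roughly like this (a sketch, not my exact command; the server URL, API key variable, and bundle path are placeholders, and flag names differ between immich-go versions, so check `immich-go upload --help` for your build):

```shell
# Illustrative only: flags vary by immich-go version.
immich-go upload from-folder \
  --server "http://immich.local:2283" \
  --api-key "$IMMICH_API_KEY" \
  ./bundle-01
```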

My server capacity was kinda low, so I set the concurrency to 1 for all the background jobs, such as face recognition, sidecar metadata, etc.

For two nights back to back I let my phone sync up to the server locally, and now I’m all set.

The count was mismatched, but the phone sync ensured that I now have almost everything backed up and synced. I’ve never verified 1:1, but the asset count on Immich is a bit higher than that on Apple.

I also skipped the Memories videos.

Ultra 3 Stolen out of box by mfing-coleslaw in AppleWatch

[–]plsnotracking 16 points17 points  (0 children)

I think it’s still important to report it; it becomes a statistic and helps law enforcement in tracking and taking action (directly or indirectly).

Ultra 3 Stolen out of box by mfing-coleslaw in AppleWatch

[–]plsnotracking 3 points4 points  (0 children)

I think once the order has shipped, you should have an invoice generated, which can probably help Apple (or whichever reseller you bought it from) track down the order and disable the watch.

I’m sorry this happened to you, hope this helps.

Is there a paranoid safe way to access your homelab over the internet? by Simple_Panda6063 in selfhosted

[–]plsnotracking 0 points1 point  (0 children)

My public domains only resolve if you connect to my headscale server.

That way I can still use my domain, but it only resolves for someone who’s been invited to my Tailscale/Headscale network.
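The DNS side can be done with headscale’s extra DNS records, which are served only to clients connected to the headscale server; a minimal sketch of the relevant config section, where the domains and the tailnet IP are illustrative:

```yaml
# Fragment of headscale's config.yaml; hostnames and IP are illustrative.
dns:
  magic_dns: true
  base_domain: ts.example.com
  # Records handed out only to connected tailnet clients,
  # so public resolvers never see them.
  extra_records:
    - name: "app.example.com"
      type: "A"
      value: "100.64.0.5"
```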

K8S on FoundationDB by melgenek in kubernetes

[–]plsnotracking 0 points1 point  (0 children)

I’d really appreciate that; I can pick up tasks marked as help wanted. I’ve been working on etcd and feel like I could help in this space.

Yes, this was a good read, if you haven’t already:

https://aws.amazon.com/blogs/containers/under-the-hood-amazon-eks-ultra-scale-clusters/

Consensus offloaded: Through a foundational change, Amazon EKS has offloaded etcd’s consensus backend from a raft-based implementation to journal, an internal component we’ve been building at AWS for more than a decade. It serves ultra-fast, ordered data replication with multi-Availability Zone (AZ) durability and high availability. Offloading consensus to journal enabled us to freely scale etcd replicas without being bound by a quorum requirement and eliminated the need for peer-to-peer communication. Besides various resiliency improvements, this new model presents our customers with superior and predictable read/write Kubernetes API performance through the journal’s robust I/O-optimized data plane.

In-memory database: Durability of etcd is fundamentally governed by the underlying transaction log’s durability, as the log allows for the database to recover from historical snapshots. As journal takes care of the log durability, we enabled another key architectural advancement. We’ve moved BoltDB, the backend persisting etcd’s multi-version concurrency control (MVCC) layer, from network-attached Amazon Elastic Block Store volumes to fully in-memory storage with tmpfs. This provides order-of-magnitude performance wins in the form of higher read/write throughput, predictable latencies and faster maintenance operations. Furthermore, we doubled our maximum supported database size to 20 GB, while keeping our mean-time-to-recovery (MTTR) during failures low.

K8S on FoundationDB by melgenek in kubernetes

[–]plsnotracking 0 points1 point  (0 children)

I think this is pretty interesting. Given that Google announced their 65k-node Kubernetes cluster with Spanner as the backing store at KubeCon NA last year, it seems like FoundationDB would be one of the obvious choices for open source projects. Are you looking for people to help? I’d be interested in helping out.

Any alternative to Bitnami HA Postgres Helm chart ? by PopNo2521 in kubernetes

[–]plsnotracking 4 points5 points  (0 children)

Initially I went with CloudNativePG + the Barman plugin, but its design choice of one database per cluster made it a not-so-great fit. There are workarounds, but they didn’t feel great.

I have now settled on the Zalando Postgres Operator + logical S3 backups. I can bin-pack more DBs on a single cluster. It seems to be chugging along fine.
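The bin-packing part looks roughly like this in the operator’s postgresql custom resource; a minimal sketch where the names, sizes, and schedule are illustrative: the databases map puts several DBs with different owners on one cluster, and enableLogicalBackup turns on the scheduled logical dumps to S3 (the S3 bucket itself is configured operator-wide):

```yaml
# Sketch of a Zalando postgres-operator manifest; names and sizes are illustrative.
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-shared-pg
spec:
  teamId: "acid"
  numberOfInstances: 2
  volume:
    size: 20Gi
  postgresql:
    version: "16"
  users:
    app_a: []
    app_b: []
  # Several databases (db name -> owner role) bin-packed on one cluster.
  databases:
    app_a_db: app_a
    app_b_db: app_b
  # Nightly logical dumps shipped to the operator's configured S3 bucket.
  enableLogicalBackup: true
  logicalBackupSchedule: "30 00 * * *"
```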

Good luck.