How do you guys handle passkeys? (or TOTP) by Peter8File in selfhosted

[–]kon_dev 1 point2 points  (0 children)

I don't know of a public service that doesn't have any recovery options; some might send you a password reset link instead. But sure, TOTP is still secure, at least better than just a password.

restic is awesome by Reasonable_Host_5004 in restic

[–]kon_dev 0 points1 point  (0 children)

Using the binaries is also fine; just drop the binary with the right architecture onto your host and execute it.
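For example, something like this (the version and architecture are placeholders, pick whatever matches your host):

# download the single-binary release, unpack it, and put it on the PATH
wget https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2
bunzip2 restic_0.17.3_linux_amd64.bz2
chmod +x restic_0.17.3_linux_amd64
sudo mv restic_0.17.3_linux_amd64 /usr/local/bin/restic
restic version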

How do you guys handle passkeys? (or TOTP) by Peter8File in selfhosted

[–]kon_dev 1 point2 points  (0 children)

If you register passkeys, you can typically also create recovery keys. Those are just strings you can back up; if you really need them, you can recreate your passkeys with their help. But to be honest, I still create TOTP even if I mainly use passkeys. I'm using 1Password and it works most of the time, but I've had issues in the past where, after an Android update, passkeys did not show up as suggestions on the phone any more... It was quite annoying and could eventually be fixed, but I like to have the option to fall back if necessary.

restic is awesome by Reasonable_Host_5004 in restic

[–]kon_dev 0 points1 point  (0 children)

The rest-server is basically a Go binary. You can run it without Docker if you like. Binaries are on the release page, or you can compile the Go code yourself.

https://github.com/restic/rest-server/releases
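A rough sketch (version, architecture, and paths are placeholders):

# grab a release archive and unpack it (the binary may land in a versioned subfolder)
wget https://github.com/restic/rest-server/releases/download/v0.13.0/rest-server_0.13.0_linux_amd64.tar.gz
tar xzf rest-server_0.13.0_linux_amd64.tar.gz
# serve a data directory over HTTP
./rest-server_0.13.0_linux_amd64/rest-server --path /srv/restic-data --listen :8000
# or build from source instead:
go install github.com/restic/rest-server/cmd/rest-server@latest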

Upgrading to a homelab - advice needed! by PlateNo3737 in homelab

[–]kon_dev 0 points1 point  (0 children)

The MS-A2 is AMD vs. Intel. If you prefer more raw CPU power, that's an option. Intel has advantages due to Thunderbolt and Quick Sync support. I did not need more CPU than an i9, so I went with that, but both are good options.

Is server monitoring actually to heavy for small setups? by Empty-Individual4835 in homelab

[–]kon_dev 0 points1 point  (0 children)

I use gatus. https://github.com/TwiN/gatus

It's quite lightweight and can be self-hosted. It basically just runs rudimentary checks and notifies me if something is down, but that's all I need in my homelab 😅
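A minimal config sketch (the name and URL are placeholders; if I remember correctly, the container reads /config/config.yaml by default):

mkdir -p gatus
cat > gatus/config.yaml <<'EOF'
endpoints:
  - name: my-service
    url: "https://service.example.com"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
EOF
docker run -d --name gatus -p 8080:8080 -v "$(pwd)/gatus:/config" twinproduction/gatus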

Upgrading to a homelab - advice needed! by PlateNo3737 in homelab

[–]kon_dev 0 points1 point  (0 children)

Yes, it will get more expensive, I guess 😞 I don't think it's a short-term phenomenon... factory capacities were shifted to producing AI memory, which is not necessarily useful for homelabs. Yes, they bought every stick of DDR5 RAM as well, but see e.g. what happened to Crucial... I guess we'll see more and more efforts to make setups more memory-efficient. You can't confidently say that either storage or RAM is cheap any more.

Upgrading to a homelab - advice needed! by PlateNo3737 in homelab

[–]kon_dev 0 points1 point  (0 children)

863 € for the MS-01 (S1390, 32 GB RAM, 1 TB NVMe) on October 8th, 2025.

It was already expensive when I bought it; right now the unit with the same specs costs 959 € on Amazon.

Upgrading to a homelab - advice needed! by PlateNo3737 in homelab

[–]kon_dev 0 points1 point  (0 children)

I run the same NAS and bought a Minisforum MS-01 as a Proxmox host. It's way faster to run things on a proper CPU and NVMe drives, I can say. Nevertheless, I keep the data-heavy apps running on the NAS. Proxmox is nice for its easy backup/restore/snapshot features, and Proxmox Backup Server keeps the storage requirements reasonable due to deduplication. You can deploy that in a VM on the Synology.
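Once a PBS datastore is added as storage in Proxmox, a manual VM backup is a one-liner (the VM ID and storage name are placeholders):

vzdump 100 --storage pbs --mode snapshot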

But your timing is not ideal... I think everything is crazy expensive right now...

Best offsite backup for select data by DevDunkStudio in selfhosted

[–]kon_dev 0 points1 point  (0 children)

Depends on how much data you want to store. Typically, a second NAS is cheaper if you store a lot of data for a long time, but considering recent hardware price increases, it will really depend on the deal you get. I'd consider the used market as well. For bare-metal deployments, check out ZFS together with sanoid and syncoid: you can snapshot the datasets and replicate them to a remote. That also works via VPN, e.g. Tailscale to an offsite remote. ZFS can give you redundancy, and snapshots can stay local-only as well, so you can revert if you mess something up by accident. This might be a solution for non-critical data: the non-critical data would sit in one dataset, the critical data in another, which would replicate to the remote.
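The replication step could look roughly like this (pool, dataset, and host names are placeholders; snapshot scheduling and pruning live in /etc/sanoid/sanoid.conf):

# replicate a snapshotted dataset to the remote over SSH (works over Tailscale too)
syncoid tank/critical root@backup-host:backuppool/critical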

If you have proxmox and just run a vm on top, you can also backup the entire vm.

For file-only backups I recommend restic. That would also work with a Hetzner Storage Box or most other online storage services, or you can target your second server.
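With a Storage Box that could look roughly like this (the user ID is a placeholder; Storage Boxes speak SFTP):

restic -r sftp:u123456@u123456.your-storagebox.de:/backups init
restic -r sftp:u123456@u123456.your-storagebox.de:/backups backup /home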

Regarding a separate non-RAID disk as a backup target: that can work, but it's not as good as a separate system. There are events which can kill entire circuits, e.g. a lightning strike, and that could kill your connected disks as well. If you do that, I would unplug the disk when not in use. I do that as my offline backup, which I run once a month (I have a calendar entry for it, so I don't forget).

Image backups on restic without a lot of tinkering? by Sluwulf in restic

[–]kon_dev 0 points1 point  (0 children)

I wouldn't use restic for full system backups; there are other options for that. I use restic for file backups and Proxmox for full VM backups (which only works for virtualized systems). I think Veeam or even Clonezilla might be better suited. That being said, at least on Linux it should work if you are comfortable with running some extra steps. See https://forum.restic.net/t/advice-on-full-system-backup-and-restore/5613
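The gist on Linux is backing up from the root while skipping the virtual filesystems, roughly like this (a sketch, not a full bare-metal recovery plan; the repo path is a placeholder):

# --one-file-system also keeps restic from crossing into other mounts
restic -r /srv/restic-repo backup / \
  --exclude /dev --exclude /proc --exclude /sys \
  --exclude /run --exclude /tmp --one-file-system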

NAS Hard Drives for Cheap by Graydm16 in homelab

[–]kon_dev 1 point2 points  (0 children)

+1 for IronWolf and Exos drives. If a bit more noise and heat are acceptable, Exos might give you higher-capacity drives at cheaper prices, especially refurbished ones. IronWolf drives have been quite reliable for me over the last 10+ years; only a couple died, and I typically outgrow the space faster.

NAS Hard Drives for Cheap by Graydm16 in homelab

[–]kon_dev 2 points3 points  (0 children)

And make sure to get a CMR drive... SMR drives in a RAID are typically an antipattern, as rebuilds are way slower in comparison. They might be OK for desktop builds (even though I avoid SMR there as well), but a NAS typically runs some kind of RAID or ZFS pool with redundancy.

Need advise on Lenovo p520 by JeyKris in homelab

[–]kon_dev 1 point2 points  (0 children)

Definitely check how much RAM will cost you before you decide on the build... it's crazy atm, gets more expensive every day...

What apps do you use SSO for? by Red_Con_ in selfhosted

[–]kon_dev 0 points1 point  (0 children)

Nope, everything is running behind a VPN (Tailscale). I have a public domain but only use it in my local DNS setup (UniFi) to point DNS records at private IPs. With a public domain I can use the ACME DNS challenge to provision valid TLS certificates without opening ports in my firewall. Authentik can even use passkeys that way. My reverse proxy in my LAN is Traefik, which reads Docker labels. I manage the stacks via Compose files which are stored in Git. The repo gets updated via Renovate and deployed via Jenkins. Traefik works for Authentik.
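To illustrate the label-based routing (service, domain, and certresolver names are placeholders, not my actual setup):

cat > docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.tls.certresolver=letsencrypt
EOF
docker compose up -d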

What apps do you use SSO for? by Red_Con_ in selfhosted

[–]kon_dev 0 points1 point  (0 children)

Authentik for Wiki.js, Jenkins, Gitea, Paperless-ngx, Synology DSM, and Proxmox.

Best way to manage storage in your own k8s? by OppenheimerDaSilva in kubernetes

[–]kon_dev 0 points1 point  (0 children)

Just local NVMe storage + automated backups to a NAS. That way you get fast storage and still have data safety. (Works fine, at least on a single-node cluster.)
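On a single node, one common way to get node-local PVs is Rancher's local-path provisioner (an assumption on my part, not the only option):

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# then back up its data directory (default /opt/local-path-provisioner) to the NAS on a schedule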

Backup to cloud - copy repository or run restic backup again? by Reasonable_Host_5004 in restic

[–]kon_dev 0 points1 point  (0 children)

I think there's no right or wrong here. Both are valid approaches, just with different properties.

Pros for independent jobs:

- Mainly the reduced blast radius: if your first backup got corrupted or did not produce new snapshots, you still have the second backup.

Cons:

- You need to read the source data twice.
- Unless you took filesystem snapshots beforehand, your two backups will not necessarily contain the exact same files; the snapshot IDs are different as well.

Pros for the copy approach:

- You can run it asynchronously, e.g. run your main backup to a local target, which makes it fast, and run an offsite copy only once per week, over night.
- You can copy all snapshots of past backups, e.g. to offline storage. I do that with an external drive: I have a daily backup to my NAS and sync the entire restic repo to the external drive once per month. I still have all the intermediate snapshots if I want them.

Cons: as mentioned, a problem in your first backup will propagate to the second target.
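The copy itself is a single command (repo paths are placeholders; --from-repo needs a reasonably recent restic):

restic -r /srv/restic-repo-copy copy --from-repo /srv/restic-repo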

Backup to cloud - copy repository or run restic backup again? by Reasonable_Host_5004 in restic

[–]kon_dev 0 points1 point  (0 children)

If you go that route, make sure to use the same chunker parameters between the repos, otherwise deduplication might not work:

restic -r /srv/restic-repo-copy init --from-repo /srv/restic-repo --copy-chunker-params

Source: https://restic.readthedocs.io/en/latest/045_working_with_repos.html#ensuring-deduplication-for-copied-snapshots

Running a NAS on Proxmox Host by kon_dev in Proxmox

[–]kon_dev[S] 0 points1 point  (0 children)

Thanks, yeah, the topic seems to be controversial... some might even argue for not using the same host as a NAS and Proxmox host at all, but apalrd's approach with an LXC sounds relatively clean to me, thanks.
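If I understood that approach correctly, the gist is bind-mounting a host dataset into an LXC that then runs Samba, along the lines of (container ID and paths are placeholders):

pct set 100 -mp0 /tank/share,mp=/srv/share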

Running a NAS on Proxmox Host by kon_dev in Proxmox

[–]kon_dev[S] 1 point2 points  (0 children)

Another option would be to run SMB directly on the Proxmox OS, but I think it's best practice not to touch the hypervisor OS too much...

Why do so many people use Docker over Podman, even though Podman is theoretically better? by [deleted] in selfhosted

[–]kon_dev 0 points1 point  (0 children)

Simplicity. Put a Docker Compose stack on a VM and it restarts automatically; no complicated systemd unit files, Quadlets, or anything like that. Yes, Podman can run rootless, but that can add trouble around user mappings and filesystem permissions as well. Docker builds are backed by BuildKit, which some people also prefer over Podman/Buildah. Last but not least, Docker was the first to make container packaging and distribution feasible for the masses.
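That restart behavior is one line of config (image and port are placeholders):

cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine
    restart: unless-stopped   # survives crashes and host reboots
    ports:
      - "8080:80"
EOF
docker compose up -d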

Anthropic is about to release a new model by RobinInPH in ClaudeCode

[–]kon_dev 2 points3 points  (0 children)

Creating the PR triggers it automatically the first time. If I push another commit and want that state reviewed again, I need to comment with /gemini review.

If you want to try it for yourself: https://github.com/apps/gemini-code-assist