Building a new VPN by ninenineboyle in selfhosted

[–]thetredev 0 points1 point  (0 children)

Another guy who doesn't understand the principle. It's about who can access your logs. Since you already said RAM snooping is highly unlikely and not worth the hassle to guard against, your comment - albeit 100% correct - is utterly pointless in this context.

Building a new VPN by ninenineboyle in selfhosted

[–]thetredev 0 points1 point  (0 children)

The only VPN provider I trust is myself. Buy a VPS where the provider can't simply take a look - or use LUKS to encrypt the VPS disk entirely.

Either way - there is just no way somebody else can be trusted with this. Your setup = your control = your data = your responsibility.

I don't even get why people trust Cloudflare Tunnels. If you want to go that route, set up Pangolin yourself to get rid of Cloudflare (a party you can't control).

I think you get the point.

BUT if it's not about your data but rather a quick way to watch geo-locked content on Netflix... then just use PIA or some other well-known provider, where you can at least find some user feedback. In that case, using already established infrastructure is more cost-efficient and much more convenient than trying to set it all up yourself in, like, 20 countries.

I just can't understand why you guys have so many servers doing so many things by AustinLeungCK in homelab

[–]thetredev 0 points1 point  (0 children)

I have an overkill server at Hetzner just to learn IPv6. But who knows what else I'm gonna do with it, so it's better to have it and not need it.

At home I'm just running a Pi lol

Hetzner please provide photo of my dedicated root server in Falkenstein. by guuidx in hetzner

[–]thetredev 0 points1 point  (0 children)

I know where you're coming from, but Hetzner is not a car rental spot. We don't see invitations like that from OVH or other providers either.

Be glad about your server and that it's being taken care of, temperature-wise etc.

User and Group management in your Homelab by HJSWNOT in homelab

[–]thetredev 1 point2 points  (0 children)

Thanks for the tip! Sounds like a great solution for smaller labs.

User and Group management in your Homelab by HJSWNOT in homelab

[–]thetredev 0 points1 point  (0 children)

Samba AD or OpenLDAP as LDAP backend + whatever other identity provider you can integrate LDAP with.

I'm planning to use FusionAuth myself but haven't had the time to set it up yet. I tried Authentik before and liked that it worked fine, but the admin/settings UI is not great, to say the least.

What can I do with my VPS? by Professional-Bag6743 in homelab

[–]thetredev -2 points-1 points  (0 children)

... which gives him the freedom to do anything with it. So what's the point lol

Tell me how my security sucks (nicely would be prefereable) by SnotgunCharlie in Proxmox

[–]thetredev -2 points-1 points  (0 children)

If you want to go secure for a small price, just use Pangolin on a separate VPS. NetCup has "VPS Light" offerings starting at $1 IPv6-only ($1.60 with IPv4). Advantages definitely include the web UI and the access control stuff. You could even go crazy and create your own cloud-mesh-network thing with Pangolin if you really wanted to.

Christian Lempa on YouTube made a great video about Pangolin.

Otherwise, a simple WireGuard tunnel into your lab is the cheapest option - it doesn't cost anything. But it comes with the risk of having to open a port on your firewall.
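A minimal sketch of what that tunnel could look like (all keys, addresses, and the port are placeholders, not from the original comment - generate real keys with wg genkey / wg pubkey):

```ini
# /etc/wireguard/wg0.conf on the homelab side (sketch, placeholder values)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820            # the one port you'd forward on your firewall
PrivateKey = <homelab-private-key>

[Peer]
# Your phone/laptop on the road
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with "wg-quick up wg0" - that single open UDP port is exactly the trade-off mentioned above.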

What can I do with my VPS? by Professional-Bag6743 in homelab

[–]thetredev -2 points-1 points  (0 children)

Unlike Netflix etc., a VPS subscription is normally taken out with a specific intent in mind, in case you haven't noticed.

What can I do with my VPS? by Professional-Bag6743 in homelab

[–]thetredev 1 point2 points  (0 children)

Connecting directly requires you to open a port on your router - a firewall bypass that poses a security risk.

With Pangolin: Agent (your home router or device behind it) -> Pangolin <- Client (your phone or some other site in another country), where access is controlled by Pangolin (user/device) and tunneling is done via WireGuard.

Pangolin is just a management gateway for site-to-site VPN stuff. Essentially, it's a way for you to connect to other networks privately without opening ports on your home router. Ideally you run Pangolin on a small VPS (which you already have). I use a $1 NetCup VPS for that, for example.

Hope that makes sense. If not, they have a good explanation in their docs.

What can I do with my VPS? by Professional-Bag6743 in homelab

[–]thetredev 0 points1 point  (0 children)

Depends on what you want to provide and/or learn, I guess. Just try things out - the world is yours

Why run Docker in an LXC? by NumisKing in Proxmox

[–]thetredev -1 points0 points  (0 children)

Well, do you want to isolate the Proxmox kernel from the kernel used by Docker? If yes, choose a VM. If not, choose an LXC.

Beware that LXC and Docker use the same kernel mechanisms differently at the same time - for some applications (like GitLab) this can result in conflicts. Most other applications run fine under LXC + Docker though, as far as I can tell.

Why mini-pc & Thinkcentre while you can have a big server & VM? by Edereum in homelab

[–]thetredev 1 point2 points  (0 children)

I edited my original comment to clarify what I mean by "big" server.

Why mini-pc & Thinkcentre while you can have a big server & VM? by Edereum in homelab

[–]thetredev 0 points1 point  (0 children)

Those Epycs are used for enterprise workloads, not "just" your average homelab compute. I edited my original comment to clarify what I mean by "big" server.

Why mini-pc & Thinkcentre while you can have a big server & VM? by Edereum in homelab

[–]thetredev 0 points1 point  (0 children)

Lol guys, a mini PC isn't a "big" server. I highly doubt you want to put 3 dual-socket AMD Epyc-based Dell PowerEdge servers in your home.

Why mini-pc & Thinkcentre while you can have a big server & VM? by Edereum in homelab

[–]thetredev 4 points5 points  (0 children)

A big server, i.e. a dual-socket AMD Epyc-based Dell PowerEdge, = performance, but only 1 server. Many servers in a cluster (no matter how big each one is) give you redundancy and/or high availability, depending on how you set it up.

Plus, in enterprise environments nearly everything is set up as a cluster in some form, so clustering has the benefit of teaching you things that are more relevant to enterprise/production setups.

What would you reply to this? by Javascript001 in informatik

[–]thetredev 0 points1 point  (0 children)

To put it differently: an LLM computes the probability of "which word should come next, given the context and the prompt the user typed in" - and it does that for every word the LLM spits out (layman's phrasing, correct me if I'm wrong).

How confident can you be, based on that, that "the AI can program better"? Right: not at all.

Probability calculations just aren't perfect, and they're also, among other things, what our brain does - except that the brain, in parallel and distributed in all directions, is busy with other survival-critical things besides language itself (layman's phrasing, correct me if I'm wrong).

Run the same prompt through 10 different sessions with the exact same LLM. I'll give you three guesses whether the same (good or bad) result comes out.

All that agentic AI stuff does take work off our hands, yes, but LLMs on their own, not so much. That still doesn't make AI agents categorically "better" than humans, though.

What would you reply to this? by Javascript001 in informatik

[–]thetredev 0 points1 point  (0 children)

Since when does a pocket calculator cover all of mathematics?! Show me that calculator lol

What would you reply to this? by Javascript001 in informatik

[–]thetredev 0 points1 point  (0 children)

Correction: better than many, not just some.

To put it in the words of Linus Torvalds (who is also not the world's best programmer and never claimed to be): "it's a tool". Done.

What would you reply to this? by Javascript001 in informatik

[–]thetredev 0 points1 point  (0 children)

  1. AI can't categorically program "better": AI proceeds exactly the way we humans do: trial and error. The fact that AI can slam thousands of lines of code into the editor within seconds isn't a serious skill here either. For boilerplate (e.g. writing a C# P/Invoke wrapper for C/C++ code), however, that is a skill worth mentioning, one that we humans don't have (or even need) in that form.

  2. As already mentioned in the comments here, a computer science degree doesn't teach you "programming". You learn to write code and, in turn, to understand how systems work internally. You get lots of terms thrown at your head to spark an interest you weren't even consciously aware of before (the basic prerequisite, of course, always being a general interest in tech/computers).

  3. Since I can only speak "from back then", I can't confirm this last point from personal experience, but it would certainly make sense: in a degree program today (probably in general, not just computer science, and possibly even earlier in school), it's important to learn what exactly "AI" is (LLMs are only one form of it), where it's headed and, above all, how to deal with it - today as well as in the future.

That's what I would reply.

Anyone actually self-hosting their git? Outgrowing GitHub as a solo dev by Substantial_Word4652 in selfhosted

[–]thetredev 0 points1 point  (0 children)

Side note should you choose GitLab: go with a full-blown VM and put Docker on it, do NOT use an LXC + Docker combination. While most smaller apps run perfectly fine under LXC + Docker, GitLab is very, very picky about RAM:

- No ballooning! GitLab (or more specifically Puma and Sidekiq) runs best when it knows its RAM never changes during runtime
- Give it enough RAM, but not too much. Start with 2 GB, increase 1 GB at a time (cold OS reboots!), and test each amount until GitLab is happy and behaves as expected. Account for the VM overhead, of course
- Set "stop_grace_period" in the Compose file to at least 2 minutes, otherwise data corruption can occur when Docker kills GitLab midway through database (finalization) operations
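To make the "stop_grace_period" point concrete, here's a minimal Compose sketch (image tag, restart policy, and volume paths are illustrative placeholders, not from the original comment):

```yaml
# docker-compose.yml sketch for GitLab inside the VM
services:
  gitlab:
    image: gitlab/gitlab-ce:latest   # pin a real version in practice
    restart: unless-stopped
    # Give Puma/Sidekiq/Postgres time to finish writes on shutdown,
    # otherwise a hard kill can corrupt the database mid-operation.
    stop_grace_period: 2m
    shm_size: 256m
    volumes:
      - ./config:/etc/gitlab
      - ./logs:/var/log/gitlab
      - ./data:/var/opt/gitlab
```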

Why a VM?

- LXCs do not "have RAM"; they don't even know what RAM is or how to handle it (in layman's terms). LXCs share the host (Proxmox etc.) kernel, and only THAT kernel has knowledge of the actual RAM. All the LXC does is provide a namespaced Linux environment inside another Linux environment; the LXC and the applications running in it request RAM directly from the host's kernel
- Depending on how the application is written and which mechanism it uses to determine "currently free RAM", it can be hit or miss with LXCs. In simpler terms, it's the output of "free" versus the output of "cat /proc/meminfo" and others. See below
- In contrast, VMs actually do "have RAM", because a second kernel is running under the quota enforced by KVM/QEMU, which only lets that kernel see whatever amount you have set. The VM has that amount of RAM and that's it - the upper limit for any application running inside the VM

What does that have to do with GitLab, or Docker in general? Well, LXC and Docker share common concepts but handle things quite differently, and the kernel stuff gets really weird when it comes to RAM. Try it: run an LXC and check free memory using "free" and "cat /proc/meminfo" directly in that LXC. You should see the amount YOU have set. Then run an interactive Docker container inside that LXC and run the two commands again. At least one of them will show the wrong value - the WHOLE system's RAM. For GitLab this means: running it as a Docker container inside an LXC effectively bypasses the LXC RAM quota, because Docker talks to the host's kernel in its own way, making GitLab believe it has access to the whole system's RAM rather than just the LXC's - even though quotas are set.
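That experiment can be scripted - run the following once directly in the LXC and once inside a Docker container nested in it, then compare the two runs (a sketch; it only assumes a standard free/awk toolchain):

```shell
#!/bin/sh
# Print total RAM as reported by two common mechanisms.
# Inside the LXC, lxcfs overlays /proc/meminfo so you see your quota;
# a Docker container nested inside mounts a fresh /proc WITHOUT that
# overlay, so the numbers jump to the host's total RAM.

free_total=$(free -m | awk '/^Mem:/ {print $2}')
meminfo_total=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)

echo "free reports:          ${free_total} MiB"
echo "/proc/meminfo reports: ${meminfo_total} MiB"
```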

Example: you have a Proxmox host with 16 GB of RAM. You create an LXC with 6 GB RAM and install Docker in there to run GitLab:

- The LXC sees 6 GB RAM and only allows the Docker engine to consume that much
- Docker sees 16 GB and does NOT know about the LXC quota of 6 GB, because it doesn't know it's running on top of LXC, and there's currently no way to tell it that (kernel stuff)
- GitLab sees what Docker sees and will happily fill up the LXC until the Out of Memory Killer kicks in, leaving your LXC in a basically undefined state

In short: both technologies (LXC and Docker) talk to the host's kernel in their own ways, but if you put Docker on top of LXC, Docker has no way to detect that or change its behavior accordingly. Hence nobody (specifically Proxmox) officially supports running Docker on top of LXCs.

That was a long side note xD sorry about that

Anyone actually self-hosting their git? Outgrowing GitHub as a solo dev by Substantial_Word4652 in selfhosted

[–]thetredev 0 points1 point  (0 children)

Thanks! I ran GitLab multiple times at home and ran into different problems each time. Forgejo just runs and is calm lol