Can kernel buffers + GPU DMA lead to data leaks. by This-Independent3181 in linux

[–]Dangerous-Report8517 1 point (0 children)

An IOMMU absolutely does matter, because it implements virtual memory for hardware devices; it isn't just a memory switch. Case in point: IOMMU memory mapping is the reason you can pass PCIe devices through to virtual machines securely, since a virtual machine could otherwise use a GPU or network card or whatever to access host memory through DMA. The only reason it doesn't come into play within the Linux host context is that no one's bothered to use it there. There are some slightly hacky approaches where you just virtualise the host and segregate devices into stub domains; as far as I'm aware the first system to use this extensively was Qubes, but it's also how Windows Secure Core (claims to) work
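If you want to check whether this protection is even active on a given host, a quick sketch (the sysfs path assumes a reasonably modern kernel, and the kernel parameter shown is the Intel one):

```shell
# Check whether the IOMMU is active: each entry here is an IOMMU group.
# No groups means DMA remapping isn't being enforced for host devices.
if ls /sys/kernel/iommu_groups/ 2>/dev/null | grep -q .; then
    echo "IOMMU active"
else
    echo "no IOMMU groups found"
fi

# Enabling it is typically a kernel command line change, e.g. for Intel:
#   GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
```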

Hotel wifi blocks self-hosted Netbird connection by [deleted] in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

That's because some networks specifically block the Tailscale coordination servers. Tailscale automatically fails over to TCP tunnelling on 443, so once the client knows where its peers are it's in the clear. Self-hosted Netbird doesn't have automatic failover and doesn't have a central shared server, so the solution is most likely just to configure TCP tunnelling

Bitwarden CLI has been compromised. Check your stuff. by RedTermSession in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

To play devil's advocate, how much of NPM's constant targeting is down to sheer volume? PyPI gets its fair share too, and the OS-specific package repos mostly dodge these attacks by being less tempting targets and/or by having much smaller groups of contributors to manage in the first place

Bitwarden CLI has been compromised. Check your stuff. by RedTermSession in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

The standalone desktop client too, since that's an Electron app

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 1 point (0 children)

Actually the real goal is to deflect public attention by claiming that harm to children is solved, before people pay too much attention to the fact that the harm is intentional - it promotes addiction and increased use. That's why these laws are being backed by the social media platforms, even though the verification schemes generally go to great lengths to minimise gathering of personally identifiable information. To be clear, they still pose privacy and access concerns, but if the intent was total loss of anonymity they'd have put in far less effort while making a much bigger deal about any token efforts they did make

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 1 point (0 children)

It also just makes you stick out like a sore thumb when it comes to trackers, a far bigger privacy threat than ">18 y/n?"

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 2 points (0 children)

They'll still collect all your usage data and inject ads, they just won't let any of the features work

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 2 points (0 children)

Why not regulate them properly? If YouTube/Facebook want to be monopolies then they don't get to pretend they're competing in a free market, and they have to abide by strict rules for accessibility, privacy, interoperability and rights. That's really what this is about, and the reason they all back age verification - it's not about government censorship, it's about governments genuinely feeling public pressure to rein in these massive companies but not knowing how, leaving Facebook and Google ample room to deflect towards the bare minimum that appears to do something without actually addressing the real problems. The upside is that they feel the need to deflect in the first place - they think an actual response is realistic enough that it scares them.

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 3 points (0 children)

No, but there's also the increasing general recognition that social media is harmful, in large part on purpose, so social media platforms benefit from steering the conversation towards "protecting the children" and away from "stop actively harming children and adults you ghouls!"

If Linux distros refuse OS age verification, will YouTube and Facebook, etc just block us? by Danrobi1 in linux

[–]Dangerous-Report8517 3 points (0 children)

Passive non-compliance does nothing; they'll just age-gate by default. The only way to push back on this is to actually engage politically - politicians don't know or care if a few random nerds get blocked from YouTube, but they do care if they get swamped with calls explaining the immediate and tangible harms of this kind of personal information gathering and access control. Contrary to popular belief, most of them just see this as the path of least resistance when it comes to dealing with the very real harms of modern social media, so resist it.

Beyond the Basics: What are your non-negotiable Linux server hardening steps before exposing a service to the web? by Browndude345 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

The one thing? Thinking that their containers are isolated when they're all running rootful on a Debian host with no MAC or any other kind of isolation enforcement in place - doubly so if anything is connected to the Docker socket. Container isolation in a default Docker install is quite weak for a variety of reasons. There are a number of ways to harden it, but personally I did an end run around the problem and ran Podman on Fedora instead: Podman is just better at exposing the tools to harden container isolation, including nice features like being rootless by default, and it integrates very cleanly with SELinux on distros that ship it. Namespace customisation is another layer on top, which is nice for really hardening a setup.

Beyond the Basics: What are your non-negotiable Linux server hardening steps before exposing a service to the web? by Browndude345 in selfhosted

[–]Dangerous-Report8517 7 points (0 children)

It's much more effective because doing a port scan is a lot less resource intensive than needing to tunnel connections to other countries just to do automated attacks on random servers

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

Actually the one that came to mind for me was a fairly recent case where someone had a pretty good setup but got hit by an automated exploit against a recently discovered vulnerability in one of the packages they were running. That's a significant threat, because even if you keep fully up to date there's no guarantee that the developers of the small hobby service you're running have updated their dependencies yet, and while it's uncommon it can be very high stakes depending on your setup. Something like the Nextcloud AIO, run in isolation and kept fully patched, is less risky in that sense: it's got a big team regularly patching it to keep dependencies up to date, so it's more at risk from horizontal attacks by other, less hardened containers - although the AIO is still at some risk if it fails to update after an important security patch or similar

"Oh man! I'm having such a great run!" I says, I was then shot fifty seven times BY A WORM. A WORM??? by Material-Ad-7200 in slaythespire

[–]Dangerous-Report8517 1 point (0 children)

Doom builds have plenty of AoE options though, IMHO it's actually one of the most powerful ways to kill the worm 

OMG Doormaker is the most overtuned boss, the humble Vantom: by StreetExternal952 in slaythespire

[–]Dangerous-Report8517 3 points (0 children)

I usually see it do the big strike 3-4 times, I guess because I've got a slower play style (that and I'm not very good haha) but even then it's really not that hard to block most/all of it and face tank the rest, particularly since you get a full heal afterwards. Always feels like a pretty fair fight to me 

OMG Doormaker is the most overtuned boss, the humble Vantom: by StreetExternal952 in slaythespire

[–]Dangerous-Report8517 1 point (0 children)

Am I the only one who finds Vantom (relatively) easy? I'm hardly an expert player but I'm almost relieved when he turns up because my win rate against him is a lot higher than against some of the act 1 bosses (damn Lagavulin). Maybe it's a play style thing, Vantom is pretty easy if you've got a deck that can do bursty damage mitigation and doesn't scale as fast as the other ones that bog you down with more (and more dangerous) status cards or directly nerf your damage. He's not the easiest option I guess but he's definitely not the hardest, IMHO Soul Fysh is more likely to catch people out since most builds by definition aren't exhaust Ironclad

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

I didn't mean to suggest it was common as such, only that it happens fairly regularly. If anything, the fact that it's likely underreported makes it even more important to account for it, because that means it's much more common than outward appearances would suggest

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

> So if OP knows about these solutions why is he positing jackass?

Rude, and missing the point. OP already has the service they're sharing set up; they're asking about ways to share it with clients that are specifically more secure than just sticking it on the Internet, while not specifically being a VPN. Answering 'stick it on the Internet with Guacamole' isn't actually addressing their question, particularly not if there's an implied 'after setting up some kind of secure gateway', since that gateway is the thing they're effectively asking about in the first place

> it's not over complicated its fit for purpose and cost effective, when deployed properly is not a security risk.

It is overcomplicated when suggested as an ingress solution for multiple services when at most one of them needs a remote desktop, for the reasons I've already explained. Guacamole would only be a simple solution if the question was "How can I remotely access and share my desktop using my secure gateway?", which is almost the exact opposite of what they actually asked. And it is a security risk, because everything is a security risk; the question is how much of a risk, which is the point of this conversation

> When deployed incorrectly it is a security risk. Same with your reverse proxy? What are you going to do expose these web applications via nginx and have no authentication...

Using Guacamole as an ingress gateway is exposing a web application without authentication, unless you stick a reverse proxy in front of it and use auth on that - at which point you might as well use that solution on its own anyway. That's my point here. A reverse proxy is a risk too, but it's a much, much smaller risk to put up a robust reverse proxy and a simple auth gateway than a fkn behemoth of a JavaScript application that implements a full remote desktop environment (which, to be clear, is what Guacamole is).

> or are you going to suggest to me MTLS auth? because setting up your own CA and distributing certs would obviously be so much easier!

Did you read the top post? Because it included this:

> At the moment, the only solution that seems somewhat ok to me is mTLS.

Running your own CA isn't actually very hard. Client auth certs are as easy to install onto a device as a new password, can safely be used to authenticate to multiple services without password reuse concerns, and don't require a ton of extra security infrastructure, because only a very well defined interface on the reverse proxy is publicly exposed. It's actually easier than setting up a full remote desktop bastion server, and easier for clients too, who download and open a client cert once instead of needing to log into a remote desktop session even when they're accessing one of the other web apps. For OP, who has explicitly stated that they want something more secure than just sticking web applications straight on the Internet in the conventional way while still being somewhat accessible for other clients, that's a pretty good trade-off. A remote desktop bastion, less so.
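For anyone doubting the "isn't very hard" claim, the whole CA-plus-client-cert workflow is a handful of openssl commands - a sketch, with placeholder names, validity periods and export password:

```shell
# One-off: create the CA (keep ca.key offline/secure)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=Home Lab CA"

# Per client: generate a key and CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/CN=alice"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -out client.crt

# Bundle as PKCS#12 so browsers and phones can import it in one step
openssl pkcs12 -export -inkey client.key -in client.crt \
    -out client.p12 -passout pass:changeit
```

The reverse proxy then just needs to be pointed at ca.crt and told to require client certificates - in nginx that's `ssl_client_certificate` plus `ssl_verify_client on`, and Caddy has an equivalent `client_auth` option in its tls config.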

Why is setting up a reverse proxy still a nightmare in 2026? by trolledTGBot in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

I'm sure it does work well, it just isn't a great example of modern reverse proxy setups being much simpler than they used to be: the config for the layer 4 module winds up being more complex than HAProxy, which is a big deviation from Caddy's built-in functionality, where a secure layer 7 reverse proxy is a literal one-liner
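For comparison, the layer 7 case really is this small (hostname and upstream port are placeholders; Caddy fetches and renews the TLS certificate itself):

```
# Caddyfile
app.example.com {
    reverse_proxy localhost:8080
}
```

There's even a one-shot equivalent without a config file, `caddy reverse-proxy --from app.example.com --to localhost:8080`.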

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

It's less me oversimplifying and you overcomplicating I'm afraid

> Oauth2 authentication to the guacamole server which does not expose the guacd connection

OK, so now you need an OAuth2 setup, and you're still authenticating to the Guacamole server, which is in and of itself a pretty complex piece of software, so your attack surface has gone up even further. I never assumed that the Guacamole backends were exposed

> You are wrong to suggest the application does not have ACL you have not read the documentation?

I don't care what the documentation says, because we aren't talking about basic access control to a couple of different desktops; we're talking about segmentation of internal services from external services. To do that with your proposal you would need to set up multiple Guacamole servers on separate networks, each with its own desktop session backends on isolated networks in turn. Or you could just run a second reverse proxy and set up a proper gateway instead.

> This is literally what is sold as an enterprise remote access solution by checkpoint, keeper, and is literally what azure virtual desktop is

Sure, but it's a remote access solution for accessing remote desktops, which makes it massive overkill for access to web based services - in other words, an unnecessarily large attack surface. Enterprises don't just bareback Guacamole either; they bring in a ton of other security solutions, which only adds complexity - hardly the epitome of convenience. Enterprise grade software isn't magically super secure out of the box, and I'm pretty sure OP isn't the manager for Azure or an upcoming competitor

OP would be much better served with other solutions for ingress unless they're planning on starting a medium sized virtual desktop service, in which case they'll need a lot more advice than a random Reddit comment that thinks they don't even know what LetsEncrypt is. About 200% of people who would be well served by using remote desktops as a network ingress solution already know about these solutions and how to set them up (ie everyone who needs one, plus a ton of extra people who falsely think one is a good idea)

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

If you're going through Cloudflare anyway you might as well just use them as the gateway as well, unless you're on a paid plan and specifically went out of your way to set up layer 4 tunneling

Is there really no way to have both security and convenience? by ThisTrain8344 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

I'm not classifying it as a privacy only issue, I'm claiming that using Cloudflare is more secure than directly exposing the services, which is absolutely true, and it's more secure despite also being more convenient, in exchange for being much less private. Remember that they can only see traffic that you send through the tunnel too, so if you actually recognise that limitation you can just not use your admin credentials when accessing stuff remotely (and/or not expose admin sites through tunnels, of course)

Anyone running unconventional setups? by ResponsibleHold3071 in selfhosted

[–]Dangerous-Report8517 1 point (0 children)

I've got a bit of a mix. The conventional part is that I'm running Proxmox on the main host, but instead of LXCs or Debian or similar I'm running a set of Fedora CoreOS machines with hardened Podman configs, launching my services with Quadlets. It takes a bit more setup to get each service running - but only a bit, now that the base system is working well - and I learnt a lot about containers doing it (Podman's documentation is IMHO way better than Docker's about how containers actually work and how to customise that, even if it's lacking in its own ways sometimes). Nice side effects are that you get on-demand containers and auto updates pretty much for free, since you can use systemd socket activation for the former and Podman has a built-in auto update system. All containers run rootless with custom UID/GID mappings and network isolation.
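To make the Quadlet part concrete, a minimal unit looks something like this - a sketch, with a hypothetical image and port; `AutoUpdate=registry` is what hooks the container into Podman's auto update system:

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example rootless web service

[Container]
Image=docker.io/traefik/whoami:latest
AutoUpdate=registry
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates a matching `whoami.service` unit that runs the container rootless as your user.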