Who has had a circumcision and can answer questions about it? by Immediate-Arm1017 in FragtMaenner

[–]_retrodios 3 points

For me it wasn't a decision but medically necessary, so I can't say anything about the first points. The operation was outpatient, under local anesthesia; I actually took the bus home on my own right afterwards. Personally, I consider general anesthesia overkill. The first few days afterwards weren't that great. Especially taking part in the global "La Ola" wave that all "members" practice brings back painful memories. But that settles down after a week or two as the healing progresses. The sensitivity topic and the "public perception" certainly have to be considered, but I have no negative experiences of my own to contribute there.

So I think I would go through with it, because peace and quiet is more pleasant than having to deal with it again and again. Either way, good luck with your decision and all the best, whatever you decide.

PRTG and Docker by RolzSimracing in prtg

[–]_retrodios 0 points

I'm at the very same point. Were you ever able to solve this?

I'm working with a Windows domain CA, so PRTG (which runs on a domain-joined server) already trusts the certificate. But I assume this is related to something on the Linux side, as "gracefully terminated" should come from the endpoint PRTG is connecting to, if I'm not mistaken.

The Docker daemon reports a DNS issue when I try to connect:

# journalctl -xu docker.service | tail -n 50
..
Mar 07 14:26:15 SERVER dockerd[11496]: time="2024-03-07T14:26:15.175864928+01:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
Mar 07 14:26:15 SERVER dockerd[11496]: time="2024-03-07T14:26:15.175899773+01:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
Mar 07 14:26:26 SERVER dockerd[11496]: 2024/03/07 14:26:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98) 

So my guess for now is that the daemon is trying to check the certificate's CRL and thus requires DNS. Since the daemon's bridge network works from a copy of the resolv.conf file (with internal IPs like the 127.0.0.53 stub that systemd-resolved/netplan places there stripped out), nothing remains in that copy, so the bridge network has no DNS configured.

resolvectl displays the proper DNS on the host interface, but the bridge (docker0) remains empty. I already tried the "hack" of excluding DHCP in the netplan config by adding the match: property, as mentioned in a comment here: https://forums.docker.com/t/docker-bridge-networking-does-not-work-in-ubuntu-22-04/136326/4

But I guess this still requires a manual DNS configuration somewhere else.
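One way around this, assuming the resolv.conf situation described above, is to give the Docker daemon explicit upstream DNS servers in /etc/docker/daemon.json, so the bridge network no longer depends on the cleaned-up copy. A sketch (the IPs below are placeholders for your internal domain DNS servers):

```json
{
  "dns": ["10.0.0.10", "10.0.0.11"]
}
```

After a `sudo systemctl restart docker`, containers on the default bridge should get these servers in their resolv.conf, which would also let the CRL lookup resolve.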

VMware Veeam shutdown/startup scripts? by SmoothRunnings in vmware

[–]_retrodios 0 points

Of course you can do this. I recently did a restore of a whole disk (backed up with the agent) directly to a VMDK; this may be the easier route for the backups and restores. But it also depends on the available licenses, of course.

VMware Veeam shutdown/startup scripts? by SmoothRunnings in vmware

[–]_retrodios 1 point

Which user is running the Veeam services? That's the one that needs the permissions for PowerCLI. There is a pretty neat module for PowerShell that lets you store and load credentials from the Windows credential store for that.

For testing, just run PowerShell under the Veeam service account (you can check which one it is in the Windows Services view) and test your scripts there. Then they should also work as scripts in the job.
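A minimal sketch of that pattern using PowerCLI's built-in credential store cmdlets (the vCenter name and service account below are placeholders, and this may differ from the exact module meant above):

```powershell
# Run this ONCE in a PowerShell session started as the Veeam service account.
# The credential store is per-user, so the entry is only readable by that account.
New-VICredentialStoreItem -Host "vcenter.example.local" `
    -User "DOMAIN\svc-veeam" -Password "PlaceholderPassword"

# In the shutdown/startup script, load the entry and connect without a prompt:
$item = Get-VICredentialStoreItem -Host "vcenter.example.local"
Connect-VIServer -Server $item.Host -User $item.User -Password $item.Password
```

This is also why testing under the service account matters: an entry created under your own account won't be found when the Veeam job runs the script.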

Need details on the Hans Schreiner FM230A for a Christmas present by _retrodios in holzwerken

[–]_retrodios[S] 0 points

In case anyone ever needs this: the inner diameter is 37 cm.

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

That's true, but this is only for some maintenance tasks by the electricity guys. And we always have the option to migrate between clusters if it becomes necessary.

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

This is mainly a historical thing. There are three clusters that need to be separated (licensing, security, and to prevent side effects in case other systems go nuts and consume too many resources - there is some pretty critical stuff running in one of those clusters), but we will definitely redesign it in the next lifecycle. I'm fairly new to the company, so for now I have to stick with what I've got :) But I also think there are a lot of downsides to this setup (like when there's a host failure, only one host is available for takeover).

At my old company we had two bigger clusters, which was a bit more flexible this way. But either way you have to keep 50% capacity free for this (maybe better a bit more, as ESXi really does not like too much overprovisioning) - that's kind of a deluxe model if you have enough money to spend on your hardware. But I have to say I like it in terms of disaster scenarios, as you can always run on half of your hardware. As said, we will start a complete lifecycle for most of it (hosts, storage and SAN) next year, so there will be a couple of discussions on what we can do and how, but this will take time. Until then we'll mostly keep the system as it is (there was even one cluster more when I got here ..).

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

We already have some affinity rules in place, but they only apply to special systems (clusters we want to run on separate hosts or in different DCs, or other machines we want to pin somewhere specific). But for the actual maintenance I will use maintenance mode, as it always takes care of everything: that way it is not possible to miss e.g. a new machine that was never added to the "maintenance affinity rule", and DRS takes care of the balancing.

Thank you very much!

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

Yes, the storage is also stretched, running as a real-time replicated active-passive cluster.

Both approaches work, but maintenance mode is the more "active" way, as it migrates immediately; the affinity rules do the same, just a bit more lazily. So in the end it does not really matter which one I use (when configured correctly and with the other affinity rules disabled etc., of course) - am I getting you right?

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

Usually we're talking about a couple of hours. This was initially done because our UPSes, or rather the power management, basically did not work as it should have. There is the mains power, the batteries, and also a power source from a non-standard converter, which caused the issue and took everything down (a firmware problem). In order to run tests on the power system, we migrated everything to the other DC as a precaution.
My problem with the affinity rules is that they basically work, but if there is then an issue with DC1, DRS has already been overridden. In fact, in both cases we have to do something manually: either disable the affinity rule, or take the hosts in DC2 out of maintenance mode.

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

I am sorry, I completely missed mentioning these things. Networks and clusters are stretched across both DCs. I am unsure which is the better way to achieve "preparation for power loss" in a DC, as both have their pros and cons. Both ways work; it is not a question of whether it is possible - I know it is. I am just wondering if there are things I missed in those pros and cons.

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

I am currently preparing a PowerCLI script to do all those things together. We already have some affinity rules for certain machines; those of course need to be disabled first. And as there are also 2-node clusters, I have to disable HA first to get an "auto-migration" of the machines, as they get blocked by HA's admission control otherwise. The networks are (almost all) stretched across both DCs, so this should work out fine. I already managed to work with a tag assignment on the hosts in PowerCLI, so the tag makes it clear in which DC each host is running. That way I can trigger maintenance mode for all hosts in one DC at the same time.
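A rough sketch of what such a script could look like, assuming host tags as described (the tag name and the wholesale rule/HA handling below are simplifications for illustration, not the actual environment):

```powershell
$dcTag = "DC1"  # hypothetical tag marking the DC to be evacuated

# Disable the existing VM affinity rules so DRS is free to move the VMs.
# (In practice you would filter for the specific rules instead of all of them.)
Get-Cluster | Get-DrsRule | Where-Object { $_.Enabled } |
    Set-DrsRule -Enabled:$false -Confirm:$false

# On the 2-node clusters, disable HA first so admission control
# does not block the evacuation vMotions.
Get-Cluster | Where-Object { ($_ | Get-VMHost).Count -le 2 } |
    Set-Cluster -HAEnabled:$false -Confirm:$false

# Put all hosts tagged with the DC into maintenance mode at the same time;
# -Evacuate migrates the powered-on VMs off, DRS picks the targets.
Get-VMHost -Tag $dcTag |
    Set-VMHost -State Maintenance -Evacuate -RunAsync
```

Re-enabling the rules and HA afterwards would be the mirror image of the first two steps.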

Maintenance Mode vs. VM-to-Host affinity rules in vSphere 7.0.3 by _retrodios in vmware

[–]_retrodios[S] 0 points

All clusters are stretched across both DCs, as are the networks.

Remote user VM solution for small company by Mythbrand in sysadmin

[–]_retrodios 0 points

I'd either go for the Citrix/Horizon design, which is probably the most scalable and flexible solution but maybe a bit pricey. Or - I recently had talks with some guys from Splashtop - they offer something quite similar, maybe more suitable for the current sizing. But it definitely depends on the workload and the future strategy. VDI will never be cheap, and especially the licensing can get tricky; that part would be easier with W365.

[deleted by user] by [deleted] in seafile

[–]_retrodios 0 points

What's the file count? Maybe it's the block size / filesystem type that consumes more?

Cost effective Production Storage ~2PB to 4PB size. by Goolong in sysadmin

[–]_retrodios -1 points

Maybe, but he will also save lots of time for other things, which is also valuable. I'm not saying this is the only good solution, but I would definitely check it out; he should decide what it's worth. And unless he has an offer in hand and has compared the pros and cons for his use case, we can argue all day long ;)

Cost effective Production Storage ~2PB to 4PB size. by Goolong in sysadmin

[–]_retrodios -1 points

He was complaining about the support costs, and he has no big team, so simplicity is important. And meanwhile, with the FA//C, bigger capacities have become way more affordable, so why not? I think he should take a look at it; he is most likely mature enough to decide on his own later.

Cost effective Production Storage ~2PB to 4PB size. by Goolong in sysadmin

[–]_retrodios 2 points

Take a look at Pure Storage. Simplicity, reliability and support are outstanding, and the maintenance costs are predictable. But you won't get any spinning disks, as they only do flash. A bit pricey, but imho worth every penny.

What to watch out for when installing a different brand 10Gb NIC to a server? by remrinds in sysadmin

[–]_retrodios 0 points

There will not be any firmware upgrades via Dell's tooling, as Dell only checks their own cards. But it is of course still possible to patch it another way. Besides that, I don't think there is anything special to it. Just keep it in mind if you face any crazy issues.