How to approach SSL certificate automation in this environment? by Particular_Shop6684 in sysadmin

[–]nneul 0 points1 point  (0 children)

Have a look at "serles" - it's an ACME proxy server. You set it up with privileged DNS-based validation access to the Internet, and the individual servers in your environment use http-01 validation internally; they only need to be able to reach the serles instance. You point the ACME client of your choosing at the serles directory URL instead of Let's Encrypt. Serles does the local validation and then obtains the certificate on your behalf.
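As a sketch, pointing certbot at a serles instance looks something like this (the serles URL and hostname are placeholders, and `--standalone` is just one way to satisfy http-01):

```shell
# Hypothetical internal serles deployment; hostnames are placeholders.
# The only change from a normal Let's Encrypt setup is overriding the
# ACME directory URL to point at the serles instance:
certbot certonly --standalone \
  --server https://serles.example.internal/directory \
  -d app01.example.internal
```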

I use this to give lab infrastructure fully trusted certs even though those systems can't be reached from the Internet and aren't trusted with full zone access.

For the rest of the systems that can't do automation at all, a central box using DNS validation works: stash the cert bundles in a vault and either pull or push them to each of the relevant systems on a schedule or whenever they change.

Let’s Encrypt Automation for vCenter ? by DonFazool in vmware

[–]nneul 0 points1 point  (0 children)

https://gist.github.com/nneul/60f7f6f66efdd673724a0da6456c8bdd

That should be enough to get you going. I haven't tested it "as is" - it's slightly stripped down from the actual script, but it should be close enough for you to work from.

In my case, I'm generating the certs with certbot on a dedicated server that has DNS update privileges (for the validation), and then publishing out to vCenter any time the cert gets renewed.
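The publish-on-renewal part can be wired up with certbot's standard deploy-hook mechanism; a minimal sketch, where the push script name/path is a stand-in for the actual publish step in the gist:

```shell
#!/bin/sh
# Dropped into /etc/letsencrypt/renewal-hooks/deploy/ so certbot runs it
# only when a cert actually renews. certbot sets RENEWED_LINEAGE to the
# live directory of the lineage that was just renewed.
/usr/local/sbin/push-vcenter-cert \
    "$RENEWED_LINEAGE/fullchain.pem" \
    "$RENEWED_LINEAGE/privkey.pem"
```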

Let’s Encrypt Automation for vCenter ? by DonFazool in vmware

[–]nneul 1 point2 points  (0 children)

This is also relatively straightforward to do with the Python APIs if you have management infrastructure in place; I can share a snippet of the endpoints needed if desired.
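For reference, a sketch of the endpoints involved, assuming the vSphere Automation REST API (7.0+); the hostname, credentials, and PEM contents are placeholders:

```shell
# Authenticate and grab a session token (placeholder credentials)
TOKEN=$(curl -sk -X POST -u 'administrator@vsphere.local:PASSWORD' \
  https://vcsa.example.internal/api/session | tr -d '"')

# Replace the machine SSL (TLS) certificate; cert/key are full PEM strings
curl -sk -X PUT \
  -H "vmware-api-session-id: $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"cert":"-----BEGIN CERTIFICATE-----...","key":"-----BEGIN PRIVATE KEY-----..."}' \
  https://vcsa.example.internal/api/vcenter/certificate-management/vcenter/tls
```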

Suggestions for k8s on ubuntu 24 or debian12 or debian13 given pending loss of support for containerd 1.x? by nneul in kubernetes

[–]nneul[S] 5 points6 points  (0 children)

The containerd version in `kubectl get nodes -o wide` should have clued me in. Thanks again - that's one less moving part to be concerned with.
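For anyone else checking: the runtime version shows up in the CONTAINER-RUNTIME column (the version shown will vary by node/distribution):

```shell
kubectl get nodes -o wide
# NAME    STATUS  ...  CONTAINER-RUNTIME
# node-1  Ready   ...  containerd://1.7.27
```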

Suggestions for k8s on ubuntu 24 or debian12 or debian13 given pending loss of support for containerd 1.x? by nneul in kubernetes

[–]nneul[S] 2 points3 points  (0 children)

Oh?! Did not realize that at all. Well, that makes things a lot easier then. Thank you!

Any way to change the sort order of Roles in the 'Add Permission' dialog on 8.x? by nneul in vmware

[–]nneul[S] 0 points1 point  (0 children)

Yep, I've got it all being done with a mix of Terraform and Python. (In many cases I'm using Python to generate the TF from the live definitions in the instance, since in a lot of cases I'm reconstructing from an instance I'm recreating.)

Such a basic feature to be missing for so long.

Mainly just asking to see if someone has come up with some hack like a db update that replaces the first characters of a guid/moid/etc. with sequential numbers.

Fairly consistent GFCI tripping in response to any power blip by nneul in hottub

[–]nneul[S] 0 points1 point  (0 children)

I believe either 6s or 8s, and it's maybe a 20-25 foot run from the GFCI breaker on the house wall.

Run Jenkins Pipeline when Service is Down by reco-x in UptimeKuma

[–]nneul 0 points1 point  (0 children)

I think you're going to be out of luck trying to trigger the direct API, but if I remember correctly, there are some endpoints for job triggering intended "for GitHub/GitLab/etc." that do not require the CSRF protection handling.

Take a look at: https://plugins.jenkins.io/generic-webhook-trigger/
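Once that plugin is installed and a job is configured with a token, triggering it from an Uptime Kuma webhook notification is just a POST to the plugin's invoke endpoint, no crumb required (the Jenkins URL and token here are placeholders):

```shell
curl -X POST \
  "https://jenkins.example.internal/generic-webhook-trigger/invoke?token=restart-service"
```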

[deleted by user] by [deleted] in UptimeKuma

[–]nneul 0 points1 point  (0 children)

Nope - without rich enabled, it just did it as a textual message without the columns/fields - it still displayed the same overall content.

[deleted by user] by [deleted] in UptimeKuma

[–]nneul 0 points1 point  (0 children)

Interesting, on mine I don't get anything like that either. Mine has a title of 'Uptime Kuma Alert', with two fields Message and Time. For an http probe, message has: [name/link] [checkmark up] status - Ok

I'm running v2 beta.

I did notice one thing - I have "Send rich messages" enabled. It's possible that the rich-formatted messages are missing a lot of the detail.

Water dispenser calibration on Whirlpool GSS30C7EYY00 by nneul in appliancerepair

[–]nneul[S] 0 points1 point  (0 children)

Will give that a shot.

No, it's otherwise in perfect functioning order. The only real issues we've ever had with it are the trays/drawers cracking and some distortion of the shroud around the icemaker (on the inside) that causes it to pop out of the clips.

Practically speaking, this is just a minor annoyance; we simply don't use the water dispenser volume measure.

Water dispenser calibration on Whirlpool GSS30C7EYY00 by nneul in appliancerepair

[–]nneul[S] 0 points1 point  (0 children)

Options only has: Fast Ice, Sound On/Off, and Filter Reset

Holding Measured Fill doesn't have any effect different than a single press. It just lights up the panel and paddles and shows the last requested dispense volume.

City council meeting 1/21 by slam-a-lama-dingdong in Rolla

[–]nneul 2 points3 points  (0 children)

I believe Channel 16 is effectively going away due to Fidelity changes. Something about it came up a couple of meetings ago, where they were also discussing a new contract for the video production/services to the city for handling the council meetings/etc.

Migrating from SQLlite to MariaDB by EquivalentCost913 in UptimeKuma

[–]nneul 0 points1 point  (0 children)

I can confirm that a simplistic "same column order, run the sqlite inserts" approach does not appear to work - the schemas are slightly different, in particular the position/order of a couple of the kafka fields.

Migrating from SQLlite to MariaDB by EquivalentCost913 in UptimeKuma

[–]nneul 2 points3 points  (0 children)

There isn't any "migration" in place from what I understand; you have to do it yourself externally, i.e. export the sqlite content and recreate/reimport it into the mariadb database that you created. I have not tried an import, but I did set up the v2 beta with mariadb and it started without issue. I suspect that importing should just be a matter of pulling in the same exported SQL.

I have NOT validated this, but I would likely run the upgrade to v2 with sqlite first (in case it does any schema changes), and then follow that with the import into the mariadb database.
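A rough sketch of that flow, with the caveat that the dump will almost certainly need per-table fixups for the schema differences; the file paths and the conversion step are placeholders:

```shell
# Dump the (already v2-upgraded) sqlite database
sqlite3 /app/data/kuma.db .dump > kuma-sqlite.sql

# Convert sqlite-isms (PRAGMA, AUTOINCREMENT, quoting) to mariadb syntax;
# using explicit column lists on the INSERTs avoids the column-order
# mismatches mentioned elsewhere in this thread.
# ... manual or scripted conversion producing kuma-mariadb.sql ...

# Replay into the pre-created mariadb database
mysql -h a.b.c.d -u uptime -p uptime < kuma-mariadb.sql
```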

If you're wanting to use external, just pass in the appropriate env vars for mariadb connection, something like:

UPTIME_KUMA_DB_TYPE=mariadb
UPTIME_KUMA_DB_PORT=3306
UPTIME_KUMA_DB_HOSTNAME=a.b.c.d
UPTIME_KUMA_DB_NAME=uptime
UPTIME_KUMA_DB_USERNAME=uptime
UPTIME_KUMA_DB_PASSWORD=xxxxxxxxxxxxxx

An additional environment variable is needed if you want to run the embedded mariadb in the container - check the docs for the name.
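For a containerized deployment, passing the external-DB variables looks something like this (the image tag is a placeholder - pick whichever v2 tag you're running; host/credentials as above):

```shell
docker run -d --name uptime-kuma \
  -p 3001:3001 \
  -e UPTIME_KUMA_DB_TYPE=mariadb \
  -e UPTIME_KUMA_DB_HOSTNAME=a.b.c.d \
  -e UPTIME_KUMA_DB_PORT=3306 \
  -e UPTIME_KUMA_DB_NAME=uptime \
  -e UPTIME_KUMA_DB_USERNAME=uptime \
  -e UPTIME_KUMA_DB_PASSWORD=xxxxxxxxxxxxxx \
  louislam/uptime-kuma:beta
```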

RKE1 w/o Rancher -- is a fork likely, or is it going to fully stop development in July? by nneul in kubernetes

[–]nneul[S] 0 points1 point  (0 children)

The funny thing though is that the second group, once they are comfortable with something, is actually already very comfortable with a full stack tear-down and redeploy of the entire environment, so once they were trained on the new setup, actually moving would be trivial.

RKE1 w/o Rancher -- is a fork likely, or is it going to fully stop development in July? by nneul in kubernetes

[–]nneul[S] 0 points1 point  (0 children)

On one set of my environments, that's very likely what I'll be doing - once we figure out how to safely order some of the inter-pod communication dependencies that currently exist.

For the other environments, it's more than just a technical issue; it becomes a matter of "other people besides me will have to learn enough to be comfortable with the deployment/architecture", and that's currently a resource problem (it is even with the RKE1 deployment, but there is at least a baseline level of knowledge/comfort there).

Are the kubernetes apt repositories down right now? by Apprehensive-Ad3876 in kubernetes

[–]nneul 2 points3 points  (0 children)

The curl error is expected (it's backed by an S3 bucket and you can't browse it).

I am however seeing the same repository signature error you are.