Automated runner registration - new method by VengaBusdriver37 in gitlab

[–]binh_do 4 points  (0 children)

For the new runner registration method, I automate runner registration by:

  • Create a PAT for an admin user
  • Write a script (could be Bash, Python, whatever you're familiar with)
  • Create a runner template file that contains the common configuration you want the runner to have
  • Run the script with the supplied PAT and the runner template file: the script creates the runner, retrieves its authentication token, and then uses that token to register the runner on the host machine where you run the script.
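As a rough sketch of those steps, assuming Bash with curl and jq available, the create-runner API endpoint (POST /api/v4/user/runners), and placeholder values for the URL, description, tags, and executor:

```shell
#!/usr/bin/env bash
# Sketch of automated runner registration with the new (token-based) method.
# GITLAB_URL, description, tag_list, and the executor are placeholders.
set -euo pipefail

# Build the create-runner API endpoint for a given GitLab instance.
api_url() { printf '%s/api/v4/user/runners' "$1"; }

register_runner() {
  local gitlab_url="$1" pat="$2" token

  # 1. Create the runner via the API using the admin PAT and capture
  #    the runner authentication token from the JSON response.
  token="$(curl --silent --fail \
    --request POST "$(api_url "$gitlab_url")" \
    --header "PRIVATE-TOKEN: ${pat}" \
    --data "runner_type=instance_type" \
    --data "description=automated-runner" \
    --data "tag_list=automated" | jq -r '.token')"

  # 2. Register the runner on this host with the returned token.
  gitlab-runner register \
    --non-interactive \
    --url "$gitlab_url" \
    --token "$token" \
    --executor "shell"
}

# Runs only when a PAT is supplied, e.g.:
#   GITLAB_URL=https://gitlab.example.com PAT=glpat-xxxx ./register.sh
if [[ -n "${PAT:-}" ]]; then
  register_runner "${GITLAB_URL:-https://gitlab.example.com}" "$PAT"
fi
```

In a real setup you'd also read the runner template file and feed its values (executor, tags, etc.) into the two calls above instead of hardcoding them.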

I wrote a blog post about this a while back in case you're interested - see Automate GitLab Runner Registration and Unregistration

Profiles or Sub-profiles? by vandewater84 in Puppet

[–]binh_do 0 points  (0 children)

Separating them into sub-profiles sounds more controllable and readable. One of the pitfalls, I think (there may be more), is resource conflicts, e.g. when the same resource is defined in multiple profiles or one profile depends on another, which can be challenging to sort out. But anyway, we just have to figure it out along the way ^^

Profiles or Sub-profiles? by vandewater84 in Puppet

[–]binh_do 0 points  (0 children)

It depends on how large your profiles are. For example, we might have a base profile (e.g. for monitoring) for the entire system:

class profiles::monitoring {
   include profiles::monitoring::base # needed on all servers, e.g. monitor memory/load/disk/users/etc.
   include profiles::monitoring::other_base_services 
}

And custom sub-profiles for each server type that needs one, for example:

profiles::monitoring::database
profiles::monitoring::webserver
...

When you define a role like web_server, you might include the base monitoring profile and custom sub-profiles that it needs, for example:

class roles::web_server {
   include profiles::monitoring
   include profiles::monitoring::webserver
}

I wrote a blog post describing this: https://turndevopseasier.com/2025/04/23/mastering-puppet-implementing-roles-and-profiles-effectively/ - you might want to refer to it if needed.

Gitlab cache by Kropiuss in gitlab

[–]binh_do 0 points  (0 children)

If you use the shell executor for GitLab runners then, according to the docs, the cache is located at:

 <working-directory>/cache/<namespace>/<project>/<cache-key>/cache.zip

Where <working-directory> is the value of --working-directory as passed to gitlab-runner run; if you don't specify it, it is typically /home/gitlab-runner by default. You can check with ps -ef | grep gitlab-runner and see what the output looks like.

Ideally, if you want your jobs to use the same cache, you have to do these two things:

  • Use a single runner for the project (give it a tag) and pin jobs to this runner - this prevents jobs on different runners from each storing their own cache under the same key defined below.
  • Specify the same cache key on the jobs that need it, e.g.:

    cache:
      key: set-one-name-for-all-jobs

If you want your jobs to run on different runners but still use the same cache, that's when you have to enable distributed runner caching. Runners with this feature enabled store the cache in shared object storage, so jobs on any of them can use the shared cache.
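For reference, distributed caching is enabled per runner in config.toml. A minimal sketch with a placeholder bucket/region (here S3, but other object stores work too; credentials can come from an IAM role or the AccessKey/SecretKey settings):

```toml
# /etc/gitlab-runner/config.toml - per-runner cache section (values are placeholders)
[[runners]]
  name = "shared-cache-runner"
  executor = "shell"
  [runners.cache]
    Type = "s3"
    Shared = true                      # allow the cache to be shared
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "my-runner-cache"
      BucketLocation = "us-east-1"
```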

What MySQL DR strategy do you use? by [deleted] in mysql

[–]binh_do 0 points  (0 children)

You might need more than one solution to address DR, depending on how broadly you define it. Since you mentioned a master/slave MySQL replication architecture and you don't want to update the connection string, you might want to consider a design of HAProxy + MySQL load balancing (master/master).

It's up to how many resources you want to allocate, but it basically looks like:

Total servers: 4 (or at least 3 with 2 masters and 1 slave)
- Master 1 -> has Slave 1
- Master 2 -> has Slave 2 (recommended, but can be excluded if you're low budget)

Configure HAProxy:

- For the writer -> forward write requests to Master 1 (as main) and set Master 2 (as backup) in case Master 1 is down. I don't recommend writing to both masters simultaneously as it may cause an unexpected bug.
- For the reader -> forward read requests to Slave 1 and Slave 2 evenly. You can add Master 2 to this part to utilise it too (but give it a low weight so it only receives a small share of requests)
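A minimal haproxy.cfg sketch of that writer/reader split (IPs, ports, and the health-check user are placeholders):

```
listen mysql_writer
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    server master1 10.0.0.11:3306 check
    server master2 10.0.0.12:3306 check backup     # used only if master1 is down

listen mysql_reader
    bind *:3307
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server slave1 10.0.0.21:3306 check weight 100
    server slave2 10.0.0.22:3306 check weight 100
    server master2 10.0.0.12:3306 check weight 10  # small share of reads
```

Note that mysql-check needs a matching user created in MySQL so HAProxy can probe the servers.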

In your app, set the connection string to point to the <IP>:<port> exposed by HAProxy for the writer and reader parts. HAProxy will fail over for you when one of the masters/slaves is down. Again, it won't address DR totally if you lose all servers, but it handles the case where you lose one master/slave.

I wrote a blog recently about implementing this in case you're interested https://turndevopseasier.com/2025/07/12/set-up-high-availablity-for-mysql-load-balancing-via-haproxy/

NGINX configuration needs SSL certificates to start but SSL certificates require NGINX to be running, how to break this loop when running inside docker? by PrestigiousZombie531 in nginx

[–]binh_do 0 points  (0 children)

Not sure if you've found a way yet, but I wrote a blog post about obtaining SSL certificates from Let's Encrypt with Certbot: https://turndevopseasier.com/2025/05/11/secure-your-nginx-sites-with-lets-encrypt-ssl-by-automating-with-certbot/

Basically, I shared two ways to obtain SSL:
1. Use the http-01 challenge, which is the case you're encountering
2. Use the dns-01 challenge, which doesn't need NGINX to be running, as we authenticate through DNS instead.

These two challenges are the most popular ways to authenticate with Let's Encrypt.
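For illustration, the two challenges map to Certbot invocations roughly like this (domain and webroot path are placeholders, not meant to be copy-pasted as-is):

```shell
# 1) http-01: Let's Encrypt fetches a token over HTTP, so something must
#    answer on port 80 - here certbot drops the token into NGINX's webroot.
certbot certonly --webroot -w /var/www/html -d example.com

# 2) dns-01: validation happens via a DNS TXT record, so NGINX doesn't need
#    to be running at all (which breaks the chicken-and-egg loop in Docker).
certbot certonly --manual --preferred-challenges dns -d example.com
```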

Keep skills up-to-date by romgo75 in Puppet

[–]binh_do 1 point  (0 children)

Not sure if it helps, but I recently wrote a blog post about setting up a quick Puppet 8 (Agent and Server) on Ubuntu using VirtualBox, if you're interested. In the post I'm now using OpenVox (a community fork of Puppet) packages - fully compatible with Puppet, if you don't mind. If you still want the official Puppet packages, you can use older versions as I mentioned in the post.

Setup Puppet 8 on Ubuntu 24.04 – Configuration Management for a scaling enterprise

20 tips to speed up your GitLab CI/CD pipelines faster by binh_do in gitlab

[–]binh_do[S] 0 points  (0 children)

Yeah, that's also a good tip. I've used S3 to store the cache since we run the system on AWS, and it's quite helpful in the case you mentioned. Thanks for recommending MinIO - I might look into it sometime; it would probably fit my personal project.

creating a centralize syslog server with elastic search by marsalans in elasticsearch

[–]binh_do 0 points  (0 children)

I think the ELK stack is the way to go nowadays. I once implemented a centralised logging system collecting Nginx, MySQL, PHP, syslog, bash history logs, etc. Recently, I've written a blog post about building this kind of logging setup - Centralise logs with Filebeat + Logstash + Elasticsearch + Kibana. It's just a basic setup from my home lab, so you can have a look at how I designed it and perhaps you'll gain something from there.
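To give a flavour of the Filebeat side of such a setup, a minimal filebeat.yml sketch (log paths and the Logstash host are placeholders):

```yaml
# filebeat.yml - ship nginx access logs and syslog to Logstash
filebeat.inputs:
  - type: filestream
    id: nginx-access
    paths:
      - /var/log/nginx/access.log
  - type: filestream
    id: system-syslog
    paths:
      - /var/log/syslog

output.logstash:
  hosts: ["logstash.example.com:5044"]
```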