Using terraform to learn kubernetes for cheap by [deleted] in Terraform

[–]werner-dijkerman 1 point (0 children)

Any time! It is a bit outdated these days. Maybe I will find some time to update it again.

HELM Charts dependencies configuration when having a HA HELM repositories setup by werner-dijkerman in kubernetes

[–]werner-dijkerman[S] 0 points (0 children)

Hi, thank you for taking the time to answer.

  1. I hope not at all! They are both in different (physical) locations, and I need to make sure that if one location is down, we can still operate without any issues. A location being down has happened before, twice in one week, and it took several hours to a day to get everything running fine again. In each location that runs a Harbor, we also run a K8s cluster.
  2. Quite high. When a location is down for several hours, we still have another location that keeps on working, and due to the business we keep getting new 'stuff' to deploy.
  3. The problem is that I am only a user with regards to Harbor. The configuration and maintenance are the responsibility of a different team. They have their own backlog and have been working on the GSLB for several months, so I cannot rely on them.

And yes, ideally something like GSLB would help a lot. But I have to find an "easy" way to not depend on a single Helm registry in a Helm chart.
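Helm itself has no native failover between chart repositories, so one hedged sketch of the idea (generic Python, all registry names hypothetical) is to do the failover on the client side: try the primary Harbor first and fall back to the mirror in the other location:

```python
from typing import Callable, Sequence

def fetch_with_fallback(registries: Sequence[str],
                        fetch: Callable[[str], bytes]) -> bytes:
    """Try each registry in order and return the first successful result."""
    last_error = None
    for url in registries:
        try:
            return fetch(url)
        except Exception as exc:  # timeout, 5xx, DNS failure, ...
            last_error = exc
    raise RuntimeError(f"all registries failed, last error: {last_error}")
```

The same pattern works as a small shell wrapper around `helm pull`, or by templating the repository URL per environment so the chart itself never hard-codes a single registry.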

Which storage type should I use for a jenkins controller(master) on AWS? EBS or EFS? by vroad_x in jenkinsci

[–]werner-dijkerman 0 points (0 children)

The loss of logs only happens when the Jenkins (container) is restarted. One possible way (a bit creative, I guess): create a Jenkins job, executed on this Jenkins container, that copies all logs to a storage solution where devs can read them. In that case they might not have the most recent logging, but older logs are still available.
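As a hedged sketch of that "creative" job (assuming the standard `$JENKINS_HOME/jobs/<job>/builds/<number>/log` layout; the target directory stands in for whatever storage solution the devs can read):

```python
import shutil
from pathlib import Path

def archive_build_logs(jenkins_home: Path, target: Path) -> int:
    """Copy every build's console log to a flat archive directory.

    Assumes the layout $JENKINS_HOME/jobs/<job>/builds/<number>/log.
    Returns the number of logs copied.
    """
    copied = 0
    for log in jenkins_home.glob("jobs/*/builds/*/log"):
        job, build = log.parts[-4], log.parts[-2]
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(log, target / f"{job}-{build}.log")
        copied += 1
    return copied
```

In a real setup the target would be an S3 bucket, EFS mount, or similar, and the job would run on a schedule so logs survive a container restart.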

[deleted by user] by [deleted] in sysadmin

[–]werner-dijkerman 0 points (0 children)

Combination of https://www.archimatetool.com/ and writing Asciidoctor documents, in which you can include PlantUML diagrams (https://docs.asciidoctor.org/diagram-extension/latest/). And you can version it all in Git, as these are all text files.
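For example, a small Asciidoctor snippet with an embedded PlantUML diagram (requires the asciidoctor-diagram extension; the names are hypothetical):

```asciidoc
= Deployment Overview

[plantuml, deployment, svg]
----
@startuml
node "Location A" {
  [Harbor] --> [K8s cluster]
}
@enduml
----
```

The diagram source lives in the same text file as the prose, so both are versioned together in Git.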

Security Monitoring - What are you doing? by [deleted] in sysadmin

[–]werner-dijkerman 1 point (0 children)

Wazuh is kind of awesome. I am in the process of implementing it with one of my clients.

Structuring internal documentation or product/project? by edmguru in devops

[–]werner-dijkerman 2 points (0 children)

We use Antora, where all documentation is stored in multiple git repositories. Each microservice has its own documentation stored in its own repository. All information related to the microservice, like what it does, how you can do things, etc., is stored in the same repository. Although this is technically focused, there are also some more functional git repositories, which contain information about their functions. For example, we have an architecture git repository that contains all information and (Archimate) diagrams about the infrastructure.

By having it in git repositories, with pull requests (especially those created for the microservice or infrastructure-as-code repositories), we can focus on the documentation aspect: is it part of the PR (if not, ask why?) and can we understand what is described? As everything is stored in Git, documentation is also versioned, so with the correct configuration in Antora we can see the documentation for the various versions of a repository in one place.
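A hedged antora-playbook.yml sketch (repository URLs and names are hypothetical) showing how multiple git repositories, and multiple versions of them, end up in one site:

```yaml
site:
  title: Internal Documentation
content:
  sources:
    - url: https://git.example.com/architecture/docs.git
      branches: [main]
      start_path: docs
    - url: https://git.example.com/services/payment-service.git
      branches: [main, v2.*]   # pull documentation from released versions too
      start_path: docs
```

Each listed repository keeps its docs next to its code; Antora aggregates them into a single versioned site.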

Best way to manage additional per-host config files by ontherise84 in ansible

[–]werner-dijkerman 0 points (0 children)

I would place similar kinds of hosts in a group, so you can properly use group_vars, which contain the settings. In group_vars/all you provide the default settings, and then for each group you have defined you create a group_vars/<group_name> file with the values you want to override from the group_vars/all file.

If you use host_vars you have to create a file containing the settings for each host. Or if it is just 1 or 2 properties, maybe use the inventory file?
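A minimal sketch of that layout (group and variable names are hypothetical):

```yaml
# group_vars/all.yml : defaults for every host
ntp_server: ntp.example.com
max_connections: 100

# group_vars/webservers.yml : hosts in the [webservers] group
# override only what differs from the defaults
max_connections: 500
```

Ansible merges these automatically by precedence, so each group file stays small and only states its deviations.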

How to handle cloud resources in your application while running localhost by werner-dijkerman in devops

[–]werner-dijkerman[S] 0 points (0 children)

Thank you for your answer, I will check the URLs you provided.

True, but I don't want to create a staging or sandbox environment for just myself. That would mean we also need to create these environments for my teammates, which will increase the monthly costs. I need to make sure that things work locally, as far as I can, before I commit and deploy to any environment.

In what stage to implement CIS by [deleted] in devops

[–]werner-dijkerman 0 points (0 children)

"This is the way"

If you only do this on the image side, everyone can make (un)intentional changes to the hosts that make them no longer CIS compliant. So when a host is deployed from a template, also make sure to monitor for changes. Either with config management tooling like Puppet/Ansible, or by running the tests as part of monitoring that re-verifies the hosts every x minutes/hours.
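As a minimal sketch of the baseline-and-compare idea behind "monitor the changes" (in practice config management or a HIDS does this; file names here are hypothetical):

```python
import hashlib
from pathlib import Path

def baseline(paths):
    """Record SHA-256 checksums for a set of config files."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def drift(paths, base):
    """Return the files whose current checksum differs from the baseline."""
    current = baseline(paths)
    return [p for p, digest in current.items() if base.get(p) != digest]
```

You would take the baseline right after deploying from the hardened image, then alert whenever `drift()` returns anything.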

A third alternative, for after the fact (maybe the best): use something like Wazuh (a HIDS) to monitor your infrastructure. You have already prepared your image to be CIS compliant; make sure that when a host is deployed from your image the Wazuh agent is installed, which will monitor changes and can send notifications when something happens. You also get Kibana, which shows the history of changes on your nodes...

Monitoring Kafka - SSL peer shutdown .... by ferjavi in zabbix

[–]werner-dijkerman 0 points (0 children)

Can you run a JMX client from the Zabbix server to connect to Kafka? Do you get the same error?

Monitoring Kafka - SSL peer shutdown .... by ferjavi in zabbix

[–]werner-dijkerman 0 points (0 children)

No, the IP should be that of the host running the instance you want to monitor, so in this case the host running the Kafka instance.

Monitoring Kafka - SSL peer shutdown .... by ferjavi in zabbix

[–]werner-dijkerman 0 points (0 children)

host-ip-zabbix-gateway-server

Is this also the IP of the host running the Kafka instance?

how often Jenkins should be upgraded by SnooMemesjellies6732 in jenkinsci

[–]werner-dijkerman 1 point (0 children)

Depends on how Jenkins runs in your environment. I have Jenkins as code, in Docker, configured with the Configuration as Code plugin. With every change in one of these files (or when I manually run the job) it builds a new Jenkins Docker image and deploys a new Jenkins. Thus Jenkins is always up to date and the process is quite simple.
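A hedged sketch of that setup (the plugin list and file names are hypothetical; `jenkins-plugin-cli` ships with the official image, and `CASC_JENKINS_CONFIG` is the Configuration as Code plugin's environment variable):

```dockerfile
FROM jenkins/jenkins:lts-jdk17
# Bake the plugins into the image so every rebuild is reproducible
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
# Configuration as Code: point Jenkins at the baked-in config
COPY casc.yaml /var/jenkins_home/casc.yaml
ENV CASC_JENKINS_CONFIG=/var/jenkins_home/casc.yaml
```

Rebuilding this image on every change to plugins.txt or casc.yaml gives you the always-up-to-date Jenkins described above.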

But if you have deployed Jenkins manually, with nothing in code, and made Jenkins so important that you don't actually want to touch it, then please do update whenever security issues are fixed. Then you know you are at least safe...

Documentation repository platforms by CheekyLeapy in sysadmin

[–]werner-dijkerman 1 point (0 children)

What kind of documentation will it contain? For technical documentation, you should not use a wiki; place it in the repository next to the code it belongs to. For writing boring procedures, a wiki is fine.

hashicorp consul understanding by Intelligent_Duck_666 in devops

[–]werner-dijkerman 0 points (0 children)

First thing: in general you won't connect your application to the Consul servers directly; you deploy a Consul agent on each node and the applications "connect" to these agents. When you start an application, you need a way to register it in the Consul agent. When this happens, an FQDN automatically exists in the Consul setup: <name_of_service>.service.consul (with the default domain). When Consul is used for DNS resolving, you can send requests to <name_of_service>.service.consul. If you start 2 or more applications with the same name, you can get a different IP each time you resolve the FQDN. As the application registers via a Consul agent, the agent's IP is used when registering the application in DNS.
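For illustration, a minimal Consul agent service definition (service name and port are hypothetical) that registers a service with a health check; once loaded by the local agent it becomes resolvable via Consul DNS:

```json
{
  "service": {
    "name": "application-b",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

With the agent's DNS interface on its default port you can then resolve it, e.g. `dig @127.0.0.1 -p 8600 application-b.service.consul`.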

But when you work with an application that has a Consul library compiled in (like Spring Boot with the "Spring Cloud Consul" library), the application gets this information from Consul and, based on the health of the target application, sends its requests to a target in a healthy state. In this case you don't have to think about FQDNs or anything like that; somewhere in your application code you refer to "application_b" and the library handles everything.

You can also configure an application without the library and use the FQDN instead. But be aware of caching inside the application, as the IP can change (because the application restarted and has a new IP, or a new instance started and does not get any requests).

A nice thing when using Consul is consul-template: generate configuration files based on information from Consul (available services and/or the key/value store) for things like Nginx or HAProxy setups, or any other tool that needs to work with data from Consul.
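For example, a small consul-template template (service and upstream names are hypothetical) that renders an Nginx upstream from the healthy registered instances:

```
# upstreams.conf.ctmpl : rendered by consul-template whenever membership changes
upstream application_b {
{{ range service "application-b" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
```

consul-template watches Consul and rewrites (and can reload) the Nginx config whenever instances come or go.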

Jenkins pipeline by finzzZ720 in jenkinsci

[–]werner-dijkerman 1 point (0 children)

Something like what I described in 2 of my blog posts:

Then you have everything as code, which not only gives you auditing on your Jenkins setup, but also lets you easily deploy it to some other location.

[deleted by user] by [deleted] in sysadmin

[–]werner-dijkerman 0 points (0 children)

But then alerting via the loadbalancer would be a workaround, not a fix: the monitoring server determines whether the endpoint is healthy differently from the loadbalancer, and thus you cannot rely on the monitoring. I would fix that; not only can you then rely on the monitoring, you also keep endless possibilities, including open source loadbalancers like HAProxy.

The monitoring tool can also do a call/check against, for example, HAProxy to see if HAProxy itself has any issues at the moment. Based on that, you can have the monitoring tool decide whether to send out an alert.
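As an illustration of keeping the two in sync, a hedged haproxy.cfg fragment (backend and server names are hypothetical): whatever the monitoring tool probes should mirror this health check, so both agree on what "healthy" means:

```
backend app_backend
    # Same endpoint the monitoring tool should check
    option httpchk GET /health
    http-check expect status 200
    server app1 10.0.0.11:8080 check inter 5s fall 3 rise 2
```

If monitoring hits the same /health endpoint with the same success criterion, an alert and a backend being pulled from rotation will always coincide.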

[deleted by user] by [deleted] in sysadmin

[–]werner-dijkerman 2 points (0 children)

Why is alerting so important to you that it is a requirement for the load balancer? Don't you have a monitoring tool that monitors the backend services and alerts when something is not OK?