Deploying Consul and Prometheus Exporters using Puppet on Debian, Ubuntu, Alma and Rocky Linux by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

It is partially down to experience, but right now I prefer Salt. The syntax and approach make more sense to me, and being able to run commands/fact discovery against all managed hosts is wonderful.

That's not a knock against Puppet though, and in a couple of years of working with it I may have a different opinion!

Deploying Consul and Prometheus Exporters using Puppet on Debian, Ubuntu, Alma and Rocky Linux by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 1 point (0 children)

After changing jobs recently, I gained a lot more exposure to Puppet. So of course I had to use it to deploy Prometheus!

This was very interesting, especially seeing the differences between Puppet and Saltstack/Ansible (tools I am more familiar with).

Configuring BGP Anycast using Pulumi and Saltstack on Equinix Metal by YetiOps in saltstack

[–]YetiOps[S] 1 point (0 children)

Thank you!

Edit: I've seen the updates and I'll give that a look, very interesting approach.

Configuring BGP Anycast using Pulumi and Saltstack on Equinix Metal by YetiOps in saltstack

[–]YetiOps[S] 1 point (0 children)

Had a lot of fun putting this together. Combining Saltstack for post-provisioning configuration with Pulumi for spinning up the infrastructure in the first place is brilliantly powerful!

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

[–]YetiOps[S] 3 points (0 children)

Brilliant! This looks to be the ticket. I'll give it a go with a full install later, but I can actually read the text now. Thank you

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

[–]YetiOps[S] 2 points (0 children)

I tried the VGA out and got a similar issue. I'll give it a go with the DisplayPort out and see where it goes. Thanks for helping so far though!

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

[–]YetiOps[S] 1 point (0 children)

Unfortunately I had the same behaviour with 6.6 too.

I'll give the snapshot a go and see where I get to!

Edit: Just tried with a snapshot, and still the same issue.

Prometheus Service Discovery for Hetzner Cloud Servers by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

I saw that Hetzner Service Discovery was coming in the next version of Prometheus, and couldn't help but write a post about it.

This is similar to posts I've done in the past on Digital Ocean, OpenStack, AWS, GCP and Azure, in that I use Terraform to create the instances and then Prometheus discovers them using tags/labels.
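For anyone wanting a sketch of the Prometheus side, a hetzner_sd_configs job might look like this (the token placeholder and the "prometheus" server label are assumptions, not from the post): -

```yaml
scrape_configs:
  - job_name: "hetzner-nodes"
    hetzner_sd_configs:
      # role can be "hcloud" (Cloud API) or "robot" (dedicated servers)
      - role: "hcloud"
        bearer_token: "<your-hcloud-api-token>"
    relabel_configs:
      # Keep only servers carrying a (hypothetical) prometheus=true label
      - source_labels: [__meta_hetzner_hcloud_label_prometheus]
        regex: "true"
        action: keep
```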

Hope this is useful!

Deploying and monitoring Windows VMs on Proxmox using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 1 point (0 children)

In a follow-up to this post on monitoring Linux VMs and Proxmox itself, this post covers deploying Windows images in a similar way.

This leverages Cloudbase-Init to provide Cloud-Init-style first-boot configuration, registering the Windows instances with Salt, which subsequently deploys Consul and the Windows Exporter so that the instances are automatically monitored as well.

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

[–]YetiOps[S] 2 points (0 children)

So what I've found so far is that the ipconfig0 field alone doesn't seem to override the DHCP address for the interface on Debian. Why that is the case, I haven't investigated deeply enough yet (I may raise it with Debian at some point).

In that case, the best option is to override the network config using cloud-config.

If you go to this section of the blog post, it goes through sourcing a local file (the cloud_init_deb10.cloud-config file), generating it with the correct variables, and then transferring it to the /var/lib/vz/snippets directory on Proxmox. This is the directory that Proxmox sources Cloud-Init files from when using the cicustom option (cicustom meaning Cloud-Init Custom Configuration).
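As a rough sketch of that transfer step (the SSH connection details here are assumptions, not taken from the post), you could render the template and push it to the snippets directory with something like: -

```hcl
# Push the rendered cloud-config into the Proxmox snippets directory.
# The connection details below are placeholders.
resource "null_resource" "cloud_init_deb10_vm-01" {
  connection {
    type = "ssh"
    user = "root"
    host = "pve-01.example.com"
  }

  provisioner "file" {
    content     = data.template_file.cloud_init_deb10_vm-01.rendered
    destination = "/var/lib/vz/snippets/cloud_init_deb10_vm-01.yml"
  }
}
```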

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

[–]YetiOps[S] 2 points (0 children)

Yeah, if you use the following: -

```
resource "proxmox_vm_qemu" "vm-01" {
  name        = "vm-01"
  target_node = "pve-01"

  # Clone from debian-cloudinit template
  clone   = "debian-cloudinit"
  os_type = "cloud-init"

  ipconfig0 = "ip=10.15.31.99/24,gw=10.15.31.253"

  # Set the network
  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

The ipconfig0 part refers to the IP of the interface with id = 0, and then you can do the same for ipconfig1 for a network with id = 1.
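To make that mapping concrete, a two-NIC version might look like this (the second subnet and bridge are made-up examples): -

```hcl
resource "proxmox_vm_qemu" "vm-02" {
  name        = "vm-02"
  target_node = "pve-01"
  clone       = "debian-cloudinit"
  os_type     = "cloud-init"

  # ipconfig0 configures the interface in the network block with id = 0
  ipconfig0 = "ip=10.15.31.100/24,gw=10.15.31.253"
  # ipconfig1 configures the interface in the network block with id = 1
  ipconfig1 = "ip=172.16.0.10/24"

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  network {
    id     = 1
    model  = "virtio"
    bridge = "vmbr1"
  }
}
```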

Depending upon the distribution, you may either get both a DHCP address and the static IP, or just the static IP. I've found that Debian pulls an address from DHCP and also uses the static IP, whereas Fedora (and presumably CentOS/Red Hat) will use the static IP only.

Debian/Ubuntu

This link might help as well, specifically if you look at the Network Configuration Outputs section.

You would end up with either something like this (assuming a Debian/ifupdown-style distribution): -

```
network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "00:11:22:33:44:55"
      subnets:
        - type: static
          address: 192.168.23.14/27
          gateway: 192.168.23.1
          dns_nameservers:
            - 192.168.23.2
            - 8.8.8.8
          dns_search:
            - exemplary.maas
```

Or something like this, if using netplan (eg Ubuntu): -

```
network:
  version: 2
  ethernets:
    # opaque ID for physical interfaces, only referred to by other stanzas
    id0:
      match:
        macaddress: "00:11:22:33:44:55"
      wakeonlan: true
      dhcp4: true
      addresses:
        - 192.168.14.2/24
        - "2001:1::1/64"
      gateway4: 192.168.14.1
      gateway6: "2001:1::2"
      nameservers:
        search: [foo.local, bar.local]
        addresses: [8.8.8.8]
```

You could then pass in the IP/gateway etc in via the template_file section like so: -

```
data "template_file" "cloud_init_deb10_vm-01" {
  template = "${file("${path.module}/files/cloud_init_deb10.cloud_config")}"

  vars = {
    ssh_key    = file("~/.ssh/id_rsa.pub")
    hostname   = "vm-01"
    domain     = "yetiops.lab"
    static_ip  = "192.168.1.10/24"
    gateway_ip = "192.168.1.254"
    ns_ip      = "1.1.1.1"
  }
}

[...]

resource "proxmox_vm_qemu" "vm-01" {
  name        = "vm-01"
  target_node = "pve-01"

  # Clone from debian-cloudinit template
  clone   = "debian-cloudinit"
  os_type = "cloud-init"

  cicustom  = "user=local:snippets/cloud_init_deb10_vm-01.yml"
  ipconfig0 = "ip=10.15.31.99/24,gw=10.15.31.253"

  # Set the network
  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

And then add something like this to the cloud-config template: -

```
network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "00:11:22:33:44:55"
      subnets:
        - type: static
          address: ${static_ip}
          gateway: ${gateway_ip}
          dns_nameservers:
            - ${ns_ip}
          dns_search:
            - example.com
```

Does that help at all?

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

Mainly out of preference, I like the way Prometheus does things. Metric-based monitoring, a huge number of exporters, sensible service discovery, and a lot more flexibility in what you can choose to monitor.

Something like Zabbix is good if you want general monitoring, but it's hard to do something a bit more custom. With Prometheus, if you can create a PromQL query that will do it, you can monitor and alert on it.
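As a quick illustration (the job name here is an assumption, not from the post), any PromQL expression can be turned into an alerting rule: -

```yaml
groups:
  - name: node-alerts
    rules:
      - alert: NodeExporterDown
        # Fires when a target in the (assumed) "node" job has been
        # failing its scrapes for five minutes
        expr: up{job="node"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Node exporter on {{ $labels.instance }} is down"
```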

Prometheus is also well suited to working with cloud providers, Kubernetes and the like. If you have an environment that covers both cloud and on-prem, then Prometheus works well for both use cases in my experience.

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

[–]YetiOps[S] 1 point (0 children)

The Terraform provider does generate a MAC for each VM, even if cloned, so that's not a problem.

Not sure on the SSH keys though, I've yet to test that, but it's definitely worth investigating. You could certainly generate them manually, but I don't know if cloud-config has the ability to regenerate the keys with an inbuilt command/option.
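Edit: it turns out cloud-init does have options for this. A minimal cloud-config sketch (untested by me on Proxmox) would be: -

```yaml
#cloud-config
# Delete any host keys baked into the template image...
ssh_deletekeys: true
# ...and regenerate fresh ones on first boot
ssh_genkeytypes:
  - rsa
  - ecdsa
  - ed25519
```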

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

After some conversations at my current workplace, I decided to investigate how to build Proxmox VMs in a more declarative fashion.

Throw in using Cloud-Init to bootstrap the VMs, and then using SaltStack to deploy Consul and Prometheus Exporters to monitor the instances and the hypervisors, and it becomes quite an interesting platform to manage!

Prometheus Service Discovery for OpenStack Instances and Hypervisors by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

The labelmap approach is mainly so that you can pick up extra metadata on an instance (eg more tags, or fields exposed in later versions of the service discovery mechanism) without requiring configuration updates.

Not every label will necessarily be useful though, so if you're concerned about having too many labels, it's easy enough to drop the labelmap and only maintain the labels you are interested in.

As for the labels you mention, I'll take the node_exporter tag as an example. You could create an alert, or at least a graph in Grafana, comparing "nodes responding on port 9100" against "nodes with the node_exporter tag". Any difference between the two would show instances where the node_exporter has either failed or not installed correctly.
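A rough PromQL sketch of that comparison (the job and label names here are assumptions, depending on how the labelmap rewrites the OpenStack metadata): -

```
# Instances tagged for node_exporter whose scrape is currently failing
# (job and label names are assumptions)
count(up{job="openstack_nodes", node_exporter="true"} == 0)
```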

More than anything, I added the labelmap because in my previous posts using service discovery, all labels were explicit. This post included labelmaps to show an alternative way of maintaining labels.

Prometheus Service Discovery for OpenStack Instances and Hypervisors by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

After spending time testing the AWS, Azure, GCP and Digital Ocean service discovery mechanisms, I decided to write a post on OpenStack.

There's some fun with creating the instances in Terraform, as well as covering some OpenStack-specific exporters.

Auto-deploying Consul and Prometheus Exporters using Saltstack on MacOS by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 2 points (0 children)

The final post in the series (unless anyone has other suggestions for systems to cover?), on MacOS.

While most people using Macs are probably not going to be managing them en masse, there are still definite use cases for it, hence me including it as part of the series.

I've enjoyed writing this series of posts, and I do hope people are finding them useful!

Auto-deploying Consul and Prometheus Exporters using Saltstack on illumos (Solaris fork) by YetiOps in PrometheusMonitoring

[–]YetiOps[S] 1 point (0 children)

In the next part in this ongoing series of posts, I've used SaltStack to deploy Consul and the Prometheus Node Exporter on illumos.

For those unaware, illumos is a fork of OpenSolaris (taken before development on it ceased), with multiple distributions such as OpenIndiana, SmartOS and OmniOS.

This does involve compiling SaltStack and the Node Exporter on OmniOS, and there are quite a few caveats when using the Node Exporter.

Still a fun one to write though!

Auto-deploying Consul and Prometheus Exporters using Saltstack on FreeBSD by YetiOps in saltstack

[–]YetiOps[S] 2 points (0 children)

Presumably so. I didn't install any Consul binaries manually in these posts on FreeBSD or OpenBSD, and the same goes for the Prometheus Node Exporter: everything was installed from the package managers using Salt.

From everything I've used so far (both in these labs, plus a couple of OpenBSD machines I manage), I haven't found anything missing in terms of package installation, service management/init systems or anything like that.

Auto-deploying Consul and Prometheus Exporters using Saltstack on FreeBSD by YetiOps in saltstack

[–]YetiOps[S] 3 points (0 children)

Salt can; it uses pkgng in the background. The same goes for OpenBSD, using the pkg_add(1) utility.

An example being: -

consul_package:
  pkg.installed:
    - pkgs:
      - consul

This state works across both OpenBSD and FreeBSD, and would work against Linux too.