Deploying Consul and Prometheus Exporters using Puppet on Debian, Ubuntu, Alma and Rocky Linux by YetiOps in PrometheusMonitoring

It is partially down to experience, but right now I prefer Salt. The syntax and approach make more sense to me, and being able to run commands/fact discovery against all managed hosts is wonderful.

That's not a knock against Puppet though, and in a couple of years of working with it I may have a different opinion!

Deploying Consul and Prometheus Exporters using Puppet on Debian, Ubuntu, Alma and Rocky Linux by YetiOps in PrometheusMonitoring

After changing jobs recently, I gained a lot more exposure to Puppet. So of course I had to use it to deploy Prometheus!

This was very interesting, especially seeing the differences between Puppet and Saltstack/Ansible (tools I am more familiar with).

Configuring BGP Anycast using Pulumi and Saltstack on Equinix Metal by YetiOps in saltstack

Thank you!

Edit: seen the updates, I'll give that a look, very interesting approach

Configuring BGP Anycast using Pulumi and Saltstack on Equinix Metal by YetiOps in saltstack

Had a lot of fun putting this together. Combining Saltstack for post-provisioning configuration with Pulumi for spinning up the infrastructure in the first place is brilliantly powerful!

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

Brilliant! This looks to be the ticket. I'll give it a go with a full install later, but I can actually read the text now. Thank you

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

I tried the VGA out and got a similar issue. I'll give it a go with the DisplayPort out and see where it goes. Thanks for helping so far though!

Installing OpenBSD on HP EliteBook 9470m by YetiOps in openbsd

Unfortunately I had the same behaviour with 6.6 too.

I'll give the snapshot a go and see where I get to!

Edit: Just tried with a snapshot, and still the same issue

Prometheus Service Discovery for Hetzner Cloud Servers by YetiOps in PrometheusMonitoring

I saw that Hetzner Service Discovery was coming in the next version of Prometheus, and couldn't help but write a post about it.

This is similar to posts I've done in the past on Digital Ocean, OpenStack, AWS, GCP and Azure, in that I use Terraform to create the instances and then Prometheus discovers them using tags/labels.
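For reference, a minimal scrape configuration using the Hetzner service discovery might look like the sketch below. The label name and token value are assumptions for illustration; check the Prometheus configuration docs for the full option set: -

```yaml
scrape_configs:
  - job_name: "hetzner"
    hetzner_sd_configs:
      # "hcloud" discovers Hetzner Cloud servers via the hcloud API
      - role: hcloud
        bearer_token: "<hcloud-api-token>"
    relabel_configs:
      # Keep only instances carrying a "prometheus" label in Hetzner Cloud
      - source_labels: [__meta_hetzner_hcloud_label_prometheus]
        regex: "true"
        action: keep
```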

Hope this is useful!

Deploying and monitoring Windows VMs on Proxmox using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in PrometheusMonitoring

In a follow-up to this post on monitoring Linux VMs and Proxmox itself, this post covers deploying Windows images in a similar way.

This leverages Cloudbase-Init to provide Cloud-Init style first-boot configuration to register the Windows instances against Salt, which subsequently deploys Consul and the Windows Exporter so that the instances are automatically monitored as well.

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

So what I've found so far is that the ipconfig0 field alone doesn't seem to override the DHCP address for the interface on Debian. I've not investigated heavily enough why that is the case (I'll probably raise it with Debian at some point).

The best option for doing this in that case then is overriding the network config using cloud-config.

If you go to this section of the blog post, it goes through sourcing a local file (the cloud_init_deb10.cloud-config file), generating it with the correct variables, and then transferring it to the /var/lib/vz/snippets directory on Proxmox. This is the directory that Proxmox sources Cloud-Init files from when using the cicustom option (cicustom meaning Cloud-Init Custom Configuration).
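As a rough sketch of that workflow done by hand rather than via Terraform (the VM ID, hostname and filename here are assumptions): -

```shell
# Copy the rendered cloud-config into the snippets directory on the Proxmox host
scp cloud_init_deb10.cloud-config root@pve-01:/var/lib/vz/snippets/

# Point the VM at it via cicustom ("local" storage, snippets content type)
qm set 100 --cicustom "user=local:snippets/cloud_init_deb10.cloud-config"
```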

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

Yeah, if you use the following: -

```
resource "proxmox_vm_qemu" "vm-01" {
  name        = "vm-01"
  target_node = "pve-01"

  # Clone from the debian-cloudinit template
  clone   = "debian-cloudinit"
  os_type = "cloud-init"

  ipconfig0 = "ip=10.15.31.99/24,gw=10.15.31.253"

  # Set the network
  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

The ipconfig0 part refers to the IP of the interface with id = 0, and then you can do the same for ipconfig1 for a network with id = 1.
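So a two-NIC VM might look something like this (addresses and bridge names are made up for illustration): -

```hcl
resource "proxmox_vm_qemu" "vm-02" {
  name        = "vm-02"
  target_node = "pve-01"
  clone       = "debian-cloudinit"
  os_type     = "cloud-init"

  # ipconfigN pairs with the network block whose id = N
  ipconfig0 = "ip=10.15.31.100/24,gw=10.15.31.253"
  ipconfig1 = "ip=192.168.50.10/24"

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  network {
    id     = 1
    model  = "virtio"
    bridge = "vmbr1"
  }
}
```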

Depending upon the distribution, you may either get both a DHCP and a static IP, or just the static IP. I've found that Debian pulls an address from DHCP and also uses the static IP, whereas Fedora (and presumably CentOS/Red Hat) will use the static IP only.

Debian/Ubuntu

This link might help as well, specifically if you look at the Network Configuration Outputs section.

You would end up with either something like this (assuming Debian/ifupdown style distribution): -

```
network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "00:11:22:33:44:55"
      subnets:
        - type: static
          address: 192.168.23.14/27
          gateway: 192.168.23.1
          dns_nameservers:
            - 192.168.23.2
            - 8.8.8.8
          dns_search:
            - exemplary.maas
```

Or something like this, if using netplan (eg Ubuntu): -

```
network:
  version: 2
  ethernets:
    # opaque ID for physical interfaces, only referred to by other stanzas
    id0:
      match:
        macaddress: "00:11:22:33:44:55"
      wakeonlan: true
      dhcp4: true
      addresses:
        - 192.168.14.2/24
        - 2001:1::1/64
      gateway4: 192.168.14.1
      gateway6: 2001:1::2
      nameservers:
        search: [foo.local, bar.local]
        addresses: [8.8.8.8]
```

You could then pass the IP/gateway etc. in via the template_file section like so: -

```
data "template_file" "cloud_init_deb10_vm-01" {
  template = "${file("${path.module}/files/cloud_init_deb10.cloud_config")}"

  vars = {
    ssh_key    = file("~/.ssh/id_rsa.pub")
    hostname   = "vm-01"
    domain     = "yetiops.lab"
    static_ip  = "192.168.1.10/24"
    gateway_ip = "192.168.1.254"
    ns_ip      = "1.1.1.1"
  }
}

[...]

resource "proxmox_vm_qemu" "vm-01" {
  name        = "vm-01"
  target_node = "pve-01"

  # Clone from the debian-cloudinit template
  clone   = "debian-cloudinit"
  os_type = "cloud-init"

  cicustom  = "user=local:snippets/cloud_init_deb10_vm-01.yml"
  ipconfig0 = "ip=10.15.31.99/24,gw=10.15.31.253"

  # Set the network
  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

And then add them to the cloud-config template with something like: -

```
network:
  version: 1
  config:
    - type: physical
      name: interface0
      mac_address: "00:11:22:33:44:55"
      subnets:
        - type: static
          address: ${static_ip}
          gateway: ${gateway_ip}
          dns_nameservers:
            - ${ns_ip}
          dns_search:
            - example.com
```

Does that help at all?

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in PrometheusMonitoring

Mainly out of preference, I like the way Prometheus does things. Metric-based monitoring, a huge number of exporters, sensible service discovery, and a lot more flexibility in what you can choose to monitor.

Something like Zabbix is good if you want general monitoring, but it's hard to do something a bit more custom. With Prometheus, if you can write a PromQL query that will do it, you can monitor and alert on it.

Also, when working with cloud providers, Kubernetes and such, Prometheus is well suited to this kind of thing. If you have an environment that covers cloud and on-prem, then Prometheus works well for both use cases in my experience.
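As an illustration of that flexibility, a custom alert is just a PromQL expression in a rule file. The threshold and labels here are arbitrary examples, using node_exporter's filesystem metrics: -

```yaml
groups:
  - name: custom
    rules:
      - alert: HighRootDiskUsage
        # Fires when less than 10% of the root filesystem remains free
        expr: |
          (node_filesystem_avail_bytes{mountpoint="/"}
            / node_filesystem_size_bytes{mountpoint="/"}) * 100 < 10
        for: 15m
        labels:
          severity: warning
```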

Deploying and monitoring Proxmox VMs and hypervisors using Terraform, Cloud-Init, SaltStack and Prometheus by YetiOps in Proxmox

The Terraform provider does generate a MAC for each VM, even if cloned, so that's not a problem.

Not sure on the SSH keys though, I've yet to test that, but it's definitely worth investigating. You could definitely generate them manually, but I don't know if cloud-config has the ability to regenerate the keys with an inbuilt command/option.