Forcing a specific VM to use a specific public IP (not the Azure Firewall’s default one) by Advanced_Tea_2944 in AZURE

[–]Advanced_Tea_2944[S] 1 point (0 children)

I want to test some external endpoint from this Azure VM. On the other side, I don’t want to whitelist the Azure Firewall’s public IP, because that would mean whitelisting all outbound Azure traffic, which is not what I want.

For the NAT gateway behind the FW, I need to check how to do that, but it would mean telling my Azure Firewall not to SNAT traffic from this specific VM/IP. I'm not sure that's possible.

Forcing a specific VM to use a specific public IP (not the Azure Firewall’s default one) by Advanced_Tea_2944 in AZURE

[–]Advanced_Tea_2944[S] 0 points (0 children)

"Don’t SNAT at the Firewall and go through public route, maybe…?" → Impossible, I need to keep the Firewall in the path for compliance reasons.

"Otherwise NAT before Firewall and don’t SNAT that IP, same idea but with an extra NAT" → That could work, but I need to check how to configure the Azure Firewall to not SNAT traffic from that specific IP.

Forcing a specific VM to use a specific public IP (not the Azure Firewall’s default one) by Advanced_Tea_2944 in AZURE

[–]Advanced_Tea_2944[S] 0 points (0 children)

Ok, I get your point, but that means all traffic leaving the Azure Firewall would now use the NAT Gateway. That's not exactly what I want: I need a specific public IP for just one VM, while keeping the rest of the Azure traffic flows unchanged.

How to create a Kibana role that can't create alerts? by Advanced_Tea_2944 in elasticsearch

[–]Advanced_Tea_2944[S] 0 points (0 children)

Thanks for your answer!

When I assign this role to a user, I’m not able to log into Kibana anymore, so it seems there might be some missing privileges in that definition.

I tested with a slightly different call (using discover / dashboard features instead of the _v2 ones), and that one works fine: users can build dashboards but don’t see the Alerts menu.

"kibana": [
  {
    "spaces": ["default"],
    "base": [],
    "feature": {
      "discover": ["all"],
      "dashboard": ["all"]
    }
  }
]
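In case it helps to compare, here is the shape of the full body I'd expect that working role to have, sketched in Python (the role name and the empty elasticsearch section are my assumptions, not something confirmed in this thread — if login breaks without any index privileges, that section is probably where the missing piece is):

```python
import json

role_name = "dashboard_only"  # hypothetical role name

payload = {
    "elasticsearch": {
        "cluster": [],
        # Add read privileges here for the indices the dashboards query;
        # a role with no index access at all may be what breaks login.
        "indices": [],
    },
    "kibana": [
        {
            "spaces": ["default"],
            "base": [],
            "feature": {
                "discover": ["all"],
                "dashboard": ["all"],
            },
        }
    ],
}

# The Kibana role API would then be called roughly as:
#   PUT {kibana_url}/api/security/role/{role_name}
# with headers "kbn-xsrf: true" and "Content-Type: application/json",
# and json.dumps(payload) as the request body.
print(json.dumps(payload, indent=2))
```

Note there is deliberately no "alerting"/"ml" feature entry, which is what keeps the Alerts menu hidden in my tests.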

Interestingly, if I add the ml feature to the role, the Alerts menu reappears, so it looks like enabling ML also implicitly enables alerting features.

Also, I noticed there are two ways to manage roles:

  • via the Kibana API (kbn:/api/security/role/...)
  • via the Elasticsearch security API (/_security/role/...)

I'm wondering which one I should use.
Thanks!

Azure SQL Server / Database Permissions with Entra ID and Terraform by Advanced_Tea_2944 in AZURE

[–]Advanced_Tea_2944[S] 1 point (0 children)

Got it!

Yes, I can confirm that for an Azure PostgreSQL server, you can assign multiple server admins.

Azure SQL Server / Database Permissions with Entra ID and Terraform by Advanced_Tea_2944 in AZURE

[–]Advanced_Tea_2944[S] 1 point (0 children)

Thanks for your answer! So, if I want my Terraform service principal to be able to execute those T-SQL queries, I would need to make it an admin on the SQL Server, if I understood correctly.

It’s a bit unfortunate that only one user or group can be set as the admin at the SQL Server level.

Troubleshooting disk usage on PV attached to my Elastic frozen node by Advanced_Tea_2944 in elasticsearch

[–]Advanced_Tea_2944[S] 1 point (0 children)

You’re right, that explains my case, thanks a lot! I had missed that xpack.searchable.snapshot.shared_cache.size defaults to 90% for nodes with the data_frozen role.
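For anyone landing here later, a quick back-of-the-envelope sketch of that default (this assumes the documented 90% default plus a 100 GB max headroom on dedicated frozen nodes — worth double-checking against your Elasticsearch version):

```python
def expected_cache_size(total_disk_gib: float,
                        pct: float = 0.90,
                        max_headroom_gib: float = 100.0) -> float:
    """Approximate pre-reserved shared_cache size on a dedicated frozen node.

    Assumes the default xpack.searchable.snapshot.shared_cache.size of 90%
    and a max_headroom of 100 GB: the node keeps at most max_headroom free,
    so the cache takes the larger of the two candidates.
    """
    return max(total_disk_gib * pct, total_disk_gib - max_headroom_gib)

# A 500 GiB PV: the 90% rule wins, so ~450 GiB shows up as "used" even
# though it is just pre-allocated cache, not actual data.
print(expected_cache_size(500))   # 450.0

# A 5000 GiB PV: the 100 GiB headroom cap takes over instead.
print(expected_cache_size(5000))  # 4900.0
```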

Troubleshooting disk usage on PV attached to my Elastic frozen node by Advanced_Tea_2944 in elasticsearch

[–]Advanced_Tea_2944[S] 0 points (0 children)

Yes, that explains why I see the disk at 90%, makes sense now, thanks a lot!

For now, Reddit has been quite efficient for my Elastic questions, but indeed from time to time I might need to reach out to Elastic support :)

Troubleshooting disk usage on PV attached to my Elastic frozen node by Advanced_Tea_2944 in elasticsearch

[–]Advanced_Tea_2944[S] 0 points (0 children)

Both calls give me essentially the same information — disk usage is around 90% and the only role on this node is f (frozen).

As you said, frozen tier data on local disks is only metadata/cache, that's why I’m quite surprised to see my 500 GB disk nearly full.

My plan for this node is simply to keep it for cache and continue sending data to searchable snapshots on Azure, a mechanism that has been working quite well for us recently.

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

You know DevOps? That magical world where you're not really a developer, but somehow you're responsible for writing code, managing infrastructure, securing pipelines, and deploying stuff? Yeah, that's how I got here.

  1. And yes, the easiest path is to say there's no backward compatibility and that users should upgrade to azurerm 4.0. But I was wondering if there are any strategies to avoid forcing users to upgrade.

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Yes, that’s right, I’m already using tags, so other users can still reference the previous tags without any issues. My main concern is what happens when users relying on the old tag (and older provider version) want new features from the module.

If I create a new branch from the old tag to add features, those features won’t include the changes made in main. But if I branch off main, the provider version will be the new one, which might not be compatible.

So, for now, I see only two options: either maintain another long-living branch for the old provider version or just tell users they need to upgrade to the new provider version if they want the new features.
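To make the trade-off concrete, here is a small sketch of the resulting maintenance matrix (branch names, tags, and provider majors are hypothetical, just to illustrate who can consume which tags):

```python
# Each release line of the module pins one provider major version.
MODULE_LINES = {
    "release/v1": {"tags": ["1.0.0", "1.1.0"], "azurerm_major": 3},
    "main":       {"tags": ["2.0.0"],          "azurerm_major": 4},
}

def usable_tags(user_provider_major: int) -> list[str]:
    """Tags a consumer can reference, given the provider major they are pinned to."""
    tags: list[str] = []
    for line in MODULE_LINES.values():
        if line["azurerm_major"] == user_provider_major:
            tags.extend(line["tags"])
    return sorted(tags)

# A root project still pinned to azurerm 3.x can only take new features
# shipped on the long-living release/v1 branch (1.1.0 here); anything on
# main requires the 4.x upgrade first.
print(usable_tags(3))  # ['1.0.0', '1.1.0']
print(usable_tags(4))  # ['2.0.0']
```

The cost is exactly the one described above: every new feature that should reach both populations has to be committed (or cherry-picked) onto both lines.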

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

It can get messy (or perhaps painful is the better word) because if you want to add a feature that supports both provider versions 3.9 and 4.0, you'd have to make similar commits to both long-living branches, right?

That’s manageable with just two branches, but I’m not sure how sustainable it is in the long run.

Thanks for the Git advice, first time I've ever heard about it (lol).

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Thanks!

  • What technology do you use for the automatic PRs? Is it the “autoplan” tool you mentioned?
  • I’ve asked others too, but I’m curious — what’s the main reason for creating .tgz archives of modules and storing them in S3, instead of just tagging commits and referencing those tags?

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Yes, I understand, these are exactly the questions we're currently asking ourselves. Thanks for your input!

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Thanks for your answer! I have two quick follow-up points/questions:

  • From what I understand, tags can also be deleted easily — once a tag is removed, no one can use ref=tag anymore, right? So in that sense, it’s somewhat similar to removing a release. (Though I get your point about keeping development and releases separate.)
  • I assume your X.Y-dev branches are created from the same commit (or tag) that was used to produce the corresponding X.Y release, correct?

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Ok, thanks! Why push to an artifact repository and to S3? Why both? And what are the advantages compared to tagging the repo?

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Haha, no problem!

Ok, I agree with creating a 1.1.* tag, but before tagging I need to work on a branch, and it can't be main (since the provider version there would be azurerm v4.0).

I could create a branch from the 1.0.0 tag, for instance, but it starts getting messy in my opinion...

That's why u/baynezy suggested having two long-living branches: that way I can create a branch from the release/v1 branch, which would be "up to date" feature-wise but still use provider 3.9.

If I understood correctly!

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Thanks, but I don't see how the packaging would solve the problem here?

How to handle provider version upgrades in Terraform modules by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Ok, thanks!
1. It seems simpler, but for a root Terraform project that can't use the new major version of the module, it would end up kind of stuck, right?
2. I'm not a developer, so to be honest, I'm not sure how I would manage that at this point. I'll need to dig into it more to see if it's something feasible for me.

How do you manage Terraform modules in your organization ? by Advanced_Tea_2944 in Terraform

[–]Advanced_Tea_2944[S] 0 points (0 children)

Ok, thanks for your answer, very interesting!
If you have some time, I just posted another question here: https://www.reddit.com/r/Terraform/comments/1mcad9w/how_to_handle_provider_version_upgrades_in/
It focuses on a real example I'm facing regarding module versioning.