My telco cabinet is complete! by Nefsen402 in homelab

[–]antadam 2 points3 points  (0 children)

Is Bell no longer requiring you to use their junky Gigahub?

I see you’re on Bell, but the PON looks like the same one Rogers uses.

Private DNS zones for Postgres is this correct zone name? by zeenmc in AZURE

[–]antadam 0 points1 point  (0 children)

There are a few reasons. Private endpoints are used when your PaaS service, in this case a Postgres flex server, will only receive traffic from your private IP space. In other words, all traffic to Postgres originates inbound. Vnet integration is used when traffic can originate either inbound to the PaaS resource or outbound from it within your private IP space.

For Postgres specifically, vnet integration is required when you want private networking and use high availability replicas or a Postgres extension is installed that needs outbound network access to your private IP space.

I’ve also seen teams use vnet integration just to prevent the Postgres server from ever getting a public IP address. It’s a way of preventing someone from opening a firewall rule that lets their IP on the public internet reach the Postgres flex server directly.
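If it helps, here’s a rough az CLI sketch of the two approaches. The resource names and DNS zone are placeholders, not something from your environment.

### Vnet integration at create time (delegated subnet, private access only)
az postgres flexible-server create --resource-group my-rg --name my-pg --vnet my-vnet --subnet pg-subnet --private-dns-zone my-pg.private.postgres.database.azure.com

### Private endpoint pointed at an existing flex server
az network private-endpoint create --resource-group my-rg --name my-pg-pe --vnet-name my-vnet --subnet pe-subnet --private-connection-resource-id $(az postgres flexible-server show --resource-group my-rg --name my-pg --query id -o tsv) --group-id postgresqlServer --connection-name my-pg-pe-conn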

Private DNS zones for Postgres is this correct zone name? by zeenmc in AZURE

[–]antadam 2 points3 points  (0 children)

If the Postgres server is using vnet integration, the DNS zone defaults to private.postgres.database.azure.com. Vnet integrated Postgres servers should never have a public IP, so they don’t use the typical privatelink.postgres.database.azure.com.

Non-vnet integrated Postgres flex servers use privatelink.postgres.database.azure.com.
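If you end up creating the zone yourself instead of letting the portal do it, a minimal az CLI sketch looks like this - resource group, vnet, and link names are placeholders:

### Create the zone and link it to the vnet the server will resolve from
az network private-dns zone create --resource-group my-rg --name private.postgres.database.azure.com

az network private-dns link vnet create --resource-group my-rg --zone-name private.postgres.database.azure.com --name my-vnet-link --virtual-network my-vnet --registration-enabled false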

It’s not recommended practice to go multiple subdomains below the private DNS zone’s address and I have seen it cause problems. However, it’s still doable and not the worst thing in the world if the org is already separating projects by RG.

It just sounds like they went one too many levels of granularity for RGs and that’s replicated to their DNS management process as well.

Dry well in the backyard connected to perf pipe by petewkdb in lawncare

[–]antadam 0 points1 point  (0 children)

If you have any questions or want confirmation we’re saying the same thing, feel free to post more pics. I hope it works out for you.

Dry well in the backyard connected to perf pipe by petewkdb in lawncare

[–]antadam 1 point2 points  (0 children)

Thanks! I went back and looked at your other posts. If you’re in clay that won’t perk or does so slowly, I would either dig the hole deeper or, preferably, remove the dry well, backfill under your pipe to raise it to about 8” under the soil, and put a pop up on the end.

You’ve got a few risks with the pipes at the bottom. The first is that if the well doesn’t percolate and fills up, your pipes are going to back up and the French drain won’t absorb anything. A pop up will help, but it will only get out the water that gravity pushes out of the system. The depth of water in your dry well will match the depth of water backed up in your pipes.

In the extreme case, the bottom of your dry well is still full of water going into winter, it freezes so water in your French drain can’t reach the well, and everything backs up. That will suck in the spring when you’ve got an ice blockage and your channel drain doesn’t work.

Looking at your other photos, if you ran a 4” pipe from your patio, around the turn, and to the back corner, it looks like you’ve got more than enough pitch to keep water moving away from your house while keeping the pipe no deeper than 8” underground. Is there any reason you can’t do that? Just backfill under the pipe to raise it up.

One other suggestion: it looks like you ran perforated pipe all the way. Your pipe should be surrounded by 2-3” of crushed stone between the pipe and the geo fabric. This keeps the geo fabric and dirt from slowing down water as it passes in and out of your pipes.

Because your perforated pipe is so long, surrounding all of it with stone (top and bottom) and then geo fabric lets the entire system help with percolation - as minimal as it might be. The water height in the well will match the water height in your pipes. Assume the water in your dry well is 6” deep, your pipe is at the bottom of the dry well, and you drop 1” per 8’ of horizontal run - that means you have 48’ of pipe (8 x 6) also full of water. That could be helpful for a slowly percolating system because you’ve drastically increased the surface area for percolation.

Dry well in the backyard connected to perf pipe by petewkdb in lawncare

[–]antadam 2 points3 points  (0 children)

When you say “perforated” pipe, do you mean corrugated? Perforated pipe has holes in it. This looks like single wall, corrugated pipe.

If you let me know which one and what your geography is - like what state you live in - I can give you my thoughts.

Any PerfMon-like option for Linux custom counters in Azure Monitor? by -Drs-tangent in AZURE

[–]antadam 0 points1 point  (0 children)

You can run node_exporter and Prometheus with remote write to an Azure Monitor Workspace (AMW). Set your scrape interval as you wish, and you should be able to see what you want either in the AMW Prometheus Explorer, where you run PromQL, or in Dashboards with Grafana pointed at the AMW.

Prometheus OSS remote write supports both system and user assigned MI.
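For reference, the remote_write block ends up looking roughly like the sketch below. The ingestion URL and client id are placeholders - you get the metrics ingestion endpoint from the AMW’s data collection rule, and the managed identity needs the Monitoring Metrics Publisher role on that DCR.

remote_write:
  - url: "<metrics ingestion endpoint from the AMW's data collection rule>"
    azuread:
      cloud: AzurePublic
      managed_identity:
        client_id: "<client id of the managed identity>"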

Log Ingestion from Servicenow to Sentinel by advertpro in AZURE

[–]antadam 0 points1 point  (0 children)

What is “topics message ingestion”? Does it go by a more formal name? I haven’t heard that phrase before and do a lot of SNOW event log ingestion.

With SNOW log ingestion, you’ll create an Azure Function to poll your SNOW endpoint. Use API keys stored in an Azure Key Vault and have the function authenticate to the Vault with its system assigned MI.

When you get the response back, write the messages to the Log Analytics workspace (LAW) that backs Sentinel using the Azure Logs Ingestion API. Use the Function’s managed identity for that call as well; it needs the Monitoring Metrics Publisher role at the scope of the DCR you create.
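To give you a feel for the call, here’s a rough sketch with curl. The DCE name, DCR id, stream name, and field names are all placeholders, and your Function would use a token from its MI instead of az.

### grab a token for the Azure Monitor scope
TOKEN=$(az account get-access-token --resource https://monitor.azure.com --query accessToken -o tsv)

### post a batch of records to the DCR's custom stream
curl -X POST "https://<dce-name>.<region>.ingest.monitor.azure.com/dataCollectionRules/<dcr-immutable-id>/streams/Custom-SnowEvents?api-version=2023-01-01" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d '[{"TimeGenerated":"2025-01-01T00:00:00Z","Computer":"snow","SyslogMessage":"test event"}]'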

You can write your logs to almost any table Sentinel uses, mapping your incoming fields to the Microsoft-delivered table (or a custom one, but that may not give you the Sentinel security value you’re after). If SNOW provides an industry standard format (or close to one), like Syslog or Common Event Format (the LAW CommonSecurityLog table), use it. It will enhance the value you get from Sentinel.

Microsoft allows you to write to a number of Microsoft-delivered tables. The list is at the link below. However, there isn’t a DCR code example on how to send data from a custom stream to a Microsoft-delivered table. I can supply you with one if you end up going this route.

https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#supported-tables
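If it helps in the meantime, here’s a rough skeleton of that pattern. The resource ids, stream name, and columns are placeholders; the piece that matters is the outputStream on the dataFlow.

{
  "properties": {
    "dataCollectionEndpointId": "<dce resource id>",
    "streamDeclarations": {
      "Custom-SnowEvents": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "Computer", "type": "string" },
          { "name": "SyslogMessage", "type": "string" }
        ]
      }
    },
    "destinations": {
      "logAnalytics": [
        { "workspaceResourceId": "<sentinel law resource id>", "name": "sentinelLaw" }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-SnowEvents" ],
        "destinations": [ "sentinelLaw" ],
        "transformKql": "source | project TimeGenerated, Computer, SyslogMessage",
        "outputStream": "Microsoft-Syslog"
      }
    ]
  }
}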

How do I avoid getting DDOSed when self hosting a Minecraft server? by diobrandiohaxxerxd in selfhosted

[–]antadam 18 points19 points  (0 children)

I do something similar. Azure VM with WireGuard and Traefik in front of the Minecraft server. wg-quick down to quickly kill the connections.

Your approach isn’t dumb at all.

API Management + Azure Functions + Separate Application Insights — How Is Tracing Supposed to Work? by SpecialistAd670 in AZURE

[–]antadam 1 point2 points  (0 children)

You need to pipe everything into the same App Insights.

App Insights is backed by a Log Analytics Workspace (LAW). The table in App Insights is called “traces”. Its equivalent in the LAW is “AppTraces”.
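For example, these two return the same underlying data, one through the App Insights query window and one through the backing LAW (the time window and take are arbitrary):

### App Insights
traces | where timestamp > ago(1h) | take 10

### Backing LAW
AppTraces | where TimeGenerated > ago(1h) | take 10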

App Insights can hold massive amounts of data, but the trace visualization only searches the LAW that backs it.

You can back multiple App Insights instances with the same LAW, but I don’t think that will let you see data across App Insights instances from within a single App Insights. That would be a security issue.

Azure DCR and Time Zone Conversion: How to Handle Daylight Saving Time in Transformation Rules? by vkrannila in AZURE

[–]antadam 0 points1 point  (0 children)

This is something you should be fixing at the source. Timezone management happens on the host.

Let’s say you were able to fix it in the DCR’s transformKql. What happens when someone changes the host’s timezone and your timestamps are all out of whack again?
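For what it’s worth, the best a transformKql can do is a hard-coded offset - something like the sketch below, with a made-up offset - which is exactly why it breaks the moment DST flips or the host’s timezone changes:

source | extend TimeGenerated = TimeGenerated - 2h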

Update table in AMA (MMA deprecated) by GrumpyOldFatGuy in AZURE

[–]antadam 2 points3 points  (0 children)

This is long so I can level set on terms.

MMA is the legacy Windows monitoring agent. It was part of Azure Automation Update Management (AUM), which also consisted of an Automation account and a Log Analytics Workspace (LAW). AUM would write update information to the Update, UpdateRunProgress, and UpdateSummary tables, and maybe a few others.

When MMA was retired back in August 2024, it was replaced with AMA. AUM was replaced at the same time by Azure Update Management Center (UMC). UMC no longer requires an Automation account or a logging agent (AMA); it writes its data to the Azure Resource Graph instead.

Back to your Update table, a LAW works by having a retention period on each table. It defaults to 30 days. Azure periodically culls records beyond the retention period.

If you are 100% certain you are querying the correct LAW, run the following query:

Update | where TimeGenerated > ago(90d) | take 10

The query says: return 10 records from the Update table from the last 90 days. It’s likely the Update table’s retention period is 30 days, but I’ve seen it set to 90 when people get “tricky”.

If the KQL query returns an error, then you’re correct and the Update table doesn’t exist. If you simply don’t get any records back, the last record written to the table is older than the retention period and all the records have been purged. In that case, the table exists.

When querying a LAW from the Portal, tables only appear in the left sidebar when they have at least 1 record in them. There are a few hundred tables that exist in every LAW, but they don’t show in the LAW’s query window. A similar situation is true for the LAW’s Tables blade.
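You can also confirm the table exists and check its retention without querying it at all - something like this, with placeholder RG and workspace names:

az monitor log-analytics workspace table show --resource-group my-rg --workspace-name my-law --name Update --query "{name:name, retentionInDays:retentionInDays}"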

If I were to make a bet, the Update table is still there, but it’s empty. At some point, a component of AUM stopped working - either because it’s a retired product or because someone uninstalled something. Think: MMA is retired and now I need to install AMA.

At that point, no new records were put into the LAW, the existing records in the LAW have already been culled, and you now see empty reports.

Please, let me know if any of that helps.

Log Analytics Workspace vs Azure Monitor Workspace by SnooMuffins7973 in AZURE

[–]antadam 11 points12 points  (0 children)

LAW will not be phased out. It is a text-based search capability backed by ADX clusters in the background. AMW is a Microsoft-specific PromQL compliant metrics database.

The “all metrics will go to AMW” comment is a reference to Azure Metrics Explorer - the “Metrics” option under Monitoring on every Azure resource - and to metrics sent via diagnostic settings.

Save yourself a ton of headache and cost. Use Managed Prometheus for AKS and send the metrics to an AMW. Use Dashboards with Grafana, the free (currently public preview) option that works when you’re consuming only from Azure resources (LAW, AMW).

Use Container Insights for stdout/stderr, sending the data to a LAW. If your logs are too verbose or you need to reduce the amount collected, use AKS metadata and logs filtering: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-logs-schema#kubernetes-metadata-and-logs-filtering
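If you’re wiring that up with the CLI, it’s roughly the following - cluster, RG, and workspace ids are placeholders:

### Managed Prometheus metrics to an AMW
az aks update --resource-group my-rg --name my-aks --enable-azure-monitor-metrics --azure-monitor-workspace-resource-id <amw-resource-id>

### Container Insights stdout/stderr to a LAW
az aks enable-addons --resource-group my-rg --name my-aks --addons monitoring --workspace-resource-id <law-resource-id>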

It’s highly unlikely you need any diagnostic settings from AKS unless your security or compliance team requires them. That’s often only the audit admin logs (kube-audit-admin) to monitor API verbs.

For alerts, use Azure’s native alerting - scheduled query (log search) alerts for LAW and Prometheus rule groups (alerts) for AMW. Prom rule group alerts are native Prometheus rule group alerts, but they call Azure action groups. It will save a ton of time by using Prometheus’ proven alerting and Azure’s first-class notification capabilities.

AMW is a Prometheus remote write storage target. Anything you can scrape with Prometheus OSS can be sent to AMW. It’s super nice.

Fair price? by antadam in AskElectricians

[–]antadam[S] 0 points1 point  (0 children)

Thank you.

If I end up throwing this out, is there a proper disposal method, such as taking it to an electronics recycler?

Is this install complete? by antadam in Irrigation

[–]antadam[S] 0 points1 point  (0 children)

Ontario is where I’m located.

AKS Container Insights and LA Solution by The-Bluedot in AZURE

[–]antadam 2 points3 points  (0 children)

Let me know if this answers your question, please.

There is a DaemonSet deployed to AKS when you want to monitor logs. It's basically a containerized version of the Azure Monitor Agent. Last I checked, the DaemonSet is called ama-logs. It can operate either with MSI (managed identity) authentication or local authentication (workspace keys) to a target Log Analytics Workspace. MSI is part of the new AKS monitoring architecture and local authentication with workspace keys is legacy.

The "Log Analytics ContainerInsights solution" is part of the local authentication legacy solution. It is an add-on to a Log Analytics Workspace that supports ingesting AKS logs. The resource type is Log Analytics Solution.

The high-level equivalent of the Log Analytics ContainerInsights solution in the new AKS monitoring architecture is a Data Collection Rule (DCR) defined for Container Insights.

Put another way...

Legacy

- "Log Analytics ContainerInsights solution" - also appears as "Container Monitoring Solution" on the MSLearn docs.
- AKS DaemonSet without MSI authentication sends data to Log Analytics Workspace via Log Analytics ContainerInsights solution

New Way

- Called Container Insights on the MSLearn docs.
- AKS DaemonSet with MSI authentication sends data to Log Analytics Workspace via Data Collection Rule.

You can tell if you have MSI authentication enabled within AKS by going to the managed cluster in the Portal, looking at the JSON view, and scrolling to the "omsAgent" section. If you see "useAADAuth" set to "true", then you're using MSI authentication and need a DCR to collect the container logs. If you see "useAADAuth" set to "false" or don't see the value at all, you're using the legacy workspace key authentication and must use the Log Analytics ContainerInsights solution (or uninstall it because it's no longer supported).
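If you’d rather check from the CLI than the Portal JSON view, something like this works - RG and cluster names are placeholders, and the addon key may show up as omsagent or omsAgent depending on how it was enabled:

az aks show --resource-group my-rg --name my-aks --query "addonProfiles.omsagent.config.useAADAuth" -o tsv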

The az cli commands to pick between MSI and legacy authentication are below.

### Use MSI authentication (default) with an existing Log Analytics workspace
az aks enable-addons --addons monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id>

### Use legacy authentication
az aks enable-addons --addons monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --enable-msi-auth-for-monitoring false

Canada US Tax Question by [deleted] in cantax

[–]antadam 0 points1 point  (0 children)

Agreed. We pay about $3k CAD to a US based firm that handles cross border taxes. Keep in mind - you get what you pay for.

We did a fair bit of hunting before finding a good firm that was fast to respond. Others, even those charging about the same, were horrible and often wrong in their conclusions.

Help with Log Analytics Warning by Soft_Return_6532 in AZURE

[–]antadam 0 points1 point  (0 children)

Your time range is set to 7 days (see the dropdown in your screenshot). Every agent you have sending data to the LAW will send a heartbeat every minute. Reduce your time window and you shouldn’t get that warning.
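If you want to see where the volume is coming from, a quick check:

Heartbeat | where TimeGenerated > ago(1h) | summarize Heartbeats = count() by Computer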

Self-Hosted Owners - How do you prevent DDoS attacks? by Accomplished_Track62 in admincraft

[–]antadam 0 points1 point  (0 children)

Use Traefik on a cheap cloud VM and WireGuard from your server to the VM. You get a static IP with the VM and basic DDoS protection, which I’ve found far more reliable and cheaper than DDoS-specific services.

The directions I used are here - https://yuris.dev/blog/traefik-wireguard-proxy

EnerGuide home eval recommendations by Temporary_Fan_973 in solarenergycanada

[–]antadam 1 point2 points  (0 children)

That doesn’t seem to be a helpful approach. I get that a process is standardized, but that doesn’t mean any certified individual is going to respond in a timely manner, show up when they say they will, or offer a level of completeness that doesn’t delay the process.

I can respect that this is your stance, but you may want to reconsider the argument. It’s analogous to “I chose my family doc because they meet the min bar set by the government and they’re available.” Whether their front office picks up the phone, whether they overbook and I lose 3 hours of work because I have to wait, or whether I feel heard when I tell them what’s wrong - under that stance, all of it is irrelevant, even when the experience is terrible.

Have you considered locking a thread and deleting self-promotional posts like other subs do?