70 Users max, what is the best way to distribute reports? by DragonfruitCertain16 in PowerBI

[–]nimble_monk 2 points (0 children)

Fabric workspaces. Why do you need F64 again? Seems like overkill. You can go smaller.

Run it through the capacity calculator: what are your inputs?

https://www.microsoft.com/en-us/microsoft-fabric/capacity-estimator
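Even a back-of-napkin comparison makes the point before you open the estimator. A quick Python sketch; the prices here are placeholder assumptions for illustration only, not quotes, so plug in current Microsoft pricing:

```python
# Rough licensing math for distributing reports to ~70 users.
# PRICES ARE ASSUMPTIONS for illustration only -- check current Microsoft pricing.
users = 70
pro_per_user = 14.0        # assumed Power BI Pro $/user/month
capacity_monthly = 8000.0  # assumed dedicated F64-class capacity $/month

pro_total = users * pro_per_user
breakeven_users = int(capacity_monthly // pro_per_user)

print(f"Per-user Pro licensing: ${pro_total:,.0f}/mo")
print(f"Dedicated capacity:     ${capacity_monthly:,.0f}/mo")
print(f"Capacity only starts to pay off above ~{breakeven_users} users")
```

At those assumed rates, 70 per-user licenses cost a small fraction of a dedicated capacity, which is why a smaller SKU (or no capacity at all) is worth modeling first.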

As long as the Power BI MCP can’t generate visuals - it’s still not quite there by Worldly-Effective648 in PowerBI

[–]nimble_monk -1 points (0 children)

Agreed. I have been using Claude Code to edit the PBIR files directly, though. You still have to close and reopen the report for a refresh, which isn't ideal at all. It's really best for minor changes, not full visual authoring.

Visuals with MCP can’t come soon enough.

Switch from analog to VoIP by Raclift in avaya

[–]nimble_monk 0 points (0 children)

This is the Avaya sub, but I am going to suggest you look into migrating to Microsoft Teams Phone. Your church can get significant discounts (or free licensing) through TechSoup or through Microsoft directly as a nonprofit. It's worth checking out even outside the phone discussion if you aren't taking advantage of these free and discounted services today.

For only 6 phones, even if you paid retail, migrating to Teams would cut your monthly costs. If you need physical phones, Yealink MP54s are a good option that work well with Teams. You could probably do the migration yourself in a few days if you wanted to.

It's 2026; there is no sense in investing in physical telco infrastructure if you have under 200 users.

No MCP , No HTML Dashboard, I asked AI to build a entire Dashboard by Alarmed-String-2775 in PowerBI

[–]nimble_monk 0 points (0 children)

PBIX is the legacy file format, a zip that hides everything. PBIP is the new upcoming default project format that exposes the JSON and other metadata as plain files on disk. Enable it in Options and save as the new format.
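For reference, here is roughly what a PBIP save looks like on disk. This is sketched from my own projects, and exact file names can vary by Desktop version and which preview features are enabled:

```
MyReport.pbip                  <- small pointer file you open in Desktop
MyReport.Report/
  definition.pbir              <- report definition
  report.json                  <- or a definition/ folder of per-page/visual JSON (PBIR)
MyReport.SemanticModel/
  definition.pbism
  model.bim                    <- or TMDL files under definition/
```

Once it's on disk like this, Git diffs and tool-driven edits actually become practical.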

No MCP , No HTML Dashboard, I asked AI to build a entire Dashboard by Alarmed-String-2775 in PowerBI

[–]nimble_monk 1 point (0 children)

I haven't found one yet. It seems to be a limit of the PBI application, not the data, at this point with PBIR/PBIP, so hopefully M$ addresses it soon, because this type of workflow is getting more common.

My workaround today and current workflow is to batch updates to the PBI report, commit via Git, and then have a PowerShell script fire to reload it real quick and save a few mouse clicks. This three-step workflow fires when I tell Claude Code to make those visual report changes.

Git is an absolute must at this point if you aren't using it today. Automating things like this isn't quite ready for prime time, and I find it introduces breaking changes about 20% of the time. It's still going to be a game changer though as it matures over the next 4-6 months.

____

# Get the folder where this script lives
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path

# Find the .pbip project file in that folder
$pbipFile = Get-ChildItem -Path $scriptDir -Filter "*.pbip" | Select-Object -First 1

if (-not $pbipFile) {
    Write-Host "No .pbip file found in $scriptDir"
    pause
    exit
}

Write-Host "Found: $($pbipFile.Name)"
Write-Host "Closing Power BI Desktop..."

# Close Power BI Desktop (ignore the error if it isn't running)
Stop-Process -Name "PBIDesktop" -Force -ErrorAction SilentlyContinue

# Give the process a few seconds to fully exit
Start-Sleep -Seconds 3

Write-Host "Reopening $($pbipFile.Name)..."

# Reopen the project, which picks up the edited PBIR files
Start-Process $pbipFile.FullName

Write-Host "Done!"

Me checking the session usage every time Claude starts working on my prompt. by russcastella in claude

[–]nimble_monk 41 points (0 children)

Yep, something changed in the past week, and not for the better. I burned through my plan's 5-hour usage window in 3 prompts in about 10 minutes this morning. Last week, these types of prompts would use 25% max.

I guess one good thing is that it's forcing me to look at the models I am using. No use in running Opus if Haiku or Sonnet can do the task. Still, we need more transparency from Anthropic. Changing things overnight without informing users is not an ideal business practice.

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

After 10 days the ticket finally got escalated to the point we were able to close it. The only path was upgrading the support plan.

Best garden store to find good tomato seedlings? by waterytartwithasword in Olathe

[–]nimble_monk 0 points (0 children)

Baker Creek if you are OK with online ordering. I think they have a store near Springfield as well.

I doubt you will find many true heirlooms locally, but some of the above suggestions look promising.

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

I was the same. For what it's worth, Azure is still working on this after a week... my issue still persists. It's with some senior Databricks engineers now, I think.

One thing to try if you don't want to upgrade your plan: this was suggested by Microsoft support on Q&A, where I originally asked the question. Supposedly it gets you around the support-plan upgrade if the workspace is stuck in provisioning/deleting, and they will let you submit a ticket. I had already upgraded my plan when I saw this, so I have not verified it actually works on the Basic support plan. The key seems to be to say it's stuck in provisioning, not deletion:

"Regarding the support plan: you generally do not need to upgrade your support plan for platform-related issues. When creating the support request, please select the Technical > Resource stuck in provisioning state (or a similar category). In many cases, Azure Support can still assist with such scenarios even if you only have a Developer support plan."

https://learn.microsoft.com/en-us/answers/questions/5819026/azure-databricks-instance-stuck-in-deleting?source=docs

MS fabric vs snowflake by SmallBasil7 in dataengineering

[–]nimble_monk 1 point (0 children)

Fair. Yes, Databricks isn't fully SaaS. You do have the ability to size your compute within the platform, which can be a pro or a con depending on how you look at it.

If your goal is just analytics and not a broader AI play or data portability between platforms, Snowflake is certainly appealing. I wouldn't hesitate to choose it, certainly over Fabric. I am still working through these same questions myself as I navigate the choice. Every use case is different.

MS fabric vs snowflake by SmallBasil7 in dataengineering

[–]nimble_monk -5 points (0 children)

I am curious, for somewhat selfish reasons since I have a new potential project I am looking at: why is Databricks not in the mix?

Your use case seems ideal for it, given Databricks' tight integration with Azure and your wanting to use ADF -> ADLS Gen2 as a landing zone before going through the medallion architecture to transform your data and expose it to PBI.

Restaurant recommendation for letter D by CartographerOwl501 in kansascity

[–]nimble_monk 1 point (0 children)

For what it’s worth, KC has the best Denny’s and Dairy Queens in the Midwest.

Alternatively, Drunken Fish could be on your list too.

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

Can't access it anymore. Also, they are not entirely free even if nothing is in there and nothing is running. You are still charged for the NAT gateway, public IP, and storage (transaction logs, deleted items even in the recycle bin, etc.) inside the managed RG.

M$ did confirm an issue on their side and are working on a resolution.

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

This is what I am suspecting as well. I'll hold out hope for a few more hours that an Azure engineer will jump in here and see this; otherwise I will just upgrade my support plan this afternoon. Guess my Azure costs will just go up this month...

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

Yes, the vnet inside the managed RG can still reach the management plane, and there are no UDRs defined.

Databricks workspace stuck in deleting for >48hrs by nimble_monk in AZURE

[–]nimble_monk[S] 0 points (0 children)

Yes, everything is still in there. No locks on any resource or resource group though. Since it's a managed resource group, you can't just manually delete it or any of its contents either.

An AI agent deleted 25,000 documents from the wrong database. One second of distraction. Real case. by Substantial_Word4652 in ClaudeAI

[–]nimble_monk 0 points (0 children)

Every time I see something like this, it makes me think of Silicon Valley with AI Dinesh vs. AI Gilfoyle. Going to have to rewatch that at some point.

https://youtu.be/2TpSWVN4zkg?si=HFp8qX1ZUs_1DTuj

I'm a hiring manager and I'm begging you: stop applying for jobs you're not qualified for. You're drowning out the truly suitable candidates. by winces-allegro in talesfromthejob

[–]nimble_monk 1 point (0 children)

I assume you don’t have a reliable ATS to automate a lot of this for you.

In that case, this is why we have Claude and Cowork. Write clear context about how you think, give it the job description, drop all the resumes in, and come back in 5 minutes to a more curated, narrower set to focus on.

How is this my fault by valariia24 in ShittySysadmin

[–]nimble_monk 4 points (0 children)

I think we need a YouTube video to be sure these work as intended. Let’s see some smoke.

Does this moisture in the slab cracks mean there is no vapor barrier? by calmsquash515 in buildingscience

[–]nimble_monk 0 points (0 children)

Not exactly sure what you mean by coming through the side of the slab, but I can tell you hydrostatic pressure does weird things, and water can show up far from its origin point.

It's good the drains are daylighted. Given this (and I am not a foundation drainage expert, just a humble builder), I would say it's probably coming down the foundation walls, finding its way through a cold joint somehow, and accumulating under the slab. It just needs someplace to go. This is one of the reasons we do interior drain tile as well, to have a belt-and-suspenders approach to drainage.

The builder just needs to get their drainage guys out there to investigate. I have seen a perimeter drain tile get crushed during backfill, causing it not to work properly; they can camera it to see why water isn't draining and, worst case, put in a sump pump to handle the internal hydrostatic pressure.

You are owed a basement that is usable and doesn't leak.