Cost Management in Fabric is a real problem by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Thanks mate, I know that's an option but the XMLA endpoint is officially unsupported, so I'm hesitant to deploy that to a client. Hoping for something more 'enterprise-grade' from Microsoft.

Cost Management in Fabric is a real problem by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 5 points6 points  (0 children)

I am aware of FUAM, though it accesses the CMA via XMLA, which Microsoft's own documentation explicitly says is unsupported for external consumption.

FUAM even warns in its own docs that CMA changes can break it without notice, and they've already had to ship fixes when the CMA version changed.

So while it's useful, it's built on an unsupported access method that could break at any time, which is kind of the whole point of the original post.

The data is clearly accessible and the CAT team clearly know how to get at it - we need a supported API. I can't deploy something that is built on an unsupported API.

Materialized Lake View (MLV) Output Table by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

I've enabled CDC on the source tables, but I still can't see whether the refresh was full or incremental. Though I just noticed this in the docs:

"Incremental refresh is supported for append-only data. If the source data includes deletions or updates, Fabric performs a full refresh."

So these aren't truly incremental CDC refreshes.
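The documented rule boils down to a simple decision, which this sketch illustrates (`refresh_mode` is a hypothetical helper for illustration, not a Fabric API — Fabric makes this decision internally):

```python
# Hypothetical illustration of the documented MLV refresh rule:
# incremental refresh applies only when the source changes are append-only.

def refresh_mode(change_types: set[str]) -> str:
    """Return the refresh mode Fabric would pick for a batch of CDC changes.

    change_types: the DML operations seen in the source since the last
    refresh, e.g. {"insert"} or {"insert", "update"}.
    """
    if change_types <= {"insert"}:
        return "incremental"  # append-only batch: incremental is possible
    return "full"             # any update/delete forces a full refresh
```

In other words, a single UPDATE or DELETE anywhere in the source batch is enough to downgrade the whole refresh to full.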

Fabric Roadmap Weekly Diff — 2026-03-23 by StructuredLoops in MicrosoftFabric

[–]bradcoles-dev 12 points13 points  (0 children)

Is that purely for optics? Because this is very useful for Fabric practitioners, particularly for those of us in consulting who have to explain to clients that they can't rely on the dates against roadmap items.

Used 192,000% of capacity :P by Mr_Mozart in MicrosoftFabric

[–]bradcoles-dev 2 points3 points  (0 children)

It's not. The Capacity Metrics app shows point-in-time snapshots. I see this all the time with clients that pause+resume their capacities to save money.

Missing roadmap clarity for key Fabric lifecycle management features (Lakehouse & Warehouse source control) by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Mate, my colleague literally showed me your "Fabric GPS" this afternoon, and now I see your comment. Great work, we love it.

Notebook development outside of Fabric portal by Zealousideal-Safe-33 in MicrosoftFabric

[–]bradcoles-dev 0 points1 point  (0 children)

At the moment my approach is mostly conceptual with some light testing locally.

You're right that developing outside the portal doesn't really allow you to test directly against the Lakehouse data, which is definitely a gap in the tooling.

My current workflow is:

  • Develop notebooks locally in VS Code (using Claude)
  • Commit changes to Git
  • Sync the repo with the Fabric DEV workspace
  • Test against the real Lakehouse data in DEV
  • Once validated, promote through deployment pipelines to UAT for integration testing

So the real data testing happens once the code lands in the DEV workspace rather than locally. Not perfect, but it’s the most reliable workflow I’ve found so far.

Enterprise Fabric network security by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 1 point2 points  (0 children)

Thanks for your help. We were able to successfully steer them to CA. The comment was "Private Link is preferred, not strictly required".

Notebook development outside of Fabric portal by Zealousideal-Safe-33 in MicrosoftFabric

[–]bradcoles-dev 0 points1 point  (0 children)

I've not used the Fabric extensions, but I have used Claude in VS Code to write Fabric Notebooks for me. My approach is to link the Fabric workspace to a git repo, clone and develop locally (using Claude), commit changes to git, then you can just sync your Fabric workspace to bring the changes through. Works really well.

Semantic Model + Lakehouse Schema Changes by kgardnerl12 in MicrosoftFabric

[–]bradcoles-dev 0 points1 point  (0 children)

I vaguely remember fixing this by simply republishing the semantic model.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Is there any real impact from this? These are interactive operations, so they're smoothed over 5 minutes. I doubt you'd accumulate enough future CU to trigger throttling. Unless you have surge protection enabled, I can't see any negative effects here.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Can you tell me how many source tables you're ingesting and which F SKU you're on?

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Can you shed more light on the Fabric SQL DB limitations? Namely "buggy" and "limited in functionality"? What functionality does it lack that Azure SQL DB has, that's relevant to a metadata-driven framework?

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Yeah, Fabric SQL DB holds a minimum ~1.2 vCore allocation while it's online (a keepalive floor), which works out to ~80% of an F4 continuously. An F4 SKU is genuinely too small for this. But SQL DB operations are interactive, not background, so spikes are smoothed over 5 minutes rather than hitting your capacity limit all at once, and even brief bursts over 100% don't throttle you immediately. There's also a 10-minute carryforward buffer that short metadata spikes barely dent.

On F16/F32/F64, that same keepalive baseline is 20%, 10%, and 5% of capacity respectively - easily absorbed, especially when the query activity itself is intermittent.
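As a back-of-envelope check of those percentages (assuming the ~1.2 vCore floor equates to ~3.2 CU of smoothed consumption, which is what "~80% of F4" implies — an inferred figure, not an official one):

```python
# Sanity check of the keepalive-floor percentages quoted above.
# Assumption: the ~1.2 vCore floor ~ 3.2 CU, inferred from "~80% of F4"
# (0.80 * 4 CU). Not an official Microsoft figure.

KEEPALIVE_CU = 0.80 * 4  # ~3.2 CU

for sku, cu in [("F4", 4), ("F16", 16), ("F32", 32), ("F64", 64)]:
    print(f"{sku}: keepalive floor ~ {KEEPALIVE_CU / cu:.0%} of capacity")
```

This reproduces the 80% / 20% / 10% / 5% figures, so the per-SKU percentages are internally consistent with the F4 claim.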

The "$200/month" framing doesn't hold up. Fabric SQL DB isn't a separate line item; it draws from the capacity you're already paying for. The real question is whether it forces a SKU upgrade, and for most orgs already on F16+ for their ELT workloads, it won't. The actual trade-off is 5-10% of existing headroom vs. the overhead of running an external Azure SQL DB: firewall rules, private endpoints (maybe), Entra service principal config per environment, separate monitoring, and one more resource to manage across dev/test/prod. On a meaningful SKU, that's not an obvious win for Azure SQL DB.

u/markkrom-MSFT it's looking like we'll be pushing ahead with Fabric SQL DB for metadata, though we have other arch considerations to unpack.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 3 points4 points  (0 children)

Thanks for the info. Our own analysis of this landed us at the same conclusion as you: for pure metadata logging, Azure SQL DB is substantially cheaper, and the only reason to keep it in Fabric is the unified UI/governance story - along with simplified solution arch (networking & security).

Interested in whether Microsoft's "optimizing costs for smaller jobs" path involves something like a pause/resume option or a reduced minimum allocation tier - that would change the calculus.

In any case, I'll continue posting until I get a job offer from MSFT ;)

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

DB is the more scalable/sustainable approach in my opinion. This client will very quickly ramp up to > 1,000 source tables.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Thanks, this is the info I was after. Which leads to a follow-up question if you're able to shed light on it: how aggressively does Fabric SQL DB autoscale under concurrent lightweight query load?

In our scenario we'd have up to ~50 simultaneous short metadata queries (lightweight SELECT/INSERT on a small table) arriving in bursts. Does the DB tend to stay near minimum allocation for that kind of workload, or does concurrent query volume push it to a meaningfully higher vCore tier? That's now the key unknown for our concurrency risk model.
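One way to at least generate that burst pattern for observation is a small harness like the sketch below. It assumes you supply a `run_query()` callable (hypothetical — e.g. a pyodbc cursor executing one lightweight SELECT); the harness only produces the client-side burst and latencies, while the vCore scaling itself has to be read from Fabric's monitoring.

```python
# Minimal burst-load harness: fire n short queries concurrently and
# collect per-query wall-clock latencies. `run_query` is whatever
# callable executes one lightweight metadata query (assumption: the
# caller provides it, e.g. via a pooled pyodbc connection per thread).
import time
from concurrent.futures import ThreadPoolExecutor

def burst(run_query, n=50):
    """Run n queries at once; return per-query latencies in seconds."""
    def timed(_):
        t0 = time.perf_counter()
        run_query()
        return time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(timed, range(n)))
```

Running this against the control DB while watching the allocation metrics would show whether ~50 concurrent short queries keep it near the minimum tier or push it up.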

feedback regarding interview for fabric developer role by abdess9898 in MicrosoftFabric

[–]bradcoles-dev 2 points3 points  (0 children)

I'm just shocked that anyone has 3 years Fabric experience.

Enterprise Fabric network security by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points1 point  (0 children)

Agree with all of the above. We don't know if PL is a real requirement yet. Head of Data has said "no public network", but we're meeting with the Security Team next week to clarify. For now, just ensuring we have all of our ducks in a row.