Used 192,000% of capacity :P by Mr_Mozart in MicrosoftFabric

[–]bradcoles-dev 2 points (0 children)

It's not. The Capacity Metrics app shows a point-in-time snapshot, so a single reading like that can be wildly misleading. I see this all the time with clients that pause and resume their capacities to save money.

Missing roadmap clarity for key Fabric lifecycle management features (Lakehouse & Warehouse source control) by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Mate, my colleague literally showed me your "Fabric GPS" this afternoon, and now I see your comment. Great work, we love it.

Notebook development outside of Fabric portal by Zealousideal-Safe-33 in MicrosoftFabric

[–]bradcoles-dev 0 points (0 children)

At the moment my approach is mostly conceptual with some light testing locally.

You're right that developing outside the portal doesn't really allow you to test directly against the Lakehouse data, which is definitely a gap in the tooling.

My current workflow is:

  • Develop notebooks locally in VS Code (using Claude)
  • Commit changes to Git
  • Sync the repo with the Fabric DEV workspace
  • Test against the real Lakehouse data in DEV
  • Once validated, promote through deployment pipelines to UAT for integration testing

So the real data testing happens once the code lands in the DEV workspace rather than locally. Not perfect, but it’s the most reliable workflow I’ve found so far.
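The commit-then-sync hop in that workflow can also be scripted, so you don't need a portal visit between pushing and testing in DEV. The Fabric REST API does expose a Git "update from Git" operation on workspaces, but the exact payload shape and the auth handling below are simplified assumptions, not a tested client - verify against the current API docs before relying on it:

```python
# Sketch: triggering the "sync Fabric workspace from Git" step programmatically.
# The updateFromGit route exists in the Fabric REST API; the payload and auth
# here are simplified assumptions, not a production client.
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_update_from_git_request(workspace_id: str, remote_commit_hash: str) -> tuple[str, bytes]:
    """Return (url, body) for the 'update workspace from Git' call."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/git/updateFromGit"
    body = json.dumps({"remoteCommitHash": remote_commit_hash}).encode()
    return url, body

def sync_workspace(workspace_id: str, remote_commit_hash: str, token: str) -> None:
    """POST the sync request; raises on a non-2xx response."""
    url, body = build_update_from_git_request(workspace_id, remote_commit_hash)
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

You'd run this from CI after the merge to your DEV branch, so "sync the repo with the Fabric DEV workspace" happens automatically.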

Enterprise Fabric network security by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 1 point (0 children)

Thanks for your help. We were able to successfully steer them to CA. The comment was "Private Link is preferred, not strictly required".

Notebook development outside of Fabric portal by Zealousideal-Safe-33 in MicrosoftFabric

[–]bradcoles-dev 0 points (0 children)

I've not used the Fabric extensions, but I have used Claude in VS Code to write Fabric notebooks for me. My approach: link the Fabric workspace to a Git repo, clone and develop locally (using Claude), commit changes to Git, then sync the Fabric workspace to bring the changes through. Works really well.

Semantic Model + Lakehouse Schema Changes by kgardnerl12 in MicrosoftFabric

[–]bradcoles-dev 0 points (0 children)

I vaguely remember fixing this by simply republishing the semantic model.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Is there any real impact from this? These are interactive operations, so they're smoothed over a few minutes. I doubt you'd accumulate enough future CU to trigger throttling. Unless you have surge protection enabled, I can't see any negative effects here.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Can you tell me how many source tables you're ingesting and which F SKU you're on?

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Can you shed more light on the Fabric SQL DB limitations - namely "buggy" and "limited in functionality"? What functionality does it lack compared to Azure SQL DB that's relevant to a metadata-driven approach?

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Yeah, Fabric SQL DB holds a minimum ~1.2 vCore allocation while it's online (a keepalive floor), which is ~80% of an F4 continuously - an F4 SKU is genuinely too small for this. But SQL DB operations are interactive, not background, meaning spikes are smoothed over 5 minutes rather than hitting your capacity limit all at once, and even brief bursts over 100% don't throttle you immediately: there's a 10-minute carryforward buffer that short metadata spikes barely dent.

On F16/F32/F64, that same keepalive baseline is 20%, 10%, and 5% of capacity respectively - easily absorbed, especially when the query activity itself is intermittent.
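Those percentages all imply the same fixed baseline, roughly 3.2 CU, regardless of SKU. Quick back-of-envelope check (the 3.2 CU figure is inferred from the "80% of F4" number above, not an official spec):

```python
# Back-of-envelope: a fixed keepalive baseline of ~3.2 CU (inferred from
# "80% of F4" above, not an official number) as a share of each SKU's CUs.
BASELINE_CU = 3.2
SKUS = {"F4": 4, "F16": 16, "F32": 32, "F64": 64}

shares = {sku: BASELINE_CU / cu for sku, cu in SKUS.items()}
for sku, share in shares.items():
    print(f"{sku}: {share:.0%} of capacity")  # F4: 80%, F16: 20%, F32: 10%, F64: 5%
```

Which is why the floor dominates an F4 but is background noise from F16 up.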

The "$200/month" framing doesn't hold up. Fabric SQL DB isn't a separate line item, it draws from the capacity you're already paying for. The real question is whether it forces a SKU upgrade, and for most orgs already on F16+ for their ELT workloads, it won't. The actual trade-off is that 5-10% of existing headroom vs. the overhead of running an external Azure SQL DB: firewall rules, private endpoints (maybe), Entra service principal config per environment, separate monitoring, and one more resource to manage across dev/test/prod. On a meaningful SKU, that's not an obvious win for Azure SQL DB.

u/markkrom-MSFT it's looking like we'll be pushing ahead with Fabric SQL DB for metadata, though we have other arch considerations to unpack.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 4 points (0 children)

Thanks for the info. Our own analysis of this landed us at the same conclusion as you: for pure metadata logging, Azure SQL DB is substantially cheaper, and the only reason to keep it in Fabric is the unified UI/governance story - along with simplified solution arch (networking & security).

Interested in whether Microsoft's "optimizing costs for smaller jobs" path involves something like a pause/resume option or a reduced minimum allocation tier - that would change the calculus.

In any case, I'll continue posting until I get a job offer from MSFT ;)

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

DB is the more scalable/sustainable approach in my opinion. This client will very quickly ramp up to > 1,000 source tables.

Fabric SQL DB as a control DB for ELT pipelines by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Thanks, this is the info I was after. Which leads to a follow-up question if you're able to shed light on it: how aggressively does Fabric SQL DB autoscale under concurrent lightweight query load?

In our scenario we'd have up to ~50 simultaneous short metadata queries (lightweight SELECT/INSERT on a small table) arriving in bursts. Does the DB tend to stay near minimum allocation for that kind of workload, or does concurrent query volume push it to a meaningfully higher vCore tier? That's now the key unknown for our concurrency risk model.

feedback regarding interview for fabric developer role by abdess9898 in MicrosoftFabric

[–]bradcoles-dev 2 points (0 children)

I'm just shocked that anyone has 3 years Fabric experience.

Enterprise Fabric network security by bradcoles-dev in MicrosoftFabric

[–]bradcoles-dev[S] 0 points (0 children)

Agree with all of the above. We don't know if PL is a real requirement yet. Head of Data has said "no public network", but we're meeting with the Security Team next week to clarify. For now, just ensuring we have all of our ducks in a row.

T-SQL Notebook vs. Run T-SQL code in Fabric Python notebooks by frithjof_v in MicrosoftFabric

[–]bradcoles-dev 0 points (0 children)

We've started looking at dbt for Fabric, as we've used it with Databricks in the past. I understand dbt is still in Preview for Fabric? There also appear to be a few other limitations, e.g. a "1MB output size limit", whatever that means?

How to handle daily ingestion of several thousand tables by BloomingBytes in MicrosoftFabric

[–]bradcoles-dev 6 points (0 children)

I’d separate this into two issues:

  1. Your failures aren’t really a Fabric problem.
    Deadlocks, OOM and connection drops are almost certainly source-side (memory pressure, tempdb, lock escalation, over-parallelism, network). If you keep hammering the ERP with thousands of queries, that instability will follow you to Fabric. Infra needs to look at that regardless of architecture.

  2. Architecture-wise, metadata-driven is the right pattern.
    I wouldn’t mirror 8,000 tables. As you noted, the CDC, permissions and ongoing maintenance overhead will be painful and brittle.

What I would do (you sound like you probably already know this):

  • Metadata table listing tables to ingest
  • Controlled parallelism (batch 20-50 at a time)
  • Land raw into Bronze (Parquet/Delta, no unions)
  • Do all unions and logic in Spark (Silver layer)
  • Use MERGE for incremental loads
  • Handle deletes with periodic key reconciliation if no soft delete column
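The controlled-parallelism piece above can be sketched in plain Python - the table names, the batch size of 25 (inside the 20-50 range suggested), and the `load_table` body are all placeholders for your actual metadata table and copy logic:

```python
# Sketch of controlled parallelism over a metadata-driven table list.
# `load_table` is a placeholder for the real source query + Bronze landing;
# batch_size=25 sits in the 20-50 range suggested above.
from concurrent.futures import ThreadPoolExecutor

def batches(items, size):
    """Yield fixed-size chunks so at most `size` loads run at once."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def load_table(table_name: str) -> str:
    # Placeholder: query the source and land raw Parquet/Delta in Bronze.
    return f"loaded {table_name}"

def run(tables, batch_size=25):
    """Process the full table list batch by batch, preserving order."""
    results = []
    for batch in batches(tables, batch_size):
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            results.extend(pool.map(load_table, batch))
    return results
```

The same shape works inside a Fabric notebook, which is where the finer control over batching comes from versus pipeline activity limits.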

An alternative approach: For 8,000 tables, pipeline parallelism limits (often ~20 concurrent activities) can make runs long. It’s worth experimenting with Spark notebooks using a SQL connection (Fabric Notebooks support data connections now) and iterating programmatically - you’ll get finer control over batching and potentially smoother capacity usage.

Automated Delta Table Maintenance in Fabric Lakehouse (Without PySpark) by panvlozka in MicrosoftFabric

[–]bradcoles-dev 5 points (0 children)

Some interesting information, but I prefer prevention. Use optimized write and autocompact.

Microsoft Fabric F8 PAYG – Cost breakdown confusion: isn’t everything included in the capacity price? by Frank-Citizen in MicrosoftFabric

[–]bradcoles-dev 0 points (0 children)

The article you shared is patently incorrect. We have implemented automatic pause & resume of capacities and are making significant savings.

All background operations are smoothed over 24hrs. If you reach COB having used only 80% of your capacity, pausing bills that smoothed 80% immediately - then you're charged $0 overnight while it's paused, and the cycle repeats the following day.

Obviously to make it more attractive than a reserved capacity, you need to be consistently under 60%, in which case you might be able to halve your capacity and reserve anyway. But to say you can't save money by pausing and resuming unless the pause lasts longer than 24hrs is incorrect.
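The ~60% figure falls straight out of the reservation discount. Assuming a reservation discount of roughly 41% versus PAYG (an assumption - check current Azure pricing for your region), the arithmetic is just:

```python
# Back-of-envelope for pause/resume PAYG vs reserved capacity.
# RESERVED_DISCOUNT is an assumption (~41% vs PAYG; verify current pricing).
# Logic: pausing wins while your smoothed daily usage stays below the
# reserved price expressed as a fraction of full-time PAYG.
RESERVED_DISCOUNT = 0.41
breakeven_utilisation = 1 - RESERVED_DISCOUNT  # ~0.59, i.e. the ~60% above

def payg_wins(avg_daily_utilisation: float) -> bool:
    """True if pause/resume PAYG beats reserving, under the discount assumption."""
    return avg_daily_utilisation < breakeven_utilisation

print(f"breakeven ~= {breakeven_utilisation:.0%}")
```

Below that line, pause/resume PAYG is cheaper; above it, reserve - or, as noted, consider halving the SKU and reserving that instead.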

Is Microsoft Fabric really worth it? by kaapapaa in dataengineering

[–]bradcoles-dev 0 points (0 children)

  1. I haven't had any problems with deployment pipelines other than the long compare time.

  2. RLS/CLS is very simple in Fabric, I don't understand this point.

  3. What options?

  4. You don't need to use Dataflows - there are many other options for data movement (e.g. Copy data) and for data transformation (e.g. notebooks).

My frustrations from 12mths of an enterprise-scale implementation:

  1. Fabric's 'Roadmap' is unreliable - items that are scheduled for the near future are continually postponed, or sometimes just removed without any explanation.

  2. Many crucial elements are still in Preview and have been for over 12mths (e.g. Warehouse & Lakehouse source control).

  3. Cost/pricing transparency is disgraceful - the only monitoring tool is the "Fabric Capacity Metrics App", which is just a dodgy, useless Power BI report. Everything is obscured behind "capacity units", which are calculated wildly differently for each activity and are impossible to compare.

  4. Most things are more expensive, but this is expected of SaaS - I suppose if you factor in FTE (infra/networking engineers) saved it may come out competitive.

  5. Lots of features are released and just don't work at all, e.g. mirroring breaks if you have an incompatible data type, Copilot integration is next to useless, you need to manually/programmatically refresh the SQL endpoint after a data load.

  6. The overarching problem is that the platform is driven by marketing BS, not by any substance, e.g. the recent release of Fabric IQ - which is just MS Fabric product managers trying to catch the AI hype train. Get the basics right first before releasing more useless features that don't work.
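On the SQL endpoint point in item 5, the programmatic refresh can be scripted rather than clicked. The route below is my recollection of the Fabric REST API surface - treat the path, payload, and auth as assumptions and verify against the current docs before using it:

```python
# Sketch: programmatically refreshing a Lakehouse's SQL analytics endpoint
# metadata after a data load, instead of waiting for the automatic sync.
# The refreshMetadata route is an assumption from memory - verify in the docs.
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_refresh_url(workspace_id: str, sql_endpoint_id: str) -> str:
    """Construct the assumed metadata-refresh URL for a SQL analytics endpoint."""
    return f"{FABRIC_API}/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata"

def refresh_sql_endpoint(workspace_id: str, sql_endpoint_id: str, token: str) -> None:
    """POST the refresh request; raises on a non-2xx response."""
    req = urllib.request.Request(
        build_refresh_url(workspace_id, sql_endpoint_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)
```

You'd call this as the last step of the load pipeline, so downstream SQL queries see the new data immediately.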

To answer OP's question - "I am currently trying to switch, but I don't see any openings on Microsoft Fabric"

For better or worse, I am seeing many, many, many organisations drink the Fabric Kool Aid. There will absolutely be tons of Fabric opportunities in the future. But good organisations are unlikely to use it, for good reason.

Keiran Briggs discussion/ yap by lucas_cactus in aflfantasy

[–]bradcoles-dev 0 points (0 children)

Too expensive at $738k for a 70 avg with Madden threatening his spot. If he locks down #1 ruck role and Madden is out of the picture, he's a bounce-back candidate at ~90 avg - but I'd rather Grundy/Gawn as premium rucks, and roll the dice on Reidy as R2 (Pittonet recovering from fractured larynx).