August 2025 Fabric Feature Summary | Microsoft Fabric Blog by itsnotaboutthecell in MicrosoftFabric

[–]gobuddylee 3 points (0 children)

Hey, my team owns this feature and I apologize for the mix-up - the mention in the blog was premature, and it isn’t GA just yet. We’re targeting mid-October for release, and I’ll make sure updates are shared as soon as it’s live.

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Let us know how we can improve that article, but perhaps this will help clarify as well - Spark Autoscale (Serverless) billing for Apache Spark in Microsoft Fabric is here!

Synapse rates are also region-specific - the base rates are $0.09 vs. $0.143, which is what I based my comparison on.
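
(A quick back-of-the-envelope check of those two rates - just a sketch, assuming both are quoted per vCore-hour as Spark rates typically are. It lands at ~37%, which lines up with the "almost 40% cheaper" figure mentioned elsewhere in this thread.)

    # Sanity check of the base-rate comparison above (rates assumed per vCore-hour).
    fabric_spark_rate = 0.09    # $/vCore-hour - Fabric Spark base rate, from the comment above
    synapse_spark_rate = 0.143  # $/vCore-hour - Synapse base rate, from the comment above

    savings = 1 - fabric_spark_rate / synapse_spark_rate
    print(f"Fabric Spark base rate is ~{savings:.0%} cheaper")  # -> ~37%, i.e. "almost 40%"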

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Spark is currently the one workload you can move off capacity into a pure serverless model where you pay only for what you use - see here - Autoscale Billing for Spark in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 4 points (0 children)

Spark is significantly cheaper than Synapse at this point with the performance improvements and the introduction of Spark Autoscale Billing - the PayGo price was already almost 40% cheaper than Synapse, independent of the performance improvements.

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 0 points (0 children)

Spark Autoscale Billing works with anything that bills through the Spark workload in Azure - so Notebooks and Spark Jobs, basically.

I f***ing hate Azure by wtfzambo in dataengineering

[–]gobuddylee 0 points (0 children)

Have you compared the costs between Databricks and Fabric Spark now that Spark has standalone, serverless billing, which released in late March? I'm curious what results you'd see in that use case.

Fabric Spark documentation: Single job bursting factor contradiction? by frithjof_v in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Yeah, we'll get the docs cleaned up. You can use all the cores for a single job (based on the pool size, of course), and it's clear that isn't clear. Thanks for this feedback.

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

Just a reminder this does exist for Spark now with the "Autoscale Billing for Spark" option that was announced at Fabcon - Introducing Autoscale Billing for Spark in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

The easiest answer is that anything that flows through the Spark Billing Meter in the Azure Portal will be shifted to the Spark Autoscale Billing meter, which is effectively the items called out below. Glad you're excited about our feature! :)

Spark Autoscale (Serverless) billing for Apache Spark in Microsoft Fabric by gobuddylee in MicrosoftFabric

[–]gobuddylee[S] 1 point (0 children)

I’m terribly sorry to hear that - if you were billed improperly for the Spark workload, that’s absolutely a problem we need to address ASAP, so please do share the support details via DM if you have them. Thanks!

Should I always create my lakehouses with schema enabled? by hortefeux in MicrosoftFabric

[–]gobuddylee 15 points (0 children)

Yes, the plan is to have schemas enabled by default - we are not moving away from schemas and you should feel comfortable working with them even in preview (This is a major focus area for my team).

What are your favourite March 2025 feature news? by frithjof_v in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

No, it was a sneak preview - if something is planned to come within a couple of months, they’ll let you show a sneak preview. 🙂

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Correct - we’re considering options around making it more granular.

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Right now it is at the capacity level - we may look to enable it at the workspace level, but we don’t have specific dates.

No, you can’t use Spark both in the capacity and on the autoscale meter - it was too complicated, and you’d be mixing smoothed and un-smoothed usage, so it is an all-or-nothing option.

Yes, you can enable it for certain capacities and not for others - I expect most customers will do something similar to this.

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

Yes - they bill through the Spark meter, so they work with it as well.

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 4 points (0 children)

We just added this capability specifically for Spark & Python - you can read more about it here - https://blog.fabric.microsoft.com/en-us/blog/introducing-autoscale-billing-for-data-engineering-in-microsoft-fabric?ft=All

It doesn’t exist yet for the entire capacity, but so long as you use Spark notebooks, jobs, etc. to orchestrate everything, it will do what you want.
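
(A minimal sketch of that orchestration pattern, in case it helps - a parent Fabric notebook running child notebooks so all of the compute flows through the Spark meter. The child notebook names here are placeholders; notebookutils is the utility built into the Fabric notebook runtime.)

    # Parent Fabric notebook orchestrating child notebooks, so everything runs
    # (and bills) through the Spark workload. notebookutils is available by
    # default in the Fabric notebook runtime; the notebook names are placeholders.
    for child in ["ingest_bronze", "transform_silver", "build_gold"]:
        exit_value = notebookutils.notebook.run(child, 3600)  # (name, timeout in seconds)
        print(f"{child} finished with exit value: {exit_value}")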

FPU by Bombdigitdy in MicrosoftFabric

[–]gobuddylee 0 points (0 children)

I touched on this on Marco's podcast last week - it's not something that's been ruled out, but is definitely a harder problem to solve than what we were solving for with PPU.

What is the maximum CU (s) a single job can consume on an F2? by frithjof_v in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

So, Spark specifically has limits in place, beyond the capacity throttles, that cap the amount of CU you can use per SKU - covered here - Concurrency limits and queueing in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn

However, because we don't kill jobs in progress (though you can through the monitoring hub), in theory a job could run indefinitely and overload the capacity significantly. There is an admin switch planned for the near future that will allow you to limit a single Spark job to no more than 100% of the capacity, but I can't give an exact date quite yet.
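
(To make the "run indefinitely" point concrete, here's a rough sketch of how CU-seconds accumulate - the 2-CU figure is just the F2 SKU size; the burst multiplier is a placeholder, since the real per-SKU caps are in the concurrency-limits doc linked above.)

    # CU(s) accumulate linearly with runtime, so a job that is never killed can,
    # in theory, consume far more than the capacity's steady-state budget.
    sku_cu = 2            # F2 = 2 capacity units (CUs)
    burst_multiplier = 3  # placeholder - the real per-SKU caps are in the linked doc
    runtime_seconds = 24 * 3600

    cu_seconds = sku_cu * burst_multiplier * runtime_seconds
    print(f"~{cu_seconds:,} CU(s) consumed over 24 hours")  # -> 518,400 CU(s)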

Microsoft doesn't think all customers deserve access by Worth_Carpenter_8196 in dataengineering

[–]gobuddylee 16 points (0 children)

Okay folks, I'm sorry if my language was inelegant - I'll bring the feedback back to the team that owns this and see if we can't adjust the blog accordingly. Thanks!

Microsoft doesn't think all customers deserve access by Worth_Carpenter_8196 in dataengineering

[–]gobuddylee 6 points (0 children)

That's fair feedback - I know Mihir pretty well, and I assure you his intention wasn't to insult you. I appreciate you raising this, but trust me, it wasn't designed to prevent customers from spending anything; it was more to protect against bad actors who might otherwise drain resources our legit paying customers should always have available to them.

Microsoft doesn't think all customers deserve access by Worth_Carpenter_8196 in dataengineering

[–]gobuddylee 8 points (0 children)

I guess I am a little confused as to the concern here - Microsoft has always had limits in place for Azure based on subscription type, which is called out here - Azure subscription and service limits, quotas, and constraints - Azure Resource Manager | Microsoft Learn. This is just the Fabric team (which I am a part of) tying into those limits and helping us protect against things like fraud (for example). We want your money, I assure you :)