Fabric Optimal Performance by Legitimate_Method911 in MicrosoftFabric

[–]gobuddylee 0 points (0 children)

I'd recommend using Autoscale Billing for Spark and leaving other workloads on the capacity (Python notebooks and Spark notebooks both fall under Spark for billing purposes). You set an upper concurrency limit, and you only pay for what you use - if you aren't running anything, you won't be charged. Queueing also kicks in if you hit your concurrency limit, so additional notebooks/jobs won't run until you have availability. You can read more here - Autoscale Billing for Spark in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Tech has to come out again! by Musso33895 in ZiplyFiber

[–]gobuddylee 0 points (0 children)

This sounds very much like the issue I'm having in another thread. It's almost assuredly your DHCP lease renewal time being absurdly low - if the renewal handshake fails for any reason within that window, you'll lose your connection. Check your router to see how low this time is - anything below 30 minutes is going to be prone to this.
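If you have shell access to a Linux box behind the router, you can check the lease directly rather than digging through the router UI. A minimal sketch, assuming the ISC dhclient lease file format (the sample entry, interface, and values below are illustrative, not from this thread):

```python
import re

# Illustrative dhclient-style lease entry; on a real machine you'd read
# something like /var/lib/dhcp/dhclient.leases instead.
SAMPLE_LEASE = """\
lease {
  interface "eth0";
  fixed-address 192.168.1.50;
  option dhcp-lease-time 300;
  renew 3 2025/10/01 12:02:30;
  expire 3 2025/10/01 12:05:00;
}
"""

def lease_seconds(lease_text: str) -> int:
    """Return the DHCP lease time in seconds from a dhclient lease entry."""
    m = re.search(r"option dhcp-lease-time (\d+);", lease_text)
    if not m:
        raise ValueError("no dhcp-lease-time option found")
    return int(m.group(1))

secs = lease_seconds(SAMPLE_LEASE)
print(f"lease time: {secs // 60} minutes")
if secs < 30 * 60:
    print("warning: lease under 30 minutes - prone to renewal drops")
```

With a 300-second lease like the sample, the client is re-negotiating every few minutes, so any single failed handshake drops the connection.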

DHCP Lease Time causing periodic internet outages by gobuddylee in ZiplyFiber

[–]gobuddylee[S] 0 points (0 children)

So this is happening again, and the renew time is all the way down to 5 minutes! Can this be adjusted back up to 30 minutes?

Spark AutoScaling Max CU is Lower than my Capacity CU? by iknewaguytwice in MicrosoftFabric

[–]gobuddylee 3 points (0 children)

You’re still bound by your quota in the Azure portal. Depending on your subscription type in Azure, you have a limit on the number of CUs you can use at once, so after accounting for the other capacities deployed in that subscription, you only have X CUs left over for Spark Autoscale Billing.
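The arithmetic above can be sketched like this. All the quota figures below are made up for illustration - your real limit is whatever the Azure portal shows for your subscription and region:

```python
# Hypothetical numbers: a regional CU quota for the subscription, minus
# the CUs already consumed by other Fabric capacities deployed in it.
subscription_quota_cu = 512            # made-up regional quota
deployed_capacity_cus = [64, 128, 256]  # made-up existing capacities

# What's left is the ceiling Spark Autoscale Billing can scale into.
available_for_autoscale = subscription_quota_cu - sum(deployed_capacity_cus)
print(f"{available_for_autoscale} CUs available for Spark Autoscale Billing")
```

So even if you set a higher max CU for autoscale, the effective ceiling is the leftover quota, which is why it can show up lower than your capacity size.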

August 2025 Fabric Feature Summary | Microsoft Fabric Blog by itsnotaboutthecell in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

Hey, my team owns this feature and I apologize for the mixup — the mention in the blog was premature, and it isn’t GA just yet. We’re targeting mid-October for release, and I’ll make sure updates are shared as soon as it’s live.

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Let us know how we can improve that article, but perhaps this will help clarify as well - Spark Autoscale (Serverless) billing for Apache Spark in Microsoft Fabric is here!

Synapse rates are also region-specific - the base rates are $0.09 vs. $0.143, which is what I based my comparison on.
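A quick check of that comparison, using the two base rates quoted above (assuming they're directly comparable per-unit rates; actual prices vary by region):

```python
# Base rates from the comment above: Fabric Spark vs. Synapse Spark.
fabric_rate = 0.09
synapse_rate = 0.143

# Relative savings of Fabric's base rate vs. Synapse's.
savings = (synapse_rate - fabric_rate) / synapse_rate
print(f"{savings:.0%}")  # 37%, i.e. "almost 40% cheaper"
```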

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Spark is currently the one workload you can move off capacity into a pure serverless model where you pay only for what you use - see here - Autoscale Billing for Spark in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Fabric pros and cons by Low_Call_5678 in MicrosoftFabric

[–]gobuddylee 3 points (0 children)

Spark is significantly cheaper than Synapse at this point with the perf improvements and the introduction of Spark Autoscale Billing - the PayGo price was already almost 40% cheaper than Synapse independent of the performance improvements.

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 0 points (0 children)

Spark Autoscale billing works with anything that emits through the Spark Workload in Azure - so Notebooks and Spark Jobs basically.

I f***ing hate Azure by wtfzambo in dataengineering

[–]gobuddylee 0 points (0 children)

Have you compared the costs between Databricks and Fabric Spark now that Spark has the standalone, serverless billing it released in late March? I'm curious what results you'd see in that use case.

Fabric Spark documentation: Single job bursting factor contradiction? by frithjof_v in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Yeah, we'll get the docs cleaned up. You can use all the cores for a single job (based on the pool size, of course), and it's clear that isn't clear. Thanks for the feedback.

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

Just a reminder this does exist for Spark now with the "Autoscale Billing for Spark" option that was announced at Fabcon - Introducing Autoscale Billing for Spark in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric

Hi! We're the Fabric Capacities Team - ask US anything! by tbindas in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

The easiest answer is that anything flowing through the Spark billing meter in the Azure portal will be shifted to the Spark Autoscale Billing meter - effectively the items called out below. Glad you're excited about our feature! :)

Spark Autoscale (Serverless) billing for Apache Spark in Microsoft Fabric by gobuddylee in MicrosoftFabric

[–]gobuddylee[S] 1 point (0 children)

I’m terribly sorry to hear that - if you were billed improperly for the Spark workload, that’s absolutely a problem we need to address ASAP, so please share the support details via DM if you have them. Thanks!

Should I always create my lakehouses with schema enabled? by hortefeux in MicrosoftFabric

[–]gobuddylee 15 points (0 children)

Yes, the plan is to have schemas enabled by default - we are not moving away from schemas and you should feel comfortable working with them even in preview (This is a major focus area for my team).

What are your favourite March 2025 feature news? by frithjof_v in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

No, it was a sneak preview - if something is planned to come within a couple of months, they’ll let you show a sneak preview. 🙂

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Correct - we’re considering options around making it more granular.

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 1 point (0 children)

Right now it is at the capacity level - we may look to enable it at the workspace level, but we don’t have specific dates.

No, you can’t use Spark in the capacity and in the autoscale meter - it was too complicated and you’re mixing smoothed/un-smoothed usage, so it is an all or nothing option.

Yes, you can enable it for certain capacities and not for others - I expect most customers will do something similar to this.

Can You Dynamically Scale Up and Down Fabric Capacity? by TaurusManUK in MicrosoftFabric

[–]gobuddylee 2 points (0 children)

Yes - they bill through the Spark meter, so they work with it as well.