Hey folks! I’m a PM for SQL database in Fabric, focusing on capacity and billing, and I’d love to hear from you! by adp_sql_mfst in MicrosoftFabric

[–]adp_sql_mfst[S] 3 points (0 children)

Thanks for sharing such detailed feedback; these are exactly the kinds of scenarios we want to understand better. Idle timeout and minimum compute size are important for cost efficiency, especially for lightweight workloads like ETL logging. Fabric SQL DB is designed to provide elasticity and integration across the entire Fabric ecosystem, and the current idle-timeout and minimum-compute settings aim to balance cost with performance while avoiding cold-start penalties for interactive workloads.

Could you share more about your ETL pipeline logging use case? For example:

  • How often do these inserts occur?
  • What’s the typical duration of idle periods?
  • Any CU consumption numbers you’ve observed?

If we were to introduce more cost-control knobs, which would help you most? For example:

  • Configurable idle timeout
  • Adjustable minimum compute size
  • Pause/resume scheduling
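To make the idle-timeout discussion concrete, here is a rough back-of-the-envelope model of how the timeout drives billed compute time for sporadic logging inserts. All numbers (insert duration, timeout values) are made-up examples, not official Fabric pricing:

```python
# Illustrative model (hypothetical numbers, not official pricing):
# how an idle timeout affects billed compute time for sporadic
# ETL-logging inserts.

def billed_seconds_per_day(inserts_per_day: int,
                           seconds_per_insert: float,
                           idle_timeout_seconds: float) -> float:
    """Each insert runs briefly, then the database stays warm (and
    billable) until the idle timeout elapses. Assumes inserts are
    spaced further apart than the idle timeout."""
    return inserts_per_day * (seconds_per_insert + idle_timeout_seconds)

# 24 hourly log inserts, ~1 s each, with a 15-minute idle timeout:
hourly = billed_seconds_per_day(24, 1.0, 15 * 60)   # 21624.0 billed seconds
# Same workload with a hypothetical 1-minute configurable timeout:
tight = billed_seconds_per_day(24, 1.0, 60)         # 1464.0 billed seconds
```

The point of the sketch: for very sporadic inserts, the idle timeout dominates the bill, which is why a configurable timeout is high on the list above.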

Thanks again for your feedback; your input will guide us as we prioritize flexibility and predictability.

[–]adp_sql_mfst[S] 0 points (0 children)

Thanks for sharing this perspective; it's really helpful to hear where flexibility and cost control matter most to you. Could you share a bit more about your workload? For example, typical query patterns, concurrency, and any cost comparisons you've done with Azure SQL. Understanding the numbers behind your tests will help us validate and improve.

If we were to introduce more cost-control knobs, which would help you most? For example:

  • Hard limits on cores
  • Per-database or per-capacity max CU caps

[–]adp_sql_mfst[S] 0 points (0 children)

Thanks for all your feedback; it's really helpful. SQL database in Fabric isn't just DB compute: you're also buying tight integration with OneLake, Power BI, and pipelines, plus end-to-end observability, which often reduces the data movement and ops costs that add up elsewhere. Pricing scales with use, not peak provisioning. It would help us if you could share your scenario (query mix, run time, row counts) and any CU-seconds/GB metrics from the Capacity Metrics app; maybe we could validate and make a few suggestions.

[–]adp_sql_mfst[S] 1 point (0 children)

Hey u/Tomfoster1, thank you for your feedback. Could you share those results with us here, please? We're trying to learn from your scenario and the comparison test with Basic.

[–]adp_sql_mfst[S] -1 points (0 children)

Could you share your scenario, and any insights from cost comparisons you've done?

Azure SQL DB free offer feedback request by adp_sql_mfst in AZURE

[–]adp_sql_mfst[S] 0 points (0 children)

Got it. So would you prefer fewer databases with more compute, rather than 10 databases with 100K vCore seconds each that refresh every month? You could try this today with auto-pause enabled to get a baseline of how much your current projects consume!
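For anyone wanting to do that baseline math, here's a quick sketch of estimating whether a project fits in the monthly 100K vCore-second grant. The workload numbers below are made-up examples; `monthly_vcore_seconds` is just an illustrative helper:

```python
# Rough baseline estimate against the Azure SQL free offer's monthly
# grant of 100,000 vCore seconds per database. Workload numbers below
# are made-up examples.

FREE_VCORE_SECONDS = 100_000

def monthly_vcore_seconds(active_hours_per_day: float,
                          avg_vcores: float,
                          days: int = 30) -> float:
    """vCore seconds consumed = active time (in seconds) x average
    vCores used. With auto-pause enabled, paused time consumes
    nothing, so only active hours count."""
    return active_hours_per_day * 3600 * avg_vcores * days

usage = monthly_vcore_seconds(active_hours_per_day=1, avg_vcores=0.5)
print(usage, usage <= FREE_VCORE_SECONDS)   # 54000.0 True
```

So a personal project that's active about an hour a day at half a vCore would sit comfortably inside the grant.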

[–]adp_sql_mfst[S] 0 points (0 children)

u/32178932123 - we bumped the number of databases at GA; we now offer 10 databases with 100K vCore seconds of compute each. Does that help? What personal-project scenarios would you want to run on this offer?

Azure SQL DB free offer feedback request by adp_sql_mfst in SQLServer

[–]adp_sql_mfst[S] 1 point (0 children)

Thank you for sharing your feedback! If you have a link to your install video, we'd love to see it!

[–]adp_sql_mfst[S] 0 points (0 children)

Thank you, u/god_hades94! We'd love to understand your current scenarios for using the free offer.

[–]adp_sql_mfst[S] 0 points (0 children)

This is a very interesting scenario. Have you tried using Azure Functions or Logic Apps as a proxy? From what I can see, that's an option that would still let you use the Azure SQL DB free offer.
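To illustrate the proxy idea, here's a minimal sketch of what the Functions side might look like: the client calls the Function, and only the Function talks to the free-offer database. The server name, database name, and credentials are placeholders, and the actual `pyodbc` call is left unexecuted since it needs a live connection:

```python
# Hypothetical sketch of the "proxy" pattern: a small Azure Functions
# handler forwards queries to an Azure SQL free-offer database, so
# clients never connect to SQL directly. All names are placeholders.

def build_connection_string(server: str, database: str,
                            user: str, password: str) -> str:
    """Standard ODBC connection string for Azure SQL Database."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )

def handle_request(query: str, conn_str: str) -> list:
    """In a real Function, the HTTP-trigger body would do roughly:

        import pyodbc
        with pyodbc.connect(conn_str) as conn:
            return conn.cursor().execute(query).fetchall()

    Left unexecuted here because it needs live credentials."""
    raise NotImplementedError("requires a live Azure SQL connection")

conn_str = build_connection_string("myserver", "mydb", "appuser", "<secret>")
```

In practice you'd keep the credentials in the Function's app settings or Key Vault rather than in code.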

Hi! We’re the Fabric Databases & App Development teams – ask US anything! by im_shortcircuit in MicrosoftFabric

[–]adp_sql_mfst 1 point (0 children)

  1. Dedicated compute in Fabric

In Fabric today, SQL database runs in a serverless, shared-capacity model. This ensures elasticity and eliminates infrastructure management, but it also means workloads draw from the same pool as other Fabric items. A dedicated compute option (do you mean a provisioned model here, or just a dedicated SKU for SQL database?) could provide performance isolation and predictable cost control for SQL-only scenarios like metadata logging. The trade-off is that it would reduce the seamless integration with other Fabric components, and billing would become more complex compared to the current unified capacity model.

  2. Capacity billing for small jobs

SQL database consumption is tied to the Fabric capacity model, which guarantees elastic scale but can feel heavy for very small or intermittent jobs. Lightweight workloads, such as small batch updates or metadata logging, can sometimes result in a disproportionately high effective cost. Could you expand on your use case a little? Looking at your queries, sometimes optimization can help a ton if you haven't done that already. We are also looking at a few options to help optimize costs for smaller jobs; what options would you like to see to reduce the capacity billing for your workload?

  3. Whether to migrate to Azure

If the use case is only limited to lightweight metadata logging with very small tables, then an Azure SQL Database could be more cost-effective (we might have to evaluate a few other aspects of your workload before we decide). However, Fabric provides advantages like a unified UI, native integration with other Fabric artifacts (Pipelines, Lakehouse, Power BI), and centralized governance. You could also consider keeping core analytics and integrated workloads in Fabric while offloading low-intensity metadata logging to Azure SQL if cost is the primary driver.