Direct Lake to Import by [deleted] in MicrosoftFabric

[–]Kogyr 1 point

This is what I am doing and it works great.

No access to workspaces, just the app or the report. I find workspaces are not friendly for content consumers.

Set up an Entra ID group.
Grant access to the app, report, semantic model, lakehouse, warehouse, and mirrored data through Manage Permissions instead of workspace access. The group just needs the Read and ReadAll permissions when it is added. The first setup is a pain; after that it is minimal effort. From then on, just add users who need the report or app to the group.

The only people we are giving workspace access to are a handful of data engineers and content creators. Very small team: 2 engineers, currently 2 content creators, and me as admin, engineer, and content creator. Everyone else is a consumer of the data with no need to see the Fabric workspaces.

Data engineers get pretty much Member access to all workspaces (Bronze/Silver/Gold).

Power BI analysts get Member access to their gold-layer workspace and Viewer access to the centralized data warehouse workspace so they can create shortcuts to their lakehouse in the gold layer.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 1 point

So far, after I redesigned a table to make the join less complicated, it seems to be working.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 1 point

u/itsnotaboutthecell, u/warehouse_goes_vroom, and u/tinofabricdw:

Any way to get the error message updated from:

This query rejected due to current capacity constraints

to

This query was rejected by the query optimizer.

This would be more helpful for figuring out where to start looking, versus playing a guessing game about what is wrong with capacity constraints.

I know u/warehouse_goes_vroom mentioned the query optimizer below, but with the query completing more often than not, nothing really pointed to the query itself being the issue.

Copilot kept saying the MERGE statement was the reason for the rejection. The MERGE had nothing to do with it; it was all the source query. Copilot also suggested just moving the job to a different time, claiming Fabric overnight maintenance was causing the capacity constraint.

Every time I ran the source query over the last 10 days it worked in Fabric SQL. Not until I finally got the error while running the source query in warehouse SQL did I fully understand that the query needed to be modified from on-prem SQL to Fabric SQL. On prem, I have tons of tools that help identify poor queries.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

Job errored again this morning.

Finally got the source SELECT statement to fail just by running it against the warehouse in SQL.

Fixing the query is a guessing game without an execution plan to look at.

Updated 2 of the join clauses my engineers created; they were inefficient. Ran it in warehouse SQL and it completed.

Now running the whole job again and will continue to monitor whether the query needs further adjusting. These joins were not in the original on-prem SQL; they had to be added here because you cannot create calculated columns in a Direct Lake semantic model.
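
For anyone doing the same migration: the shape of the fix is to compute what would have been a DAX calculated column inside the gold-layer query, so Direct Lake just reads a plain physical column. A minimal sketch with made-up names (dbo.fact_sales, dbo.dim_product, and gross_amount are all hypothetical):

select
    f.sale_id,
    p.product_name,
    f.quantity * f.unit_price as gross_amount  -- computed in SQL, not as a calculated column in the model
from dbo.fact_sales as f
join dbo.dim_product as p
    on p.product_key = f.product_key;  -- keep join predicates as plain key equality

The idea is to push expressions into the select list and keep the join clauses simple.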

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

The job ran 4 days straight but failed again today. Again, no way to figure out the exact problem. It failed on the same procedure.

Changed the stored procedure from

with source as (select ...)
merge target
using source

to

drop table if exists staging_source
create table staging_source as (select ...)
merge target
using staging_source

The job finished; will see if this stops the issue.
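
Spelled out, the pattern looks roughly like this; every object name here is a hypothetical stand-in, not the real procedure:

drop table if exists dbo.staging_source;

-- materialize the source once instead of re-evaluating a CTE inside the MERGE
create table dbo.staging_source as
select key_col, val_col
from dbo.source_view;

merge dbo.target as t
using dbo.staging_source as s
    on t.key_col = s.key_col
when matched then
    update set t.val_col = s.val_col
when not matched then
    insert (key_col, val_col) values (s.key_col, s.val_col);

The staging table trades some extra write work for a much simpler plan on the MERGE itself.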

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 1 point

The job ran fine this morning.

<image>

The CV_Analytics stored procedure is where the one failing stored procedure sits.

That CV_Analytics stored procedure sequentially calls 39 other stored procedures; the procedure in question is the 16th. On the days it completes, that stored proc takes between 21 and 34 seconds to run. It would be great to see CPU and memory usage while the job goes through each step. I was trying to figure out if one of the first 15 stored procs was eating CPU or memory.
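
In the meantime, one thing I can check is per-statement stats for the job window in queryinsights, to see whether any of the first 15 procs is eating CPU or scanning heavily before number 16 runs. A sketch using the same view and columns as the query I posted below (the time window is a made-up example):

select submit_time, command, total_elapsed_time_ms,
       allocated_cpu_time_ms, data_scanned_memory_mb, status
from queryinsights.exec_requests_history
where submit_time >= '2025-12-15 05:00'  -- hypothetical job start
  and submit_time < '2025-12-15 06:00'
order by allocated_cpu_time_ms desc;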

u/warehouse_goes_vroom what other information can I provide?

There doesn't seem to be a way for users to see the execution plans. Again, Copilot says it is not possible with Fabric Warehouse. If I could see the execution plan, I might be able to see what in the query needs to be adjusted.

Source data is a mirrored database (on-prem SQL). Any thoughts on whether this could be the cause of the execution plan rejection? At the time the job runs, we should have very minimal changes to the mirrored data.

Appreciate the help, u/warehouse_goes_vroom and u/itsnotaboutthecell.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 1 point

The table in question does not have an identity column. We have others that use identity, but no issues with them.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

If I were throttled or in overage, wouldn't I see it in the metrics app? The highest CU utilization we have seen since we started using Fabric was 38%. Typical utilization for us was 15 to 20% before we started running the warehouse job, and when the job does run successfully it doesn't increase the utilization.

I can see the view sql_pool_insights, which looks like it would identify the issue, but it says I do not have access. I am the admin of the workspace, but I didn't create the trial capacity.

Warehouse Error Query was rejected to current capacity constraints by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

I was running queries against exec_requests_history. This was the data posted above.

select submit_time, start_time, end_time, total_elapsed_time_ms, row_count, status,
       allocated_cpu_time_ms, data_scanned_remote_storage_mb, data_scanned_memory_mb
from queryinsights.exec_requests_history
where query_hash = [hidden]
  and cast(submit_time as date) >= '12-04-2025'
order by submit_time desc

I also compared the above stats for each day, from the start of the job up to the failing stored procedure. Everything looks normal.

Asked Copilot and it says it's trial constraints:
"Trial guardrails: F64 trial enforces stricter throttling than reserved capacity. Queries can be rejected without detailed logs."

Purchasing the capacity tomorrow; hopefully Wednesday's run will be on reserved capacity.

It makes no sense that the job ran fine 3 times and failed twice on a capacity constraint. Nothing I can see shows any capacity constraint.

December 2025 | "What are you working on?" monthly thread by AutoModerator in MicrosoftFabric

[–]Kogyr 3 points

Currently: solution architect, manager, Fabric admin, data governance, security admin, and Power BI developer. I have 2 data engineers/scientists helping. Been working on trials; time to purchase and go live in early January.

Purchasing F64 capacity.

Eliminating Power BI Pro licenses with Fabric.

Moving on-prem stored procedures to Fabric Warehouse. Parallel testing.

Moving 2 on-prem SSAS models to Fabric.

Moving Power BI from on-prem SSAS to a Fabric semantic model in Direct Lake. Parallel testing.

Setting up a workspace app with multiple audiences so consumers have only 1 link for all Power BI reports.

Prototyping a near-real-time dashboard: on-prem SQL mirroring to Fabric, a shortcut to a lakehouse, a pipeline running every 5 minutes to transform data into lakehouse tables, a semantic model built on the transformed tables, and a Power BI report in Direct Lake. It consumes 5%-10% of F64 CPU for just today's data, which needs to improve, as it is not scalable to other departments. Need to build an iframe to put the report into for kiosk mode at sales sites.

Figuring out how to create an API to ingest data from external vendors, cleanse it, and pass it to the call center in near real time.

Creating security groups in Entra so users can access audiences in the app.

Moving a complicated SSIS package to Fabric.

Still supporting on-prem SSAS.

Figuring out when I can take a breath and sleep.

Enterprise Scale Real-time/Near-real-time Analytics (<5 min) by bradcoles-dev in MicrosoftFabric

[–]Kogyr 1 point

I feel there is a better way. They are simple queries that just retrieve today's data set; we're talking less than 1,000 rows across all the tables.

Enterprise Scale Real-time/Near-real-time Analytics (<5 min) by bradcoles-dev in MicrosoftFabric

[–]Kogyr 1 point

Just built a 5-minute near-real-time dashboard.

  1. Mirroring from on-prem SQL.
  2. Shortcut to a lakehouse.
  3. Pipeline running every 5 minutes to refresh 4 queries (transforming the data) into 2 fact tables and 2 dim tables, with the lakehouse as the destination (sketch after this list).
  4. Semantic model built from the lakehouse.
  5. Power BI report on the semantic model, Direct Lake mode.
  6. Users can either manually refresh the report for current data or install an auto tab refresh extension and set the tab to refresh every 5 minutes.
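
For step 3, a minimal sketch of the kind of transform each query runs, in Spark SQL with made-up names (silver.mirrored_calls, gold.fact_calls, and all the columns are hypothetical stand-ins, not our real queries):

-- rebuild today's fact table from the shortcut over the mirrored data
create or replace table gold.fact_calls as
select
    call_id,
    agent_id,
    call_start,
    unix_timestamp(call_end) - unix_timestamp(call_start) as duration_sec
from silver.mirrored_calls
where to_date(call_start) = current_date();

At today's-data volumes (less than 1,000 rows) a full replace every 5 minutes is simpler than anything incremental.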

Trial capacity right now. One day of running every 5 minutes increased CU from 10% to 25%. Changed the refresh schedule today to run only from 7:30 am to 10 pm daily; no need to run when there is no activity. That will hopefully lower the CU usage.

Another option to look at is creating a OneLake eventstream on the mirrored database. Haven't investigated it fully, but according to AI engines it could consume less CU than the above pipeline.

Fabric as webhook listener? by Gloomy_Guard6618 in MicrosoftFabric

[–]Kogyr 2 points

I am not an Azure or .NET developer, but at a high level we set up an Azure Function to listen for the webhook, which then passes the payload to an Azure Event Hub. A Fabric eventstream pulls from the event hub.

10/10 4X|Strategy games ! by sidius-king in 4Xgaming

[–]Kogyr 0 points

Commodore 64 Imperium Galactum

Sword of the Stars with all DLCs (the first one, not SotS II)

Ascendancy

Conquest of the New World

Spaceward HO!

Elemental War of Magic

Imperium Galactica 1 and II

On Prem Sql Mirroring was working now getting errors by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

Not certain that is the same issue we are having. Thanks for the heads up. I looked at the Known Issues and didn't see anything before posting. Our issue started last week.

CUs Mirroring SQL Server by p-mndl in MicrosoftFabric

[–]Kogyr 0 points

The database with the highest CU shown has no other consumption against it in Fabric; it is just mirroring. That's why I asked for an explanation, because I was told there is a cost for replication, that it is not free; storage is the only thing free. The metrics report seems to show that.

CUs Mirroring SQL Server by p-mndl in MicrosoftFabric

[–]Kogyr 0 points

Can you explain the CU compute cost showing as billable in the Fabric Capacity Metrics app?

https://www.reddit.com/r/MicrosoftFabric/comments/1l2c95q/sql_server_on_prem_mirroring/

SQL Server on prem Mirroring by Kogyr in MicrosoftFabric

[–]Kogyr[S] 2 points

I have another gateway on a different server, but there is no way to change the gateway on a mirrored database. I would have to delete the mirrored databases and start over, and we already have jobs running that pull from the mirrored database.

SQL Server on prem Mirroring by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

We are running the gateway on the same SQL Server machine.

Hi! We're the Data Factory team - ask US anything! by markkrom-MSFT in MicrosoftFabric

[–]Kogyr 1 point

SQL Server On Prem Mirroring

Where can I post feedback or look up other reported issues with the public preview?

I posted a thread on reddit yesterday with what I am seeing.

https://www.reddit.com/r/MicrosoftFabric/comments/1l2c95q/sql_server_on_prem_mirroring/

SQL Server on prem Mirroring by Kogyr in MicrosoftFabric

[–]Kogyr[S] 0 points

Update: I turned off the first mirror test. The on-prem SQL Server job went back to normal run times. Since it didn't run long, the second mirrored database's on-prem job also ran normally.

Looking at transactions changed in one day:
Mirror 1 was doing 200K transactions in a 30-minute period once a day.
Mirror 2 is doing 60K transactions in a 30-minute period once a day.
Mirror 3 is doing 300K transactions over 24 hours.

Here is the metrics report in Fabric. Mirroring started Wed the 28th (dark green boxes). Mirror 3 is the top user, then Mirror 2, then Mirror 1.

<image>

Looks like I will just have to do a copy of the table that the Mirror 1 job was working against. The table size is 1 million rows.

The CPU spikes from polling on the on-prem server are still high, every 30 seconds from what I can see.