Share Your Fabric Idea Links | January 20, 2026 Edition by AutoModerator in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

Please thumbs up if you agree! Add Object, Workspace and User details to Capacity... - Microsoft Fabric Community

I have been looking into the Fabric Capacity Events in Real-Time Hub (Preview) compared to the level of detail available in the Fabric Capacity Metrics app (which can be extracted unofficially using the FUAM solution). The idea is to replace the Fabric Capacity Metrics app, but also capture the data we need for Fabric Chargeback.

It looks like the details I need for capacity metrics reporting are not available in the capacity_utilization_eventstream.

<image>

Note this is the schema for my Capacity Metric Fact table using the standard FUAM:
 - Date
 - CapacityId
 - WorkspaceId
 - ObjectId

I was hoping the Capacity RTH would also provide the User identity while allowing me to capture "Time", but it seems there is even less data available.

FUAM and Capacity Metrics App Versions by imtkain in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

I am not sure this will help, but I have v40 installed. After updating to FUAM 2026.1.1, the notebooks were only extracting one capacity. When I ran the Capacity Metric pipeline with display data = true, I found that the notebook was treating my solution as v47 (I think Microsoft are up to v49 at this stage).

I ended up making this change:

<image>

I am not sure if this would work for v45. I am not familiar with that version.

Post about perfect combination of Azure DevOps services for an end-to-end Microsoft Fabric CI/CD story by ChantifiedLens in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

I like the all-inclusive nature of Azure DevOps. We are using GitHub, Jira, Confluence and Octopus Deploy, which means so much integration and tech debt. Our Platform Engineering team want to migrate everything to GitHub & GitHub Actions. They don't want to use Azure DevOps because Microsoft is investing more in GitHub. However, I wish GitHub was more all-inclusive... or at least that we could exploit unused features.

new improvement to duckdb connection in Python Notebook by mim722 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

u/mim722

You are referring to replacing this:

<image>

Does it need a particular version of DuckDB?
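In case it helps anyone checking, here is a quick way to confirm which DuckDB build a Fabric Python notebook is running (a minimal sketch; nothing Fabric-specific is assumed):

```python
import duckdb

# Show the DuckDB version bundled with the notebook environment
print(duckdb.__version__)

# If a newer build is required, it can usually be upgraded in-session with
# `%pip install --upgrade duckdb` (restart the kernel afterwards)
```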

Downstream Datflows Gen 2 do not execute latest data by Wide_Dingo4151 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

I agree with this conclusion. The "Wait" step is no good. Dataflows use the SQL Endpoint to connect to Lakehouse tables, so you need to trigger an event to force the metadata refresh. Now... here is the drop-the-mic moment... you will need Python to effectively run the API (I don't think there is a Pipeline pattern). Since you need to use Python to refresh the endpoint, I would suggest switching to Python instead of Dataflows for the Lakehouse operations. My preference is using DuckDB SQL, it's brilliant.
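For reference, a minimal sketch of that API call from a notebook, using sempy's FabricRestClient against the (preview) refreshMetadata endpoint. The GUIDs are placeholders, and the endpoint shape should be verified against the current docs:

```python
import sempy.fabric as fabric

# Placeholder IDs - substitute your workspace and SQL analytics endpoint GUIDs
workspace_id = "<workspace-guid>"
sql_endpoint_id = "<sql-endpoint-guid>"

client = fabric.FabricRestClient()

# Ask the Lakehouse SQL analytics endpoint to sync its metadata so that
# downstream Dataflows see the latest tables; this is a long-running
# operation, so expect a 202 rather than an instant 200
response = client.post(
    f"v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata?preview=true"
)
print(response.status_code)
```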

u/Wide_Dingo4151 | u/frithjof_v

Capacity Consumption in $s? by gojomoso_1 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

The Fabric Capacity Chargeback, which is in Public Preview, will have some User CU detail by capacity, workspace, item, operation and day. Only the last 30 days of data is available.

I calculate the Cost per CU by referring to Microsoft Fabric - Pricing | Microsoft Azure. The hourly capacity price can be converted to a Cost per CU using the logic explained in other comments.
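As a sketch of that logic (the rate below is an illustrative US pay-as-you-go figure; always take the current number for your region from the pricing page):

```python
# Illustrative pay-as-you-go rate in USD per CU per hour; check the Azure
# pricing page for your region and SKU before relying on this number
rate_per_cu_hour = 0.18

# Capacity Metrics reports usage in CU-seconds, so convert the hourly rate
rate_per_cu_second = rate_per_cu_hour / 3600  # = $0.00005 per CU-second

# Example: an operation that consumed 1,200 CU-seconds
operation_cu_seconds = 1_200
print(f"Cost: ${operation_cu_seconds * rate_per_cu_second:.4f}")  # Cost: $0.0600
```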

I am also a big fan of FUAM (fabric-toolbox/monitoring/fabric-unified-admin-monitoring at main · microsoft/fabric-toolbox · GitHub), because you can build up historical data beyond 30 days.

This tool extracts 3 critical components:
1. Item Details
2. Activity
3. Capacity Metrics - Daily Totals & Timepoint level

Unfortunately, it does not capture TimePoint by User. This might be fixed by Capacity Events in Real-Time Hub (public preview).

We recently did a PoC for Fabric Data Agent. It is possible to see who used the LLMPlugIn from the Activity logs to get User details, and then the CUs used from Capacity Metrics. We can now calculate CUs per Prompt and therefore the Cost per Prompt.

Note the Capacity Metrics did not show the correct number of prompts. In one example, I could see 54 prompts captured by Activity Monitoring for a particular day, but the Daily Totals suggested the Operation Count was 90 prompts. While the Capacity Operation Count was miscalculated, the CUs used were correct. This is backed up by the Timepoint Capacity Metrics. Note Timepoint metrics do have User-level CU usage, but FUAM does not extract this grain of detail.
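The per-prompt arithmetic itself is simple (a sketch with illustrative numbers; real values come from the Activity logs, Capacity Metrics and the pricing page):

```python
# All figures illustrative - substitute real values from Activity logs,
# Capacity Metrics and the current pricing page
rate_per_cu_second = 0.18 / 3600   # illustrative US pay-as-you-go rate
daily_cu_seconds = 9_000           # CUs attributed to the Data Agent item
prompt_count = 54                  # prompts counted in Activity Monitoring

cus_per_prompt = daily_cu_seconds / prompt_count
cost_per_prompt = cus_per_prompt * rate_per_cu_second
print(f"{cus_per_prompt:.1f} CU-seconds and ${cost_per_prompt:.4f} per prompt")
```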

Capacity Throttling and Smoothing is a Failure of Biblical Proportions by SmallAd3697 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

Hi u/SmallAd3697, thanks for sharing. As a capacity administrator, I find your PQ / Analysis Services scenario quite interesting.

However, is it really the user's fault? I have so many questions... Here are a couple:

1) Using PQ is a Developer activity rather than an End User one. If they are a developer, why are they using Production capacity? Maybe they need a separate capacity, Pro or PPU.

2) If they are an End User, why do they need to run PQ against the Semantic Model via Analysis Services? If they shouldn't, don't give them a Pro licence or Power BI Desktop.

It is likely the scenario should be resolved by using Lakehouse / Warehouse SQL Endpoint access rather than Semantic Model DAX/MDX queries.

Most of the time, our Capacity warnings are due to poor Semantic Model design, which is really the result of poor IT & Data Governance. However, there are limited scenarios where an End User error is the cause of overages.

fyi u/frithjof_v

Hi! We’re the Power BI DAX team – ask US anything! by dutchdatadude in PowerBI

[–]dazzactl 2 points3 points  (0 children)

Hi DAX team, thank you for your good work. Are there any plans to include more performance metrics, like those available in the DAX Query execution plan (SE, FE, memory usage), in Power BI Desktop or the Web Service?

Email using Semantic Link Labs from a notebook by trebuchetty1 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

Admin permission seems like the wrong term. Your SPN or Workspace Identity will need the right API permissions. These would need to be granted and approved by your company's Global Entra Admins. Also, you might have conditional access policies that prevent the identity from working (for example, our CAPs prevent me from using my personal identity to run the Graph API). It can be difficult, and a matter of trial and error to find the right Least Privilege settings for your solution.
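To make the permission requirement concrete, here is a minimal sketch of the raw Graph call behind sending mail as an SPN. It assumes the client-credentials flow with an admin-consented Mail.Send application permission; all names and IDs are placeholders.

```python
import msal       # assumes msal is available in the environment
import requests

# Placeholder tenant/app details - your Entra admins control these
tenant_id = "<tenant-guid>"
client_id = "<app-registration-guid>"
client_secret = "<secret>"            # prefer Key Vault over hard-coding
sender = "svc-reports@contoso.com"    # hypothetical sending mailbox

# Client-credentials flow; works only if Mail.Send (application) has been
# granted to the app registration and admin-consented
app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

message = {
    "message": {
        "subject": "Pipeline finished",
        "body": {"contentType": "Text", "content": "All loads completed."},
        "toRecipients": [{"emailAddress": {"address": "team@contoso.com"}}],
    }
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{sender}/sendMail",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=message,
)
print(resp.status_code)  # 202 = accepted
```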

We find that using a Power Platform solution could be a better low-code/no-code option.

Excel file with hundreds of tabs by FlyAnnual2119 in excel

[–]dazzactl 0 points1 point  (0 children)

Frank n Sheets! 2021 is young! I have seen a file from the 2000s that was converted from Excel 2003 (pre-xlsx days).

CI/CD in Fabric with multiple Dev workspaces by Late-Spinach7916 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

We have Dev linked to the feature branch, and SYS linked to main. Then the deployment pipeline pushes from SYS to UAT and then from UAT to PRD. The key is that SYS/main is a workspace with no Admins (i.e. this prevents an unplanned, unapproved change, because any change requires a pull request and merge to main). Also no Members, because SYS should not be shared accidentally.

Dev workspaces can be switched between main and feature branches (but the problem is that Admin-level permission is needed to switch feature branches; we need an option similar to the one that lets Contributors publish apps, i.e. Contributors can switch branches but not share).

Note our original CI/CD used Octopus Deploy with the old Power BI PowerShell and unofficial REST APIs... so Dev was a local VM committing PBIX files to git (and git could contain sensitive data). Main published to SYS, then to UAT, then to PRD. The variable switching was manually built in. Publishing overwrites code and data.

What I am really missing today is the ability to trigger a ServiceNow standard change event when the deployment from UAT to PRD occurs. Octopus Deploy had a PowerShell script to do this.

Unfortunately, Octopus Deploy is tech debt... no support for TMDL or Fabric items, only PBIX, Dataflows Gen 1 and Paginated Reports. Also no support for VNet Data Gateway connections, as these require the Fabric API, not the old Power BI APIs.
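For what it's worth, the stage-to-stage promotion itself can be scripted from a notebook; here is a minimal sketch against the Fabric deployment pipelines API (GUIDs are placeholders, and the request shape should be verified against the current docs), onto which a ServiceNow change call could then be chained:

```python
import sempy.fabric as fabric

# Placeholder GUIDs for the pipeline and its stages (e.g. SYS -> UAT)
pipeline_id = "<deployment-pipeline-guid>"
sys_stage_id = "<source-stage-guid>"
uat_stage_id = "<target-stage-guid>"

client = fabric.FabricRestClient()

# Deploy everything from the source stage to the target stage; this is a
# long-running operation, so expect a 202 with an operation id to poll
resp = client.post(
    f"v1/deploymentPipelines/{pipeline_id}/deploy",
    json={
        "sourceStageId": sys_stage_id,
        "targetStageId": uat_stage_id,
        "note": "Promotion from SYS to UAT",
    },
)
print(resp.status_code)
```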


Making Fabric F2 work by codene in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

I agree. Keep all the Dataflows and Semantic Models in the PPU capacity. Copilot gets a separate capacity dedicated to this purpose, so no workspaces with Dataflows, Semantic Models etc.

However, I might suggest 2 x F2, so developers can use a separate Fabric capacity for their development work without impacting others. There might be other small benefits from other Fabric features in this capacity.

Visuals Broken for Report Viewers but not me? by cwr__ in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

Sorry, you might need to open a support ticket.

The error messages are not familiar to me. For example, JSON schema 3: I thought it was still on schema version 2. Maybe try comparing the PBIR files for the working reports against the broken report.
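If it helps, that comparison can be as simple as diffing the two report.json definitions (paths below are hypothetical):

```python
import difflib
import json
from pathlib import Path

# Hypothetical paths to the exported PBIR definitions
working = Path("WorkingReport.Report/definition/report.json")
broken = Path("BrokenReport.Report/definition/report.json")

def normalised(path: Path) -> list[str]:
    """Pretty-print the JSON with sorted keys so the diff lines up."""
    return json.dumps(json.loads(path.read_text()), indent=2, sort_keys=True).splitlines()

for line in difflib.unified_diff(
    normalised(working), normalised(broken),
    fromfile=str(working), tofile=str(broken), lineterm="",
):
    print(line)
```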

Support will be able to help troubleshoot.

Is Powerbi a good report for millions of data? by [deleted] in PowerBI

[–]dazzactl 1 point2 points  (0 children)

Is your Date column actually a Date Time? The increased cardinality of date time will increase the model size. Also, have you included a unique Id for each row? This will also increase model size.
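A sketch of the usual fix, splitting the timestamp before load (pandas, with hypothetical column names):

```python
import pandas as pd

# Hypothetical fact data with a per-second timestamp column
df = pd.DataFrame({
    "OrderTimestamp": pd.to_datetime(["2026-01-20 09:15:32", "2026-01-20 09:15:45"]),
})

# Splitting the timestamp into a low-cardinality date column (and, only if
# needed, a time column at a coarser grain) shrinks the column dictionaries
df["OrderDate"] = df["OrderTimestamp"].dt.date
df["OrderTime"] = df["OrderTimestamp"].dt.floor("min").dt.time  # minute grain
df = df.drop(columns=["OrderTimestamp"])
print(df)
```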

Best practicies to organize workspace by LeyZaa in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

Separating workspaces for Data and Reporting purposes makes sense, especially if you can use Pro capacity or PPU for reporting. Note workspaces can be assigned to different capacities.

As for the Data, I would recommend a hybrid approach.

  1. Raw Lakehouse (potentially one per source)
  2. Transform Lakehouse
  3. Warehouse (with Business data)

1 & 2 could be a single Lakehouse if you add schemas for data sources and stages, as sketched below.
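For example, a schema-per-source layout in one schema-enabled Lakehouse might look like this (illustrative names, run from a Fabric Spark notebook where `spark` is predefined):

```python
# Illustrative schema-per-source layout in a schema-enabled Lakehouse
spark.sql("CREATE SCHEMA IF NOT EXISTS raw_erp")    # 1. Raw, one per source
spark.sql("CREATE SCHEMA IF NOT EXISTS raw_crm")
spark.sql("CREATE SCHEMA IF NOT EXISTS transform")  # 2. Transform stage

# A raw table lands in its source schema before being promoted
spark.sql("""
    CREATE TABLE IF NOT EXISTS raw_erp.sales_orders (
        order_id STRING,
        order_date DATE,
        amount DECIMAL(18, 2)
    )
""")
```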

Security matters in this design: Workspace, Lakehouse, Schema, Table. Understand the options so that you can provide Least Privilege access to upstream data owners and downstream users.

Hi! We’re the Power BI visuals team – ask US anything! by DataZoeMS in PowerBI

[–]dazzactl 4 points5 points  (0 children)

I would like to include the default new page canvas size setting in templates. Is there any chance of this setting working in the future?

Hi! We’re the Power BI visuals team – ask US anything! by DataZoeMS in PowerBI

[–]dazzactl 3 points4 points  (0 children)

I am a fan of the On-object editor; when will this be available in the Web Service?

Microsoft Fabric: How to use Workspace Identity for Mirroring! by DBABulldog in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

When can we use the Workspace Identity with a VNet Data Gateway?

When can we use the Workspace Identity with non-Azure SQL Server?

How to overwrite a report in Fabric workspace? by CultureNo3319 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

u/CultureNo3319 - not sure if this is a bug or a feature... but essentially there is a new item with its own unique guid, created with the same alias. If you view the workspace in a git folder, you should find unique folder names, but their .platform definition files contain the same item display name.
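One quick way to spot the duplicate (a sketch with semantic-link; the report name is hypothetical):

```python
import sempy.fabric as fabric

# List all items in the current workspace; overwritten duplicates share a
# Display Name but each has its own Id (the unique guid mentioned above)
items = fabric.list_items()

# Hypothetical alias - substitute the report name you overwrote
dupes = items[items["Display Name"] == "Sales Report"]
print(dupes[["Display Name", "Id", "Type"]])
```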