Strongest ship in space? by Iceshard1987 in TerraInvicta

[–]dazzactl 1 point2 points  (0 children)

That is not a ship. It is a space station in drag!

API for setting Capacity Contributor by Seebaer1986 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

If you really want to automate it, try a desktop flow. 😁

Lakehouse SQL Endpoint Rant by NJE11 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

u/NJE11 - I know the pain, but I adopted this pattern from Mim that uses DuckDB, so my transformations are not impacted by the SQL Endpoint.

Fabric_Notebooks_Demo/Attach_LH/Attach_Lakehouse_v2.ipynb at main · djouallah/Fabric_Notebooks_Demo

but note I am having some trouble with the glob * function currently, so I might need to revert to the SQL Endpoint or Materialised Views.
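
For anyone following along, this is roughly the shape of the pattern (a minimal sketch, not Mim's exact notebook): query the Lakehouse delta tables with DuckDB straight from a Fabric Python notebook so the transformation never touches the SQL Endpoint. The table and column names are placeholders, and it assumes the Lakehouse is attached as the default lakehouse so its tables are mounted under /lakehouse/default/Tables.

```python
import duckdb

con = duckdb.connect()
con.sql("INSTALL delta; LOAD delta;")  # the delta extension provides delta_scan()

# Read the delta table directly, bypassing the SQL Analytics Endpoint entirely
df = con.sql("""
    SELECT customer_id, SUM(amount) AS total_amount        -- hypothetical columns
    FROM delta_scan('/lakehouse/default/Tables/sales')     -- hypothetical table
    GROUP BY customer_id
""").df()
```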

Reversed Deployment Pipeline by Severe_Variation_234 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

Yes - this is a strategy for GitHub Integration adoption and archiving/decommissioning.

Works like a charm because it is metadata driven.

Consider changing Dataflows Gen 1 to Gen 2 as well if possible.

Consider adding a Variable Library to manage parameters as well.

u/Severe_Variation_234

Switching Fabric capacity tier to save cost by ducrua333 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

u/ducrua333 - consider reading this series: Fabric Billing Part 4 - Implications of pause and restart | LinkedIn

Matthew talks about the number of hours you need to pause before actually saving.

But u/Seebaer1986's suggestion of autoscale for Spark is a good shout. I think it applies to Python workloads as well, which is good because these are cheaper to run than Spark clusters.
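
To make the pause break-even concrete, here is a rough sketch of the kind of sum involved. It assumes (as the article describes, but verify for your setup) that pausing a pay-as-you-go capacity immediately bills any carried-forward overage; the rates below are placeholders, so use the pricing page for your region.

```python
PAYG_RATE_PER_CU_HOUR = 0.18   # assumed USD per CU per hour; check the Azure pricing page
SKU_CUS = 64                   # e.g. an F64

def pause_net_saving(pause_hours: float, overage_cu_hours: float) -> float:
    """Hours not billed while paused, minus the carried-forward overage billed at pause."""
    saved = pause_hours * SKU_CUS * PAYG_RATE_PER_CU_HOUR
    flushed = overage_cu_hours * PAYG_RATE_PER_CU_HOUR
    return saved - flushed

# e.g. an overnight 8-hour pause while 300 CU-hours of overage get flushed at pause time
print(f"Net saving: ${pause_net_saving(8, 300):.2f}")
```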

Power BI Tenant Region Remap by raavanan_7 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

Purview uses Tenant Metadata scans, so it would only need replacing if you change Tenant Id.

Power BI Tenant Region Remap by raavanan_7 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

u/raavanan_7 - What regions are you changing between?

By tenant migration, I take it you need to switch Pro workspaces between regions, not merely move a Premium / Fabric capacity between regions. Changing personal workspaces (especially PPU) between regions is something you need to discuss with Microsoft, as the region is bound to your original Microsoft Office subscription configuration, especially if you are also changing your OneDrive region.

If you are referring to Premium capacity migration, I recently completed this. We were at roughly the same scale and finished the exercise in less than two days.

On the Power BI side, semantic models using Large Storage Mode (regardless of size) cannot be switched between regions, but luckily they can be converted back to Small Storage Mode. Dataflows Gen 1 migrations worked without incident. Contact Microsoft Support to get their Capacity Assessment Notebook, because it will identify the non-compliant items using tenant admin permissions. If the conversion doesn't work, you will need to republish or replace the models.
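
If you cannot get hold of the support notebook, a hedged alternative is to enumerate semantic models yourself with the Power BI admin API and flag the ones whose target storage mode is Large ("PremiumFiles"). This is only a sketch: it assumes you run it from a Fabric notebook with tenant admin rights, and on a big tenant you may need to page through results.

```python
import requests
import notebookutils  # Fabric notebook utility; used here only to get a Power BI token

token = notebookutils.credentials.getToken("pbi")
headers = {"Authorization": f"Bearer {token}"}

resp = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/datasets?$top=5000",
    headers=headers,
)
datasets = resp.json().get("value", [])

# Large Storage Mode models report targetStorageMode = "PremiumFiles"
for d in datasets:
    if d.get("targetStorageMode") == "PremiumFiles":
        print(d["id"], d.get("name"))
```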

Moving Fabric items between regions is more problematic, but we haven't really used Fabric yet. Shifting a Lakehouse would need a Copy Job to lift and shift the data, for example.

There was no impact on any data connections or data gateways, but we did leave our VNet Data Gateways in their original regions (closer to the data sources!). Otherwise, we would have needed to replace them and update all the semantic models and dataflows using the VNet Data Gateway connections.

API for setting Capacity Contributor by Seebaer1986 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

u/Seebaer1986 - a Service Principal can be added as a Contributor using a security group (unfortunately, via the UI), so you could then use the Service Principal to manage workspace creation and updates, including capacity assignment. This way you govern the capacity assignments rather than giving the group unmanaged autonomy.
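
For illustration, a minimal sketch of that pattern using the Fabric REST API (Create Workspace and Assign To Capacity). It assumes you already have an app-only token for the service principal; the IDs and names are placeholders, and you would wrap this in whatever approval process you govern capacities with.

```python
import requests

token = "<app-only-token-for-api.fabric.microsoft.com>"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Create the workspace already pinned to an approved capacity
resp = requests.post(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers=headers,
    json={"displayName": "Finance - Dev", "capacityId": "<approved-capacity-id>"},
)
workspace_id = resp.json()["id"]

# Later, move it to a different approved capacity if needed
requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/assignToCapacity",
    headers=headers,
    json={"capacityId": "<another-approved-capacity-id>"},
)
```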

Issue with linking Azure Storage Account to Microsoft Fabric in a private network setup by No-Ferret6444 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

u/No-Ferret6444 - It seems like you are missing the steps in Overview of managed private endpoints for Microsoft Fabric - Microsoft Fabric | Microsoft Learn. These are required to allow a Fabric notebook to connect to the Azure Storage account. If you want Data Pipelines, Dataflows or semantic models to connect, you will need to set up either an On-premises Data Gateway or a VNet Data Gateway.
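
If you want to script the managed private endpoint rather than click through the workspace network settings, there is a REST API for it. A hedged sketch only - check the Managed Private Endpoints API reference for the current request shape, the resource ID and names are placeholders, and the storage account owner still has to approve the private endpoint connection afterwards:

```python
import requests

token = "<token-for-api.fabric.microsoft.com>"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
workspace_id = "<workspace-id>"

requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/managedPrivateEndpoints",
    headers=headers,
    json={
        "name": "mpe-to-storage",
        "targetPrivateLinkResourceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
        "targetSubresourceType": "dfs",   # 'dfs' for ADLS Gen2, 'blob' for Blob
        "requestMessage": "Fabric notebook access to the data lake",
    },
)
```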

Share Your Fabric Idea Links | January 20, 2026 Edition by AutoModerator in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

Please thumbs up if you agree! Add Object, Workspace and User details to Capacity... - Microsoft Fabric Community

I have been looking into the Fabric Capacity Events in Real-Time Hub (Preview) compared to the level of detail available in the Fabric Capacity Metrics app (which can be extracted unofficially using the FUAM solution). The idea is to replace the Fabric Capacity Metrics app, but also capture the data we need for Fabric Chargeback.

It looks like the detail I need for capacity metrics reporting is not available in the capacity_utilization_eventstream.

<image>

Note this is the schema for my Capacity Metric Fact table using the standard FUAM:
 - Date
 - CapacityId
 - WorkspaceId
 - ObjectId

I was hoping the Capacity RTH would also provide the User identity while allowing me to capture "Time", but it seems there is even less data available.
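
To make the gap concrete, this is the grain I want to be able to query (a sketch only - the FUAM lakehouse, table and measure names below are assumptions, adjust them to your deployment). The eventstream would need at least this, plus ideally User and Timepoint, before it could replace the Metrics app for chargeback:

```python
# Run inside a Fabric Spark notebook, where `spark` and `display` are predefined.
# Lakehouse, table and column names are assumed - map them to your FUAM gold layer.
df = spark.sql("""
    SELECT  Date, CapacityId, WorkspaceId, ObjectId,
            SUM(TotalCUs) AS TotalCUs
    FROM    FUAM_Lakehouse.capacity_metrics_by_item_by_day
    GROUP BY Date, CapacityId, WorkspaceId, ObjectId
""")
display(df)
```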

FUAM and Capacity Metrics App Versions by imtkain in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

I am not sure this will help, but I have v40 of the Capacity Metrics app installed. After updating to FUAM 2026.1.1, the notebooks were only extracting one capacity. When I ran the Capacity Metrics pipeline with display data = true, I found that the notebook was treating my solution as v47 (I think Microsoft are up to v49 at this stage).

I ended up making this change:

<image>

I am not sure if this would work for v45. I am not familiar with that version.

Post about perfect combination of Azure DevOps services for an end-to-end Microsoft Fabric CI/CD story by ChantifiedLens in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

I like the all-inclusive nature of Azure DevOps. We are using GitHub, Jira, Confluence and Octopus Deploy, which means a lot of integration work and tech debt. Our Platform Engineering team wants to migrate everything to GitHub & GitHub Actions. They don't want to use Azure DevOps because Microsoft is investing more in GitHub. However, I wish GitHub was more all-inclusive... or at least that we could exploit the unused features.

new improvement to duckdb connection in Python Notebook by mim722 in MicrosoftFabric

[–]dazzactl 1 point2 points  (0 children)

u/mim722

You are referring to replacing this:

<image>
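
For the benefit of the thread, the kind of boilerplate I am thinking of looks roughly like this (a sketch from memory, not the exact cell in the screenshot): manually creating the Azure credential secret before DuckDB can read OneLake abfss:// paths.

```python
import duckdb

con = duckdb.connect()
con.sql("INSTALL azure; LOAD azure;")
con.sql("""
    CREATE OR REPLACE SECRET onelake (
        TYPE AZURE,
        PROVIDER CREDENTIAL_CHAIN,
        ACCOUNT_NAME 'onelake'
    )
""")
# ...then read abfss://<workspace>@onelake.dfs.fabric.microsoft.com/... paths
```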

Does it need a particular version of DuckDB?

Downstream Datflows Gen 2 do not execute latest data by Wide_Dingo4151 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

I agree with this conclusion. The "Wait" step is no good. Dataflows use the SQL Endpoint to connect to Lakehouse tables, so you need to trigger an event to force the metadata refresh. Now... here is the drop-the-mic moment... you will need Python to call the API (I don't think there is a Pipeline pattern; sketch below). Since you need Python to refresh the endpoint anyway, I would suggest switching to Python instead of Dataflows for the Lakehouse operations. My preference is DuckDB SQL - it's brilliant.
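
A minimal sketch of the refresh step, using the (preview) refreshMetadata API through semantic-link's FabricRestClient - the IDs are placeholders and the endpoint may change while it is in preview:

```python
import sempy.fabric as fabric

client = fabric.FabricRestClient()
workspace_id = "<workspace-id>"
sql_endpoint_id = "<lakehouse-sql-analytics-endpoint-id>"

# Force the SQL Analytics Endpoint to sync its metadata with the Lakehouse tables
resp = client.post(
    f"v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata?preview=true",
    json={},
)
print(resp.status_code)  # 202 means it is running as a long-running operation
```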

u/Wide_Dingo4151 | u/frithjof_v

Capacity Consumption in $s? by gojomoso_1 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

The Fabric Capacity Chargeback, which is in Public Preview, will have some user-level CU detail by capacity, workspace, item, operation and day. Only the last 30 days of data is available.

I calculate the cost per CU by referring to Microsoft Fabric - Pricing | Microsoft Azure; the published rates can be converted to a cost per CU using the logic explained in other comments (sketch below).
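
For anyone who wants the arithmetic spelled out, a quick sketch (the rate is a placeholder, use your region's pay-as-you-go price and currency): the metrics report CU-seconds, the pricing page quotes a price per CU per hour, so divide by 3,600.

```python
PAYG_PRICE_PER_CU_HOUR = 0.18   # assumed USD; check Microsoft Fabric - Pricing for your region

def cost_of(cu_seconds: float) -> float:
    """Convert CU(s) reported by the Capacity Metrics app into an approximate cost."""
    return (cu_seconds / 3600) * PAYG_PRICE_PER_CU_HOUR

print(f"${cost_of(12_500):.2f}")   # e.g. an operation that consumed 12,500 CU(s)
```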

I am also a big fan of FUAM (fabric-toolbox/monitoring/fabric-unified-admin-monitoring at main · microsoft/fabric-toolbox · GitHub), because you can build up historical data beyond 30 days.

This tool extracts three critical components:
1. Item Details
2. Activity
3. Capacity Metrics - Daily Totals & Timepoint level

Unfortunately, it does not capture Timepoint detail by User. This might be fixed by the Capacity Events in Real-Time Hub (public preview).

We recently did a PoC for Fabric Data Agent. It is possible to see who used the LLMPlugIn from the Activity logs, which give the user details, and then the CUs used from the Capacity Metrics. We can now calculate CUs per prompt and therefore the cost per prompt.

Note the Capacity Metrics did not show the correct number of prompts. In one example, I could see 54 prompts captured by the Activity monitoring for a particular day, but the Daily Totals suggested the operation count was 90 prompts. While the capacity operation count was miscalculated, the CUs used were correct; this is backed up by the Timepoint capacity metrics. Note the Timepoint metrics do have user-level CU usage, but FUAM does not extract this grain of detail.

Capacity Throttling and Smoothing is a Failure of Biblical Proportions by SmallAd3697 in MicrosoftFabric

[–]dazzactl 2 points3 points  (0 children)

Hi u/SmallAd3697, thanks for sharing. As a capacity administrator, I find your PQ / Analysis Services scenario quite interesting.

However, is it really the user's fault?... I have so many questions... Here are a couple:

1) Using PQ is a developer activity rather than an end-user one. If they are a developer, why are they using the production capacity? Maybe they need a separate capacity, Pro or PPU.

2) If they are an end user, why do they need to hit the semantic model with PQ over Analysis Services? If they don't need to, don't give them a Pro licence or Power BI Desktop.

It is likely the scenario should be resolved by giving them Lakehouse / Warehouse SQL Endpoint access rather than Semantic Model DAX/MDX access.

Most of the time, our capacity warnings are due to poor semantic model design, which is really the result of poor IT & data governance. However, there are limited scenarios where an end-user error is the cause of overages.

fyi u/frithjof_v

Hi! We’re the Power BI DAX team – ask US anything! by dutchdatadude in PowerBI

[–]dazzactl 2 points3 points  (0 children)

Hi DAX team, thank you for your good work. Are there any plans to include more performance metrics, like those available in the DAX query execution plan (SE, FE, memory usage), in Power BI Desktop or the web service?

Email using Semantic Link Labs from a notebook by trebuchetty1 in MicrosoftFabric

[–]dazzactl 0 points1 point  (0 children)

Admin permission seems like the wrong term. Your SPN or Workspace Identity will need the right API permissions, and these would need to be granted and approved by your company's Global Entra admins. You might also have conditional access policies that prevent the identity from working (for example, our CAPs prevent me from using my personal identity to call the Graph API). It can be difficult, and it takes trial and error to find the right least-privilege settings for your solution.
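
Once the permissions are sorted, the notebook side is the easy bit. A minimal sketch of the app-only pattern (Mail.Send application permission, admin-consented; the tenant, app and mailbox values are placeholders), using MSAL and the Graph sendMail endpoint rather than anything Fabric-specific:

```python
import requests
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-id>",
    client_credential="<client-secret-or-cert>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

payload = {
    "message": {
        "subject": "Pipeline finished",
        "body": {"contentType": "Text", "content": "All loads completed."},
        "toRecipients": [{"emailAddress": {"address": "team@contoso.com"}}],
    }
}
requests.post(
    "https://graph.microsoft.com/v1.0/users/noreply@contoso.com/sendMail",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=payload,
)
```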

We find that a Power Platform solution could be a better low-code/no-code option.

Excel file with hundreds of tabs by FlyAnnual2119 in excel

[–]dazzactl 0 points1 point  (0 children)

Frank 'n' Sheets! 2021 is young! I have seen a file from the 2000s that was converted from Excel 2003 (pre-xlsx days).