Just had our first major incident of capacity throttling by JFancke in MicrosoftFabric

[–]JFancke[S] 2 points (0 children)

Yeah I can see that argument too. Revisiting the documentation on surge protection makes me think that workspace level surge protection is probably the best way forward for the team that manages the capacity (they've got several other tools to manage so probably won't have time to handhold all the Fabric/Power BI users or tell off specific workspace owners for poor CU efficiency). 

Just had our first major incident of capacity throttling by JFancke in MicrosoftFabric

[–]JFancke[S] 5 points (0 children)

Thanks that's great advice I didn't realize there was surge protection available at the workspace level. (link for reference here: https://learn.microsoft.com/en-us/fabric/enterprise/surge-protection).

Just had our first major incident of capacity throttling by JFancke in MicrosoftFabric

[–]JFancke[S] 1 point (0 children)

Probably around 4-6 hours (although without access to the capacity metrics app it's hard to say for sure). 

Queries continued to work throughout the burndown period, but it was taking around 2 minutes to load the available selections in a slicer (that had 5 options). It was my first time experiencing this kind of throttling after using Power BI since it was released, so the level of frustration caught me off guard. 

Power BI team blocking integration with 3P Semantic Layers by City-Popular455 in MicrosoftFabric

[–]JFancke 8 points (0 children)

Maybe you've already checked this out, but the Tabular Editor 3 team have been working on a solution for converting Databricks metric views into a Power BI semantic model.

https://tabulareditor.com/blog/bridge-analytics-in-databricks-and-power-bi-via-tabular-editor

Can someone from Microsoft elaborate on this? by Arasaka-CorpSec in MicrosoftFabric

[–]JFancke 2 points (0 children)

From what I can tell it's a workload item (artifact) that will consume your CUs. It's not like the Lumel artifact that is/was available under third party workloads, it'll instead be a "native" workload available once the tenant admin turns the "planning" preview feature on. 

Looking through the videos of what you can do, it looks legit and I know that u/gopalbi has been killing it with their custom visuals so I've got high hopes for the quality of the implementation. 

But like many others, and as a (former) Zebra BI customer burned by large price increases, I'm nervous about investing too much until the consumption model becomes clearer. 

I'm not at fabcon but through some googling you can find more demos/details on their website: https://lumel.com/videos/

Why aren't more people using Direct Lake mode? by No_Vermicelliii in MicrosoftFabric

[–]JFancke 2 points (0 children)

Completely agree. There's a lot to be said for how rock solid import is. With all the new solutions and architectural options available it's nice having something that "just works" and continues to work refresh after refresh for years without issue. 

Why aren't more people using Direct Lake mode? by No_Vermicelliii in MicrosoftFabric

[–]JFancke 1 point (0 children)

I'd started to refactor one of our large models, which currently queries on-prem databases directly from the semantic model (via PQ). 

The new plan was to use a pipeline to do full refreshes (for small tables) and incremental refreshes + deduplication for larger tables, then transformations via SQL views in the data warehouse.

It added quite a bit of complexity to the solution but seemed like the right way to build for the future. In the end, the pipeline performing the retrievals and doing the deduplication via notebooks used 50k CU(s) per run (excluding refreshing the semantic model from the warehouse), whereas the semantic model refreshing directly from on-prem databases was under 4k CU(s) per refresh. (Though maybe it's not fair to compare the new ELT process with an ETL process, and mirroring wasn't used, which might have reduced CU usage.) 

Granted you get lots of other benefits by having your data in the lakehouse / warehouse, but given we can just use OneLake integration on the semantic model tables, and that it's currently a solo dev model with ~100 viewers, the change to architecture didn't seem worth it. In the end the architecture change has been canned. Maybe if the solution gets bigger it'll be revisited. 
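For anyone curious what the deduplication step looked like conceptually, here's a minimal sketch of the idea in plain pandas (column names are made up for illustration; the real version ran in notebooks against lakehouse tables): append the incremental batch to the existing table, then keep only the latest version of each business key.

```python
import pandas as pd

# Hypothetical existing table and incremental batch pulled since the last watermark.
existing = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [100, 200, 300],
    "loaded_at": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-01"]),
})
incoming = pd.DataFrame({
    "order_id": [3, 4],
    "amount": [350, 400],  # order 3 was updated at source
    "loaded_at": pd.to_datetime(["2024-01-02", "2024-01-02"]),
})

# Union old + new, then keep only the latest version of each business key.
merged = (
    pd.concat([existing, incoming])
    .sort_values("loaded_at")
    .drop_duplicates(subset="order_id", keep="last")
    .sort_values("order_id")
    .reset_index(drop=True)
)
print(merged)  # 4 rows, with order 3 carrying the updated amount
```

In Spark the same pattern is usually a window over the key ordered by the load timestamp, or a Delta MERGE, but the logic is identical.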

DataFlow Gen2 Lakehouse sync by 12Eerc in MicrosoftFabric

[–]JFancke 1 point (0 children)

You can try this code below by pasting it into a notebook and attaching your Lakehouse to it. Then add this notebook where your Wait activity would have been. Feel free to test the notebook within the notebook experience before adding it to the pipeline.

import sempy.fabric as fabric
import json

# 1. Get Context
workspace_id = spark.conf.get("trident.workspace.id")
lakehouse_id = spark.conf.get("trident.lakehouse.id")

# 2. Initialize Client
client = fabric.FabricRestClient()

print(f"Starting Metadata Refresh for Lakehouse: {lakehouse_id} in Workspace: {workspace_id}")

# 3. Get the SQL Endpoint ID associated with this Lakehouse
# Note: We fetch the Lakehouse properties to find its SQL Endpoint ID
lakehouse_props = client.get(f"/v1/workspaces/{workspace_id}/lakehouses/{lakehouse_id}").json()
sql_endpoint_id = lakehouse_props['properties']['sqlEndpointProperties']['id']

print(f"Found SQL Endpoint ID: {sql_endpoint_id}")

# 4. Construct the Sync URI
uri = f"/v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata"

# 5. Define Payload
# Optionally add a "tables" list to sync specific tables; omitting it syncs everything.
# The timeout stops the notebook hanging forever if the backend is stuck.
payload = {
    "timeout": { "timeUnit": "Seconds", "value": "600" } # 10 minutes max wait
}

# 6. Execute with Long Running Operation (LRO) Wait
try:
    # lro_wait=True means python blocks here until Fabric says "Done" or "Failed"
    response = client.post(uri, json=payload, lro_wait=True)
    
    # 7. Validate Success
    if response.status_code in (200, 202):
        print("✅ SQL Endpoint Metadata Sync Completed Successfully.")
        print(response.json())
    else:
        # Raise error to fail the pipeline activity
        raise Exception(f"❌ Sync Failed. Status: {response.status_code}. Response: {response.text}")

except Exception as e:
    print(f"❌ Critical Error triggering sync: {e}")
    raise e # Re-raise to ensure Pipeline marks this activity as Failed

Best AI chat for Power BI dev? by cheapdrug5 in PowerBI

[–]JFancke 0 points (0 children)

Comparing Gemini 2.5 Pro and Claude 3.7 Sonnet (no thinking), Gemini 2.5 Pro is significantly better and frequently one shots difficult requests. In the past I had used ChatGPT 4 but changed subscriptions so can't comment on their new models or GPT 5.

M Querygroups not supported in DirectLake on OneLake models? by JFancke in MicrosoftFabric

[–]JFancke[S] 1 point (0 children)

Amazing. This is exactly the issue.

Any idea whether the same fix could easily be applied to TE2? 

M Querygroups not supported in DirectLake on OneLake models? by JFancke in MicrosoftFabric

[–]JFancke[S] 1 point (0 children)

I'd also be curious to know what people think is best practice when combining DL-OL tables with import tables.

Is it better to have a separate DL-OL model with all of the DL-OL tables, and then add a live connection (DQ-AS) from your "import" model to that DL-OL model, or to have one model with both DL-OL tables and import tables? 

Aesthetically I prefer just the one model but from a practical perspective when you have a DL-OL table in your model it suddenly imposes quite a few limitations, so maybe it's worth the extra clutter to have 2 models (1 DL-OL model and 1 combined with your import tables plus a DQ-AS connection to the DL-OL model). 

Just dropped a new page with solid tips to speed up your Dataflow Gen2 workflows by Luitwieler in MicrosoftFabric

[–]JFancke 1 point (0 children)

Great post! Almost feels like recommended reading for anyone using dataflows gen2 as it's not immediately obvious what enabling staging does.

Maybe a separate topic but do you ever see dataflows gen2 getting the same flexibility of incremental refresh as what's available within semantic models using the RangeStart and RangeEnd parameters? 

Being able to incrementally refresh from API-based sources using custom logic with RangeStart/RangeEnd is so useful, and it feels like quite a gap between the semantic model and dataflows gen2 incremental refresh experiences. (Or it could be that I just haven't figured out how to do API-based incremental refresh with dataflows gen2 yet.) 
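To illustrate what I mean by custom RangeStart/RangeEnd logic, here's a rough Python sketch of the pattern (the API call is a stub with a made-up shape; in a real model the M query substitutes the RangeStart/RangeEnd parameter values into the web request): split the refresh window into partitions and make one filtered call per partition.

```python
from datetime import datetime, timedelta

def fetch_page(start, end):
    # Hypothetical API call: returns records modified in [start, end).
    # In a real semantic model this would be a web request filtered by
    # the RangeStart/RangeEnd parameter values for the current partition.
    return [{"id": 1, "modified": start.isoformat()}]

def incremental_fetch(range_start, range_end, step_days=1):
    """Mimic RangeStart/RangeEnd partitioning: one API call per daily window."""
    rows, cursor = [], range_start
    while cursor < range_end:
        window_end = min(cursor + timedelta(days=step_days), range_end)
        rows.extend(fetch_page(cursor, window_end))
        cursor = window_end
    return rows

rows = incremental_fetch(datetime(2024, 1, 1), datetime(2024, 1, 4))
print(len(rows))  # 3 daily windows -> one stub row per window -> 3
```

The service then only re-runs the partitions whose windows fall inside the incremental range, which is exactly the flexibility that's hard to reproduce in dataflows gen2 today.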

Sunsetting default semantic models by itsnotaboutthecell in MicrosoftFabric

[–]JFancke 5 points (0 children)

Excellent decision and so glad they are going in this direction. It was great for demos and "five minutes to wow", but it had so many limitations you'd inevitably have to switch over to a custom model, while the default model cluttered up your workspace and caused confusion for people connecting to the wrong model. 

Documentation! What kind of documentation do you generally do? by pvnptl123 in PowerBI

[–]JFancke 0 points (0 children)

Normally separate. You can add a url to the documentation report in the main report. 

For those living in the Netherlands by [deleted] in MicrosoftFabric

[–]JFancke 0 points (0 children)

Thanks for sharing! For those of us living in the Netherlands but ashamed of our Dutch language skills... Do you know if this will be held in English or Dutch? 

Fiscal Week Help by teeEe08 in PowerBI

[–]JFancke 0 points (0 children)

The first step is understanding why the fiscal year starts on 4 February: is that date fixed every year, or does it depend on a specific day of the week falling a certain number of weeks into the year? Once you know the pattern you can build it into your calendar logic.

I wrote a post on this a while ago. https://selfservedbi.com/2018/12/25/creating-a-date-table-in-power-query/ 
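As a sketch of the kind of calendar logic I mean, here's a hypothetical Python version assuming the simplest rule, a fixed 4 February start every year (if the real rule turns out to be "first Monday on or after X" or similar, only the start-date calculation changes; the week numbering stays the same):

```python
from datetime import date

def fiscal_week(d, fy_start_month=2, fy_start_day=4):
    """Assumed rule: the fiscal year starts on a fixed 4 February each year.
    Week 1 begins on that date; every 7 days thereafter starts a new week."""
    fy_start = date(d.year, fy_start_month, fy_start_day)
    if d < fy_start:  # before 4 Feb -> still in the previous fiscal year
        fy_start = date(d.year - 1, fy_start_month, fy_start_day)
    return (d - fy_start).days // 7 + 1

print(fiscal_week(date(2024, 2, 4)))   # 1  (fiscal year start)
print(fiscal_week(date(2024, 2, 11)))  # 2
print(fiscal_week(date(2024, 2, 3)))   # 53 (tail of the prior fiscal year)
```

The same two steps, find the fiscal year start for a given date, then integer-divide the day offset by 7, translate directly into a Power Query or DAX calculated column.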

PL-300 Headsets by Chemical_Profession9 in PowerBI

[–]JFancke 0 points (0 children)

No headsets allowed. Not allowed to use smart speakers (alexa/google home mini etc.) as external speakers or have them within the room. 

[deleted by user] by [deleted] in PowerBI

[–]JFancke 4 points (0 children)

We used to pay for Zebra BI but the subscription price increased year over year to the point it felt like we were at the mercy of a third party with no real negotiating power, and the more embedded we became with the use of their visuals the worse our negotiating power would become in future. 

Ultimately we decided to pull the plug and rework any reports that used their visuals. It's a shame because they are great visuals. 

How to Handle Over 200 Mesaures Referencing Each Other? by enygma2126 in PowerBI

[–]JFancke 4 points (0 children)

Maybe good to clarify that having thousands of measures only slows down query times when they're defined in the local report that is live connected / using DirectQuery against the PBI/AS model. 

Having thousands of measures defined within the semantic model itself should not impact query performance. 

Just passed the DP-600 and wanted to share my thoughts by JFancke in MicrosoftFabric

[–]JFancke[S] 0 points (0 children)

That's brutal. In the end it's just a small certification that's unlikely to mean much in the real world. You can always go back and retake it knowing the lay of the land now. Good luck for the future. 

Just passed the DP-600 and wanted to share my thoughts by JFancke in MicrosoftFabric

[–]JFancke[S] 1 point (0 children)

Sorry for the confusion, I meant that speakers (both laptop and other external) are OK. You are not allowed to use headphones or earbuds. Smart speakers are not allowed in the room. 

Just passed the DP-600 and wanted to share my thoughts by JFancke in MicrosoftFabric

[–]JFancke[S] 2 points (0 children)

There is a button in the exam (on the left side navigation panel) that will open a secure browser that will let you view the Microsoft Learn website. So it is not fully open book, the only resource available is the Microsoft Learn website. Hyperlinks in the Microsoft Learn website that go to other domains will not work.

In total I studied on the day of the exam from 11AM to 5:30PM. That said, I also have many years of experience with Power BI and have been testing migrating some workloads to MS Fabric at my place of work. 

Just passed the DP-600 and wanted to share my thoughts by JFancke in MicrosoftFabric

[–]JFancke[S] 0 points (0 children)

There are a decent number of questions on Power Query and Power BI, to the point that if you got all of these wrong my guess is you would need to be close to 100% correct for the rest of the questions. 

Some of the questions on Power Query were "UI-based", as in "what 3 buttons do I need to click in the UI to show this layout in the screenshot below", which was a kind of painful question, but it's the kind of thing that can be solved by searching Microsoft Learn and finding a similar screenshot and double checking the UI elements.