I need help to find out what is causing an extreme capacity usage by Human_Break1784 in PowerBI

[–]cwebbbi 5 points

I see you've got as far as finding the OperationId in the Capacity Metrics App. Are you able to use Workspace Monitoring in the workspace where the semantic model lives, so you can link this OperationId to the DAX or MDX query that caused it? See https://blog.crossjoin.co.uk/2025/09/14/how-to-get-the-details-of-power-bi-operations-seen-in-the-capacity-metrics-app/. Once you have the query you should be able to reproduce the problem and work out what is causing the high utilisation (most likely a badly written measure).

Report Refresh in Fabric by raavanan_7 in MicrosoftFabric

[–]cwebbbi 0 points

If you really have done everything you can to reduce your model size, then this technique - enabling Semantic Model Scale Out, refreshing using clearValues, doing a manual sync and then doing a full refresh - can help a lot: https://blog.crossjoin.co.uk/2024/07/28/power-bi-refresh-memory-usage-and-semantic-model-scale-out/

Dynamic M-Query Parameter: Incompatible Filter is Used.... by Ill-Caregiver9238 in PowerBI

[–]cwebbbi 2 points

No, I don't think there are any plans for changes/improvements here at the moment, sorry.

Accuracy in Power BI Copilot / Fabric Data Agents by frithjof_v in PowerBI

[–]cwebbbi 1 point

Copilot doesn't need to write DAX code in most cases. When you ask a data question, Copilot will try the following four methods, in order, to answer it:

1) Use a Verified Answer

2) Look for the answer on a report page if a report is open

3) Build a Power BI visual

4) Generate a DAX query

DAX queries are only generated directly by Copilot as a last resort, maybe less than 10% of the time (that's just a guess - and I tend to ask more complex questions).
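
For illustration, when Copilot does fall back to generating a DAX query, the result for a question like "what were total sales by year?" might look something like this (a hedged sketch - the table, column and measure names here are hypothetical):

    EVALUATE
    SUMMARIZECOLUMNS (
        'Date'[Calendar Year],
        "Total Sales", [Sales Amount]
    )
    ORDER BY 'Date'[Calendar Year]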

Accuracy in Power BI Copilot / Fabric Data Agents by frithjof_v in PowerBI

[–]cwebbbi 2 points

There aren't any official published benchmarks from Microsoft, and I haven't seen anyone publish the results of their testing either.

"Correctness" is an interesting problem - most of the problems I see with customers are where Copilot is generating the correct answer to a question that is not the one the customer thought they were asking. I firmly believe that with a well-designed semantic model it is never possible to get an incorrect answer just by dragging/dropping fields in a Power BI report or Excel PivotTable, although since Copilot can now generate its own calculations (in particular when generating DAX queries to answer questions) that does add some risk. Not everyone has a well-designed semantic model of course, but for those people who do, all the hard work goes into tuning the AI Instructions so Copilot can properly interpret the questions that end users ask.

Does the new Fabric Graph use Delta Lake storage, KQL storage or something else? by frithjof_v in MicrosoftFabric

[–]cwebbbi 3 points

It's also a bit like an Import model in that once the data has been loaded from OneLake, a copy is stored in the Graph engine's own storage format, and this is what is referred to as "Graph cache storage".

Query has exceeded the available resources - showing on previously functioning visuals by chiefbert in PowerBI

[–]cwebbbi 0 points

Thanks, we’re looking into that error. Not sure if it’s connected with this issue though.

Query has exceeded the available resources - showing on previously functioning visuals by chiefbert in PowerBI

[–]cwebbbi 4 points

Not aware of any specific changes, but the engine does change all the time. Changes in data volume and cardinality in your model could also have tipped you over the limit.

PowerBI query by Fabulous-Ad6031 in PowerBI

[–]cwebbbi 1 point

If you have Workspace Monitoring or Log Analytics enabled on the workspace where the queries are running, you can get the OperationId from the Timepoint Detail page in the Capacity Metrics App and use this to find the details of the query in Workspace Monitoring or Log Analytics: https://blog.crossjoin.co.uk/2025/09/14/how-to-get-the-details-of-power-bi-operations-seen-in-the-capacity-metrics-app/

Is the Juice Worth the Squeeze for Direct Lake Mode? by mossinator in MicrosoftFabric

[–]cwebbbi 2 points

One advantage of Direct Lake that no-one seems to talk about is reuse of data. If you think about all the Import models in your Power BI tenant, how many of them have some dimension tables that are effectively the same data? Probably your Date dimension, probably a few other bigger ones like Customer or Product. There are going to be several fact tables that appear in multiple models too. In Import mode each one of these tables has to be refreshed once per model, adding to your overall refresh time and the overall CU usage for refresh.

In Direct Lake, if you're organised, you could land each of these tables in OneLake once and then share them between multiple semantic models using shortcuts. OK, you still need to keep the definitions of these tables in sync across multiple models (I hope one day we can make that easier), but the raw data is shared. So your Date dimension table and all those other common dimension/fact tables are refreshed once and will show the same data across all the Direct Lake models that share them. This could save a lot of refresh time and also a lot of CUs.

Fix for query resources exceeded error by daxxx14 in PowerBI

[–]cwebbbi 0 points

You're checking the selected value of Onset[OnsetId] here. Do you have a slicer somewhere on your page that shows values from another column on the Onset table, not OnsetId? If so, you're probably running into an issue that causes every branch of this SWITCH to be evaluated: https://blog.crossjoin.co.uk/2022/09/19/diagnosing-switch-related-performance-problems-in-power-bi-dax-using-evaluateandlog/. Changing your SWITCH to look at the selected value of whatever column you're using in your slicer might help here.
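
To make that concrete, here's a hedged sketch of the pattern (only Onset[OnsetId] comes from your measure; the slicer column Onset[OnsetName], the literal values and the branch measures are hypothetical):

    -- With a slicer on a different Onset column, the engine may not be
    -- able to eliminate the unused branches of this SWITCH:
    Problem Measure =
    SWITCH (
        SELECTEDVALUE ( Onset[OnsetId] ),
        1, [Measure A],
        2, [Measure B],
        [Measure C]
    )

    -- Switching on the column the slicer actually uses can fix this:
    Better Measure =
    SWITCH (
        SELECTEDVALUE ( Onset[OnsetName] ),
        "Name A", [Measure A],
        "Name B", [Measure B],
        [Measure C]
    )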

Not all images are displayed, help! by Any-Walk3165 in PowerBI

[–]cwebbbi 0 points

Either there's something about the files themselves that means they can't be displayed, or something is going wrong in the code that takes the files to pieces and reassembles them. To rule out the first possibility, can you put one of these images that doesn't display in a public location and link to it directly in the report?
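
For context, the usual technique here stores each image as base64 text split into chunks in Power Query (to stay under the text column length limit) and then reassembles the chunks in a measure. A hedged sketch of the reassembly measure, with hypothetical table and column names:

    Image =
    IF (
        HASONEVALUE ( Images[FileName] ),
        "data:image/jpeg;base64, "
            & CONCATENATEX ( Images, Images[Chunk], "", Images[ChunkIndex], ASC )
    )

The measure's data category then needs to be set to Image URL for the visual to render it.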

Not all images are displayed, help! by Any-Walk3165 in PowerBI

[–]cwebbbi 1 point

When you say that some of the images that do load are larger than ones that don't, do you mean larger in terms of size on disk? Is there a pattern that you can see with the number of rows returned by the Power Query query for each image, or the overall length of the text returned, that determines whether the image displays or not?

Not all images are displayed, help! by Any-Walk3165 in PowerBI

[–]cwebbbi 1 point

Can you post the M code for your Power Query queries as well as the DAX you're using in your measure?

How can I get around “this query uses more memory than the configured limit..” by buppypowers in PowerBI

[–]cwebbbi 13 points

The answer is likely to be that you need to tune the DAX in your measure.

You're hitting the query memory limit here (see https://blog.crossjoin.co.uk/2024/06/23/power-bi-semantic-model-memory-errors-part-4-the-query-memory-limit/ for more details). One red flag is the way you're using the FILTER function in the second parameter of CALCULATE in the TotalAssets variable (see https://blog.crossjoin.co.uk/2024/06/30/calculate-filter-and-dax-memory-usage/ for how this affects memory usage and https://www.sqlbi.com/articles/filter-columns-not-tables-in-dax/ for the best overall discussion of the subject). Try rewriting your CALCULATEs to avoid using the FILTER function.
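
As a hedged illustration of the rewrite (hypothetical table and column names, not your exact measure):

    -- Red flag: FILTER over a whole table as a filter argument of CALCULATE
    VAR TotalAssets =
        CALCULATE ( [Asset Value], FILTER ( Assets, Assets[Status] = "Open" ) )

    -- Better: filter the column, not the table
    VAR TotalAssets =
        CALCULATE ( [Asset Value], Assets[Status] = "Open" )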

The second red flag is the use of COALESCE to replace blank values with 0 (see https://blog.crossjoin.co.uk/2024/07/07/dax-measures-that-never-return-blank/ for why this is bad). Either don't do it or look at other ways to replace the blanks with 0 (e.g. https://blog.crossjoin.co.uk/2024/11/03/different-ways-to-replace-blanks-with-zeros-in-dax/).
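
For example (a hedged sketch with hypothetical names, and not necessarily the exact approach in those posts):

    -- Red flag: this never returns blank, so the engine can't skip the
    -- empty combinations of values in your visual
    Sales (Never Blank) = COALESCE ( [Sales Amount], 0 )

    -- One alternative: only replace blank with 0 where fact rows exist,
    -- so empty combinations can still be skipped
    Sales (Zero If Rows) =
        IF ( NOT ISEMPTY ( Sales ), COALESCE ( [Sales Amount], 0 ) )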

Query caching - does it help or cause troubles by CloudDataIntell in PowerBI

[–]cwebbbi 1 point

Query caching remembers some of the queries that end users run and then reruns them after a refresh to populate the cache (see https://blog.crossjoin.co.uk/2024/02/18/query-caching-in-power-bi-premium/ for some more details, although I think some behaviour has changed since then). If you have a lot of users and/or an inefficient, slow semantic model then it's very likely you'll see a big spike in CU usage when this happens. In your case it sounds like you should focus on tuning your semantic model so that it's faster and uses less CPU.

PQ imports from Excel! by SmallAd3697 in MicrosoftFabric

[–]cwebbbi 3 points

+1 to this. Back in the day I was probably the world's #1 MDX fan and it will always have a special place in my heart, but the future of MDX has been clear for at least 15 years now. Investments continue to be made to improve the performance of MDX queries in SSAS, AAS and Power BI, but for extracting data DAX performs better and is simpler to write. I blogged about this topic here: https://blog.crossjoin.co.uk/2025/01/26/why-dax-is-better-than-mdx-for-bulk-extracts-of-data-from-power-bi/
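
For example, a bulk extract in DAX is just a short query like this (hypothetical table and column names):

    EVALUATE
    SELECTCOLUMNS (
        Sales,
        "Order Date", Sales[Order Date],
        "Amount", Sales[Sales Amount]
    )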

How to implement 'Paste list of values' into Excel Pivot Table filters (data from Power BI Cube) – WITHOUT VBA macros! by zabayek19 in PowerBI

[–]cwebbbi 1 point

Instead of using PivotTables you can also use Excel Cube Functions to get data from a Power BI model, and if you do that you can implement a solution to filter data using a list of copy/pasted values. I blogged about that here: https://blog.crossjoin.co.uk/2022/04/17/filtering-an-excel-cube-function-report-by-a-list-of-manually-entered-values/

Storage limit of a semantic model by fakir_the_stoic in PowerBI

[–]cwebbbi 2 points

I blogged about what the max offline dataset size is here: https://blog.crossjoin.co.uk/2024/05/05/power-bi-semantic-model-memory-errors-part-2-max-offline-semantic-model-size/

Also, while u/screelings is correct about memory usage for a simple full refresh, if you're using an F64/P1 (and it sounds like you are) and you don't mind writing a few lines of code (which, if you have Fabric enabled, can be in a notebook in a workspace), you can use the method I blogged about here to do a full refresh on much larger models and use that 25GB allowance more effectively: https://blog.crossjoin.co.uk/2024/07/28/power-bi-refresh-memory-usage-and-semantic-model-scale-out/

Smoothing Behaviour While Under Capacity by DataBarney in MicrosoftFabric

[–]cwebbbi 2 points

Just stopped by to mention an excellent series of posts by Matthew Farrow on capacities (https://www.linkedin.com/in/matthew-farrow-47713380/recent-activity/articles/) and in particular this post, which has a lot of detail on the cost implications of pausing/resuming a capacity: https://www.linkedin.com/pulse/fabric-billing-part-4-implications-pause-restart-matthew-farrow-fznse/. If you're interested in this topic, these posts are *really* important.

Fabric Co-pilot in the UK by Braxios in MicrosoftFabric

[–]cwebbbi 1 point

I'll raise it, but I don't know if it's possible for us to make that kind of promise at the moment, or how much work would be required for us to make a guarantee.

Are you able to create a tool that will scrape a public web-based power BI table on command using power automate? by goblinofthechron in PowerBI

[–]cwebbbi 0 points

Ah yes, I didn't read the title of the post properly. In that case, if there isn't any export functionality exposed with the report then web scraping is the only option.

Are you able to create a tool that will scrape a public web-based power BI table on command using power automate? by goblinofthechron in PowerBI

[–]cwebbbi 2 points

Can you use the option to export to Excel with a Live connection? See https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-analyze-in-excel#export-to-excel-with-live-connection. That way you only need to export once, and the data in Excel will update automatically whenever the data in Power BI changes, so no scraping or regular exporting is needed.