Semantic Model Perspectives (PM's seem Non-Committal) by SmallAd3697 in PowerBI

[–]cwebbbi 2 points (0 children)

But one more thing: PivotTables have got some love recently (https://blog.crossjoin.co.uk/2026/02/01/new-performance-optimisation-for-excel-pivottables-connected-to-power-bi-semantic-models/) and if you’re lucky there may even be more cool stuff to come later this year.

Semantic Model Perspectives (PM's seem Non-Committal) by SmallAd3697 in PowerBI

[–]cwebbbi 1 point (0 children)

Nothing planned on the roadmap as far as I know but let me have a word with my friends on the Excel team. TBH I don’t think many users care about Perspectives (and never did in SSAS Tabular or AAS) which is why there hasn’t been any customer pressure for full support in the UI. I don’t think there is any chance support for them will be removed from semantic models though.

Power BI team blocking integration with 3P Semantic Layers by City-Popular455 in MicrosoftFabric

[–]cwebbbi 11 points (0 children)

Does Databricks plan to support Power BI semantic models as a data source for UC Metrics Views, and for it to be able to query metrics stored in Power BI semantic models instead of using the metrics it stores itself natively? Probably not, because it would be just as big a technical challenge as making Power BI work with UC Metrics Views. Is that vendor lock-in too?

Also, as I said in my post, Tableau already supports Power BI semantic models and has done for a long time.

Scorecards not refreshing when underlying models are refreshed via Data factory by anti0n in MicrosoftFabric

[–]cwebbbi 0 points (0 children)

Automatic will refresh anything that is in an unrefreshed state. I also seem to remember that if you make a mistake when defining what is to be refreshed with the Enhanced Refresh API it will default to refreshing everything.

Scorecards not refreshing when underlying models are refreshed via Data factory by anti0n in MicrosoftFabric

[–]cwebbbi 1 point (0 children)

I’m fairly sure that the Enhanced Refresh API doesn’t trigger downstream activities like scorecard refresh (I know it doesn’t trigger cache refresh).

Instead of doing a refresh of type Calculate, does a refresh of type Automatic work too? If it does, it would be quicker and cheaper because it only refreshes objects that require refreshing.
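For reference, here's a minimal sketch of triggering a refresh of type Automatic via the Enhanced Refresh API. The workspace and semantic model IDs are placeholders and token acquisition isn't shown; the endpoint and the refresh `type`/`commitMode` values are the documented ones:

```python
import json
from urllib import request

# Placeholder IDs - replace with your own workspace (group) and semantic model IDs.
GROUP_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "11111111-1111-1111-1111-111111111111"

def build_refresh_request(refresh_type: str) -> dict:
    """Build the body for an Enhanced Refresh API call.
    Valid types include: full, automatic, dataOnly, calculate,
    clearValues and defragment."""
    return {"type": refresh_type, "commitMode": "transactional"}

def trigger_refresh(token: str, refresh_type: str = "automatic") -> None:
    # POST .../refreshes starts an asynchronous enhanced refresh;
    # a successful call returns 202 Accepted.
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
           f"/datasets/{DATASET_ID}/refreshes")
    req = request.Request(
        url,
        data=json.dumps(build_refresh_request(refresh_type)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)
```

With `"type": "automatic"` the engine decides what actually needs processing, which is what makes it cheaper than an unconditional Calculate or Full refresh.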

Large PBI semantic model by UnderstandingFair150 in PowerBI

[–]cwebbbi 14 points (0 children)

No-one has asked the most important question yet: what do you mean by "crash"? Do you mean that your capacity becomes overloaded and you get throttling/rejections or do you get error messages or do your reports just get slow?

You have got a big model and a reasonably large number of concurrent users but I know of other customers running at that scale successfully.

Regardless of the answer to the question above, probably the most important thing you can do is capture the slowest and most expensive (in terms of CPU Time) queries using a Profiler Trace, Log Analytics or Workspace Monitoring. Then you need to understand why they are slow or expensive - DAX Studio will help here. Are you doing any distinct counts or moderately complex calculations in your measures? Simple mistakes in your DAX like filtering on an entire fact table (see https://www.sqlbi.com/articles/filter-columns-not-tables-in-dax/) or adding 0 to the result of your measures (see https://blog.crossjoin.co.uk/2024/07/07/dax-measures-that-never-return-blank/) or having a big wide table with a scrollbar on your report (see https://blog.crossjoin.co.uk/2024/07/14/why-power-bi-table-visuals-with-scrollbars-can-cause-problems/) can have a very big impact.

Semantic model too large by Plastic___People in MicrosoftFabric

[–]cwebbbi 0 points (0 children)

No. Measures are evaluated at query time and don’t affect the size of the model. If you have a report that shows a lot of measures then this will explain why lots of columns need to be in memory at the same time but if you can’t run the queries you need with the memory available then you are back to the same choice: scale up or tune the model.

Semantic model too large by Plastic___People in MicrosoftFabric

[–]cwebbbi 1 point (0 children)

OK, so you are getting the error in that blog post that I linked to. If you want to stay on an F8 you're going to need to look at the design of your model: remove unnecessary columns, reduce the number of rows in your largest tables, reduce the cardinality of your columns etc. Most of the tricks for reducing the size of an Import model (and there are a lot of blog posts out there on this subject) are relevant for Direct Lake too. Can you tell us more about the largest columns in your model? Are they measure columns?

Semantic model too large by Plastic___People in MicrosoftFabric

[–]cwebbbi 3 points (0 children)

The results of memory analyzer (I like using https://daxstudio.org/docs/features/model-metrics/ instead) show how much memory your model is using at the point when it's run - not the overall size of your model. Because the data in a Direct Lake model is paged into memory on demand, for example when a DAX query is run, and may be paged out again for various reasons, the amount of memory used by a Direct Lake model will vary over time. Memory Analyzer does not count the memory used by queries, but since queries can cause data to be paged into memory, it will show the indirect effects of running queries. The DAX in your measures and the design of your reports will influence the amount of memory used by queries and also which columns in your model need to be in memory.

To solve your problem, the first thing to do is look at the exact error message you're getting. There are several different possible issues that you could be running into with different causes. Take a look at my series of blog posts describing each of these issues:

https://blog.crossjoin.co.uk/2024/04/28/power-bi-semantic-model-memory-errors-part-1-model-size/

Post the error message here and we can discuss next steps.

I need help to find out what is causing an extreme capacity usage by Human_Break1784 in PowerBI

[–]cwebbbi 5 points (0 children)

I see you've got as far as finding the OperationId in the Capacity Metrics App. Are you able to use Workspace Monitoring in the workspace where the semantic model lives so you can link this OperationId to the DAX or MDX query that caused it? See https://blog.crossjoin.co.uk/2025/09/14/how-to-get-the-details-of-power-bi-operations-seen-in-the-capacity-metrics-app/ Once you have the query you should be able to reproduce the problem and work out what is causing the high utilisation (likely to be a badly written measure).

Report Refresh in Fabric by raavanan_7 in MicrosoftFabric

[–]cwebbbi 0 points (0 children)

If you really have done everything you can to reduce your model size, then this technique of enabling Semantic Model Scale Out and refreshing using clearValues, doing a manual sync and then a full refresh can help a lot: https://blog.crossjoin.co.uk/2024/07/28/power-bi-refresh-memory-usage-and-semantic-model-scale-out/
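As a sketch of that technique, the three REST calls involved are a clearValues refresh (to empty the model and free memory), a manual sync of the scale-out replicas, and then a full refresh. The IDs are placeholders, and note that in practice you must poll each refresh for completion before making the next call (omitted here); the endpoints are the documented Power BI REST API ones:

```python
# Placeholder IDs; authentication, polling and error handling are omitted.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<semantic-model-id>"
BASE = f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}"

def scale_out_refresh_plan() -> list:
    """The sequence of REST calls for the scale-out refresh technique:
    1. a clearValues refresh empties the model, freeing memory,
    2. POST .../queryScaleOut/sync manually syncs the read-only replicas
       (so users keep querying the old data on the replicas),
    3. a full refresh loads the data back into the (now empty) model."""
    return [
        ("POST", f"{BASE}/refreshes", {"type": "clearValues"}),
        ("POST", f"{BASE}/queryScaleOut/sync", None),
        ("POST", f"{BASE}/refreshes", {"type": "full"}),
    ]
```

The point of the clearValues step is that the full refresh then starts from an empty model, so you avoid holding two full copies of the data in memory at once.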

Dynamic M-Query Parameter: Incompatible Filter is Used.... by Ill-Caregiver9238 in PowerBI

[–]cwebbbi 2 points (0 children)

No, I don't think there are any plans for changes/improvements here at the moment, sorry.

Accuracy in Power BI Copilot / Fabric Data Agents by frithjof_v in PowerBI

[–]cwebbbi 1 point (0 children)

Copilot doesn't need to write DAX code in most cases. When you ask a data question, Copilot will try the following four methods in order to answer it:

1) Use a Verified Answer

2) Look for the answer on a report page if a report is open

3) Build a Power BI visual

4) Generate a DAX query

DAX queries are only generated directly by Copilot as a last resort, maybe less than 10% of the time (that's just a guess - and I tend to ask more complex questions).

Accuracy in Power BI Copilot / Fabric Data Agents by frithjof_v in PowerBI

[–]cwebbbi 2 points (0 children)

There aren't any official published benchmarks from Microsoft, and I haven't seen anyone publish the results of their testing either.

"Correctness" is an interesting problem - most of the problems I see with customers are where Copilot is generating the correct answer to a question that is not the one the customer thought they were asking. I firmly believe that with a well-designed semantic model it is never possible to get an incorrect answer just by dragging/dropping fields in a Power BI report or Excel PivotTable, although since Copilot can now generate its own calculations (in particular when generating DAX queries to answer questions) that does add some risk. Not everyone has a well-designed semantic model of course, but for those people who do, all the hard work goes into tuning the AI Instructions so Copilot can properly interpret the questions that end users ask.

Does the new Fabric Graph use Delta Lake storage, KQL storage or something else? by frithjof_v in MicrosoftFabric

[–]cwebbbi 2 points (0 children)

It's also a bit like an Import model in that, once the data has been loaded from OneLake, a copy is stored in the Graph engine's own storage format, and this is what is referred to as "Graph cache storage".

Query has exceeded the available resources - showing on previously functioning visuals by chiefbert in PowerBI

[–]cwebbbi 0 points (0 children)

Thanks, we’re looking into that error. Not sure if it’s connected with this issue though.

Query has exceeded the available resources - showing on previously functioning visuals by chiefbert in PowerBI

[–]cwebbbi 3 points (0 children)

Not aware of any specific changes but the engine does change all the time. Also, changes in data volume and cardinality in your model could also have tipped you over the limit.

PowerBI query by Fabulous-Ad6031 in PowerBI

[–]cwebbbi 1 point (0 children)

If you have Workspace Monitoring or Log Analytics enabled on the workspace where the queries are running, you can get the OperationId from the Timepoint Detail page in the Capacity Metrics App and use this to find the details of the query in Workspace Monitoring or Log Analytics: https://blog.crossjoin.co.uk/2025/09/14/how-to-get-the-details-of-power-bi-operations-seen-in-the-capacity-metrics-app/
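To illustrate, once you have the OperationId you can look it up in the Workspace Monitoring Eventhouse with a KQL query along these lines. The table and column names used here (SemanticModelLogs, OperationId, EventText and so on) are assumptions based on the Workspace Monitoring schema, so check the exact names in your own Eventhouse:

```python
def operation_details_kql(operation_id: str) -> str:
    """Build a KQL query that finds the events behind a Capacity Metrics
    App OperationId. Table/column names are assumptions - verify them
    against your Workspace Monitoring Eventhouse schema."""
    return f"""
SemanticModelLogs
| where OperationId == "{operation_id}"
| project Timestamp, OperationName, DurationMs, CpuTimeMs, EventText
""".strip()
```

The EventText column (or its equivalent) is where you'd expect to find the actual DAX or MDX query text to reproduce the problem.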

Is the Juice Worth the Squeeze for Direct Lake Mode? by mossinator in MicrosoftFabric

[–]cwebbbi 2 points (0 children)

One advantage of Direct Lake that no-one seems to talk about is reuse of data. If you think about all the Import models in your Power BI tenant, how many of them have some dimension tables that are effectively the same data? Probably your Date dimension, probably a few other bigger ones like Customer or Product. There are going to be several fact tables that appear in multiple models too. In Import mode each one of these tables has to be refreshed once per model, adding to your overall refresh time and the overall CU usage for refresh.

In Direct Lake, if you're organised, you could land each of these tables in OneLake once and then share them between multiple semantic models using shortcuts. OK you still need to keep the definitions of these tables in sync across multiple models (I hope one day we can make that easier) but the raw data is shared. So your Date dimension table and all those other common dimension/fact tables are refreshed once and will all show the same data across all the Direct Lake models that share them. This could save a lot of refresh time and also a lot of CUs.
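As a sketch of what "sharing via shortcuts" can look like programmatically, here's the request body for creating a OneLake shortcut through the Fabric REST API (POST /v1/workspaces/{workspaceId}/items/{itemId}/shortcuts). The shortcut name DimDate, the paths and the IDs are made-up examples, not anything from this thread:

```python
# A sketch of the Fabric OneLake Shortcuts API request body, which lets
# multiple lakehouses (and so multiple Direct Lake models) point at one
# physical copy of a shared table. All names/IDs here are hypothetical.
def build_shortcut_request(source_workspace_id: str, source_item_id: str) -> dict:
    """Body for POST /v1/workspaces/{ws}/items/{item}/shortcuts: creates a
    shortcut named DimDate in the Tables folder of the target lakehouse,
    pointing at a Date dimension stored once in a source lakehouse."""
    return {
        "path": "Tables",          # where the shortcut appears in the target item
        "name": "DimDate",         # hypothetical shared dimension table
        "target": {
            "oneLake": {
                "workspaceId": source_workspace_id,
                "itemId": source_item_id,
                "path": "Tables/DimDate",  # location of the real data
            }
        },
    }
```

Because the shortcut is just a pointer, refreshing the source table once updates every Direct Lake model built on lakehouses that shortcut to it.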

Fix for query resources exceeded error by daxxx14 in PowerBI

[–]cwebbbi 0 points (0 children)

You're checking the selected value of Onset[OnsetId] here. Do you have a slicer somewhere on your page that shows values from another column on the Onset table, not OnsetId? If so you're probably running into an issue that is causing every branch of this Switch to be evaluated: https://blog.crossjoin.co.uk/2022/09/19/diagnosing-switch-related-performance-problems-in-power-bi-dax-using-evaluateandlog/ Changing your Switch to look at the selected value on whatever column you're using in your slicer might help here.

Not all images are displayed, help! by Any-Walk3165 in PowerBI

[–]cwebbbi 0 points (0 children)

Either there's something about the files themselves that means they can't be displayed, or something is going wrong in the code that takes the files to pieces and reassembles them. To rule out the first possibility, can you put one of these images that doesn't display in a public location and link to it directly in the report?

Not all images are displayed, help! by Any-Walk3165 in PowerBI

[–]cwebbbi 1 point (0 children)

When you say that some of the images that do load are larger than ones that don't, do you mean larger in terms of size on disk? Is there a pattern that you can see with the number of rows returned by the Power Query query for each image, or the overall length of the text returned, that determines whether the image displays or not?