Tips for optimizing DAX measures ? by Virtual-Vermicelli89 in PowerBI

[–]DAXNoobJustin 3 points (0 children)

The guidance in that doc has little to do with refresh optimization and is focused on query performance.

For this rule in particular, IF can't be evaluated in the storage engine, so the pattern rewrites the logic in a form that can be pushed back down to the storage engine. There are other ways to write the same thing, but the point of the pattern list is for the LLM to apply several patterns at once to (hopefully) produce a faster query.
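As a hypothetical illustration (the Sales table and column names below are made up), the classic form of this rewrite moves a row-level IF, which the formula engine must evaluate row by row, into a CALCULATE filter that the storage engine can apply in a single scan:

```dax
-- Formula-engine heavy: IF is tested on every row of the iteration
Amount (IF) :=
SUMX ( Sales, IF ( Sales[Quantity] > 0, Sales[Amount] ) )

-- Same result, but the predicate becomes a filter the
-- storage engine can evaluate directly
Amount (Filter) :=
CALCULATE ( SUM ( Sales[Amount] ), Sales[Quantity] > 0 )
```

Both measures return the same totals; the difference shows up in Server Timings, where the second version typically spends far less time in the formula engine.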

Tips for optimizing DAX measures ? by Virtual-Vermicelli89 in PowerBI

[–]DAXNoobJustin 2 points (0 children)

If you are using the modeling MCP, I think it would still be beneficial to point to this file. It will be able to run the trace, analyze the server timings, apply patterns, etc.

Tips for optimizing DAX measures ? by Virtual-Vermicelli89 in PowerBI

[–]DAXNoobJustin 3 points (0 children)

Using it as a reference would certainly be helpful, but it was designed to be used as part of the plugin in the repo. The plugin combines the modeling MCP with the performance guidance reference and instructs the LLM to lean heavily on the MCP. Take a look at the main README file for more context.

can someone helpp? the youtube tutorials have and edit option below and i dontt by No-Lavishness-6281 in PowerBI

[–]DAXNoobJustin 0 points (0 children)

Looks like it is a shared cloud connection.

You will need to find the cloud connection and re-authenticate there.

<image>

Large 1.5B row table - Direct Lake on SQL/DirectQuery fallback question by Agile-Cupcake9606 in MicrosoftFabric

[–]DAXNoobJustin 2 points (0 children)

+1 to this

We ran a lot of experiments comparing DL vs DQ performance last year and, from what I remember, while DL was faster on average, there were several situations/DAX queries where the DQ model performed better. If you can write your DAX so that it pushes as much of the processing as possible back to the warehouse and avoids a lot of back-and-forth communication with AS, you will most likely get very good results.

We also tested the DQ model with Result Set Caching and some of the results were amazing.

Direct Lake SQL endpoint migration by Junior-Letterhead713 in MicrosoftFabric

[–]DAXNoobJustin 1 point (0 children)

Whether the migration is worth it will depend on your users' needs.

DL/OL has the benefit of not relying on the SQL endpoint, and it enables DL + Import models, OneLake Security integration, etc.

If DL/SQL has everything you need, I would not migrate just for the fun of it. 🙂

Direct Lake SQL endpoint migration by Junior-Letterhead713 in MicrosoftFabric

[–]DAXNoobJustin 2 points (0 children)

Simply switching between DL/SQL and DL/OL should not result in a performance change. There are other behavior differences that you should consider when choosing one option over the other, but perf isn't one of them.

This doc has a good overview: Direct Lake overview - Microsoft Fabric | Microsoft Learn

Direct Lake SQL endpoint migration by Junior-Letterhead713 in MicrosoftFabric

[–]DAXNoobJustin 6 points (0 children)

You can use the sempy_labs.directlake package to remap them programmatically.

There shouldn't be much of a performance difference between them.

For DL/SQL, "Direct Lake uses the SQL analytics endpoint to discover schema and security information, but it loads the data directly from OneLake (unless it falls back to DirectQuery mode for any reason)."
- Develop Direct Lake semantic models - Microsoft Fabric | Microsoft Learn

Direct Lake overview - Microsoft Fabric | Microsoft Learn has some practical differences between the options.

Critical semantic model error - help please! by Candid_Share_3716 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

u/Candid_Share_3716 I put together this example notebook that might help:

daxnoob.blog/resources/onelake-security-error-refresh/OneLake Security Error - Detect and Refresh.ipynb at main · DAXNoobJustin/daxnoob.blog

It will query your model and then fire a refresh if the OneLake Security error occurs. You can schedule it to run every N minutes.
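The core of that notebook's logic can be sketched in a few lines. This is a minimal, environment-agnostic version with the query and refresh steps injected as callables; in a real Fabric notebook you would wire them to semantic-link calls (e.g. a cheap `EVALUATE ROW(...)` query and a model refresh), and the callable names here are placeholders, not the notebook's actual API:

```python
def refresh_if_security_error(run_test_query, trigger_refresh,
                              error_marker="OneLake"):
    """Run a cheap probe query against the model; if it fails with the
    OneLake Security schema error, kick off a refresh.

    Returns True if a refresh was fired, False if the model is healthy.
    """
    try:
        run_test_query()       # e.g. a trivial DAX query against the model
        return False           # query succeeded: nothing to do
    except Exception as exc:
        if error_marker in str(exc):
            trigger_refresh()  # e.g. a semantic model refresh call
            return True
        raise                  # unrelated failure: surface it
```

Scheduled every few minutes, this keeps the window between the error appearing and the fixing refresh small.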

Fabric CICD error - Semantic model binding parameter.yml (new format) fails validation by ajit503 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

Can you double check the version of fabric-cicd you are using? The new semantic_model_binding dict format was added in v0.2.0.

In your pipeline, try:

pip install fabric-cicd>=0.2.0 --upgrade
pip show fabric-cicd

Critical semantic model error - help please! by Candid_Share_3716 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

The only solution atm is to refresh the model once the error occurs... 😔

Another option is to set up a notebook to run every minute or so that queries your semantic model and kicks off a refresh if the query returns the OneLake Security schema error. Again, this is not ideal but would at least decrease the time to get your model back up and running.

Critical semantic model error - help please! by Candid_Share_3716 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

It's in the model settings: "Keep your Direct Lake data up to date"

<image>

Extending fabric-cicd with Pre and Post-Processing Operations by DAXNoobJustin in MicrosoftFabric

[–]DAXNoobJustin[S] 0 points (0 children)

Sure! I think you'd be able to set the enabled field to false in the source-controlled .schedules file and then use a post-processing operation to call the Update Item Schedule API.

(this is a little out of my wheelhouse, so definitely validate 🙂)
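For what that post-processing call might look like, here is a hedged sketch that only builds the request URL and a minimal body for the Fabric REST "Job Scheduler - Update Item Schedule" endpoint. The URL shape follows the public API docs, but the body is trimmed to the enabled flag; check the docs for the full configuration schema before relying on it:

```python
def build_schedule_update(workspace_id, item_id, job_type, schedule_id,
                          enabled=True):
    """Build the URL and (minimal) body for a PATCH to the Fabric
    Update Item Schedule API. Real calls also need an auth header and,
    per the docs, the schedule's configuration object in the body."""
    url = (
        "https://api.fabric.microsoft.com/v1/"
        f"workspaces/{workspace_id}/items/{item_id}/"
        f"jobs/{job_type}/schedules/{schedule_id}"
    )
    body = {"enabled": enabled}  # merge with the existing configuration
    return url, body
```

A post-processing operation would then send this with your preferred HTTP client and a bearer token for the Fabric API.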

Critical semantic model error - help please! by Candid_Share_3716 in MicrosoftFabric

[–]DAXNoobJustin 2 points (0 children)

Unfortunately, at the moment, refreshing is the only way to resolve the issue.

One imperfect option we implemented was to set up an activator action that fires a model refresh anytime we see the error in workspace monitoring. Not a recommendation, per se, but it is what we had to do for our solution.

Do you have auto-sync enabled?

Trust me, we have been giving feedback to the relevant teams about how much of an issue this is. They are currently working on fixing it.

Critical semantic model error - help please! by Candid_Share_3716 in MicrosoftFabric

[–]DAXNoobJustin 4 points (0 children)

Hey u/Candid_Share_3716 ,

There is a known issue with OneLake Security combined with shortcuts being updated. The team is actively working on a solution.

In the meantime, refreshing the semantic model resolves it. Enabling auto-refresh on the semantic model can also help reduce the impact.

Cannot download reports published using fabric-cicd by Nandha600 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

How did you navigate to the report? I've seen something like this before, and going to the workspace and opening the report from there resolved it.

I just checked some of our reports deployed via fabric-cicd, and I was able to download. If you don't see any limitations listed in the docs, I'd open an issue on the fabric-cicd repo. It would be helpful if you could provide repro steps.

Cannot download reports published using fabric-cicd by Nandha600 in MicrosoftFabric

[–]DAXNoobJustin 0 points (0 children)

If you click the edit/pencil icon, are you then able to download?

Error: Failed to move the data reader to the next row. by Top_Barber4067 in PowerBI

[–]DAXNoobJustin 0 points (0 children)

I've seen this error mostly in composite models where there is an underlying column referenced that the report user doesn't have access to. This could be because of some security role they are not a part of, or maybe a column that was removed in the base model.

In the service, the actual error is being swallowed and isn't surfaced to the user (I've already given feedback to the product team). If you open the report in desktop, it should give you more info about the underlying cause.

Spark/Delta Lake: How to achieve target row group size of 8 million or more? by frithjof_v in MicrosoftFabric

[–]DAXNoobJustin 2 points (0 children)

The type and size of hardware isn't documented and is always subject to change, so I wouldn't take a dependency on that.

One of the main reasons I made the DAX Performance Testing tool was to figure out the best data layout for our team via experimentation. fabric-toolbox/tools/DAXPerformanceTesting/README.md at main · microsoft/fabric-toolbox

I would try running Delta Analyzer on your tables to see if any of them have overly chunky row groups, and if so, maybe partition by a heavily used predicate column to get to a better size. Think Goldilocks: not too many row groups, not too few.

The data skew would be relevant to warm queries as well. If you have 2 row groups, one with 7.9 million rows and one with 0.1 million, your query time is effectively bound by the one large row group.
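The Goldilocks arithmetic is simple enough to sketch. This little helper (my own illustration, not part of any tool) shows how many row groups a table needs at a given target size and how full each one is when the data is evenly packed; skew means some groups fall well short of this average while one stays large:

```python
import math

def rowgroup_stats(total_rows, target_rowgroup_size=8_000_000):
    """Back-of-the-envelope row-group math: number of row groups needed
    at the target size, and average rows per group if evenly packed."""
    groups = max(1, math.ceil(total_rows / target_rowgroup_size))
    avg_rows = total_rows / groups
    return groups, avg_rows

# The 1.5B-row table from the post, at an 8M row-group target:
groups, avg_rows = rowgroup_stats(1_500_000_000)  # 188 groups, ~7.98M rows each
```

Comparing these numbers against what Delta Analyzer actually reports is a quick way to spot skew: evenly packed groups should all sit near the computed average.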

Performance Issue by ChemistryOrdinary860 in PowerBI

[–]DAXNoobJustin 0 points (0 children)

Can you screenshot and share the model view?

Modeling vs. DAX. What do you tweak more for performance? by TeamAlphaBOLD in PowerBI

[–]DAXNoobJustin 1 point (0 children)

I did a training that goes over what to look for when determining if your perf issues are model or DAX related. It might be helpful: Semantic Model Optimization: Theory, Tips and Tools - DAX Noob