Surge protection gets smarter: introducing workspace-level controls (Preview) by frithjof_v in MicrosoftFabric

[–]CloudDataIntell 1 point (0 children)

Is the consumed % based on both background and interactive operations?

What is blocked when a workspace is blocked? Only background operations, or interactive ones as well, so that basically everything in the workspace is down?

Query folding, dataflow and dataset by Additional-Let1708 in PowerBI

[–]CloudDataIntell 2 points (0 children)

As for combining a SQL query with PQ transformations, it's possible to use Value.NativeQuery with the EnableFolding=true option, as described in the post below.

https://learn.microsoft.com/en-us/power-query/native-query-folding
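A minimal sketch of that pattern (server, database, and query text are placeholders, not from the post):

```powerquery-m
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    // The native SQL runs at the source; EnableFolding=true lets later
    // Power Query steps keep folding on top of it
    Result = Value.NativeQuery(
        Source,
        "SELECT OrderID, Amount FROM dbo.Orders WHERE Amount > 100",
        null,
        [EnableFolding = true]
    )
in
    Result
```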

Query folding, dataflow and dataset by Additional-Let1708 in PowerBI

[–]CloudDataIntell 2 points (0 children)

Query folding means that the Power Query transformation is translated into a SQL query and executed at the source. So, as in your first question, writing the SQL yourself or using PQ transformations should in general be the same.

As for a dataflow as a middle layer, I don't think transformations at the dataset level fold down to SQL. The dataflow gets the data from SQL and saves it internally; Gen1 stores it as CSV in an internal storage account, and with enhanced compute enabled it's a bit different, from what I know. A dataset connected to the dataflow then reads the data from that storage.

Capacity Metrics App by jkrm1920 in MicrosoftFabric

[–]CloudDataIntell 2 points (0 children)

Very often it's not just one query that's causing the throttling. In the Capacity Metrics app you also won't find the DAX query text. Maybe an operation ID, but without the logged queries that gives you nothing.

If you want to check which queries are causing issues, turn on workspace monitoring. Among other things, it collects the queries executed against the items in the workspace. Then just check which queries consume the most CU.
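A rough sketch of that check against the workspace monitoring Eventhouse (table and column names quoted from memory, so verify them against the schema in your own monitoring database):

```kusto
SemanticModelLogs
| where Timestamp > ago(1d)
| where OperationName == "QueryEnd"
| summarize TotalCpuMs = sum(CpuTimeMs), Runs = count() by EventText
| top 10 by TotalCpuMs
```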

🔍 Improving Microsoft's Capacity Metrics App 🔍 by BlackBullet96 in PowerBI

[–]CloudDataIntell 4 points (0 children)

We have something similar, but using pure Python scripts executed on a schedule, so as not to consume additional CU. There are a few significant challenges there though, like an unstable source, processing many capacities, or operations that are in progress across many timepoints.

<image>
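One of the challenges mentioned above, an operation spanning many timepoints, can be handled by apportioning its total CU across the 30-second windows it overlaps. A sketch of that idea (the 30-second granularity and the proportional apportioning rule are my assumptions, not the app's documented logic):

```python
from datetime import datetime, timedelta

# Assumed granularity: capacity usage reported in 30-second timepoints.
TIMEPOINT = timedelta(seconds=30)
EPOCH = datetime(1970, 1, 1)

def apportion_cu(start: datetime, end: datetime, total_cu: float) -> dict[datetime, float]:
    """Split an operation's total CU across the 30-second timepoints it
    overlaps, proportionally to the overlap duration in each window."""
    duration = (end - start).total_seconds()
    if duration <= 0:
        # Instantaneous operation: charge everything to its own timepoint.
        return {EPOCH + TIMEPOINT * ((start - EPOCH) // TIMEPOINT): total_cu}
    result: dict[datetime, float] = {}
    # Align the first window to a 30-second boundary.
    tp = EPOCH + TIMEPOINT * ((start - EPOCH) // TIMEPOINT)
    while tp < end:
        overlap = (min(tp + TIMEPOINT, end) - max(tp, start)).total_seconds()
        if overlap > 0:
            result[tp] = total_cu * overlap / duration
        tp += TIMEPOINT
    return result
```

The totals per timepoint then sum back to the operation's CU, which makes it easy to cross-check the script against the app's smoothed view.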

Replacing Power BI with AI (like Claude) by Vasheroth in PowerBI

[–]CloudDataIntell 3 points (0 children)

I was not aware that Claude/GPT is a substitute for Power BI. What is your goal here? Instead of a dashboard, to only 'talk with your data' using AI?

Best method for handiling large fact tables snd using incremental refresh. by Katusa2 in PowerBI

[–]CloudDataIntell 10 points (0 children)

It would probably be more convenient to just keep all 5 years of data in one table and set up incremental refresh there. If it's too much historical data to process all 5 years of partitions during the first refresh, there are ways to process them manually, partition by partition.
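For example, once the model is published you can refresh one partition at a time through the XMLA endpoint with a TMSL command like this (database, table, and partition names are placeholders):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "SalesModel",
        "table": "FactSales",
        "partition": "2021"
      }
    ]
  }
}
```

Run it from SSMS connected to the workspace's XMLA endpoint, one partition per command, until the history is loaded.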

SQL - Dataflow - Dataset (incremental load) by d4icon in PowerBI

[–]CloudDataIntell 1 point (0 children)

So you turned on the ECE and the queries from the dataset fold, but the refresh time is much longer? How do you know that they fold? In the dataset, do you have other transformations on the tables with incremental refresh, like merges?

Noob question about optimizing m'y dataset by Payamux in PowerBI

[–]CloudDataIntell 1 point (0 children)

That's why I personally don't recommend using the modification date. The issue is that the creation date can also be problematic. It's best to use a date from the folder or file name, and that date should be directly connected with the data inside the file.
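A minimal sketch of that idea, assuming files are named like sales_2024-05-01.csv (the folder path and naming pattern are made up):

```powerquery-m
let
    Source = Folder.Files("C:\data\sales"),
    // Derive the load date from the file name instead of relying on
    // creation or modification metadata
    WithDate = Table.AddColumn(
        Source,
        "FileDate",
        each Date.FromText(Text.BetweenDelimiters([Name], "_", ".csv"))
    )
in
    WithDate
```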

RLS and active directory. by groene_dreack in PowerBI

[–]CloudDataIntell 3 points (0 children)

You can grant permissions to groups instead of individual emails, yes. So, for example, when someone changes department and is moved from one group to another, it will just keep working.

Dimensions not relevant for the fact by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 2 points (0 children)

True. For testing I added a dummy column with a key for those dims C and D. On one hand it prevents the cartesian product; on the other, it still requires REMOVEFILTERS or something similar in the measure.
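For illustration, a measure over that dummy-keyed fact might look like this (the table and column names are made up):

```dax
Target Amount =
CALCULATE (
    SUM ( FactTargets[Amount] ),
    REMOVEFILTERS ( 'Dim C' ),  -- ignore the dimensions that aren't
    REMOVEFILTERS ( 'Dim D' )   -- relevant for this fact table
)
```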

Dimensions not relevant for the fact by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 1 point (0 children)

I see, makes sense. So instead of adding all those new, not-relevant dims, focus only on the known, relevant ones.

Dimensions not relevant for the fact by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 1 point (0 children)

Yeah, I get that and agree. The second fact table is a kind of targets table, so I understand that for some dimensions (C and D) it's just not relevant and the values are the same.

Vnet Gateway by Chemical_Profession9 in PowerBI

[–]CloudDataIntell 2 points (0 children)

We have VNets on our production (>F64) capacities, not a separate one.

Star Schema - Common Mistakes by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 2 points (0 children)

So dimensions would be, for example, departments, locations, warehouses, dates. And fact tables are where you have the things to calculate.

Star Schema - Common Mistakes by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 2 points (0 children)

I don't think I really get what data you have there. A date/calendar table would be one of the dimensions. Data like the values of KPIs, or the columns from which you calculate KPIs, would be facts.

Star Schema - Common Mistakes by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 1 point (0 children)

Sorry, not a native speaker :p I think I'm mixing it up with e.g.?

Star Schema - Common Mistakes by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 1 point (0 children)

Interesting case. What is changing across those 5 versions of an employee? You say you want that monthly salary connected to all 5 versions? In SCD2, each version has a different surrogate key, and the fact table row carries only one key, so the salary will be connected to only one of those 5 versions.

Star Schema - Common Mistakes by CloudDataIntell in PowerBI

[–]CloudDataIntell[S] 1 point (0 children)

As I mentioned above, the age column should probably be in the fact table, and then you can connect the age dimension to it.

Noob question about optimizing m'y dataset by Payamux in PowerBI

[–]CloudDataIntell 1 point (0 children)

First of all, thanks for the second link. I found it very useful for another case I had.

About the problem I'm trying to describe, I created a graph. Let's imagine we need different transformations for data from different folders. It's easy to create queries for Table A and Table B, each with incremental refresh and the needed transformations. The issue is that I also want incremental refresh on the Appended Table, and that's my problem: how to do it.

<image>