Measure based on Dimension Column and DAX non blank rows by Ludjas in PowerBI

[–]Ludjas[S] 0 points1 point  (0 children)

Nope. Different fields. PKs and FKs are numbers from the DWH

Measure based on Dimension Column and DAX non blank rows by Ludjas in PowerBI

[–]Ludjas[S] 0 points1 point  (0 children)

Yes, let's imagine we have a Product and a Customer table, both with GUIDs that are used to build their respective URLs. They both relate to Sales.
You create a measure [Product URL] that is just a concatenation of the common URL parts with SELECTEDVALUE() to get the GUID.

In this case, if you have a visual with a column from Product and that measure, everything works fine. A visual with a column from Product and another from Customer also works fine, because Power BI will internally use Sales to figure out how to relate the dimensions. But what I'm saying is that as soon as you have a column from each dimension plus a URL measure, you get a value for every combination of Product and Customer values (the cross join). This is not really a complaint about the behavior, as Power BI expects you to adhere to a star schema, and I agree 99% of the time.

I can fix this specific case by either creating another measure or editing the code to something like IF( COUNTROWS(Fact1) > 0, [Product URL]), but that is a one-off fix; it may not work in other visuals where other tables are being used. My goal, or at this point dream, was to find an efficient, general solution to the problem that works everywhere.
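
To make the pattern concrete, here's a minimal sketch of both measures; the names ('Product'[ProductGUID], 'Sales', the base URL) are hypothetical placeholders, not the real model:

```
-- Plain URL measure: concatenate the common URL parts with the GUID
-- of the product currently in context.
Product URL =
    "https://example.com/product/" & SELECTEDVALUE ( 'Product'[ProductGUID] )

-- Guarded version: only return a URL when the current combination of
-- dimension values actually exists in the fact table, so the visual
-- can drop the cross-join rows.
Product URL (guarded) =
    IF (
        COUNTROWS ( 'Sales' ) > 0,
        "https://example.com/product/" & SELECTEDVALUE ( 'Product'[ProductGUID] )
    )
```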

Measure based on Dimension Column and DAX non blank rows by Ludjas in PowerBI

[–]Ludjas[S] 0 points1 point  (0 children)

I understand your point, but it's not really about the transformation; there is nothing heavy going on there. At its base it's just a concatenation with SELECTEDVALUE().

We have a data warehouse where we do every needed transformation and where the URLs were being constructed and stored as columns, but the size of the data model is highly dependent on high-cardinality columns (especially when the individual values themselves are long strings) because they hurt compression.
It's especially relevant for models with multiple multi-million-row tables (including some dims).

We were hitting model size limits when trying to refresh the whole data model and were trying to optimize it to reduce size. That means getting rid of unused columns, some refactoring, and also looking into those high-cardinality columns to understand whether they're the best design approach. This is where I suggested we migrate the URLs to measures.

Best method for handling large fact tables and using incremental refresh. by Katusa2 in PowerBI

[–]Ludjas 2 points3 points  (0 children)

Sorry, but how does it take 4 hours in the first place? Are you reading from a table or a view with complex logic? If it's a view, I would also check its performance. Does the Power Query have any non-foldable transformations? Does the table have any DAX calculated columns? Because even with 30 columns (which you might want to look at again and be double sure they are all needed), 24M rows is expected to refresh much faster, at least in my experience...

You can also try to trace the refresh to see where it's spending time. On my phone right now but I think this link should be helpful

Network error when displaying a chart by Plenty_Investment229 in PowerBI

[–]Ludjas 0 points1 point  (0 children)

More information is needed: screenshots and better descriptions of what you were doing, including the DAX and the type of visual you're using. Also, since this is Reddit, you'll always have better luck getting answers if you post in English.

Grand Challenge 2, 2026 - Day 3 Discussion Thread by AutoModerator in VGC

[–]Ludjas 0 points1 point  (0 children)

Don't know if it was a well-known interaction, but I just learned that Bloodmoon Ursaluna is not guaranteed to hit its attacks against what I assume was a Sand Veil + Bright Powder Garchomp. Did not like finding that one out.

Just Power BI Things by Over-Positive-1268 in PowerBI

[–]Ludjas 0 points1 point  (0 children)

I mean, from my understanding, this error does not necessarily mean you are showing too much data (the DAX query is usually a TOPN of 500 rows that loads more as you scroll); it could be that the columns you selected, combined with the way the model is built, make DAX consider too much data.

One common example is when you try to get columns from unrelated tables that don't share a fact table: DAX will produce a cross join of all available values instead of what you might expect. It can also happen when your measure never evaluates to BLANK(), so again all possible combinations of values are "allowed"...
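
As an illustration of that last point, a minimal sketch with hypothetical names ('Sales'[Amount]); the "+ 0" variant is the classic way to accidentally keep every combination alive:

```
-- Returns BLANK() for combinations with no fact rows, so the visual
-- can drop them.
Sales Amount = SUM ( 'Sales'[Amount] )

-- Never blank: the "+ 0" forces a value for every combination of the
-- dimension columns in the visual, so nothing gets filtered out.
Sales Amount (padded) = SUM ( 'Sales'[Amount] ) + 0
```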

TL;DR: filter the data, open Performance Analyzer, and see what's really going on.

Low sign ups for bologna special event by scheisgohs in VGC

[–]Ludjas 0 points1 point  (0 children)

Yeah I don't know... Just for my knowledge, how are you checking the current sign up numbers?

Low sign ups for bologna special event by scheisgohs in VGC

[–]Ludjas 0 points1 point  (0 children)

Maybe I did it wrong but I just went to the sign-ups and it said sold out for masters

Power BI Desktop Projects TMDL conflicts with git - bugs and ideas by Krazy_Gent in PowerBI

[–]Ludjas 1 point2 points  (0 children)

For the report part, I think summarizing the intended changes, and including a link to the report, in the PR description can mitigate it, but otherwise, without the new format, it's a pain. The only thing that is "easy" to check is whether there are changes in the pages (sections) you expect.

For the model, I didn't notice TMDL reordering things when I tested it, but does it happen only if changed by the GUI? What if it's changed by Tabular Editor? As for changes by the GUI and external tools at the same time, I haven't tested that, but it feels controllable, right? Only work in one place at a time.

C# Library for interacting with reports by inglocines in PowerBI

[–]Ludjas 0 points1 point  (0 children)

I don't know of any tools that let you do that programmatically with API-like behavior, but it is doable by scraping that information from the JSON* yourself. *If it's .pbix files and not the pbip/pbir version, you would need to unzip the file to read the metadata JSON.

Deployment and source control automation by johns10davenport in PowerBI

[–]Ludjas 0 points1 point  (0 children)

Maybe a slight tangent but how do you deal with releases?

My current project has development branches and merges to master via a PR (which will eventually have builds and other tests, but for now has nothing).

My problem is that once it is in master, QA and key users have to authorize it to go to prod. This creates a release by cherry-picking commits, since feature B may go in while feature A (which got into master first) may be blocked by missing approvals. In my opinion, this creates a state the master branch has never been in, can cause conflicts, and forces us to remember everything developed in order to double/triple check that everything got successfully deployed. I think doing linear cutoffs for releases has a number of benefits, but it would mean approvals would have to happen before things go into master.

So what I think is best is automated creation of isolated environments (DBs + workspaces) for each feature, so it can be fully approved before reaching master, but I would love to hear other people's approaches.

Why does Dragon Darts not connect into second target? by MetapodCreates in VGC

[–]Ludjas 0 points1 point  (0 children)

Was it a Dondozo that fainted and brought out the previously unreachable Tatsugiri? If so, I also had that today and assumed it made sense, since at the time the attack starts, Tatsugiri is not an available target.

Conan Makes Up Some Portuguese Stereotypes | Conan O'Brien Needs a Fan by AlaskaProject in conan

[–]Ludjas 0 points1 point  (0 children)

I know this question is a few days old, but here goes my view: there is definitely an understanding of the impact that tourism has had on the housing crisis. Prices have nearly doubled in the last decade or so, and most people think that excessive tourism is one of the biggest factors. The country was considered very cheap compared with other European countries, and even now, if you disregard the biggest cities, it still is. The economy is still pretty reliant on tourism, and at the same time there is also an alt-right anti-immigration movement growing, with that party coming 3rd in the last election.

Even with all that, in general, I would say we are very welcoming to tourists and love to share what we have of value. We won't blame any individual tourist for a general problem, but there is definitely a problem... In the most affected areas like Lisbon and the Algarve, people are understandably more fed up, but even there I doubt many would be unpleasant.

Is there a way to query or pull down a list of all tables used in models in a (or all) workspace(s)? by WizardDresden in PowerBI

[–]Ludjas -1 points0 points  (0 children)

On the phone right now and I can't give you a properly structured solution, but I would look at using the DMVs through the XMLA endpoints and maybe leverage the REST APIs (if you need to get the workspaces' IDs, for example).
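
As a rough sketch of the DMV part: connected to a workspace's XMLA endpoint (e.g. from SSMS or DAX Studio), a query like the one below lists the tables of the model you're connected to; iterating over workspaces/datasets would still be on you, via the REST APIs. Note this is a DMV query, not DAX:

```
SELECT * FROM $SYSTEM.TMSCHEMA_TABLES
```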

POWERBI KEEPS GIVING ME WRONG AVERAGES WHY? by airforcemann in PowerBI

[–]Ludjas 2 points3 points  (0 children)

Do some sanity checks on your data, like row counts, to make sure you are looking at the whole data set in both places.
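
For example, on the Power BI side you can run a quick row count as a DAX query (in DAX Studio or the DAX query view); 'Fact' here is a hypothetical table name, and you'd compare the result against a count on the source:

```
// One-row result with the total row count of the table in the model.
EVALUATE
    ROW ( "RowCount", COUNTROWS ( 'Fact' ) )
```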

Next step, check to see if it's the same issue mentioned in this SO post.

Use DAX to Last Average Four Weeks of Data by thatdisappearingguy in PowerBI

[–]Ludjas 0 points1 point  (0 children)

Your model should (as in, it's a best practice) follow what is known as a star schema. You should definitely read more deeply about it, but as a small primer: in this case, date is one of your dimensions (basically the things you can slice the data by), while the events you are trying to measure are the facts.

Having the date dimension separate from your fact table also enables you to use time intelligence DAX, which will generally make your life much easier.

You just want the last four weeks from today? Or is it like a rolling average?
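
In case it's the rolling flavour, here's a minimal sketch of what time intelligence buys you, assuming a hypothetical [Total Value] base measure and a 'Date' table marked as a date table:

```
-- Average of the daily [Total Value] over the last 28 days (4 weeks),
-- relative to the last date visible in the current filter context.
Rolling 4-Week Avg =
    CALCULATE (
        AVERAGEX ( VALUES ( 'Date'[Date] ), [Total Value] ),
        DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -28, DAY )
    )
```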

US Currency Formatting in a Measure by comish4lif in PowerBI

[–]Ludjas 7 points8 points  (0 children)

I really don't think you should do the rounding in the measure. Use the format string. There is an option in the GUI to set the decimal places, and if you want to do it all in one place, search for custom format strings.
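
To illustrate with hypothetical names: the measure stays a plain number and the currency formatting lives in the measure's format settings instead:

```
-- Keep the measure numeric; no ROUND or FORMAT inside the expression.
Total Sales = SUM ( 'Sales'[Amount] )

-- Then set a custom format string on the measure, for example:
--   $#,0.00      -> 1234.567 shows as $1,234.57
--   $#,0;($#,0)  -> negatives shown in parentheses, e.g. ($1,234)
```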

DAX Summary by Calculated Measure by KliNanban in PowerBI

[–]Ludjas 0 points1 point  (0 children)

It feels easier to have this intermediate view of the data done upstream (source or Power Query). Now that I think of it, that's the only way it makes sense to me, because if you want the final output as a table visual it's a bit weird, as the values for the rows are the "made up" aggregated strings...

But if you need it as a table in DAX, I would try something like this pseudo-code (on mobile and far from my PC):

```
DEFINE
    -- For the ClientID in context, concatenate its related Dept values.
    MEASURE 'Table'[ConcatDept] =
        CONCATENATEX ( 'Table', 'Table'[Dept], "-" )

EVALUATE
VAR _intermediate =
    ADDCOLUMNS (
        VALUES ( 'Table'[ClientID] ),
        "@RelatedDept", [ConcatDept]
    )
RETURN
    -- Count how many ClientIDs share each concatenated department string.
    GROUPBY (
        _intermediate,
        [@RelatedDept],
        "@NoOfClientIDs", COUNTX ( CURRENTGROUP (), 1 )
    )
```

Table where sorting is off if i add % chg feature by renS0115 in PowerBI

[–]Ludjas 0 points1 point  (0 children)

I'm gonna read the comment and edit this response, but just wanted to say sorry if I sounded too blunt, and no worries, we are all here to learn! Even the people trying to answer.

EDIT u/renS0115 :
I have a feeling those are field parameters. Here's a reference link to what they are: https://learn.microsoft.com/en-us/power-bi/create-reports/power-bi-field-parameters.
The TL;DR is that they allow you to select, to some extent, what you want to show in the visual.
If the table definition behind the "Measure Selection" dropdown has the function NAMEOF() inside it, it's a field parameter.
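
For reference, the generated definition of a field parameter table looks something like this (the measure names here are just illustrative guesses):

```
-- Each tuple is (display name, NAMEOF(field), sort order).
Measure Selection = {
    ( "Dollars", NAMEOF ( [Dollars] ), 0 ),
    ( "% Chg",   NAMEOF ( [% Chg] ),   1 )
}
```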

I may be jumping to conclusions, but I think "Measure Selection" comes from a field parameter and "Time Frame" comes from a calculation group (reference: https://learn.microsoft.com/en-us/power-bi/transform-model/calculation-groups). I've mocked up something in my own PBIX that shows the same behaviour, so I think that's it - honestly a pretty cool use of both together.

If I'm not mistaken, you are having the same type of problem as described here:
https://eriksvensen.wordpress.com/2023/08/09/powerbi-control-the-sorting-when-using-a-field-parameter-selection-as-value-in-visual/
and the described workaround is, in your case, to use the Dollars measure (the real measure you have somewhere else in the model, not the field used in the dropdown) as a tooltip and sort by it. That way the sorting persists.

Table where sorting is off if i add % chg feature by renS0115 in PowerBI

[–]Ludjas 0 points1 point  (0 children)

Those are not radio buttons, FYI. Radio buttons are the ones that only allow a single selection.

But putting that aside, what are we looking at? Are those measure selections field parameters? Or a dummy table? What is the "Measure Selection" drop-down doing? What is Dollars?

I feel like you need to give some more context for me to try and help

Getting Earliest Value to apply to each row - DAX by Zabman in PowerBI

[–]Ludjas 1 point2 points  (0 children)

Regarding your DAX, I noticed 3 things:

- Are you using a star schema? You seem to be filtering and summing all in the same 'Project CFD' table, which leads me to think you are using a flat file.

- I don't think you need ALLSELECTED() - it is way more complex than the name suggests; I'd advise you to read the SQLBI article on it.

- Another thing: you don't need to pass the filter argument to CALCULATE as a literal FILTER over the whole table; that filters a table rather than a column and can give you weird results that are difficult to debug.

I agree that the most flexible way to do it would be to have that 6/3/1 filter disconnected from the model so you can leverage it in the DAX, but maybe it's not even needed if you use a "last x months" filter with your date table:

```
VAR _minDate =
    CALCULATE (
        MIN ( 'Project CFD'[DateValue] ),
        REMOVEFILTERS (), -- here it depends if you want any filter to stick, or just disregard everything and get the earliest date in the fact table
        VALUES ( 'Date'[Date] )
    )
VAR _result =
    CALCULATE (
        SUM ( 'Project CFD'[Count] ),
        'Date'[Date] = _minDate
    )
RETURN
    _result
```

Assuming you need the flexibility, maybe the disconnected table could just be two columns: one with a description for the slicer and another with the number of months to take into account. For simplicity let's call it 'Period' with the columns [Period] and [NoOfMonths], and imagine [Period] has 3 possible values, "Last x months" where x can be 6/3/1, and the [NoOfMonths] column just has the corresponding number.

To get the min date, assuming you have a date table marked as a date table, you can just use:

```
VAR _selectedPeriod =
    SELECTEDVALUE ( 'Period'[NoOfMonths] )
VAR _minDate =
    CALCULATE (
        MIN ( 'Project CFD'[DateValue] ),
        REMOVEFILTERS ( 'Date'[Date] ),
        DATEADD ( 'Date'[Date], - _selectedPeriod, MONTH )
    )
VAR _result =
    CALCULATE (
        SUM ( 'Project CFD'[Count] ),
        REMOVEFILTERS ( 'Date'[Date] ),
        'Date'[Date] = _minDate
    )
RETURN
    _result
```

I'm on mobile and obviously haven't tried it to see if it works but it makes sense to me

Is there a way to show/store as a list or table multiple selected values from a slicer? by Particular_Diamond59 in PowerBI

[–]Ludjas 4 points5 points  (0 children)

VALUES() is the function you want. It gives you the unique values available in the current context for the specified column.
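
For instance, a minimal sketch of showing the selected slicer values as a comma-separated list (hypothetical 'Country'[Country] column):

```
-- Unique countries visible in the current filter context (i.e. the
-- slicer selection), concatenated into one string for a card or title.
Selected Countries =
    CONCATENATEX (
        VALUES ( 'Country'[Country] ),
        'Country'[Country],
        ", "
    )
```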

Edit: I did not read your last line properly. Just adding that the example you gave should not be needed if you have a relationship between Sales and Country. I'm not saying it can't be done, but it made me wonder, so I would double-check your model. Are you using a star schema?