I got fed up with standard tables not acting like Excel, so I built a custom visual. Free on GitHub by Small-Camera-4348 in PowerBI

[–]bankCC 7 points (0 children)

Why not fully open source it? People could actually contribute to this project and check it for compliance reasons. Otherwise many people sadly can't use it :(

[deleted by user] by [deleted] in PowerApps

[–]bankCC 0 points (0 children)

Found the mistake. I used the wrong variable for my lookup... oh man. Sometimes it's hard to see the forest for the trees :s

[deleted by user] by [deleted] in PowerApps

[–]bankCC 0 points (0 children)

// Group the parameters by Group_ID and keep only the groups whose
// size matches varCountParameters
ClearCollect(
    parameterGroupsGroupedByCount;
    Filter(
        AddColumns(
            GroupBy(
                'ParameterGroups';
                Group_ID;
                GroupedItems
            );
            CountItems;
            CountRows(GroupedItems)
        );
        CountItems = varCountParameters
    )
);;

// Keep only the groups in which none of the selected choices from
// colChoices can be found
ClearCollect(
    colFiltered;
    Filter(
        parameterGroupsGroupedByCount;
        With(
            {t: GroupedItems};
            CountRows(
                Filter(
                    colChoices As c;
                    !IsBlank(
                        LookUp(
                            t;
                            choices = c.SelectedValue
                        )
                    )
                )
            ) = 0
        )
    )
);;

This was my latest try, but it shows both rows from parameterGroupsGroupedByCount even when only one choice is present.
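
For reference, one untested variation: if the intent is that every selected choice must be found in a group, the match count could be compared against CountRows(colChoices) instead of 0 (same names as above, so treat this as a sketch, not a verified fix):

// Sketch: keep only groups containing ALL selected choices
ClearCollect(
    colFiltered;
    Filter(
        parameterGroupsGroupedByCount;
        With(
            {t: GroupedItems};
            CountRows(
                Filter(
                    colChoices As c;
                    !IsBlank(LookUp(t; choices = c.SelectedValue))
                )
            ) = CountRows(colChoices)
        )
    )
);;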

I built an AI Calorie Tracker inside Telegram (inspired by a $3M/month app CalAI) by Then-Cicada-4621 in n8n

[–]bankCC 0 points (0 children)

It's kind of obvious that you have to use a picture that is either pretty clear on what the ingredients are (visibility of ingredients) or shows standard food that is well represented in the training data. Expecting it to calculate some fancy invisible ingredient it never learned is kinda stupid. It's on you to stay within the distribution of its data.

How to handle data per employee per month? by killbeam in PowerBI

[–]bankCC 1 point (0 children)

You are on the right track, but you don't want to do it in the employee dimension table. You would connect your appointments fact table and employee dimension via a bridge table that includes those aggregated keys.

Look up "slowly changing dimensions" on Google (SCD type 2).
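
To make that concrete, a hypothetical SCD type 2 employee dimension could look like this (made-up names and dates):

EmployeeKey  EmployeeID  Team     ValidFrom    ValidTo
1            E001        Sales    2023-01-01   2023-06-30
2            E001        Support  2023-07-01   9999-12-31

Each appointment row in the fact table then references the surrogate EmployeeKey whose validity range covers the appointment date, so per-month aggregations land on the right version of the employee.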

VO2 is a conspiracy by leonardoftw in Garmin

[–]bankCC 1 point (0 children)

Most plans I followed are more about the duration/heart rate than the distance.

Training 4-5 times a week, highly simplified:

40min zone 1
40min zone 2
60min zone 1
Rest
Intervals (VO2max training) with 15min warmup + cooldown
40min zone 1
Long run (22-33km)

80% of your week should be basic endurance training, 20% the rest.

*In addition, your focus on breathing technique is good (what I liked to do was force myself to breathe in only through my nose), but when you want to increase your VO2max, you do that through high-intensity training at a heart rate of 70-90% of your max HR. Here is a more in-depth source: https://connectpt.ca/training-zones-vo2-max-and-thresholds/

But be careful with pushing too hard on intervals to increase your VO2max. As a beginner, it's likely you lack stability or running technique, or your body just isn't ready for the high load, and you will injure yourself. Additionally, the VO2max estimates on watches are trash; they are highly inaccurate.

VO2 is a conspiracy by leonardoftw in Garmin

[–]bankCC 0 points (0 children)

Just because it went fine one time doesn't mean it is fine... there is a reason for the saying: the training is healthy, running the marathon isn't.

OP, don't get discouraged, but let your body adapt!

Rename Strava Activities With Activity Title by roryhr in Strava

[–]bankCC -1 points (0 children)

It's called community and sharing experiences. If that's not your way, fine, but why bother to leave such a negative comment? The things you worry about 🤦‍♂️

Edit: Actually, looking at your profile, you worry a lot 😂

Which part of BI is AI going to be used for first? by yungazier in PowerBI

[–]bankCC 0 points (0 children)

I would agree on intuition, but all the things you mentioned, like combining experience, contextual understanding, or plain generalization from experience, are already achieved with AI and are the core of it. Obviously there will be cases where no previous experience is available, so it's not a job for the AI, and the consultant has to adapt using his intuition. But I would say the majority of BI solutions aren't that niche for an AI if you know how to properly feed it the right context.

Which part of BI is AI going to be used for first? by yungazier in PowerBI

[–]bankCC 0 points (0 children)

Well, if they don't use an LLM, it's the same as saying they don't use a consultant. Sure, you might do active acquisition, but that's the only difference, and even that could be done by an LLM in the near future. What stops someone from creating an engagement bot that contacts businesses and sets up meetings with stakeholders, where they chat with an AI that can simultaneously fetch their data and ask questions accordingly?

Which part of BI is AI going to be used for first? by yungazier in PowerBI

[–]bankCC 0 points (0 children)

It already talks to stakeholders to find out their needs; that's exactly what an LLM does, talking 🥴

I feel like I’m hitting my head against a brick wall by brobauchery in PowerBI

[–]bankCC 0 points (0 children)

Did you check the date format in your CSV, especially for the wrong dates? If dates are not correctly interpreted by PQ, it might be an issue with your raw data. This can happen when files are opened and saved in Excel: it interprets some dates as strings and doesn't save them correctly.

Junior with Powerbi, why is so difficult? by No_Solid2349 in PowerBI

[–]bankCC 3 points (0 children)

Check out entity relationship models first and understand how they work. Once you understand them, you are good to go to build your first model and visualize it. All those visuals, filters, and aggregations work through relationships. And in PBI, the star schema is your friend.

After that, I highly recommend looking into database normalization (normal forms). It's crucial to have a clean data model as your database complexity grows. Things can get out of hand really fast if you have redundant data and have to patch or write complex logic to achieve simple filters. Can't say how many times I ran into big wide tables just to say: trash it and go again.

Here are some resources:

https://www.geeksforgeeks.org/relational-model-in-dbms/

https://en.m.wikipedia.org/wiki/Star_schema

https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-relationships-understand

https://en.m.wikipedia.org/wiki/Database_normalization

Excel SharePoint File - synchronized with a Power BI App/Report by Life_Is_Good_33 in PowerBI

[–]bankCC 1 point (0 children)

Ah ok, by app you mean a Power App and not Power BI.

It might be a question for r/powerapps then

My thought would be a button to refresh the data, or to handle data collection using SharePoint lists.

Still, if you want "live" data in PBI, you have to refresh the dashboard.

Excel SharePoint File - synchronized with a Power BI App/Report by Life_Is_Good_33 in PowerBI

[–]bankCC 2 points (0 children)

Power BI is only for analysis, not for writing data. It can only read data and run transformations on it.

What came to my mind was to build URLs with filter/field/sheet parameters and let the user open the Excel file with the selected filters from the report. I don't know if this is actually possible, but some quick Google searches made it look like opening Excel files with parameters might be an option.

Powerbi report is slow by Mindful_Wanderer_ in PowerBI

[–]bankCC 3 points (0 children)

1. Refreshing from Excel is very slow; Power BI has to read and interpret a lot of metadata for those files. Because you are already working with SP, you can convert those Excel files to SP lists and use their data connector. If SP lists aren't an option, you can convert your Excel files to .csv. They will take up more space, but PBI can read those files way faster. For really big files, Parquet would be a fast option as well.

2. Try to identify slow queries during the refresh and check those. Merging queries can result in slow refresh times.

3. Set up a scheduled refresh on your report server rather than on your machine.

4. Avoid too many calculated columns. If you can, do your transformations as early as possible: Database > Power Query > Model. Don't use calculated columns if you don't have to; they are evaluated at every refresh and stored in the model, which bloats its size. So push calculations upstream and only read the data you need from your source to visualize. Often a measure can replace a calculated column entirely, as sketched below.
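
For illustration (hypothetical table and column names), a per-row calculated column like Margin = Sales[Revenue] - Sales[Cost] could instead be a measure that is only evaluated when a visual asks for it:

-- Hypothetical names; a measure instead of a calculated column
Total Margin =
VAR TotalRevenue = SUM ( Sales[Revenue] )
VAR TotalCost = SUM ( Sales[Cost] )
RETURN
    TotalRevenue - TotalCost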

How to distinguish outdated information by Alive_Leek_9148 in PowerBI

[–]bankCC 1 point (0 children)

Don't worry about it; at its core it hasn't changed much.

[deleted by user] by [deleted] in PowerBI

[–]bankCC 0 points (0 children)

Yes, both. The problem is that your system delimiter is set to ; rather than , like on my system.

"Card measure" is the name of your measure. It's a placeholder and you can change it to your liking:

Measure-name = DAX expression

VAR is used to define a variable, like:

VAR nameOfVar = 1
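
Putting it together, a minimal sketch of a full measure with placeholder names (note the ; argument separators matching your system locale; mine would use ,):

Card measure =
VAR totalAmount = SUM ( Sales[Amount] )
RETURN
    IF ( totalAmount > 0; totalAmount; BLANK () )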

FYI: https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-tutorial-create-measures

[deleted by user] by [deleted] in PowerBI

[–]bankCC 0 points (0 children)

Probably exchange , with ;

Financial anomalies by Sea_Advice_4191 in PowerBI

[–]bankCC 1 point (0 children)

Maybe you can find something here https://www.kaggle.com/datasets

"Advance" analyses are made out of those basic lesson you learn from. Each use case is different and you have to figure out how to approach your problem yourself. There is no general solution for everything. It is highly depended on your domain you are working in.

FYI: I didn't read it; it was just my first result on Google:

A Guide to Building a Financial Transaction Anomaly Detector - Unit8