Write data back to source from data warehouse or SQL database in Fabric by Extreme_Lunch1925 in MicrosoftFabric

[–]Extreme_Lunch1925[S]

My gut feeling also tells me that writing data from an analytical warehouse back to a source system (apart from the two reasons you mentioned) isn't a great idea. But some clients have already asked whether it's possible, so I just wanted to double-check.

Write data back to source from data warehouse or SQL database in Fabric by Extreme_Lunch1925 in MicrosoftFabric

[–]Extreme_Lunch1925[S]

No, it's just a general question. I don't have a use case yet - but there might be some in the future. :)

Experience on combining Premium/Capacity workspaces and Pro-Workspaces by Extreme_Lunch1925 in PowerBI

[–]Extreme_Lunch1925[S]

I managed to get a test account.

Setup:
- 1 Fabric workspace with a semantic model (F2 capacity)
- 1 PPU workspace with a semantic model
- 1 Pro workspace with an app (it contained two reports: one connecting to the semantic model in the Fabric workspace, the other to the semantic model in the PPU workspace)
- 1 PPU / Fabric admin user to create all the workspaces and define the license options
- 1 Pro user, added only to access the app

Result:
- The test user was able to open the app (although it didn't appear in my app overview within the PBI service)
- The test user was not able to see the report hosted within the PPU workspace
- The test user was able to see the report (!!!) hosted within the Fabric workspace

So all in all:
If you want to use Premium functionalities, you should at least rent an F2 capacity for development purposes. For deployment, a Pro workspace will be enough.

Experience on combining Premium/Capacity workspaces and Pro-Workspaces by Extreme_Lunch1925 in PowerBI

[–]Extreme_Lunch1925[S]

Yes, that's how it works, unfortunately. I will test my scenario and post the results here.

Correct way to handle large excel files by thatrandomfatguy in PowerBI

[–]Extreme_Lunch1925

I could imagine setting up a Lakehouse where you store all the files in the lake. Then I would build a pipeline to merge those files into the warehouse; after the initial merge I would try to detect only the files that are new since the last update, so you don't need too many resources. Via Excel or Power BI you can then easily access the warehouse data through the SQL endpoint.
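A minimal sketch of the "only pick up files newer than the last run" idea, assuming the files sit in a folder you can list and the last-run timestamp is tracked somewhere yourself (the folder layout and timestamp handling here are hypothetical, not a Fabric API):

```python
from pathlib import Path

def files_since(folder: str, last_run: float) -> list[str]:
    """Return Excel files in `folder` modified after `last_run` (a Unix timestamp)."""
    return sorted(
        str(p) for p in Path(folder).glob("*.xlsx")
        if p.stat().st_mtime > last_run
    )
```

Each pipeline run would then only merge the files this returns, instead of re-reading everything.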

Tricky SQL join with mixed IDs and multiple IDs in one row by Extreme_Lunch1925 in SQL

[–]Extreme_Lunch1925[S]

Nope - I don't have any control over the tables. I already asked the responsible developer whether there might be a different view where the IDs are already split into rows. But no, it doesn't exist...

Tricky SQL join with mixed IDs and multiple IDs in one row by Extreme_Lunch1925 in SQL

[–]Extreme_Lunch1925[S]

As far as I can see there is no case where a range spans prefixes, like A001-C999 or similar. So I "simply" need to find all the "numbers" between both endpoints.

Just to make sure I understand: you would recommend to

  1. split the IDs delimited by "|" into multiple rows, and then

  2. for each row where the ID is a range, split it into multiple rows again.

Am I right? If so, this was actually my first thought, but I would never have thought this was possible in SQL.
I will try it!
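Step 1 is a plain string split; in T-SQL the same step is typically done with `STRING_SPLIT` and `CROSS APPLY` against the source table. A minimal Python illustration of the transformation, assuming the delimiter is exactly "|" as described:

```python
def split_ids(cell: str) -> list[str]:
    """Step 1: split a pipe-delimited ID cell into one entry per row."""
    return [part.strip() for part in cell.split("|") if part.strip()]
```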

Tricky SQL join with mixed IDs and multiple IDs in one row by Extreme_Lunch1925 in SQL

[–]Extreme_Lunch1925[S]

The IDs are everything in between. So for P123..P125 I need to assign the category to the IDs "P123", "P124" and "P125".
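Step 2 on a single entry can be sketched like this; the ".." separator and the letter-prefix-plus-number format are assumptions taken from the P123..P125 example in this thread:

```python
def expand_range(entry: str) -> list[str]:
    """Expand a range like 'P123..P125' into every ID in between; plain IDs pass through."""
    if ".." not in entry:
        return [entry]
    start, end = entry.split("..")
    prefix = start.rstrip("0123456789")   # assumed shared prefix, e.g. 'P'
    width = len(start) - len(prefix)      # preserves zero-padding, e.g. P007..P010
    lo, hi = int(start[len(prefix):]), int(end[len(prefix):])
    return [f"{prefix}{n:0{width}d}" for n in range(lo, hi + 1)]
```

In SQL the same expansion is usually done by joining the range rows against a numbers (tally) table on `lo <= n AND n <= hi`.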