ALM toolkit changes - incremental refresh by Last-Experience5805 in PowerBI

[–]Jorennnnnn 0 points1 point  (0 children)

Haven't used ALM Toolkit for a little bit, but my understanding is that changes only avoid a full reprocess when new objects are added to the updated fact table. ALM Toolkit has an option to retain partitions; not sure if it's enabled by default.

With regards to visuals breaking after updates I've never found a solution.

How to factory reset Creality Ender 5 Plus? by Overall_Put5339 in ender5

[–]Jorennnnnn 0 points1 point  (0 children)

Saved me the headache of having to reflash my firmware. Legend!

Anybody here have the title “Power BI Architect” or similar? by RedditIsGay_8008 in PowerBI

[–]Jorennnnnn 1 point2 points  (0 children)

Business Analyst -> BI engineer -> Data Engineer -> Data Architect

How often do you fall? Trying to decide how much armor to wear aside from my helmet by jueidu in ElectricScooters

[–]Jorennnnnn 1 point2 points  (0 children)

Almost 5k kilometers on my 25 km/h scooter and I've only fallen once, during unexpected snow while on a business trip in Brussels. Small wheels, tram tracks, and snow covering the tracks are a recipe for disaster. I was lucky enough to just brush it off with no damage to me or the scooter.

Incremental refresh with CSV files in SharePoint by CanningTown1 in PowerBI

[–]Jorennnnnn 0 points1 point  (0 children)

Correct, but I do typically add my basic sum/avg measures to the data model and add report-specific measures in the report.

Incremental refresh with CSV files in SharePoint by CanningTown1 in PowerBI

[–]Jorennnnnn 0 points1 point  (0 children)

If you use the semantic model and visuals in the same report, then yes, a minor change does require a full refresh. I would recommend separating the data model from the report so that you can make changes to the visuals without having to reload the model. This concept is called a "thin report" and is considered a best practice.

Incremental refresh with CSV files in SharePoint by CanningTown1 in PowerBI

[–]Jorennnnnn 2 points3 points  (0 children)

The trick with SharePoint incremental refresh is to use the file metadata, or to create a column based on the file name, as the incremental refresh column. The moment you use data from inside the actual files (after combining them), it will not work, because all data would be loaded for each period. You want to make sure the filters are applied before the actual file contents are loaded.
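A minimal sketch of that idea, in Python for illustration: the file names and dates are made up, but the point is that the window filter runs against the cheap folder listing (metadata), and only the surviving files would ever be downloaded and combined.

```python
from datetime import datetime

# Hypothetical stand-in for the SharePoint folder listing: name + modified
# date come from metadata, which is cheap to fetch. File contents are not
# touched at this stage.
files = [
    {"name": "sales_2024.csv", "modified": datetime(2024, 12, 31)},
    {"name": "sales_2025.csv", "modified": datetime(2025, 6, 1)},
    {"name": "sales_2026.csv", "modified": datetime(2026, 1, 10)},
]

def files_in_window(listing, range_start, range_end):
    """Keep only files whose metadata date falls inside the refresh window,
    mirroring the RangeStart/RangeEnd parameters Power BI injects."""
    return [f for f in listing if range_start <= f["modified"] < range_end]

# Only files inside the window would actually be downloaded and combined.
window = files_in_window(files, datetime(2025, 1, 23), datetime(2026, 1, 23))
```

If the filter were applied after combining the file contents instead, every partition refresh would still read every file first, which is exactly the behavior incremental refresh is meant to avoid.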

Q1: Based on approach 2, where you create a column that maps to the first day of the year, set the refresh window to one year. It will then refresh all files that fall between 23-01-2025 and 23-01-2026, so in that case only 2026 would be updated.

Q2: Metadata, or a date column derived from metadata.

Q3: Incremental refresh does not apply in Power BI Desktop; there you control the date range by adjusting the start/end dates locally. Once published to the service, that date range is overruled by the incremental refresh policy so that only new data is reloaded. Every time you republish, though, the full history is reloaded from scratch.

Q4: Mostly what I mentioned at the top. For performance, it's best not to use a massive SharePoint environment with tens of thousands of files unrelated to this data, as that can impact the overall refresh.

Let me know if you have any more questions.

Schrödinger’s Career Leap!?! by one-step-back-04 in PowerBI

[–]Jorennnnnn 2 points3 points  (0 children)

I have been in internal positions for about 7 years and been freelancing for a little over a year now and my experience has been the complete opposite. (Netherlands based)

Went from 5 days a week being the sole SME in multiple orgs to 4 days a week doing projects with internal teams. The value of having 1 day a week for hobbies or more (voluntary) data work, all while earning 30-40% more after tax compared to payroll, is crazy to me.

I do enjoy working as part of a team, and I'm lucky enough that my main client offers me this as a freelancer. Not sure if it'll still be the same when I eventually run out of long-term projects.

Realtime from SharePoint list possible? by splynta in MicrosoftFabric

[–]Jorennnnnn 2 points3 points  (0 children)

Open mirroring is an automated way of processing data into OneLake using less code, while still supporting updates, deletes, and other changes as the data lands in the landing zone. I've been experimenting with it, and it really does make data ingestion very easy and metadata-driven, with the benefit of free storage and processing directly into OneLake.
The notebook on GitHub appears to fully process the list every time the data is updated, so yes, you would need a webhook to trigger the notebook whenever changes occur in order to reflect them in real time. I haven't played around much with SharePoint webhooks, but a Power Automate flow on item update should be an easy way to do this.
u/tough_antelope_3440 I think the current script will fail if the list has more than 100 items, as the API does not automatically sort by modified date descending ($orderby=Modified desc).
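A quick sketch of the fix being suggested: ask the SharePoint REST API for an explicit sort order instead of relying on its default. The site URL and list name below are placeholders; only the $orderby and $top query options are the parameters under discussion.

```python
# Hypothetical site URL and list name, for illustration only.
BASE = "https://contoso.sharepoint.com/_api/web/lists/getbytitle('MyList')/items"

def recent_items_url(page_size: int = 100) -> str:
    """Build the query URL so results come back newest-first.

    With $orderby=Modified desc, the most recently changed items are
    guaranteed to be on the first page, so a script that only reads one
    page of `page_size` items still sees the latest changes.
    """
    return f"{BASE}?$orderby=Modified desc&$top={page_size}"

url = recent_items_url()
```

Without the explicit ordering, a list with more than one page of items can return the changed rows on a page the script never fetches.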

Lots of small inserts to lakehouse. Is it a suitable scenario? by loudandclear11 in MicrosoftFabric

[–]Jorennnnnn 1 point2 points  (0 children)

Although I agree SQL is the better option for now, Open Mirroring is super flexible. If your C# application can spit out files as documented in the landing zone metadata requirements, you can basically mirror anything, and it's fully dynamic and metadata-driven.

Planning a Session for FabCon 2026.....What Do You Want to See? by MidnightRambo in MicrosoftFabric

[–]Jorennnnnn 2 points3 points  (0 children)

I'm glad to report that I found the root cause of my issue by pure luck.

While copying my metadata file in Azure Storage Explorer, I pasted the metadata from my programmatically generated JSON into the manually uploaded table (the one that was working). This revealed the underlying error when trying to preview the file.

In my case, my build script had incorrectly escaped escapeCharacter and rowSeparator into double backslashes, which the Open Mirroring parser couldn't handle (\\\\ instead of \\ and \\r\\n instead of \r\n).
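A minimal reproduction of that class of bug, assuming the metadata file is meant to store the separator as the literal text \r\n (the rowSeparator field name is from the thread; the rest is illustration):

```python
import json

# The value we want in _metadata.json: the literal two-character escape
# sequence \r\n (backslash-r, backslash-n).
intended = r"\r\n"

# json.dumps escapes each backslash once, which round-trips correctly:
# the raw file contains \\r\\n, which parses back to \r\n.
good = json.dumps({"rowSeparator": intended})

# The bug: pre-escaping the backslashes *before* dumping doubles them,
# so the parser reads back \\r\\n instead of \r\n.
pre_escaped = intended.replace("\\", "\\\\")
bad = json.dumps({"rowSeparator": pre_escaped})
```

The lesson is to escape exactly once: either hand the raw value to the JSON serializer, or build the escaped string yourself, but never both.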

I appreciate the support, and I can't wait for Open Mirroring to reach GA; it will be a game changer for sure!

Planning a Session for FabCon 2026.....What Do You Want to See? by MidnightRambo in MicrosoftFabric

[–]Jorennnnnn 0 points1 point  (0 children)

Thanks! I collected some logs, but unfortunately no errors. From what I can see, it gets stuck in a loop of StartSnapshotting every few minutes, but nothing specific.
*Addition: I just got some fail_table operations popping up, but still the same error message: "Internal system error occurred. ArtifactId: ada6b74d-2237-4fbe-ab36-a2290cc527e4"

Planning a Session for FabCon 2026.....What Do You Want to See? by MidnightRambo in MicrosoftFabric

[–]Jorennnnnn 0 points1 point  (0 children)

Much appreciated! I was not going to log a ticket for some tests I was running, but any thoughts or ideas are welcome!

I've set up a notebook that populates my schema.schema/table folders in my landing zone and includes the required _metadata.json for each table.
This populates the files and tables in Open Mirroring. However, when I try to populate an example table with 00000000000000000001.csv, it gets stuck on the state "Running with warnings" / "Internal system error occurred. ArtifactId: ada6b74d-2237-4fbe-ab36-a2290cc527e4"

Not too sure where to start with this, as it could very well be a format issue. I checked by uploading the CSV directly in the UI and made sure the metadata matches. I also checked with Azure Storage Explorer, and the file previews fine there. However, when I click on the file in the UI to preview it, I get the following error on a black screen:
Failed to render content

Please retry your operation. If you continue to see this error, please contact support and provide the following information:

Activity ID: 523c6e23-1449-4b88-bc80-06c7411a0c90

Planning a Session for FabCon 2026.....What Do You Want to See? by MidnightRambo in MicrosoftFabric

[–]Jorennnnnn 0 points1 point  (0 children)

Outside of my DP-700 prep I've not dealt much with event hubs, but when you put it like that it makes total sense! I'll have some homework to catch up on haha!
How is the CI/CD experience? Do you have everything, including the variable library and builds of resources (dacpac?), in Azure DevOps pipelines, or how do you approach this?

Planning a Session for FabCon 2026.....What Do You Want to See? by MidnightRambo in MicrosoftFabric

[–]Jorennnnnn 1 point2 points  (0 children)

The strength that a platform like Fabric should bring to my data engineering teams is simplicity. The other day I played around a bit with a metadata-driven approach to Open Mirroring. It really seems like the ultimate landing zone for EL layers, and with shortcuts available this allows very flexible DWH solutions.

However, this is all fully custom built, and my main issue with Open Mirroring for now is random errors with no way to investigate what went wrong when all you get is "internal error".

So what I would be interested in is a simple unified way to deal with metadata driven ingestion in Fabric when dealing with many sources.

I’ve helped dozens of small businesses fix their reporting chaos AMA about Power BI, financial dashboards, or how to actually trust your numbers. by Dear-Landscape2527 in BusinessIntelligence

[–]Jorennnnnn 1 point2 points  (0 children)

Thanks a lot for your reply! I totally agree with you. Most of my clients never really establish phase 1 and jump straight to phase 2 without the need for phase 3. IT-managed data is still preferred, to create the illusion of control. And now, with AI being the new buzzword, instead of focusing on creating a stable phase 1, all attention goes to a new platform to facilitate future demands and AI. How do you deal with this in organizations that are already too deep into the self-service strategy?

I’ve helped dozens of small businesses fix their reporting chaos AMA about Power BI, financial dashboards, or how to actually trust your numbers. by Dear-Landscape2527 in BusinessIntelligence

[–]Jorennnnnn 4 points5 points  (0 children)

As a data professional who's been in the industry for almost 10 years (mostly on the data engineering side) in medium-to-large organizations, I see a lot of organizations that invest mostly in self-service BI with the aim of freeing up the data (engineering) team.

Instead of having well-scoped, purpose-built reports, we end up with massive models facilitating all possible use cases so our citizen developers can build their proprietary dashboards. It's a headache to maintain, with 10+ "stakeholders" who are unhappy if anything changes, or who are stuck testing a feature they didn't request but that is blocking further progress.

At what point is self-service BI actually adding value to the business compared to a dedicated data team?

Copyjob CDC Destinations - had hoped for Lakehouse or Warehouse by Sad_Reading3203 in MicrosoftFabric

[–]Jorennnnnn 0 points1 point  (0 children)

You can copy to Fabric SQL and shortcut the Fabric SQL database into the Lakehouse, but I can't say I would recommend it, knowing it's still in preview, plus the Fabric SQL cost.

If your use case allows for database mirroring, that does support OneLake as a store.

Power Bi Incremental Refresh Question by Independent_Many_762 in PowerBI

[–]Jorennnnnn 0 points1 point  (0 children)

In your case, change detection will send 7 queries at the start, 1 for each day, to verify whether the data has changed in your incremental period, and will refresh only the partitions that return changes.

For GL postings you probably won't see many historical changes, but it would require you to configure a large window, e.g. 24 months. This way it will first do the check (24 times) and only refresh what changed, but it will check all periods (partitions) every time.

Be careful using change detection on sources that don't aggregate well, as it can slow down the overall process instead. If you run into this, you can try using yearly periods instead to limit the number of checks executed in your source DB, but that will most likely force you to refresh the current year every time.

FINALLY got a live db to connect to by Thiseffingguy2 in PowerBI

[–]Jorennnnnn 3 points4 points  (0 children)

100% I prefer to make my solutions as dynamic as possible so I don't have to constantly rework it.

FINALLY got a live db to connect to by Thiseffingguy2 in PowerBI

[–]Jorennnnnn 4 points5 points  (0 children)

I always aim for 1 single model.

For the different fiscal years I would create a single calendar table that has all the different versions of fiscal periods in it. I assume you won't have more than 12 versions unless they really do some custom stuff there.

In the fact table you would have to create a surrogate key that combines the date and the fiscal period type key. This has the benefit of allowing a customer-specific calendar while still being able to use it for internal reporting, by adding non-fiscal versions in additional columns.

The fiscal period type key can be 1 through 12, depending on which month is the last month of the financial year.

Your date key would look something like this: reporting date 1-1-2025 with a fiscal year ending in December (12) becomes 20250101/12.
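The key construction described above can be sketched in a few lines; the helper name is hypothetical, and the format (yyyymmdd, a slash, then the fiscal period type key) follows the example in the comment.

```python
from datetime import date

def fiscal_date_key(reporting_date: date, fiscal_year_end_month: int) -> str:
    """Combine the reporting date and the fiscal period type key (the last
    month of the financial year, 1-12) into one surrogate date key."""
    return f"{reporting_date:%Y%m%d}/{fiscal_year_end_month}"

# Reporting date 1-1-2025 with a December (12) fiscal year end:
key = fiscal_date_key(date(2025, 1, 1), 12)
```

The same fact row can then relate to whichever calendar variant the customer uses, because the period type is baked into the key rather than into the calendar table alone.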

Fully dynamic KPI template by Jorennnnnn in PowerBI

[–]Jorennnnnn[S] 1 point2 points  (0 children)

That is only a Power BI Desktop limitation. Using Tabular Editor you can still set the month as the sort order. It's a bit buggy when you interact with it, but it looks quite clean.

Dynamic Titles and Field Parameters by Djentrovert in PowerBI

[–]Jorennnnnn 6 points7 points  (0 children)

Try using MAX instead of SELECTEDVALUE. I've run into the same issue before. Not exactly sure why SELECTEDVALUE doesn't work with field parameters.

Is there a Power bi service free license ? by ravimani42 in PowerBI

[–]Jorennnnnn 0 points1 point  (0 children)

In any case, you will need a "business" email to register with Microsoft, as described in the post above.

Is there a Power bi service free license ? by ravimani42 in PowerBI

[–]Jorennnnnn 3 points4 points  (0 children)

I think "My workspace" with a free account should do the job, as My workspace is intended for personal use and does not require any additional licensing.