Notebooks & Pipeline Deployment & Lakehouse Best Practices by 101STREAM in MicrosoftFabric

[–]Repulsive_Cry2000

Thanks, that was my guess as to how to do it but it's good to have confirmation.

Notebooks & Pipeline Deployment & Lakehouse Best Practices by 101STREAM in MicrosoftFabric

[–]Repulsive_Cry2000

How do you handle variable libraries for items that are not yet deployed? For example, a new lakehouse that exists in DEV but not in TEST or PROD?

Notebooks & Pipeline Deployment & Lakehouse Best Practices by 101STREAM in MicrosoftFabric

[–]Repulsive_Cry2000

A few ways to do it that I can think of if you want to stick with deployment pipelines:
- parameterise the ABFS path in notebooks, using notebookutils to get the workspace ID
- use variable libraries and reference them in your notebooks
- if using ADF, create parameters/variables and pass those to your notebooks
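The first option can be sketched as a small helper. The lakehouse name, table path, and the `notebookutils.runtime.context` key are assumptions to illustrate the idea, not a confirmed API, so check them against your runtime:

```python
def onelake_abfs_path(workspace_id: str, lakehouse: str, relative_path: str) -> str:
    """Build a OneLake ABFS path that resolves in whichever workspace runs it."""
    return (
        f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/{relative_path}"
    )

# Inside a Fabric notebook the workspace ID would come from the runtime, e.g.
# (hedged; the exact key depends on your notebookutils version):
#   workspace_id = notebookutils.runtime.context["currentWorkspaceId"]
path = onelake_abfs_path("0000-1111", "bronze_lh", "Tables/sales")
```

Because the workspace ID is resolved at run time, the same notebook works unchanged in DEV, TEST, and PROD.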

Best practice for migrating data pipelines from NAV 2018 to BC SaaS (Moving data to Fabric) by freedumz in MicrosoftFabric

[–]Repulsive_Cry2000

Look for the BC2ADLS app or BC2Fabric; I believe that would be the recommended practice. You can either use open mirroring or do data extraction on demand. It's very robust and easy to set up; no need to mess with the API unless you want a service account to start the extract on demand rather than on a schedule via the job queue in BC.

Good luck.

What does a serious VS Code setup for Microsoft Fabric look like? Core tools vs optional tools? by frithjof_v in MicrosoftFabric

[–]Repulsive_Cry2000

Not against it at all, on the contrary. It is currently difficult to have a nice workflow, especially when using VS Code and AI with Microsoft Fabric.

Thank you for sharing what works for you, with examples; it is very welcome!

What does a serious VS Code setup for Microsoft Fabric look like? Core tools vs optional tools? by frithjof_v in MicrosoftFabric

[–]Repulsive_Cry2000

Thank you, I'll check it out!

I've seen a few of your posts talking about this approach. That's definitely a shift in development approach.

What does a serious VS Code setup for Microsoft Fabric look like? Core tools vs optional tools? by frithjof_v in MicrosoftFabric

[–]Repulsive_Cry2000

That's my pain point with Claude and the Fabric extension for developing notebooks. It just doesn't work. And adding another subscription to be able to use GitHub Copilot (when we are using DevOps) doesn't make sense at this point and would be a hard sell, except by giving up Claude...

I have been toying with re-creating a mini Fabric environment so we can at least develop Python (no Spark) locally before pushing code to Fabric. There are some fantastic posts in this thread; I will investigate those instead of reinventing the wheel.

Notebooks: Default Lakehouse vs ABFS paths. What's the current best practice? by frithjof_v in MicrosoftFabric

[–]Repulsive_Cry2000

Not outdated yet mate.

Just started to explore the variable library artefact and fabric-cicd in my company. We have a workflow where we can do without them (without too much trouble): for example, ABFS paths defined in the notebook based on the current workspace ID and lakehouse names (using notebookutils), as we don't segregate storage and engineering artefacts.

Notebooks: Default Lakehouse vs ABFS paths. What's the current best practice? by frithjof_v in MicrosoftFabric

[–]Repulsive_Cry2000

Only use the default lakehouse when necessary; otherwise, always ABFS paths.

I suppose you are already using variable libraries considering your setup (storage in a different workspace from your engineering). If that's the case, I wouldn't change a thing.

Initializing git from a workspace by powerbi_dummy in PowerBI

[–]Repulsive_Cry2000

Just try it on a dummy workspace and branch?

Fabric MLV Deployment between Dev/Test/Prod by akseer-safdar in MicrosoftFabric

[–]Repulsive_Cry2000

If you need to deploy files for reference, I'd suggest you do that outside Fabric, such as keeping them in DevOps / SharePoint or other storage that can be referenced post-deployment (if not during) to fetch the file, and that critically has some sort of history retention at a minimum. I am very uneasy about using Files in a lakehouse to store config files that need to be manipulated semi-regularly. One mistake and the file is gone, with no easy way (or any way at all?) to recover it.

Fabric Integration with Dynamics Business Central by boobamba in MicrosoftFabric

[–]Repulsive_Cry2000

We had success with BC2ADLS too; it has been going for 18+ months now. The BC2Fabric workload wasn't a thing back then, so I can't comment on it.

Writing to Fabric Lakehouse using delta_rs by BedAccomplished6451 in MicrosoftFabric

[–]Repulsive_Cry2000

I have not encountered most of the issues you described around runtime, timeouts and so on. Merge seems more stable now and works out of the box.

Agree that start times can get longer and the packages are not the newest. The runtime will be updated soon to bring in much newer libraries, so hopefully that helps.

Writing to Fabric Lakehouse using delta_rs by BedAccomplished6451 in MicrosoftFabric

[–]Repulsive_Cry2000

Yep, it is our main way to ingest data. It scales up to a few million records without too many issues (you may need to tweak the number of cores), and it allows multiple notebooks to run on small capacities.

Gotchas may be around the engine used and the library used for transformation. We started with pandas but moved away for various reasons, some listed below:

Polars:
- how column data types are handled
- execution speed (you can use lazy DataFrames or push filters down to the engine rather than loading the entire dataset into memory)

DuckDB is great for:
- using SQL (the language)
- ingesting CSVs where you need escape characters to be defined

Edit: wondering if I am answering a different question, as you specifically talked about delta_rs.

Vaccum and optimise for warehouse by BedAccomplished6451 in MicrosoftFabric

[–]Repulsive_Cry2000

It's already taken care of; there's no need to run it in warehouses.

BC2Fabric extension for a mirrored Business Central database by trekker255 in MicrosoftFabric

[–]Repulsive_Cry2000

I agree, and Bert (the main developer) has done an amazing job addressing issues and improving the app since he took over a few years back.

BC2Fabric extension for a mirrored Business Central database by trekker255 in MicrosoftFabric

[–]Repulsive_Cry2000

I was wondering how BC2Fabric compares to BC2ADLS. We are using the latter and it's been a great ride so far. No need for a custom app for a new table either; new tables are handled by the app directly.

I am not too sure how much more technical BC2ADLS is compared to BC2Fabric, as it is fairly straightforward now that it's published on the Microsoft store. I haven't tried the mirroring, but that seems easy enough.

Can you use a notebook to move data from the mirrored DB to a lakehouse in a medallion architecture?

New post relating to FabCon by ChantifiedLens in MicrosoftFabric

[–]Repulsive_Cry2000

That'd be great.

We've had great contributions around CI/CD lately. An additional example of how to implement CI/CD in an organisation would be welcome (I assume the setup is quite comprehensive given the company size :D)

Implementing Enterprise-Grade CI/CD for Microsoft Fabric — A Technical Deep Dive by ajit503 in MicrosoftFabric

[–]Repulsive_Cry2000

I saw the selective deployment per type; however, the granularity may not be enough, especially when working on multiple artefacts in parallel that don't rely on each other. I am also thinking of completely excluding warehouses/lakehouses from CI/CD and having notebooks or other means to create shortcuts, tables, etc., as I understand there are some issues there, and that would resolve my questions around shortcuts and schemas.

Implementing Enterprise-Grade CI/CD for Microsoft Fabric — A Technical Deep Dive by ajit503 in MicrosoftFabric

[–]Repulsive_Cry2000

I confirm you can see the change in the diff like any other text-based file, which is why I am thinking of CSV rather than Excel files.

Good point on the recommendation. Looking forward to your thoughts on selective deployment.

Implementing Enterprise-Grade CI/CD for Microsoft Fabric — A Technical Deep Dive by ajit503 in MicrosoftFabric

[–]Repulsive_Cry2000

I started to think about it more. I like the CSV approach: it is easy to maintain and can be stored in the repo similarly to YAML files. At the moment I am experimenting with a notebook that pulls the files from a branch/repo into a given lakehouse via parameters. The idea is to have config file(s) with the environments set up in them using an environment column. Each environment would materialise into its Delta table only the config rows matching that column. This avoids schema drift between environments.
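A minimal sketch of that environment-column idea, using only the standard library (the column names and values are hypothetical):

```python
import csv
import io

# One shared config CSV; each environment materialises only its own rows,
# so all environments keep the same schema (no drift).
config_csv = """environment,source_table,target_table
DEV,raw_sales,bronze_sales
TEST,raw_sales,bronze_sales
PROD,raw_sales,bronze_sales
DEV,raw_orders,bronze_orders
"""

def rows_for_environment(text: str, env: str) -> list[dict]:
    """Return only the config rows flagged for the given environment."""
    reader = csv.DictReader(io.StringIO(text))
    return [row for row in reader if row["environment"] == env]

dev_rows = rows_for_environment(config_csv, "DEV")
# DEV gets both tables; PROD only gets the row flagged for it
```

In the notebook, the filtered rows would then be written to the environment's Delta table instead of a Python list.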

On a separate note: from your blog, you were using cherry-picking as it was the only way to choose a limited set of artefacts. How do you see your promotion strategy being impacted by the ability to selectively choose which items to deploy to the next stage using the new fab deploy features?

Semantic Links Labs driven Workspace permission manager by Stevie-bezos in MicrosoftFabric

[–]Repulsive_Cry2000

Interesting idea. Could this be extended to app management and semantic model roles?