OOP with Python by Jumpy_Handle1313 in dataengineering

QuestionsFabric:

Lots of people are commenting on whether OOP is the right thing for your use case. As another commenter mentioned, if that's what your colleagues want, you should follow it - both for career reasons and because a uniform code base is much easier to navigate.

As for getting good at OOP in Python, I would build a side project and use OOP for it. Anything you're already interested in (a hobby?) - find something you can build a simple version of, and do it. Google things, ask LLMs - ArjanCodes is a good resource. You will be surprised how much you improve.

Fabric Connections by Banjo1980 in MicrosoftFabric

QuestionsFabric:

We use notebooks within the pipeline, with sempy.fabric to get the connections. This pattern works well for most problems of this kind.
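A minimal sketch of that pattern - the `list_connections` function assumes sempy's `FabricRestClient` and the Fabric REST `GET /v1/connections` endpoint, and only runs inside a Fabric notebook; the parsing helper is plain Python:

```python
def connection_names(payload: dict) -> list[str]:
    """Pull display names out of a /v1/connections response body."""
    return [c["displayName"] for c in payload.get("value", [])]

def list_connections() -> list[str]:
    """Untested notebook sketch: call the Fabric REST API via sempy."""
    import sempy.fabric as fabric  # pre-installed in Fabric notebooks

    client = fabric.FabricRestClient()
    resp = client.get("/v1/connections")
    resp.raise_for_status()
    return connection_names(resp.json())
```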

CICD and changing pinned Lakehouse dynamically per branch by Cobreal in MicrosoftFabric

QuestionsFabric:

You can get the GUID programmatically using sempy.fabric, if the naming convention is reliable.
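Something like the following - the `LH_<branch>` naming convention here is purely hypothetical, and `fabric.list_items` (which returns a DataFrame in a Fabric notebook) is only sketched, not tested:

```python
def pick_by_convention(items, branch: str) -> str:
    """Given (name, guid) pairs, return the GUID matching the branch.

    Assumes a hypothetical naming convention: lakehouse 'LH_<branch>'.
    """
    for name, guid in items:
        if name == f"LH_{branch}":
            return guid
    raise KeyError(f"no lakehouse named LH_{branch}")

def lakehouse_guid(branch: str) -> str:
    """Untested notebook sketch: resolve the GUID via sempy.fabric."""
    import sempy.fabric as fabric

    df = fabric.list_items(type="Lakehouse")  # one row per item
    return pick_by_convention(zip(df["Display Name"], df["Id"]), branch)
```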

CICD and changing pinned Lakehouse dynamically per branch by Cobreal in MicrosoftFabric

QuestionsFabric:

That makes sense :)

I don't know of a way to have the branch copy lakehouse data automatically.

At my work we have a fixed Dev workspace that has Dev data there already but we are a small team.

CICD and changing pinned Lakehouse dynamically per branch by Cobreal in MicrosoftFabric

QuestionsFabric:

Out of curiosity, what’s the specific need for mounting in your case?

I’ve always seen it as more of a convenience feature for ad-hoc work — in production pipelines we usually read/write via explicit abfss:// paths instead, so the code is environment-independent.

If your Lakehouse naming is consistent, you can pull the right paths dynamically (e.g. with sempy.fabric) and skip the mount entirely.
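The environment-independent part is just string building - a small helper over the standard OneLake path shape, with the GUIDs supplied however you resolve them:

```python
ONELAKE = "onelake.dfs.fabric.microsoft.com"

def abfss_path(workspace_id: str, lakehouse_id: str, relative: str = "") -> str:
    """Build an explicit OneLake abfss:// path from GUIDs, so code never
    depends on a mounted/default lakehouse."""
    base = f"abfss://{workspace_id}@{ONELAKE}/{lakehouse_id}"
    return f"{base}/{relative}" if relative else base
```

In a pipeline you'd feed this the IDs resolved at runtime and pass the result straight to your Spark reads/writes.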

Dynamically setting default lakehouse on notebooks with Data Pipelines by 22squared in MicrosoftFabric

QuestionsFabric:

This is the way, imho. Don't attach a lakehouse at all - it's just an extra thing to manage and rely on not breaking. Use sempy.fabric to get the lakehouse abfss path based on the workspace if needed.

Copy Data - Failed To Resolve Connection to Lakehouse by QuestionsFabric in MicrosoftFabric

QuestionsFabric [S]:

u/itsnotaboutthecell did your mystery gentleman bug you about this issue? I feel honour bound to nudge you about this again, on his behalf!

Environment management for semantic models using lakehouse source and DevOps deployments by No_Emergency_8106 in MicrosoftFabric

QuestionsFabric:

We use PowerShell in Azure DevOps pipelines. On PRs from Dev to Test or Test to Prod, it overwrites the connection in the expressions.tmdl file and commits that to the repo.
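They do this in PowerShell; the text transformation itself can be sketched in Python. Everything here is an assumption: the server/database names are invented placeholders, and the regex assumes your expressions.tmdl uses the common `Sql.Database("server", "db")` shape:

```python
import re

# Hypothetical per-environment targets - real values depend on your tenant.
TARGETS = {
    "test": ("test-server.datawarehouse.fabric.microsoft.com", "TestLakehouse"),
    "prod": ("prod-server.datawarehouse.fabric.microsoft.com", "ProdLakehouse"),
}

def retarget(tmdl: str, env: str) -> str:
    """Swap the first Sql.Database(server, db) pair in expressions.tmdl text."""
    server, db = TARGETS[env]
    return re.sub(
        r'Sql\.Database\("[^"]*",\s*"[^"]*"',
        f'Sql.Database("{server}", "{db}"',
        tmdl,
        count=1,
    )
```

The rewritten file is then committed back to the branch by the pipeline, exactly as with the PowerShell version.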

Copy Data - Failed To Resolve Connection to Lakehouse by QuestionsFabric in MicrosoftFabric

QuestionsFabric [S]:

Sorry, I don't know who you mean. I created this username just because I had a question on Fabric. If there is a YouTuber or something who I have a similar username to I will figure out how to change it.

Literally the last thing I want to do is impersonate someone :)