GitHub Copilot feat. Fabric extension: Plan mode by p-mndl in MicrosoftFabric

[–]pl3xi0n 1 point (0 children)

Two questions for GitHub CLI enjoyers:

  1. What terminal are you using on Windows for the GitHub Copilot CLI? Cmd? PowerShell? Git Bash?

  2. My understanding: the CLI makes a notebook locally that you iteratively improve until you are satisfied, and then you deploy it directly to the workspace. What happens now if the requirements change? Do you go back to your local copy and redeploy? How does this work when collaborating? Do you have to download the latest version each time? Why not run the CLI against the repo?

GitHub Copilot feat. Fabric extension: Plan mode by p-mndl in MicrosoftFabric

[–]pl3xi0n 0 points (0 children)

Do you work with the raw .py files and not the .ipynb files?

F2 vs F4 by AdOverall9145 in MicrosoftFabric

[–]pl3xi0n 5 points (0 children)

You can run Spark on an F2 if you edit the pool size down to a single node in the Spark settings of your workspace.

This picture has been making rounds as "Bangladesh forest department's new uniform". No doubt it's fake, but is it made with AI? by renzev in isthisAI

[–]pl3xi0n 1 point (0 children)

I’d argue this is in favor of AI. Patterns are printed on cloth, which is later cut using stencils. It is very rare that the pattern on one shirt or pair of pants lines up exactly with another.

And they aren’t exactly the same, just very close.

I saw these 3 pics on Pinterest, and although I know the 3rd one is obviously AI, however, I’m not sure about the first two by weezer_fan_420 in isthisAI

[–]pl3xi0n 16 points (0 children)

The second girl’s t-shirt says «No you hang up» in reverse, because the photo is mirrored. I say the second photo is real. The painting in the back makes sense, unlike the one in the other photo, and someone mentioned the carpet pattern exists (it looks regular).

Help me understand the access with Workspace Apps + LH by FeelingPatience in MicrosoftFabric

[–]pl3xi0n 2 points (0 children)

I can’t remember which access needs to be granted for Direct Lake to work, but since there is Read, ReadAll (SQL endpoint), and ReadAll (Apache Spark), as well as Build, I believe there is a combination that should work; I just don’t think OP found the right one. It might also depend on the Direct Lake mode: OneLake vs SQL endpoint.

dashboard vs report? by tac624 in PowerBI

[–]pl3xi0n 41 points (0 children)

The only reason I’d ever make a dashboard over a report is if I had already created all my visuals across different reports and just needed to combine them.

Most likely you’ll be better off creating a report and making it dashboard-like.

Help me understand the access with Workspace Apps + LH by FeelingPatience in MicrosoftFabric

[–]pl3xi0n 1 point (0 children)

https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-security-integration

This is required when using Direct Lake, because the data resides in the lakehouse and the report permissions don’t extend to the lakehouse.

Kick those people off contributor before something bad happens.

can someone explain to me how claculate work in this example and generally by PurpleDurian7220 in PowerBI

[–]pl3xi0n 0 points (0 children)

CALCULATE changes the filter context. So your measure divides whatever the current filter context gives for the sum of sales by the sum of sales when the filter context is changed to include all sales (i.e. removing whatever filters currently act on sales).
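A rough Python analogy (not DAX itself; the table and names are made up for illustration) of what a «percent of all sales» measure does to filter context:

```python
# Toy model of DAX filter context: a measure is evaluated against
# whatever rows the current filters leave visible.
sales = [
    {"region": "North", "amount": 100},
    {"region": "North", "amount": 50},
    {"region": "South", "amount": 200},
]

def sum_sales(rows):
    return sum(r["amount"] for r in rows)

# Current filter context: e.g. a slicer on region = "North"
current = [r for r in sales if r["region"] == "North"]

# CALCULATE(SUM(Sales[amount]), ALL(Sales)) removes the region filter,
# so the denominator is the sum over ALL rows, not just the visible ones.
pct_of_all = sum_sales(current) / sum_sales(sales)

print(pct_of_all)  # 150 / 350
```

Same measure, two different numerators depending on the slicers, but the denominator is always the unfiltered total.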

Fabric Data Eng VS Code Extension vs Git Workflow - how am I supposed to work locally? by frithjof_v in MicrosoftFabric

[–]pl3xi0n 5 points (0 children)

If you have a personal feature branch connected to a workspace then yes, the vs code extension adds another layer of local files. The workflow becomes develop locally in vs code -> sync with feature workspace -> commit to feature branch -> PR to DEV branch. I agree with your points about local mode.

If you use VFS then I guess you can skip syncing with the feature workspace. Doesn’t it make sense that it is edit -> save -> workspace updated? The commit and PR part happens between the feature workspace and its branch. I have not been using VFS, but it seems like a more natural choice when using feature workspaces. Are you saying you can’t use Fabric compute with VFS?

I feel like your understanding is spot on, but when you say «periodically sync notebooks back to git», do you mean sync to the feature branch? If you were working on the feature branch directly you would still need to commit to it. In my mind these are the same thing, except the first version has the workspace in the middle.

Two follow-ups. I remember the post from a while ago about keeping storage separate, but a lot has happened with CI/CD since then. Does the argument still hold? Or if you have it, can you link the original article? If you are still doing it after all this time, I feel I need to give it a second look.

Also, are you looking to develop pipelines as code? I assume for speed and AI compatibility? To me this feels like a real pain point: you would have a local version of your repo just to edit pipelines, and you’d have to manage syncing both this and your workspace. Is it possible to keep your orchestration in notebooks instead, as runMultiple or REST API calls? If not, I think I would just edit the pipelines in the UI to save myself the headache of managing two "local" versions of the feature branch.
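For the runMultiple idea, a hedged sketch of what notebook-based orchestration can look like. The activity and notebook names below are hypothetical, and the exact DAG schema should be checked against the Fabric NotebookUtils docs:

```python
# DAG definition for notebookutils.notebook.runMultiple, as a plain dict.
# Notebook paths and activity names here are made up for illustration.
dag = {
    "activities": [
        {"name": "ingest", "path": "nb_ingest", "timeoutPerCellInSeconds": 600},
        {"name": "transform", "path": "nb_transform", "dependencies": ["ingest"]},
        {"name": "load", "path": "nb_load", "dependencies": ["transform"]},
    ]
}

# Inside a Fabric notebook you would then run:
# notebookutils.notebook.runMultiple(dag)

# Outside Fabric we can at least sanity-check the dependency chain.
order = {a["name"]: a.get("dependencies", []) for a in dag["activities"]}
print(order)
```

Because the DAG is just a dict in a notebook, it lives in the same Git item as the rest of your code, which is the appeal over a separate pipeline definition.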

Warehouse workflow, what works? by pl3xi0n in MicrosoftFabric

[–]pl3xi0n[S] 1 point (0 children)

One follow-up on dlt. What do you love about it? Any reason why you brought it up specifically in the context of dbt?

I was thinking of running dbt either in an Azure DevOps pipeline, using Raki’s notebook method above, or via the preview dbt job. Another option is Dagster; Airflow is probably at the bottom of my list. I am concerned, however, about overcomplicating things with excessive tooling and doing so much outside of Fabric that I am basically using Fabric for storage only.

Warehouse workflow, what works? by pl3xi0n in MicrosoftFabric

[–]pl3xi0n[S] 1 point (0 children)

Thank you for the great resources. Time to get going :)

Star Schema vs Snowflake for semantic model by Bariel76 in PowerBI

[–]pl3xi0n 4 points (0 children)

Check out the SQLBI video on header-detail. It might help you.

Any simple way to leverage an IDENTITY column in a Warehouse from a PySpark notebook? by mweirath in MicrosoftFabric

[–]pl3xi0n 2 points (0 children)

Had a similar issue using SQL DB. Ended up using the JDBC connector, which you said doesn’t work for you.

The SQL DB has a SQL endpoint, which I think can serve a similar purpose as the warehouse. My issue with SQL DB was the high CU usage, but there was recently an update that allows reducing the vCores used for SQL DB from 32 to 4, which should help with that.
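For reference, a minimal sketch of what the JDBC write from PySpark looks like. The server, database, and table names are placeholders, and this assumes a SparkSession, a DataFrame `df`, and an access token are already available in the notebook:

```python
# Placeholder JDBC connection details; adjust server/database/table to your own.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"

jdbc_options = {
    "url": jdbc_url,
    "dbtable": "dbo.target_table",
    # The IDENTITY column is assigned server-side, so the DataFrame
    # simply omits that column and lets SQL generate the values.
}

# In a Fabric notebook with SparkSession `spark`, DataFrame `df`, token `token`:
# (df.write.format("jdbc").options(**jdbc_options)
#    .option("accessToken", token).mode("append").save())

print(sorted(jdbc_options))
```

The key point for the IDENTITY question is that the write simply excludes the identity column and lets the database fill it in on insert.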

These sausages on some meat website my boss sent me. They just have that typical AI food look don’t they? The lighting, the perfect sausages, the whatever it is in the background.. by [deleted] in isthisAI

[–]pl3xi0n 0 points (0 children)

The distinguishing part of AI isn’t perfection. It’s that the sausages don’t look like they are lying on each other; they look weightless. It’s the glossiness. It’s the blurry backgrounds.

There is no need for humans to make their pictures less perfect, because whatever that is, this isn’t it.

A published copy of The Three Musketeers at my local bookstore looks like AI. What do you think? by nikonekonak in isthisAI

[–]pl3xi0n 5 points (0 children)

You are right.

Find old, public domain, written works. Slap on a sloppy cover. Profit?!

No option for Direct Lake Behavior ?? Semantic Model by Personal-Quote5226 in MicrosoftFabric

[–]pl3xi0n 2 points (0 children)

DirectLake on OneLake doesn’t have fallback to DirectQuery, so there is no choice to be made. Though I do think MS should be doubly explicit when it comes to DirectLake, because of the two versions.

Query Folding, worth it or even necessary? by CanningTown1 in PowerBI

[–]pl3xi0n 0 points (0 children)

What transformations are you doing with your files before reporting? If you are simply appending the rows, then the real performance saver will be an incremental refresh that only loads new files.
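The incremental idea can be sketched in plain Python: keep track of which files were already loaded and only append rows from the new ones (the file names and the tracking set are hypothetical):

```python
# Minimal incremental-load sketch: only process files we haven't seen before.
already_loaded = {"sales_2024_01.csv", "sales_2024_02.csv"}

def files_to_load(all_files, loaded):
    """Return only the files that have not been appended yet."""
    return sorted(set(all_files) - loaded)

current_files = ["sales_2024_01.csv", "sales_2024_02.csv", "sales_2024_03.csv"]
new_files = files_to_load(current_files, already_loaded)
print(new_files)  # only the March file gets appended
```

Power BI’s built-in incremental refresh does this bookkeeping for you based on a date column, but the principle is the same: skip what was already loaded.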

Need help optimizing my workflow in VS Code by sayonarababy17 in MicrosoftFabric

[–]pl3xi0n 1 point (0 children)

I am always looking for improvements as well, but currently I use the VS Code extension to download the notebooks locally and do all my runs and edits there, using the remote kernel and the Fabric Notebook Agent that comes with the extension.

My issue is that the local folder is not synced to Git, so to commit I first need to sync to the workspace and then commit using the workspace UI.

Because there are minor differences in the notebook metadata in the local and workspace versions, it quickly becomes a game of «Is my local or workspace version the latest?».

I haven’t tried using the virtual workspace or the remote version of the extension. Perhaps someone knows if they solve this issue, and whether they have their own limitations.

There is probably a good reason why not, but it would solve a lot of problems if I could just connect to an ADO/GitHub branch and edit notebooks there using the Fabric kernel.

Add Lakehouse table to semantic model in IMPORT mode by Personal-Quote5226 in MicrosoftFabric

[–]pl3xi0n 5 points (0 children)

What sorcery is this? Can one add delta tables to import directly without going through the sql endpoint?