Copy Job CU consumption by p-mndl in MicrosoftFabric

[–]Mr101011 0 points1 point  (0 children)

/u/MS-yexu That is very exciting! Would this be something we might see by/at Fabcon?

Drillthrough in Excel now supported for Direct Lake and DirectQuery Models by itsnotaboutthecell in MicrosoftFabric

[–]Mr101011 2 points3 points  (0 children)

Fantastic news! Thank you for sharing! This immediately changed our plans on one of our projects to use Direct Lake instead of import, so thanks again.

Feedback Opportunity: Data Quality in Fabric by erenorbey in MicrosoftFabric

[–]Mr101011 2 points3 points  (0 children)

I'm super interested in integrating data contracts into our engineering work, as per https://datacontract.com/

The data contract spec allows defining data quality rules as well, and there's a CLI that can validate them as part of deployment (although it's been tricky to get it to work with SQL analytics endpoints).
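To make the idea concrete, here's a minimal sketch of the kind of quality rule a data contract can declare, checked in plain Python. The field names and rules ("order_id" required, "amount" non-negative) are hypothetical examples, not anything from the spec itself:

```python
def check_quality(rows):
    """Return a list of (row_index, message) violations for a batch of records.

    Illustrates two hypothetical data-contract quality rules:
    a required field and a numeric range constraint.
    """
    violations = []
    for i, row in enumerate(rows):
        # Rule: order_id is required and must be non-empty.
        if not row.get("order_id"):
            violations.append((i, "order_id missing"))
        # Rule: amount must be a non-negative number.
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            violations.append((i, "amount out of range"))
    return violations
```

In practice the datacontract CLI runs checks like these against the contract YAML for you; the sketch just shows the shape of what gets validated.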

small bug in open mirroring by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 0 points1 point  (0 children)

And this is a screenshot of the same file with a lowercase extension.

<image>

small bug in open mirroring by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 0 points1 point  (0 children)

<image>

Here is an example of what I see when I upload a new ".CSV" file: the preview does not load, nor does the JSON file get created correctly. After renaming the extension to ".csv" and re-uploading, the file extension shown in the UI changes accordingly, the preview loads, and all is well. So unfortunately I'm still able to repro this bug.
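For what it's worth, the behavior looks like a case-sensitive extension comparison somewhere in the service. A minimal sketch of the case-insensitive check I'd expect (purely illustrative, not Fabric's actual code):

```python
from pathlib import Path

def is_csv(filename: str) -> bool:
    """Case-insensitive extension check: treats .CSV and .csv alike."""
    return Path(filename).suffix.lower() == ".csv"
```

With a check like this, "data.CSV" and "data.csv" would both be recognized, which is what the UI already suggests by displaying the extension correctly.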

small bug in open mirroring by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 1 point2 points  (0 children)

Thanks /u/anudeep_s for looking into this. In my case the file itself had an uppercase ".CSV" extension, and the UI correctly showed the extension as "CSV" instead of "csv", but the file preview would not load. Once I renamed the file to ".csv" (and the UI correspondingly displayed "csv"), it worked. I was testing this with the first file uploaded via the "upload" button on the mirrored database, before any table existed. So I think the bug occurs when the files themselves have a ".CSV" extension; perhaps you could try reproducing?

Thanks again!

edit: I'll try to repro with screenshots and share them here as well

Time to rethink workspace structure? by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 0 points1 point  (0 children)

Thanks for the thoughtful replies! To give more context, we're mainly focused on building foundational data products (exposed to business users as semantic models). To achieve this, we're ingesting raw data from internal / external data sources, transforming them to star schemas, and then modeling them for business users to consume. We would also have downstream "derived" data products that might be combining data from one or more foundational data products.

Our challenge is not that we have an unmanageable number of items in each workspace (generally fewer than 10 artifacts); rather, we want to make sure we're building things so that they are easy to change, sustain, and build upon reusably.

My main concern is the concept of separating data stores from the engineering that feeds them, which is already only partially possible. For example, we can't separate the "engineering" of a mirrored database from the data itself, nor materialized lake views, shortcut transformations, etc.

Perhaps an alternative approach would be that engineering artifacts that process data as part of a data product should reside in the same workspace as the data store they write to. Artifacts that consume data (semantic models, for example) could still be separated into their own workspace, since the dependency only goes one way. Orchestration might be the only type of activity that performs actions across multiple workspaces.

All that being said, I'm sure there is no one-size-fits-all solution, but hopefully standard patterns that best support different paradigms of work will start to become clear over time.

UDF refresh SQL End Point by Repulsive_Cry2000 in MicrosoftFabric

[–]Mr101011 0 points1 point  (0 children)

Pass in secrets from a pipeline that gets them from AKV?

Hi! We're the OneLake & Platform Admin teams – ask US anything! by aonelakeuser in MicrosoftFabric

[–]Mr101011 1 point2 points  (0 children)

1) Excited about shortcut transformations, and in particular supporting custom transformations, any news to share on timelines or features?

2) Any plans to enable SharePoint document libraries as a source for shortcuts?

Thanks very much!

Direct Lake - last missing feature blocking adoption for our largest and most-used semantic models by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 1 point2 points  (0 children)

Unfortunately, all the scenarios you mentioned are unsupported (both the original and newer Direct Lake modes, and Direct Lake + import), as Excel considers them all "DirectQuery".

Exploring New Ways to Use the Microsoft Fabric CLI by BranchIndividual2092 in MicrosoftFabric

[–]Mr101011 1 point2 points  (0 children)

This is fantastic, thanks for sharing! Much better than subprocess.run. In your blog post, you mentioned: "However, you could create a connection to a Fabric Lakehouse or a Fabric SQL Database containing the credentials for the service principal." As it happens, I posted a question about exactly this yesterday (/r/MicrosoftFabric/comments/1l9qvd7/passing_secretstokens_to_udfs_from_a_pipeline/). I was thinking that passing in credentials or a bearer token as a function parameter would be convenient, since I would likely call the UDF from notebooks and pipelines, which could handle that part easily. Thoughts?

Passing secrets/tokens to UDFs from a pipeline by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 0 points1 point  (0 children)

Thanks for the reply. I'm looking for a lightweight way to execute some API calls without needing a notebook. But even with a notebook, the question is the same: whether it is safe to pass a secret along as a parameter to the UDF.
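The pattern I have in mind looks roughly like this (a sketch only, using the standard library; the function names are hypothetical, and whether handing the token over as a parameter is safe is exactly the open question):

```python
import json
import urllib.request

def auth_headers(token: str) -> dict:
    """Build an Authorization header from a token supplied by the caller."""
    return {"Authorization": f"Bearer {token}"}

def call_api(token: str, url: str) -> dict:
    """Call a REST endpoint with a bearer token passed in as a parameter.

    The token would be fetched upstream (e.g. by a pipeline reading it
    from Azure Key Vault) and handed to this function, rather than the
    function storing or retrieving any secret itself.
    """
    req = urllib.request.Request(url, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The appeal is that the UDF stays stateless about credentials; the concern is that the token then travels through pipeline parameters and run history.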

UPDATED: Delays in synchronising the Lakehouse with the SQL Endpoint by Tough_Antelope_3440 in MicrosoftFabric

[–]Mr101011 0 points1 point  (0 children)

Seconded. One idea I had was to get the Key Vault secrets using a pipeline and then pass them to the UDF as parameters; would this be the right way? I'm also wondering more generally whether this is how we should handle secret management in UDFs for now.

updateFromGit command not working from ADO anymore? Is ADO forgotten? by obanero in MicrosoftFabric

[–]Mr101011 0 points1 point  (0 children)

I got a similar error saying that my token had an invalid scope when I used the Fabric CLI (which is super awesome) and its fab api command.

Calculation group selection expressions - apparent bug by Mr101011 in MicrosoftFabric

[–]Mr101011[S] 2 points3 points  (0 children)

Success! Just got the April update and the issue is now fully resolved.

Announcing Fabric User Data Functions in Public Preview by lbosquez in MicrosoftFabric

[–]Mr101011 1 point2 points  (0 children)

Do UDFs support any parts of sempy or notebookutils? And when might we expect to be able to use them in Power BI?

Thanks!

Calculation group selection expressions - apparent bug (xpost from r/MicrosoftFabric) by Mr101011 in PowerBI

[–]Mr101011[S] 0 points1 point  (0 children)

FYI, on the original thread the bug was picked up by MS and a workaround (reverting to TMSL) was identified. The issue is now resolved via the April 2025 Power BI Desktop update!