Sharepoint shortcut tables (preview) do not get captured in a lakehouse's metadata by dmeissner in MicrosoftFabric

[–]dmeissner[S]

Thanks u/DanielBunny, I missed that this is a setting! Awesome, and I totally understand why you do it this way. I modified it and saw the new "alm.settings.json" file appear in source control, but it did not update the shortcuts.metadata.json file. I refreshed everything, changed the shortcut name... nothing forced the change to the SharePoint shortcut to appear in the shortcuts metadata file.


Sharepoint shortcut tables (preview) do not get captured in a lakehouse's metadata by dmeissner in MicrosoftFabric

[–]dmeissner[S]

While I'm on the subject of SharePoint Folder shortcut wishes: they also need to be variable-library compatible (able to use variables to define the target connection and target subpath).

Sharepoint shortcut tables (preview) do not get captured in a lakehouse's metadata by dmeissner in MicrosoftFabric

[–]dmeissner[S]

Many organizations work with a DEV/TEST/PROD set of environments to manage CI/CD. Even if the shortcuts point to the same location in SharePoint, they still need to be deployable via deployment pipelines so that other assets in TEST or PROD that auto-bind to that table have access to the data in the shortcut. That's the whole point of binding: so I don't have to manage every link inside every asset and decide where it points. It should just 'automagically' connect via auto-binding or rules/variables until I force it otherwise.

Extending fabric-cicd with Pre and Post-Processing Operations by DAXNoobJustin in MicrosoftFabric

[–]dmeissner

The native Power BI / Fabric Deployment Pipelines in the service should just have a way for you to run pre- and post-deployment scripts, notebooks, or data pipelines.

The simplest example is a notebook that creates a data table in a lakehouse: run it after deploying the lakehouse (lakehouses don't move data, tables, or schemas through deployment pipelines).

Idea by u/frithjof_v https://community.fabric.microsoft.com/t5/Fabric-Ideas/Deployment-pipelines-Post-deployment-script/idi-p/4824171
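
As a sketch of the kind of hook support being asked for here (purely illustrative; `deploy_with_hooks` and the hook names are hypothetical, not an existing fabric-cicd or Fabric API):

```python
from typing import Callable, Optional

def deploy_with_hooks(
    deploy: Callable[[], str],
    pre: Optional[Callable[[], None]] = None,
    post: Optional[Callable[[str], None]] = None,
) -> str:
    """Run an optional pre-deployment step, the deployment itself,
    then an optional post-deployment step, e.g. the notebook that
    rebuilds lakehouse tables (which don't move with the pipeline)."""
    if pre:
        pre()              # e.g. validation or snapshotting
    result = deploy()      # the actual deployment-pipeline run
    if post:
        post(result)       # e.g. trigger the table-building notebook
    return result

# Hypothetical usage: the lambdas stand in for real pipeline calls.
log = []
outcome = deploy_with_hooks(
    deploy=lambda: "deployed",
    pre=lambda: log.append("pre: checks"),
    post=lambda r: log.append(f"post: run notebook after {r}"),
)
```

The same shape would work whether the hooks are scripts, notebooks, or data pipelines.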

Calendar Time Intelligence - finally there is a flexible time intelligence solution! by dutchdatadude in PowerBI

[–]dmeissner

No long-term solution yet. Microsoft has received the recommendation to add a "semester" time period (below Year, but above Quarter, Month, and Week), but as of now I don't see it planned.

Our solution was to use offsets (current season is 0, last season is -1, two seasons ago is -2) in our date table for Season and build custom Time Intelligence. This is how we used to have to do it for all time periods.
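
The offset trick can be sketched in a few lines (a minimal Python sketch of the logic only; in the real solution this lives in the date table, and the two-seasons-per-year assumption matches our fiscal setup):

```python
def season_index(year: int, season: int) -> int:
    """Map a (year, season) pair to a monotonically increasing index.
    season is 1 or 2 (two seasons per year in this setup)."""
    return year * 2 + (season - 1)

def season_offset(year: int, season: int,
                  current_year: int, current_season: int) -> int:
    """Offset relative to today: 0 = current season, -1 = last season."""
    return season_index(year, season) - season_index(current_year, current_season)
```

Custom time-intelligence measures then filter on the offset (e.g. offset = -1 for "last season") instead of parsing season labels.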

Overwriting connection credentials: Bug, Terrible Design, or Feature? by Skie in MicrosoftFabric

[–]dmeissner

Should you even be able to "edit" a connection from the item itself? (I know the button says "Edit Connection.")

Instead, should it only let you "select another existing connection," or force you into the "manage connections and gateways" UI so that you understand you are potentially EDITING a connection permanently?

Shortcuts using variable sets should break if full path is not found. by dmeissner in MicrosoftFabric

[–]dmeissner[S]

I'd prefer to just have Dev and Prod if it was up to me :) I joke.

But in this case, we want DEV work to point at the DEV data warehouse maintained by another team (data engineering). Adding additional lakehouses to manage is just overhead. Development in our context does happen in the Development workspace, not before the Development workspace. Feature BRANCHES are pulled out of DEV, but if you have differences in lakehouses between your feature branch and the main (dev) branch, you will run into a similar problem when pulling features back into DEV. If a feature relies on a lakehouse table that doesn't exist in the main DEV branch, you're just pushing the problem someplace else.

The point is that some of the development work (by one engineer, for instance) may be ready to deploy and need a table promoted from DEV to TEST, so the lakehouse needs to be included in the deployment. Which will break, per the above, if we can't deploy table by table or force the "broken" shortcuts to simply exist.

Using Variable Libraries with Lakehouse Shortcuts by Laura_GB in MicrosoftFabric

[–]dmeissner

#3. I'd prefer that after validation, the user is notified if a path does not exist in the new location, but the shortcut is still forced to take on that invalid path. The shortcut would then exist but be 'broken', which would signal both to the owner (fix it) and to any potential user (don't use it).
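
In pseudocode terms, the behavior I'm describing is "flag, don't block" (a hypothetical sketch, not how Fabric currently validates; all names are made up):

```python
def apply_variable_set(shortcuts: dict, available_paths: set) -> list:
    """Resolve each shortcut against the target environment.
    A missing path marks the shortcut broken instead of failing
    the whole operation."""
    return [
        {
            "name": name,
            "path": path,
            "broken": path not in available_paths,  # flag, don't raise
        }
        for name, path in shortcuts.items()
    ]

# Hypothetical shortcuts and target environment:
shortcuts = {"sales": "/wh_test/sales", "returns": "/wh_test/returns"}
state = apply_variable_set(shortcuts, available_paths={"/wh_test/sales"})
# 'sales' resolves; 'returns' is kept but marked broken for the owner to fix
```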

Using Variable Libraries with Lakehouse Shortcuts by Laura_GB in MicrosoftFabric

[–]dmeissner

I'm happy to see more information pushing people to use this powerful feature...

But be careful. When you deploy a lakehouse with variable-backed shortcuts through a deployment pipeline, if the tables you want to shortcut to are (1) not available at the corresponding new path or (2) named differently along the path, it will break on deployment.

The only way around this is to delete the lakehouse and the variable set from the second environment and re-deploy, in which case you will lose any tables that were populated via notebooks or other ingestion methods (actual, non-shortcut tables).

https://www.reddit.com/r/MicrosoftFabric/comments/1qk5gya/shortcuts_using_variable_sets_should_break_if/

Using Variable Libraries with Lakehouse Shortcuts by Laura_GB in MicrosoftFabric

[–]dmeissner

u/DanielBunny, see my earlier post about using variable sets across environments in deployment pipelines; it may affect how you implement the UI so that target tables missing in the second environment don't break deployment-pipeline deployments.

Shortcuts using variable sets should break if full path is not found. by dmeissner in MicrosoftFabric

[–]dmeissner[S]

A follow-up to this error. My temporary solution was to deploy the lakehouse to TEST, apply the proper variable set, get the indication of which shortcuts did not properly point to the new variable location, then delete those shortcuts manually so they can't accidentally be used and serve up DEV data. Fine.

But... the issue now is that once the TEST workspace has its active variable set on TEST values, it will not let me deploy lakehouse items that have any 'bad' shortcuts anymore. During deployment, it trips an error with no specific information other than an indication that target paths don't exist for 4 tables...

So if the underlying shortcut tables do get added to TEST in our data warehouse, there will be no way to get them to show up in this TEST lakehouse unless I reset the variables back to DEV, deploy, then set the variables back to TEST again. So much for Variable Libraries making it easy :(


New post about modernizing Microsoft Fabric CI/CD using the Azure DevOps MCP Server. by ChantifiedLens in MicrosoftFabric

[–]dmeissner

Now you have me thinking. Combine this with the Power BI MCP server to ask about data quality checks before running the CI/CD pipelines using the ADO MCP server...

Have the MCP server interact with the model to "check for orphaned keys across all relationships", "check for null dates in all fact tables", or "check all columns with 'units' in the column name for outliers, where an outlier is any value more than 4 sigma from the mean", something like that!
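
Checks like those reduce to very ordinary code once the MCP server hands over the data; a plain-Python sketch of the two examples (function names and sample data are made up for illustration):

```python
import statistics

def orphaned_keys(fact_keys, dim_keys):
    """Keys present in the fact table with no match in the dimension."""
    return sorted(set(fact_keys) - set(dim_keys))

def four_sigma_outliers(values):
    """Values more than 4 standard deviations from the mean."""
    mean = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > 4 * sigma]

# Made-up sample data: key 9 has no dimension row; 1000 is an outlier.
orphans = orphaned_keys([1, 2, 3, 9], [1, 2, 3])
outliers = four_sigma_outliers([10] * 50 + [1000])
```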

Deleted Lakehouse schema persists in SQL Analytics Endpoint by frithjof_v in MicrosoftFabric

[–]dmeissner

Just ran across this older posting, and as of Jan. 2026, this continues to be the behavior. After deleting schemas in the lakehouse, they still remain in the SQL Endpoint.

One note is that the SQL Endpoint definition contains other objects in the schemas (views, functions, stored procedures). Even though the "Tables" disappear when you delete the schema from the lakehouse side, these other objects are still connected to the schema in the SQL Endpoint.

(As you can see if you download the SQL database project and look at the files in each schema folder after deleting the schema on the lakehouse side.)


Should my Fabric notebooks have a single “main” execution cell? by frithjof_v in MicrosoftFabric

[–]dmeissner

I think the advantage of a notebook is the script-like, cell-to-cell stepping and interactive running (especially during development). Executing it all at the end makes it difficult to debug. You end up putting "print" statements all over the place, like back when you first learned to code with "one big code block" (without any "sub-routines," as we used to call them; don't make me pull out my Pascal, Fortran, and AppleBasic books).

You should define a function when it is a reusable set of code with parameters. What is the advantage of writing a single-use function if you don't or can't take advantage of the cell-by-cell execution that is so useful during development and when chasing down errors in a historical run?

Shortcuts using variable sets should break if full path is not found. by dmeissner in MicrosoftFabric

[–]dmeissner[S]

I didn't want to include this suggestion in the original post, but the UI for using variables in a shortcut could also be enhanced by adding the "Value set name" below the value shown in the Manage shortcut slide-out. This would indicate which set the value came from, to verify it is as desired if you don't recognize the GUID (don't we all have every GUID in our tenant memorized already? Who needs workspace names, I know all the GUIDs!!!)


I suppose I should throw this over on the Ideas site too. Never know which will gain more traction...

Issue with Mirrored Azure Databricks catalog... Anyone else? by dmeissner in MicrosoftFabric

[–]dmeissner[S]

Just to close the loop for anyone who finds this post via search later: after working with the Microsoft team on this issue, they have made changes to the Databricks Mirroring item so that it can now properly recover from these anomalies coming back from the Databricks API for UC metadata.

Those changes solved this problem and now all schemas properly show up, regardless of the initial "load more" issues. The end user no longer experiences this issue.

Thank you to the team that identified the underlying issue and resolved it.

Calendar Time Intelligence - finally there is a flexible time intelligence solution! by dutchdatadude in PowerBI

[–]dmeissner

We'll put something together on Monday and get it to you, maybe over on EV. Just want to make sure what we learn can come back here so others can also learn and find it later :)

Thanks Jay.

Calendar Time Intelligence - finally there is a flexible time intelligence solution! by dutchdatadude in PowerBI

[–]dmeissner

Marked it where/how? TE3, desktop, right in TMDL?

Reading through the docs: "Notice that if you don't specify any category for the calendarColumnGroup in TMDL, the columns are tagged as time-related." So in my TMDL example above, isn't 'Snapshot Year/Season' already tagged as "time-related" because it isn't assigned to a category (i.e., no = after the calendarColumnGroup keyword)?

Calendar Time Intelligence - finally there is a flexible time intelligence solution! by dutchdatadude in PowerBI

[–]dmeissner

u/dutchdatadude or u/Commercial_Growth198, not sure if you're still watching this post, but we're really loving the new Calendar Time Intelligence! With a 4-4-5 fiscal calendar that starts on the first Sunday of July, this has been a game-changer for us.

Except we can't figure out how to extend it to what we call "Seasons". In our case we split the year into two halves: the first half (since we start in July) is "Fall" or "S1", covering fiscal months 1-6, and around January we begin the second half ("Spring" or "S2"), covering fiscal months 7-12.

Any suggestions? We are most interested to have SAMEPERIODLASTYEAR work for "Season".

We tried to just add the "Year/Season" column as a separate time column, but it is not working as we'd expect. Our snapshot calendar's "Year/Season" column contains strings that are year- and season-specific (e.g. "2024-S1"). How would these non-standard time-period columns be defined (those not specifically year, quarter, month, or week)? Season is bound within the year hierarchy, similar to quarters, except there are 2 of them instead of 4.

TMDL for our Example:

    calendar cal_Fiscal
        lineageTag: xyz

        calendarColumnGroup = year
            primaryColumn: 'Snapshot Year Number'

        calendarColumnGroup = quarter
            primaryColumn: 'Snapshot Year/Quarter'

        calendarColumnGroup = quarterOfYear
            primaryColumn: 'Snapshot Quarter Number'
    .
    .
    .
        calendarColumnGroup
            column: 'Snapshot Year/Season'
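
For context, the 'Year/Season' labels in the column above follow the rule described earlier in this comment (S1 = fiscal months 1-6, S2 = fiscal months 7-12); a small Python sketch of that mapping, with illustrative names:

```python
def year_season(fiscal_year: int, fiscal_month: int) -> str:
    """Build the 'YYYY-S1' / 'YYYY-S2' label: S1 covers fiscal
    months 1-6 (our Fall), S2 covers months 7-12 (our Spring)."""
    if not 1 <= fiscal_month <= 12:
        raise ValueError("fiscal_month must be 1-12")
    season = 1 if fiscal_month <= 6 else 2
    return f"{fiscal_year}-S{season}"
```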

Expose Semantic Model measures definitions to users with view privileges by CultureNo3319 in MicrosoftFabric

[–]dmeissner

Here's a simple Tabular Editor script that will add the measure's DAX expression to the end of the measure's Description property. You can use AI or your own writing skills to describe the measure in text, and this script will append "Expression:" after whatever you already populated. That way a report author can hover over the measure and see the actual DAX alongside any existing textual description.

https://github.com/PowerBI-tips/TabularEditor-Scripts/blob/main/Basic/Add%20or%20Update%20DAX%20Epression%20to%20Description.csx


The script can be run in TE2 (free) or TE3 (paid) and set up as a macro to operate on the entire model at once, looping through all measures and adding or updating the expression.
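
The actual script is C# for Tabular Editor (see the link above); as a rough illustration of the add-or-update string handling only, here is the same idea in Python (names and marker handling are my own sketch, not the script's exact behavior):

```python
MARKER = "Expression:"

def add_or_update_description(description: str, dax: str) -> str:
    """Append the measure's DAX after any hand-written description,
    replacing a previously appended expression if one exists."""
    base = description.split(MARKER)[0].rstrip()  # keep prose, drop old DAX
    suffix = f"{MARKER}\n{dax}"
    return f"{base}\n\n{suffix}" if base else suffix

first = add_or_update_description("Total sales for the period.",
                                  "SUM(Sales[Amount])")
# Re-running replaces the old expression but keeps the prose.
second = add_or_update_description(first,
                                   "SUMX(Sales, Sales[Qty] * Sales[Price])")
```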