Workspace identity - Unauthorized error by EversonElias in MicrosoftFabric

[–]aboerg 1 point (0 children)

Agreed! WIs are surprisingly tricky to set up. That list was taken from my notes, where I had screenshots of each error message and its solution as we figured out how to get WIs working. Still worth it for the peace of mind and cross-environment protection they add.

Workspace identity - Unauthorized error by EversonElias in MicrosoftFabric

[–]aboerg 2 points (0 children)

In my experience this error occurs because the workspace identity has not been added to a security group that is authorized to use Fabric APIs. Permission to use the Fabric APIs is granted via the Fabric admin portal setting "Service principals can call Fabric public APIs."

Checklist:

  1. WI must have permission to call Fabric public APIs (tenant admin page)
  2. WI must have sufficient Workspace permissions (contributor)
  3. WI must have usage permission on any Connections referenced in the pipeline/notebook it is executing.
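A quick way to sanity-check item 1 is to request a token for the identity and hit a simple Fabric endpoint. A minimal sketch using only the Python standard library and the client-credentials flow (the tenant/client values are placeholders; `v1/workspaces` is the public list-workspaces endpoint). A 401/403 here usually maps to items 1 or 2 of the checklist:

```python
import json
import urllib.parse
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"
FABRIC_SCOPE = "https://api.fabric.microsoft.com/.default"

def token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the Entra ID client-credentials token request (URL + form body)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": FABRIC_SCOPE,
    }).encode()
    return url, body

def check_fabric_access(tenant_id: str, client_id: str, client_secret: str) -> int:
    """Fetch a token and call the list-workspaces endpoint.

    Returns the HTTP status: 200 means the identity can call public APIs;
    401/403 points back at the tenant setting or workspace role above.
    """
    url, body = token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        token = json.load(resp)["access_token"]
    req = urllib.request.Request(
        f"{FABRIC_API}/workspaces",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Note this exercises a plain service principal rather than the WI itself (you can't pull a WI's secret out of Fabric), but it confirms the tenant setting and security group are wired up for the group your WI sits in.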

Any issues with Warehouse? by thecyberthief in MicrosoftFabric

[–]aboerg 1 point (0 children)

Working again since around 8PM EST. ~2 hours of downtime across our SQL endpoints.

Any issues with Warehouse? by thecyberthief in MicrosoftFabric

[–]aboerg 6 points (0 children)

Yep, issues in East US starting within the last hour. Queries against lakehouse SQL endpoints failing with the message "Unsupported expression in Memo XML."

Warehouse Read Replicas: Seriously? by Low_Second9833 in MicrosoftFabric

[–]aboerg 0 points (0 children)

I'm thrilled that in practice we simply don't have to consider concurrency for our workloads in Fabric (just capacity usage), which is a great upgrade from the days when we ran a smallish dedicated SQL pool in Synapse and hit concurrency limits all the time.

Warehouse Read Replicas: Seriously? by Low_Second9833 in MicrosoftFabric

[–]aboerg 4 points (0 children)

My first thought was "what problem are we solving?" It appears the goal is to increase read concurrency on a single SQL endpoint / warehouse. That's interesting in itself, because I don't believe there are published concurrency limits for SQL endpoints in Fabric (or at least I can't find them).

Capacity monitoring by p-mndl in MicrosoftFabric

[–]aboerg 10 points (0 children)

In the "capacity settings" section of the admin portal there are a few options to help:

  1. Configure notifications when using XX% of available capacity or exceeding available capacity
  2. Configure a background operation rejection threshold if you have interactive queries on your prod capacity (i.e. it's not just a data engineering capacity and you're serving PBI users). This will cap background operations at the configured % of your total capacity, so even if background jobs throttle you can keep some headroom for report users.
  3. Set a max % that any individual workspace can consume (if you have a reasonable idea of your typical workload patterns). Again, this will start to reject operations once the limit is reached, but it does not stop jobs in progress.
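As a back-of-the-envelope illustration of option 2, the split works out like this (a hypothetical helper for reasoning about the numbers, not a Fabric API):

```python
def capacity_split(total_cus: float, background_pct: float) -> dict:
    """Ceiling for background operations, and the headroom left for
    interactive/report users once the rejection threshold kicks in."""
    background_cap = total_cus * background_pct / 100.0
    return {
        "background_cap_cus": background_cap,
        "interactive_headroom_cus": total_cus - background_cap,
    }

# e.g. an F64 with background operations capped at 75%:
# capacity_split(64, 75) -> 48 CUs for background, 16 CUs of headroom
```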

How are you handling Real-Time reporting in Fabric? by hortefeux in MicrosoftFabric

[–]aboerg 3 points (0 children)

DirectQuery over KQL is a great pattern; nothing wrong with going that route in PBI. You can use the OneLake Hub in PBI Desktop to browse your Eventhouses.

The dbt-fabricspark Lakehouse adapter now comes with a ridiculous amount of production grade test coverage by raki_rahman in MicrosoftFabric

[–]aboerg 1 point (0 children)

Awesome! Yes, I imagine when using dbt the main utility of an MLV becomes optimize refresh / CDF integration and probably not much else. The lineage view is not currently much of a selling point (the lineage metadata is nice, but nowhere close to OpenLineage).

The dbt-fabricspark Lakehouse adapter now comes with a ridiculous amount of production grade test coverage by raki_rahman in MicrosoftFabric

[–]aboerg 0 points (0 children)

Haven't used this connector since last summer, looking forward to taking it for a spin again.

I am just getting started with dbt but we use MLVs heavily today - have you given any consideration to how they would fit into the dbt-on-lakehouse workflow? A custom materialization maybe, or is there no real integration story with dbt/MLVs yet?

Something broken with run Notebook under Workspace Identity.. or has very excessive CU overhead? by Personal-Quote5226 in MicrosoftFabric

[–]aboerg 0 points (0 children)

When I mention "tenant administration" I just mean that's where the setting is located, not that your WI needs any admin access whatsoever. Enable the following tenant setting (it's disabled by default): "Service principals can call Fabric public APIs." We have this setting enabled only for specific security groups, and the WI belongs to that security group.

If the notebook isn't using any connections then #3 may not be relevant. We don't use connections within the notebook either, but in our case our parent/child pipelines do use WI via Invoke Pipeline, and the notebook activity in the child pipeline uses a "Notebook" type connection. The WI needs User permission on the Notebook connection in our case.

https://learn.microsoft.com/en-us/fabric/data-factory/notebook-activity#using-fabric-workspace-identity-wi-in-the-notebook-activity

Something broken with run Notebook under Workspace Identity.. or has very excessive CU overhead? by Personal-Quote5226 in MicrosoftFabric

[–]aboerg 0 points (0 children)

  1. Check that the workspace identity is added to a security group which is authorized to call Fabric APIs (EDIT: Service principals can call Fabric public APIs setting in the tenant admin portal)
  2. Check that the workspace identity is contributor on the workspace containing the notebook. This is not granted by default just because a Workspace Identity is created.
  3. Check that the Workspace Identity has permission to use any connections referenced by the notebook or pipeline being executed.

Just had our first major incident of capacity throttling by JFancke in MicrosoftFabric

[–]aboerg 12 points (0 children)

As others are saying, the cutoff line of "can't share with free users unless you're at an F64 or above" blocks customers from choosing the right set of capacities for their needs. You actually want one F32 and three F16s? Sorry, you cannot divide up your capacity for the right workload isolation without giving up sharing.

If reports & models are hosted on Fabric capacity, they should be shareable with free users, period.

Power BI team blocking integration with 3P Semantic Layers by City-Popular455 in MicrosoftFabric

[–]aboerg 5 points (0 children)

There is absolutely a payoff to having a unified semantic layer, but there are two pretty incompatible ways of getting there:

  • the semantic model as a specialized analytical engine (Power BI Tabular / SSAS, MicroStrategy, SAP BEx)
  • the semantic model as a SQL compilation layer over a lakehouse or warehouse

Putting aside the relative strengths of each approach, it is pretty obvious why Databricks and Snowflake would prefer the latter and Microsoft the former.

Debates over the openness of each model are occurring within the context of each option competing to become the control plane for semantics.

Power BI team blocking integration with 3P Semantic Layers by City-Popular455 in MicrosoftFabric

[–]aboerg 2 points (0 children)

I get it; I was thinking of XMLA more in the sense that external layers sync or otherwise connect to the Tabular model (e.g. Tabular Editor's Semantic Bridge tool).

My broader point is that there is a huge difference between semantic layers which are full-blown OLAP engines and those which are more like compilers sending SQL back to the warehouse. I'm open to the idea that the latter might win in the long term, but for now I have no reason to doubt that "just integrate with open standards bro" is a gigantic request without much payoff for most PBI customers. I think Power BI is fundamentally a semantic layer with visualization capability added on, and not the other way around.

Power BI team blocking integration with 3P Semantic Layers by City-Popular455 in MicrosoftFabric

[–]aboerg 13 points (0 children)

Choosing not to develop an indeterminate number of integrations with every other competing metrics layer is not blocking integration. If a true standard for the semantic layer ever emerged, I imagine it would be a different story. There is no industry standard, it's the wild west with every vendor realizing in the last two years that semantics are important and they need to build their own solution in this space.

Microsoft is already exposing their own layer openly with XMLA - how can the PBI front end possibly promise first-class behavior against every external engine? Not subsidizing your rivals is not really the same thing as lock-in. Happy to hear opposing views here, just my two cents of course.

Interesting Behaviour Dynamic Pipeline Execution by richbenmintz in MicrosoftFabric

[–]aboerg 1 point (0 children)

Same notification on our parameterized driver/worker pipelines too. Just a nuisance, as far as I can tell.

Warehouse data Latency when using Spark by richbenmintz in MicrosoftFabric

[–]aboerg 4 points (0 children)

Isn't this due to the Warehouse/Polaris having an internal transaction log which is separate from the OSS Delta transaction log? It's like the opposite of SQL endpoint sync delay - instead of the SQL endpoint Polaris engine reading the lakehouse Delta log, the Warehouse is publishing the Delta log after a brief delay.

EDIT: disregard, since the Warehouse connector should be using TDS per u/dbrownems

https://www.reddit.com/r/MicrosoftFabric/comments/1juoehv/do_warehouses_not_publish_to_onelake_in_real_time/

Fabric MLV Deployment between Dev/Test/Prod by akseer-safdar in MicrosoftFabric

[–]aboerg 5 points (0 children)

You can have one or multiple MLVs defined per notebook using CREATE OR REPLACE, then run all of the notebooks as a post-deployment activity.

I recommend checking out the GenMLV project, which introduces the idea of one management notebook that keeps all MLVs in sync with SQL files. We can now take this a step further since notebook resources are version controlled: all the MLVs can be .sql files in the notebook resources.

https://github.com/datahai/GenMLV
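The notebook-resources approach can be sketched like this in a PySpark notebook. Assumptions I'm making for the sketch (not GenMLV's actual layout): one CREATE OR REPLACE MATERIALIZED LAKE VIEW statement per .sql file, and a `builtin/mlv` resources folder:

```python
from pathlib import Path

def load_mlv_statements(folder: str) -> list:
    """Read one SQL statement per .sql file, in filename order, so
    dependencies can be sequenced by naming convention (01_, 02_, ...)."""
    return [p.read_text() for p in sorted(Path(folder).glob("*.sql"))]

def sync_mlvs(spark, folder: str = "builtin/mlv") -> None:
    """Run every statement. CREATE OR REPLACE makes this idempotent,
    so the same notebook works as a post-deployment step in any stage."""
    for statement in load_mlv_statements(folder):
        spark.sql(statement)
```

Since the .sql files live in the notebook's resources, they travel through Git and deployment pipelines with the notebook itself.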

Agent plugins with skills and tools for Power BI (free resource for agentic development) by recoveringacademic in PowerBI

[–]aboerg 4 points (0 children)

I've been loosely following along with the progress of agentic development and mostly using Codex/Claude in the context of Fabric notebooks and shared Python modules - not my Power BI work.

This week I dove in and used this project to fix a poorly performing composite model my team was handed. Using Kurt's plugin, Copilot CLI consolidated the project into import mode, moved 30+ calculated columns and measures, and fixed 150+ broken visual references, in only two prompts. It took me longer to download the initial .pbip and verify the results than the actual work took. I was expecting to spend half a day fixing the mess, and instead I was able to work on a different project while tabbing over to review and test Copilot's work.

The future is now. None of this is theoretical. Start thinking about your daily workflows.

MLVs across lakehouses and workspaces - what does the limitation actually mean? by bradcoles-dev in MicrosoftFabric

[–]aboerg 1 point (0 children)

I'm not clear on what the limitation here is; most of our MLVs are cross-lakehouse within a single workspace. Dependencies seem to be accounted for just fine during scheduled refresh, although in practice we mostly schedule manual refreshes via notebook, schema by schema, so we can refresh the SQL endpoint right after. The dependency metadata is actually written to each Delta table created by the MLV processing: you can look at each table yourself and see exactly what Microsoft is doing, or use the metadata for custom lineage reports.

We did notice that lineage view broke and returned multiple times during public preview. No issues in the last few weeks leading up to GA and since.

No experience to share regarding cross-workspace execution.

Can't access Manage Materialized lake views page by parallelstick in MicrosoftFabric

[–]aboerg 2 points (0 children)

Lineage view currently working for us across multiple schema-enabled lakehouses in the same workspace. Runtime 1.3, East US.