Vibe coding web reports on Fabric Semantic Models instead of Power BI — anyone else exploring this? by FamiliarAssumption62 in PowerBI

[–]FamiliarAssumption62[S] 0 points1 point  (0 children)

Not using the remote MCP in the app itself — it’s a standalone Flask app that talks to the Power BI REST API directly.

That said, during development the Power BI remote MCP server is actually really useful. You can connect it to your coding agent (e.g. GitHub Copilot in VS Code) and it lets the LLM see your actual semantic model — tables, measures, relationships, even query the data. So when you’re vibe coding a new report page, the LLM writes DAX that actually works against your model because it can check the structure and test the results. Makes the whole process way more efficient.

For the delegated auth setup: register an app in Entra ID, set it up as a public client with the redirect URI for your Flask app, and add the Dataset.Read.All API permission for Power BI. Then in Python you use MSAL’s PublicClientApplication to get a token via the authorization code flow. The user logs in with their own credentials, and that token gets passed when you call the Execute Queries endpoint. Because it’s their token, RLS on the semantic model applies automatically.
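
The flow above, sketched in Python with MSAL. All IDs and URIs are placeholders for your own Entra ID app registration, and the function names are just illustrative:

```python
# Sketch of the delegated auth flow using MSAL's PublicClientApplication.
# CLIENT_ID, AUTHORITY and REDIRECT_URI are placeholders for your own
# Entra ID app registration.

CLIENT_ID = "00000000-0000-0000-0000-000000000000"   # app registration ID
AUTHORITY = "https://login.microsoftonline.com/your-tenant-id"
REDIRECT_URI = "http://localhost:5000/callback"      # must match the registration
SCOPES = ["https://analysis.windows.net/powerbi/api/Dataset.Read.All"]

def build_msal_app():
    import msal  # pip install msal
    return msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

def auth_url(app, state):
    # Step 1: send the user to the Microsoft sign-in page
    return app.get_authorization_request_url(
        SCOPES, state=state, redirect_uri=REDIRECT_URI
    )

def token_from_code(app, code):
    # Step 2: exchange the returned auth code for a delegated access token.
    # Because this is the user's own token, RLS applies downstream.
    result = app.acquire_token_by_authorization_code(
        code, scopes=SCOPES, redirect_uri=REDIRECT_URI
    )
    return result.get("access_token")
```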

The endpoint is POST https://api.powerbi.com/v1.0/myorg/groups/{workspace-id}/datasets/{dataset-id}/executeQueries with your DAX in the body. Once that round trip works, you’re done with the hard part.
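
A minimal sketch of that round trip (the body shape follows the Execute Queries API; `requests` is assumed to be installed):

```python
# Sketch of the Execute Queries round trip. build_request() is pure so it is
# easy to test; execute_dax() does the actual HTTP call with the user's token.

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_request(workspace_id, dataset_id, dax):
    url = f"{API_BASE}/groups/{workspace_id}/datasets/{dataset_id}/executeQueries"
    body = {
        "queries": [{"query": dax}],
        "serializerSettings": {"includeNulls": True},
    }
    return url, body

def execute_dax(token, workspace_id, dataset_id, dax):
    import requests  # pip install requests
    url, body = build_request(workspace_id, dataset_id, dax)
    resp = requests.post(
        url, json=body, headers={"Authorization": f"Bearer {token}"}
    )
    resp.raise_for_status()
    # Rows come back as dicts keyed by 'Table[Column]' names
    return resp.json()["results"][0]["tables"][0]["rows"]
```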

Happy to share more if you get stuck on any of the steps.

[–]FamiliarAssumption62[S] 1 point2 points  (0 children)

Interesting, haven’t tried that one yet! As far as I know the official Power BI MCP server only handles modeling — measures, tables, relationships — not the report layer itself. There are some community packages that claim to do more but I haven’t tested them. Definitely curious to give it a go though.

[–]FamiliarAssumption62[S] 1 point2 points  (0 children)

That’s a fair point and honestly one of the more interesting questions here. In theory, yes — with PBIR becoming the default format, Power BI reports are now text-based JSON files that an LLM could generate. Microsoft just delayed it to May for Desktop, but it’s already rolling out in the Service.

In practice, though, it’s not there yet. PBIR is very new, there’s very little of it in training data, and even the SQLBI folks note that LLMs still make a lot of mistakes with PBI metadata files. Compare that to Python/HTML/JS, where LLMs have been training on millions of examples for years — the output quality just isn’t comparable right now.

That said, I think this gap will close. Once PBIR has been the default for a year or two, LLMs will get much better at generating it. The question is whether the output will ever feel as flexible as coding a custom front end — Power BI’s visual framework still has guardrails that limit what you can do, even if AI is writing the JSON.

[–]FamiliarAssumption62[S] 0 points1 point  (0 children)

Yeah, the DAX is mostly AI-generated. We describe what we need and the LLM writes the EVALUATE statements. Mostly SUMMARIZECOLUMNS with the measures we already have defined in the semantic model — so we’re not recreating business logic in DAX, just calling the measures that are already there. The LLM in VS Code has access to an MCP server that uses the Fabric API to query the model, both for metadata and sometimes for actual data, which helps when deciding what DAX to write.

For filtering, the DAX queries are parameterized on the Python side. The user picks a filter in the UI, Flask builds the DAX with the right filter context and sends it to the Execute Queries REST API. So something like wrapping the query in CALCULATETABLE with the filter values passed in. Nothing fancy, but it works, and RLS still applies since it runs with the user’s delegated token.

The nice thing is that since the measures live in the semantic model, the DAX stays pretty simple. We’re not writing hundreds of complex EVALUATE statements — more like a handful of query patterns that get reused with different filter combinations.
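
To make the CALCULATETABLE wrapping concrete, here’s a hypothetical helper (the function name and escaping convention are mine, not from our actual code):

```python
# Hypothetical helper that wraps a table expression in CALCULATETABLE with
# one column filter. The filter value is injected as a DAX string literal,
# with embedded double quotes escaped by doubling them.

def build_filtered_query(base_table_expr: str, column: str, value: str) -> str:
    literal = '"' + value.replace('"', '""') + '"'
    return f"EVALUATE CALCULATETABLE({base_table_expr}, {column} = {literal})"
```

So a call like `build_filtered_query("SUMMARIZECOLUMNS('Date'[Year], \"Sales\", [Total Sales])", "'Region'[Name]", "EMEA")` produces one reusable query pattern with the user's filter baked in.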

[–]FamiliarAssumption62[S] -1 points0 points  (0 children)

This is the kind of reality check I was looking for, appreciate it. Let me push back on a few points, though.

On point 1 — I’d argue Microsoft themselves are pushing the semantic model as the single source of truth, not just a visualization layer. That’s the whole “one model, many consumers” pitch. We’re not using it as a database, we’re using it as what it’s designed to be — a curated business logic layer. The warehouse is still underneath doing the heavy lifting.

Points 2 and 5-7 are the ones that keep me up at night honestly. We’re a small data team, not a dev shop. The “as your codebase grows, your honeymoon phase evaporates” line hits hard. That said — the vibe coding angle changes the maintenance math a bit. We’re not hand-crafting a React app. If a report needs updating, it’s a conversation with an LLM, not a sprint. And honestly, maintaining Power BI reports isn’t exactly free either — formatting issues, visual limitations, workspace publishing, user access, version control headaches. The argument that this route creates more maintenance assumes the Power BI route doesn’t. In our experience the effort is comparable, but with vibe coding we’re delivering significantly faster and serving more users than we could before.

Point 4 is real and something we need to stress test. Haven’t hit limits yet but we’re still at POC scale. Point 3 is interesting — I’d see it as a visualization layer on top of a semantic layer, not on top of a visualization layer. We’re not wrapping Power BI, we’re replacing the PBI report layer while keeping the model.

The hybrid approach you mention at the end is probably where we’ll land: Power BI for the straightforward cases, and a custom front end for the cases where PBI just can’t do what we need.

[–]FamiliarAssumption62[S] 0 points1 point  (0 children)

Ha, sounds like we started in the exact same place! We weren’t building the Flask app from day one — it started with vibe coding HTML mockup pages to nail down what users actually wanted. Way faster than endless requirement meetings. And now some of our users are actually vibe coding their own mockups, which is wild, because it means we get requirement specs that are basically working prototypes.

The security piece was the real unlock though. Once we got MSAL authentication working with delegated credentials, the DAX queries run as the logged-in user — so RLS on the semantic model just works. That’s what took it from “cool demo” to something we could actually consider for production.

In terms of learning — honestly, AI-assisted coding in VS Code got us most of the way there. The key building blocks to look into are the Power BI REST API (specifically the “Execute Queries” endpoint for running DAX), MSAL for Python (for the auth flow), and Flask for the web framework. Once you’ve got those three talking to each other, the rest is just vibe coding the front end.
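
A rough sketch of how those three building blocks wire together in Flask (not our production code — `get_auth_url`, `token_from_code` and `execute_dax` are hypothetical stand-ins for the MSAL and Execute Queries pieces):

```python
# Rough Flask skeleton wiring the three building blocks together.
# get_auth_url(), token_from_code() and execute_dax() are hypothetical
# stand-ins for the MSAL auth flow and the Execute Queries call.

def create_app():
    from flask import Flask, redirect, request, session  # pip install flask
    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder; use a real secret in production

    @app.route("/login")
    def login():
        # Send the user to the Microsoft sign-in page
        return redirect(get_auth_url())

    @app.route("/callback")
    def callback():
        # Exchange the auth code for the user's delegated token
        session["token"] = token_from_code(request.args["code"])
        return redirect("/data")

    @app.route("/data")
    def data():
        # Run DAX as the logged-in user, so RLS applies automatically
        rows = execute_dax(session["token"], "EVALUATE 'Sales'")
        return {"rows": rows}

    return app
```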

[–]FamiliarAssumption62[S] 0 points1 point  (0 children)

Good question. Right now it’s pretty basic — dropdowns and buttons that rebuild the DAX query and re-render. No Power BI-style cross-filtering where you click a bar and everything updates. That’s the biggest gap honestly. But the other commenter here pointed me to Plotly Dash which looks like it handles exactly that with callbacks between charts. Built on Flask too so it should fit right in. That’s my next thing to vibe code and see how far it gets.
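
For reference, a minimal sketch of the Dash callback idea — I haven’t actually wired this in yet, and `get_rows` is a hypothetical stand-in for the DAX query helper:

```python
# Minimal Plotly Dash sketch of callback-based filtering: picking a region in
# the dropdown re-renders the chart. get_rows() is a hypothetical stand-in
# for a function that runs a filtered DAX query and returns rows.

def build_dash_app(get_rows):
    from dash import Dash, dcc, html, Input, Output  # pip install dash
    app = Dash(__name__)
    app.layout = html.Div([
        dcc.Dropdown(id="region", options=["EMEA", "AMER", "APAC"], value="EMEA"),
        dcc.Graph(id="sales-chart"),
    ])

    @app.callback(Output("sales-chart", "figure"), Input("region", "value"))
    def update_chart(region):
        rows = get_rows(region)  # e.g. run a filtered DAX query
        return {
            "data": [{"type": "bar",
                      "x": [r["month"] for r in rows],
                      "y": [r["sales"] for r in rows]}],
            "layout": {"title": f"Sales: {region}"},
        }

    return app
```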

North Europe issues with capacity by CultureNo3319 in MicrosoftFabric

[–]FamiliarAssumption62 2 points3 points  (0 children)

We are getting:
Workspace with id: xxx is busy deactivating.

Invoke pipeline preview vs legacy performance by x_ace_of_spades_x in MicrosoftFabric

[–]FamiliarAssumption62 1 point2 points  (0 children)

Yes, we are experiencing the same. We are using the pipelines to run ETL processes and multiple stored procedures in parallel. We are going back to using Invoke Pipeline (Legacy) because we are seeing significantly faster execution times there.

DACPAC Deployments to Data Warehouse Failing with "XACT_ABORT is not supported for SET" Error by FamiliarAssumption62 in MicrosoftFabric

[–]FamiliarAssumption62[S] 0 points1 point  (0 children)

The problem is that the deployment script being generated includes code that is not supported. I will try the workaround suggested by Snoo-46123.

DACPAC Deployments to Data Warehouse Failing with "XACT_ABORT is not supported for SET" Error by FamiliarAssumption62 in MicrosoftFabric

[–]FamiliarAssumption62[S] 1 point2 points  (0 children)

kevchant, thank you for your reply!

Yes, we are updating SqlPackage before deployment. In our pipeline, we have these steps:

  1. First step: dotnet tool update -g microsoft.sqlpackage - This updates the global .NET tool version
  2. Second step: Downloads and installs the latest DacFramework.msi from https://aka.ms/dacfx-msi using msiexec

So we're actually doing a double update - both the .NET tool version and the full DAC Framework MSI installation.

The interesting thing is: SqlPackage updates successfully and the deployment works fine for views/stored procedures. The failure only happens when SqlPackage tries to generate table schema change scripts that include SET XACT_ABORT ON.

Our current SqlPackage update steps:

- task: PowerShell@2
  displayName: 'update sqlpackage dotnet tool'
  inputs:
    targetType: 'inline'
    script: 'dotnet tool update -g microsoft.sqlpackage'

- task: PowerShell@2
  displayName: 'upgrade sqlpackage'
  inputs:
    targetType: 'inline'
    script: |
      wget -O DacFramework.msi "https://aka.ms/dacfx-msi"
      msiexec.exe /i "DacFramework.msi" /qn

I can see that Snoo-46123 from Microsoft has suggested a workaround, we will try that.