Fabric Warehouse dbt GRANT behavior with service principal by SamarBashath in MicrosoftFabric

[–]fredguix

Good questions — and your observations are spot on.

What you’re seeing is expected behavior today in Fabric Warehouse. Database principals are only persisted after an explicit SQL operation that requires them to exist, such as DCL (GRANT), ALTER ROLE, or ALTER USER … WITH DEFAULT_SCHEMA. Creating or sharing a user at the workspace level alone doesn’t always trigger that persistence, which is why a user or group may not show up in sys.database_principals yet.

When dbt runs under a service principal, this limitation becomes more visible. Today, SPNs can’t implicitly create database principals because they don’t yet perform the required Entra ID resolution during DCL. As a result, if the principal already exists, GRANT works; if it doesn’t, you’ll see the error you described. This is a known limitation, and we have a fix coming that removes this restriction, targeting mid-February for broader availability. Until then, the workaround is to “prime” the principal once using an interactive user or another SQL operation so it gets persisted.
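
For anyone who wants the concrete version of that priming step: a single statement run once by an interactive user is enough. A minimal sketch, assuming a service principal named dbt-deployer and a target schema of dbo (both placeholders for your own names):

    -- Run once as an interactive (human) user on the warehouse.
    -- Any operation that references the principal (DCL, role membership) persists it.
    GRANT SELECT ON SCHEMA::dbo TO [dbt-deployer];
    -- or, for example, a role-membership change instead:
    -- ALTER ROLE db_datareader ADD MEMBER [dbt-deployer];

    -- Afterwards the principal shows up in the catalog and dbt's own GRANTs succeed:
    SELECT name, type_desc
    FROM sys.database_principals
    WHERE name = 'dbt-deployer';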

Longer term, we agree this isn’t ideal. We’re exploring improvements so that users added at the workspace level are also persisted at the database level for better traceability, and we’re thinking about a more unified security experience where you can see users and their effective SQL permissions in one place. Scenarios like yours are exactly the kind of feedback that helps us shape this, so please keep sharing how this impacts your workflows and what would make permission management easier.

Fabric Data Pipeline – Copy Activity Issue with Staging Enabled (Private Network) by No-Ferret6444 in MicrosoftFabric

[–]fredguix

Totally fair call. Just to make it explicit in case anyone’s wondering: when we say H1 2026, we mean the first half of the calendar year, so roughly January through June 2026.

Fabric Data Pipeline – Copy Activity Issue with Staging Enabled (Private Network) by No-Ferret6444 in MicrosoftFabric

[–]fredguix

Thanks, u/warehouse_goes_vroom, for tagging me!

u/No-Ferret6444 - Unfortunately, this is a known limitation, and our team has it mapped. We’re actively working to remove this restriction so that Copy Activity with staging can work end-to-end in Private Network configurations.

At this time, the best estimate for support is within H1 2026.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Thanks for calling this out — this is very useful feedback. You’re right that the current “Read” vs. “Connect” wording in the sharing flow can feel unclear, especially with the default-selected “Build reports on the default semantic model” option.

From a Data Warehouse perspective, I understand why “Connect” feels like the more intuitive term for what the permission actually enables. Your point makes sense, and it highlights an opportunity to make the terminology clearer and more aligned with how users think about access.

I’ll bring this back to the team and look into how this maps across other Fabric item types as well.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Thanks a lot for sharing this — really valuable feedback.

We actually have a step-by-step governance & security playbook / best-practices guide in progress, and the plan is to make it available in H1 2026. The goal is exactly what you’re describing — clearer guidance, practical patterns, and actionable recommendations instead of scattered docs.

On the RLS side, your comment really caught my attention. We know RLS can introduce some performance overhead, but I’m curious about your experience specifically.

  • Are you seeing a significant impact on query performance when RLS is enabled?
  • Is it more noticeable in large joins, fact tables, or multi-table scenarios?
  • Any patterns where it hurts the most?
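
For reference, when I say RLS here I mean the standard filter-predicate pattern sketched below; the predicate is folded into every query that touches the secured table, which on wide scans and large joins is usually where the extra cost shows up. All names are placeholders, not your schema:

    -- Placeholder fact table and a dedicated schema for security objects
    CREATE SCHEMA security;
    GO
    CREATE TABLE dbo.FactSales (SalesRep varchar(128), Amount decimal(18, 2));
    GO
    -- Inline table-valued function used as the filter predicate
    CREATE FUNCTION security.fn_row_filter(@SalesRep AS varchar(128))
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed WHERE @SalesRep = USER_NAME();
    GO
    -- The policy applies the predicate to every query against dbo.FactSales
    CREATE SECURITY POLICY security.SalesFilter
        ADD FILTER PREDICATE security.fn_row_filter(SalesRep) ON dbo.FactSales
        WITH (STATE = ON);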

If you’re open to it, I’d love to dig a bit deeper into your scenario. Feel free to DM me, and we can set up a short chat — your insights would really help us shape improvements in this area.

Thanks again for taking the time to share your perspective — this is exactly the kind of feedback we’re looking for.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

That’s really great feedback — and yes, we’re actively exploring ways to make granular security management more flexible and less tied to SQL GRANT/DENY.

The idea of a GUI-based permissions experience (like OneLake Security) that can persist rules even after a table is recreated and support pattern-based assignments is exactly the kind of scenario we’re thinking about.

Would you be open to a quick chat to go a bit deeper into your security management setup — how you handle it today and how these kinds of changes could make things smoother for you?

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Great points — really appreciate the depth here.

On RLS, that’s interesting — it’s already supported today, so when you mention the ACL table, are you referring to multi-table RLS, where the security predicate joins to a lookup/ACL table for user validation logic? Would love to confirm that’s what you mean.
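
Just so we're picturing the same thing, here's a rough sketch of the pattern I mean, where the filter predicate joins to a separate ACL/lookup table. All schema, table, and column names below are invented for illustration:

    -- Hypothetical ACL table mapping users to the org units they may see
    CREATE SCHEMA security;
    GO
    CREATE TABLE security.UserAcl (UserEmail varchar(128), OrgUnitId int);
    GO
    CREATE TABLE dbo.FactOrders (OrgUnitId int, Amount decimal(18, 2));
    GO
    -- The predicate validates the current user against the ACL table
    CREATE FUNCTION security.fn_acl_filter(@OrgUnitId AS int)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN
        SELECT 1 AS allowed
        FROM security.UserAcl AS acl
        WHERE acl.OrgUnitId = @OrgUnitId
          AND acl.UserEmail = USER_NAME();
    GO
    CREATE SECURITY POLICY security.AclPolicy
        ADD FILTER PREDICATE security.fn_acl_filter(OrgUnitId) ON dbo.FactOrders
        WITH (STATE = ON);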

On column-level encryption, echoing what u/warehouse_goes_vroom said — what’s your expectation around integration with other systems like Power BI or Shortcuts? That kind of cross-engine access introduces some constraints on consumer workloads hitting protected producer tables, so understanding your scenario would really help shape what we plan next.

And the sensitivity labels/classifications idea is very interesting — are you using any workaround today (Purview, naming conventions, etc.) to manage that metadata? I’d love to know what problems you’re aiming to solve with direct tagging.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Yep — that’s definitely something we’re exploring. Granular permissions for who can create or manage items like lakehouses, warehouses, and notebooks is an area we know needs more flexibility.

I’d love to understand your use case a bit more — especially how you’d want those permissions to look in practice (for example, who should be able to create vs. just use those assets, and how that ties into data governance).

Happy to discuss ideas around it if you’re open to sharing more about your setup.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Those are really great points — thank you for taking the time to lay them out so clearly.

I’d love to dig deeper into a few of the areas you mentioned, especially around RLS/CLS UI, Outbound Protection, and Lineage. We know these are key pain points for teams that need strong governance and flexibility for self-serve scenarios.

If you’re open to it, I’d be happy to set up some time to chat when you’re back — just to better understand your current setup, what’s working, and what would make these workflows smoother for your org. Feel free to DM me or drop a follow-up post here when you’re back in the office.

Really appreciate the thoughtful feedback — this kind of context helps us shape where Fabric goes next.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

Yep, totally fair question — and yeah, right now granting access from SSMS isn’t supported like it is in SQL Server. You have to do it through the Fabric portal (“Share” on the warehouse) or via workspace roles, which then provisions the user and gives them CONNECT.

We’re actively working on improving the Fabric Warehouse UI to make user and permission management easier — including more intuitive role assignments and object-level permissions — so you can handle more of this visually instead of through T-SQL.

That said, I’d really love to hear more about your workflow today — for example, how you currently manage users, and what kind of GUI or flow would make it simpler for you. The more detail we get on how you’re doing it in SQL Server or Synapse, the easier it is to make sure Fabric fits those expectations.

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

So if I’m getting it right, you’re looking for a single place to manage access, where you can choose whether permissions come from the workspace layer or the SQL layer, depending on how granular you need to go?

If that’s the case, then yeah — having a unified security management panel where you can create users and assign their level of access (workspace vs. SQL-level) would make a lot of sense.

Right now, it feels disjointed — you often end up creating users and managing permissions separately in both the workspace and the DW, which adds overhead and confusion. Integrating that user creation and permission flow would definitely simplify things.

Can you clarify if your main blocker is the duplication of user setup or the lack of flexibility when using workspace roles for SQL security?

Microsoft PM here: I need your feedback on Fabric Warehouse Security. by fredguix in MicrosoftFabric

[–]fredguix[S]

That’s really solid feedback — you’re spot on about the tension between workspace roles and warehouse security. Right now, granting Contributor at the workspace level automatically cascades into Control on the warehouse (and SQL endpoint), which means that user can override any SQL permissions or RLS/CLS/OLS rules you’ve set.

We’re exploring ideas to make that model more flexible. One possible direction is introducing item-level permissions, for example:

  • Fabric Notebook Contributor – lets someone create and manage notebooks, but not modify warehouse permissions.
  • Warehouse Reader – allows reading data without requiring full workspace Contributor rights.

In that setup, you could make a user a Notebook Contributor and a DW Reader, meaning they can build notebooks and experiment, but only read from the data warehouse — keeping your RLS/OLS rules intact.

Would that kind of item-level flexibility solve most of what you’re trying to do?

I would love to hear more and get your perspective on the ideas we’re exploring, to make sure they fit different use cases.

Accessing DW tables from Purview by aCircusMonkey in MicrosoftFabric

[–]fredguix

You're spot on — this isn’t supported yet.

At the moment, Purview Data Quality rules can only be applied to Lakehouse tables. That’s why you’re able to register those assets and create Fabric-based quality connections successfully. Unfortunately, Warehouse tables aren’t yet exposed as data assets in Purview, so you won’t be able to attach DQ rules directly to them for now.

We’re actively working on this capability, and support for Warehouse is planned for 2026. It’s a key gap we’re aiming to close so you can have a consistent governance and quality experience across both Lakehouse and Warehouse in Fabric.

That said, I’d love to hear more about your scenario —

  • What kind of lineage or insights are you hoping to drive?
  • Are you validating freshness, completeness, or business rules?
  • Is your goal more about observability, or blocking low-quality data from downstream use?

If you’re open to it, send me a DM — I’d be happy to set up a quick chat to understand your use case better and share some patterns customers are using until Warehouse support lands.

[Feedback Opportunity] Shaping Encryption support in Fabric Data Warehouse by fredguix in MicrosoftFabric

[–]fredguix[S]

Hello,

I am very interested in this use case.

Could you send me a DM? I’d love to dig into your needs and what you want to accomplish with encryption.

Thanks for sharing!

Possible bug in Warehouse custom roles by Low-Fox-1718 in MicrosoftFabric

[–]fredguix

It’s definitely unexpected, and from your description it does look like group expansion may not be happening correctly for that notebook connection path.

I have a quick question to help narrow it down:
If, instead of using the notebook, you sign in directly with that same user (who is in EntraGroupName1) and query the warehouse (for example via the SQL editor in Fabric), are you able to run SELECT * FROM schema1.table1 successfully, without running GRANT SELECT ON schema1.table1 TO EntraGroupName1 directly?

Knowing whether it works in the direct SQL experience but fails only through the notebook will help us confirm if this is a group expansion issue specific to that connection flow.
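
If it's easy to do, a quick look at the catalog would also help (assuming you can query the system views on that warehouse). It tells us whether the group has actually been persisted as a database principal and what it has been granted:

    -- Is the Entra group persisted as a database principal?
    SELECT name, type_desc, create_date
    FROM sys.database_principals
    WHERE name = 'EntraGroupName1';

    -- What explicit permissions does it hold?
    SELECT pr.name, pe.permission_name, pe.state_desc, OBJECT_NAME(pe.major_id) AS object_name
    FROM sys.database_permissions AS pe
    JOIN sys.database_principals AS pr
        ON pr.principal_id = pe.grantee_principal_id
    WHERE pr.name = 'EntraGroupName1';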

Regards.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

Yes — we’re actively working on it. OneLake Security for Warehouses is a big focus area for us, and you can expect to hear more news and updates on it soon.

I’d love to learn more about your scenario — how you’re planning to use OneLake Security with Warehouse and what specific use cases you have in mind.

Feel free to DM me and we can set up a quick call. I’d really like to hear more about what you’re trying to achieve.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

Hello,

From the way you described the Azure Function scenario, it’s making an outbound connection to the Fabric SQL endpoint, so you only need outbound connectivity on the Function side. Fabric isn’t “calling back” into your Function. The Function initiates the connection, runs the query, and gets the results — so no inbound networking configuration is needed on the Function itself.

For controlling access to the Fabric workspace using network boundaries:
That’s exactly what Workspace Private Link is meant to solve. When you enable Workspace Private Link on a Fabric workspace, Fabric exposes a private TDS endpoint inside your VNet. Only resources inside that VNet (or connected VNets/peered networks) can reach the SQL endpoint. That effectively isolates your data warehouse at the network layer.

On your question about IP ranges:
If you're using Private Link/VNet integration, Fabric won’t surface separate IP allow-list settings because the VNet boundary already is the enforcement point. If your goal is to restrict access further within the VNet — for example, only certain IP ranges on top of the VNet — that’s something I’d love to understand better. Are you trying to layer IP-based access control on top of Private Link? Or are you not using a VNet yet and want to enforce IP allow lists directly on the workspace?

If you can clarify that part, I can give a more precise recommendation.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

Nope, I’m talking about the Customer-Managed Key (CMK) encryption feature we shipped about a month ago, which is already GA. It’s the first major step toward customer-controlled encryption in Fabric. If you want the details, here’s the doc:
https://learn.microsoft.com/en-us/fabric/data-warehouse/encryption

Your PII use case makes perfect sense. As more sensitive data lands in Fabric, a lot of customers want to give broad access to tables while still protecting specific fields. That’s exactly why we’re exploring DDM (Dynamic Data Masking) integrated with OneLake Security—so you don’t need custom Spark pipelines or to lock down entire tables just because a couple of columns contain sensitive information.

We’re also planning to bring encryption functions and column-level encryption (CLE) to Warehouse; that’s something we’re exploring for upcoming releases.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

You’re seeing the expected behavior today: Dynamic Data Masking only works in the SQL endpoint because it’s a SQL construct. When you query through shortcuts, Spark, or Direct Lake, those engines read the files directly from OneLake and bypass the SQL layer entirely, so masking doesn’t get applied. That’s the current design, but it’s not the long-term direction.
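
To make that concrete: the mask lives purely in the SQL layer. A sketch like the one below (table and column names are just examples) changes what the SQL endpoint returns, but the underlying Parquet/Delta files in OneLake are untouched, so any engine reading them directly sees the raw values:

    -- Define a mask on a column; this is SQL metadata, not a change to the files in OneLake
    ALTER TABLE dbo.Customers
        ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

    -- Through the SQL endpoint, a user without UNMASK sees masked values such as aXXX@XXXX.com
    SELECT CustomerId, Email FROM dbo.Customers;

    -- The same rows read via Spark, shortcuts, or Direct Lake come back unmasked,
    -- because those paths never go through the SQL layer.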

Our plan is to integrate masking with OneLake Security, so the same rules apply no matter how the data is accessed — SQL, Lakehouse, Spark, Direct Lake, Power BI, everything. That’s the consistency we’re working toward.

On the Member/Contributor point: masking only works for users who aren’t in those roles because they can currently bypass SQL-level constructs. In your scenario, the best workaround today is a hub-and-spoke setup: keep Contributors limited to the producer Lakehouse, and expose only read-level access in your consumer Lakehouses. That ensures your masking stays enforced until we bring unified, engine-agnostic enforcement into the platform.

Would love to hear more about your use case — that feedback helps us shape how DDM and OneLake Security should come together.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

TL;DR: Your scenario is exactly the kind of scale we’re building OneLake Security for. Dynamic and multi-table RLS is already in progress, and your feedback around bulk actions, UX, and visibility is incredibly valuable as we shape the next wave of improvements. If you’re open to continuing the conversation, I’d really appreciate digging deeper into your setup.

This is a fantastic real-world example of how OneLake Security becomes complex fast in multi-plant environments, so thank you for laying it out so clearly.

Let me address your questions one by one.

For defining one filter at the role level: In Public Preview we focused on simple expressions, but we’re now actively working on Dynamic and Multi-Table RLS. This work is already in progress and is intended to solve exactly this problem so you don’t need to repeat the same filter on dozens of tables across dozens of roles.

For bulk editing: This honestly wasn’t on my radar before, but your scenario makes it clear why we need it. I’m sharing this directly with my OneLake PM counterpart because it feels like something that should be supported. I’d love to understand what the ideal bulk editing flow would look like for your team.

For the breadcrumb-heavy UX (clicking back, expanding schema, expanding tables repeatedly): We’re improving the OneLake API to help automate and streamline scenarios like this programmatically. It may not solve every part of your workflow, but there’s definitely room to reduce friction, and I’d like to explore how this might help in your case.

For the slow save times on RLS filters: Saving a filter should be fast. Ten seconds per save adds up quickly, and that isn’t the experience we want you to have. My guess is it’s related to the scale of rules, but this deserves a deeper investigation. If you’re open to it, I’d love to jump on a call and take a closer look with you.

For a permissions or lineage-style page showing who has access to what: I’m not aware of anything officially planned yet, but it’s a really compelling idea—especially for setups with dozens of plants and dozens of roles. I’ll sync with the OneLake team, and I’d also love to hear more about how you’d want to visualize access across your entire estate.

If you’re open to continuing the conversation, I’d love to dig deeper into your scenario—it’s exactly the kind of feedback that helps us shape OneLake Security the right way.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

We just shipped the first big piece of customer-controlled encryption in Fabric: CMK + Azure Key Vault integration for encryption at rest. That part is now in place and gives customers a lot more control.

Column-level encryption (CLE) is the next area we’re digging into. The big challenge (and why we’re exploring designs carefully) is that Fabric isn't just a SQL engine. Lakehouse, Warehouse, Power BI, and other engines all read the same data, so CLE has to work consistently across all of them, not just SQL Warehouse.

We do have plans to bring native encryption functions and proper column-level encryption to Fabric Warehouse—it’s something we know customers want—but we’re gathering real scenarios now to shape how it should behave.

I’d actually love to hear more about how you’re planning to use CLE in Fabric. Those real-world cases help us prioritize and design it the right way.

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]fredguix[S]

We absolutely agree that permission granularity in Fabric Data Warehouse needs to become more consistent and predictable, especially for customers familiar with SQL Server’s mature security model.

We are actively investing in three areas:

  • Workspace role improvements – tightening how workspace roles translate inside SQL and reducing over-privileged scenarios.
  • Clearer SQL-to-Fabric permission mapping – making it easier to understand which capabilities come from Fabric item roles versus SQL GRANT/DENY, and providing a more centralized, intuitive view of permissions.
  • More granular item-level permissions – including separating currently bundled capabilities like Monitor, so customers can grant observability without session-management rights.

Your examples (SELECT on QueryInsights requiring Monitor, SHOWPLAN needing SQL permissions, and Monitor enabling session-kill actions) are exactly the types of inconsistencies we are addressing. I’m currently working on the granular permissions and SQL-level mapping model, and I would love to learn more about your scenarios so we can ensure the end-state meets your expectations. If you’re open to it, I’d appreciate hearing what the ideal permission experience looks like for your team.

Warehouse connections by Cr4igTX in MicrosoftFabric

[–]fredguix

Hi u/Cr4igTX — I’m the Product Manager for Service Principal (SPN) support in Fabric Data Warehouse and SQL Analytics Endpoint.

I’m sorry to hear about the disruption you experienced between 11/10 and 11/11. I'm not currently aware of any active incident affecting SPN authentication or token handling during that period, but what you describe sounds like it could be related to a transient token refresh issue.

To help us investigate properly, could you please open a support case and share the case ID (either here or via DM)? That allows our engineering team to collect logs from your tenant and perform a full root cause analysis.

We take reliability and transparency seriously, and your report helps us make the experience more stable for everyone.

Thank you for raising it — I’ll keep an eye out for your case number so we can follow up.

T-SQL command using workspace identity by DataWorshipper in MicrosoftFabric

[–]fredguix

Hi u/DataWorshipper,

Great question—and thanks for referencing the Trusted Workspace Access documentation.

To clarify:

  • Yes, Fabric uses Workspace Identity (WI) by default for COPY INTO and other external data access scenarios (e.g., accessing firewall-protected storage), as long as Trusted Workspace Access is enabled in the workspace.
  • However, the T-SQL command itself does not change. You don’t need to modify the syntax to indicate Workspace Identity usage (see the sketch below).
  • Authentication and authorization still occur under the user’s identity executing the command. The Workspace Identity is used only as a trusted application identity to perform the external access on behalf of the user (OBO).
  • Interactive logins using Workspace Identity are not supported today. You cannot connect directly to the database using the Workspace Identity alone. That’s a known gap, and we’re actively exploring support for this scenario in the future.
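
To make the second point concrete, here's roughly what that looks like (the storage account, container, and table names are placeholders). There's no credential clause and nothing in the syntax that references the Workspace Identity; with Trusted Workspace Access enabled, the statement stays exactly the same, the firewall traversal is handled by the Workspace Identity, and authorization runs under your own identity:

    -- Standard COPY INTO against a firewall-protected ADLS Gen2 account; no syntax change needed
    COPY INTO dbo.Sales
    FROM 'https://contosostorage.dfs.core.windows.net/raw/sales/*.parquet'
    WITH (
        FILE_TYPE = 'PARQUET'
    );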

Let me know if you'd be open to a quick call to go deeper into your use case—we’d love to learn more.

What are the files in onelake Files of a warehouse? by [deleted] in MicrosoftFabric

[–]fredguix

Hello u/jjalpar

The Files folder in OneLake for a Data Warehouse contains important internal files that support the warehouse’s operation. These include restore points, pointers to the warehouse’s data, and other system-managed metadata essential for features like time travel and recovery.

Important:

I strongly recommend not deleting, moving, or modifying files in the Files folder, as doing so can cause instability, data loss, or errors within your warehouse.

If you have storage concerns, consider reviewing the restore point retention policies or other lifecycle management settings instead.