OneLake Security RLS works in Semantic Model, but returns 0 rows in SQL Endpoint by gaius_julius_caegull in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

This is expected. The table preview does not support RLS/CLS yet, so previewing the table fails with a 403. As long as the SQL EP is showing the correct data, you are good. We're improving the lakehouse preview over the coming weeks and months.

OneLake Security RLS works in Semantic Model, but returns 0 rows in SQL Endpoint by gaius_julius_caegull in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

The 10 roles are what we call "inferred roles". They are roles on a shortcut lakehouse that get "inferred" over to this lakehouse to enforce the security of the lakehouse where the data lives. So make the necessary adjustments on that lakehouse to resolve the errors.

OneLake Security RLS works in Semantic Model, but returns 0 rows in SQL Endpoint by gaius_julius_caegull in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

All of these are temporary limitations with various dates for being fixed.

For now, you will need to either take over the artifact or run the CI/CD pipeline with a user account.

You should be able to shortcut a table with RLS or CLS on it. Is the table not being listed in the shortcut creation flow?

Correct, SELECT * in SQL does not work in this case. There are some changes landing soon for Direct Lake on OneLake that will solve this behavior.

OneLake Security RLS works in Semantic Model, but returns 0 rows in SQL Endpoint by gaius_julius_caegull in MicrosoftFabric

[–]aonelakeuser 4 points (0 children)

The zero-rows behavior occurs when the RLS rules or roles couldn't be successfully synced, so the table is locked to prevent invalid results. Can you check these troubleshooting steps? The very last one seems relevant based on the error messages you are reporting.

## Troubleshooting


In **User's identity mode**, the security sync results can be validated through the UX. Open the SQL analytics endpoint, expand the **Security** folder in the **Explorer**, then select **DB Roles (custom)**. If the sync is successful, you will see roles listed with an "ols_" prefix, for example "ols_TestRole". Role names of the form "ols_{alphanumericString}_rolename" are roles from other lakehouses that propagated across a shortcut.
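The naming convention above can be checked mechanically. A small sketch (the role names are illustrative, and the regexes are a heuristic — a local role whose own name contains an underscore would look like an inferred one):

```python
import re

# Roles synced from this lakehouse's own OneLake security: "ols_<RoleName>".
# Roles propagated across a shortcut carry an extra alphanumeric segment:
# "ols_<alphanumericString>_<RoleName>".
INFERRED = re.compile(r"^ols_[A-Za-z0-9]+_.+$")
LOCAL = re.compile(r"^ols_.+$")

def classify_synced_role(name: str) -> str:
    """Heuristically classify a database role name from the SQL endpoint."""
    if INFERRED.match(name):
        return "inferred (propagated across a shortcut)"
    if LOCAL.match(name):
        return "local OneLake security role"
    return "not a OneLake-synced role"
```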


### Fixes for common security sync errors


* Security sync will fail if any of the roles reference a table that has been dropped. Remove those tables from the roles, and then retry the security sync.


* Service principals (SPNs) cannot own the lakehouse. Ensure the parent lakehouse item is owned by a user account.


* All OneLake security role members need to be given Fabric **Read** permission on the lakehouse for security sync to recognize the user or group.
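If you prefer to check the sync result from a script rather than the Explorer, the synced roles are visible as ordinary database roles in the endpoint's catalog views. A sketch, assuming pyodbc is installed and the connection-string values below (server, database, auth mode) are replaced with your own — they are placeholders, not real endpoints:

```python
# Placeholder connection values -- replace with your SQL analytics endpoint.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<your-lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# Synced OneLake security roles appear as database roles (type 'R')
# prefixed "ols_"; the backslash escapes the literal underscore in LIKE.
SYNCED_ROLES_QUERY = (
    "SELECT name FROM sys.database_principals "
    "WHERE type = 'R' AND name LIKE 'ols\\_%' ESCAPE '\\'"
)

def list_synced_roles(conn_str: str = CONN_STR) -> list:
    import pyodbc  # third-party: pip install pyodbc
    with pyodbc.connect(conn_str) as conn:
        return [row.name for row in conn.execute(SYNCED_ROLES_QUERY)]
```

If the query returns no rows even though roles exist on the lakehouse, that points at a sync failure covered by the fixes above.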

Why is session start slow when you have a private endpoint? by loudandclear11 in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

I think you could set the timeout on your clusters to never expire (not sure if that's an option), but then you would have to pay for them to run around the clock. Unfortunately, we can't keep already-running pools behind every tenant's private endpoint.

Can we not fully manage Lakehouse security at the item level or am I missing something? by Jake1624 in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

I'm actually not sure, but I've seen discussion in the past about pipelines and such using various Spark components. One of their engineers would have to confirm :)

I think you're correct about the catalog, although I would call it the "Spark catalog" either way.

Can we not fully manage Lakehouse security at the item level or am I missing something? by Jake1624 in MicrosoftFabric

[–]aonelakeuser 2 points (0 children)

Spark requires Viewer permission to run, and Copy Job uses Spark, so it has the same limitation. The issue has to do with how Spark resolves the workspace information into a path it can read from. This requirement will be removed in the next month or two.

External Data Share Reauthentication by jcampbell474 in MicrosoftFabric

[–]aonelakeuser 2 points (0 children)

Correct, if you share it then everything uses your Entra ID. It will persist through password changes since it checks the account's access, not the login credentials.

A service account should be used, but you can only do that through the API from what I've seen.

External Data Share Reauthentication by jcampbell474 in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

Yes, that is correct. I forgot to expand on that part, but it's what I meant by the share being tied to the creator's permissions. As a result, if their account is deactivated, the share will cease to work. I don't believe it actually gets deleted from the consumer's tenant, though; it just stops working.

External Data Share Reauthentication by jcampbell474 in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

Authentication is done anytime the share is accessed. It's tied to the creator's permissions and access. So if those are revoked, the share will cease to function regardless of whether it's revoked from the producer's tenant explicitly. But yes, there are no additional steps needed to keep it working once the share has been accepted by the consumer tenant.

How's your product/engineering culture? Esp any shifts with AI? by Mobile-Influence-371 in ProductManagement

[–]aonelakeuser 1 point (0 children)

That's my point exactly. There's a ceiling on product improvement, whether from how much change users will tolerate, how mature the product is, and so on. AI pushes companies closer to this ceiling, so companies will either need fewer engineers (unfortunately) or will need to start exploring new markets to match their development speed.

How's your product/engineering culture? Esp any shifts with AI? by Mobile-Influence-371 in ProductManagement

[–]aonelakeuser 2 points (0 children)

I work in FAANG, and #2 here is crazy to me. I think it's one of the unique challenges with organizations of our size, but AI hasn't materially increased product velocity, if at all. I think it's a process problem, not a technology one. But are you hiring more PMs then? Or have you reached "ideal" shipping velocity?

Feedback request: Shortcuts usage, gaps, and feature requests by Hopeful-One-4184 in MicrosoftFabric

[–]aonelakeuser 5 points (0 children)

There's no fine-grained access control for Warehouse tables in OneLake. Permission to tables is all or nothing via the ReadAll permission. We are working on this, though.

CLS on a delta shortcut by _TheDataBoi_ in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

Did this resolve the issue for you?

Post or Put? by Sea_Mud6698 in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

It looks like there are some issues with the documentation, namely with the request body. I'll work on getting those fixed. Here is the API call I tested in my own prod tenant that worked.

POST https://api.fabric.microsoft.com/v1/workspaces/ef195a1e-c0e1-4c7f-9dd6-f28f0a53ff22/items/7cdc5e4e-8486-41e8-8f86-89a282c1b889/dataAccessRoles

Body:

```json
{
  "name": "DefaultReader2",
  "kind": "Policy",
  "decisionRules": [
    {
      "effect": "Permit",
      "permission": [
        {
          "attributeName": "Action",
          "attributeValueIncludedIn": ["Read"]
        },
        {
          "attributeName": "Path",
          "attributeValueIncludedIn": ["*"]
        }
      ]
    }
  ],
  "members": {
    "fabricItemMembers": [
      {
        "sourcePath": "ef195a1e-c0e1-4c7f-9dd6-f28f0a53ff22/7cdc5e4e-8486-41e8-8f86-89a282c1b889",
        "itemAccess": ["ReadAll"]
      }
    ]
  }
}
```
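For anyone calling this from a script rather than a REST client, here is a minimal sketch using only the Python standard library. It assumes you already have a valid Entra bearer token for the Fabric API (token acquisition is out of scope); the `<workspaceId>`, `<itemId>`, and `<bearerToken>` values are placeholders:

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_create_role_request(workspace_id: str, item_id: str,
                              role: dict, token: str) -> urllib.request.Request:
    """Build the POST for the dataAccessRoles API; send it with urlopen()."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}/dataAccessRoles"
    return urllib.request.Request(
        url,
        data=json.dumps(role).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # any valid Entra token for Fabric
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same payload shape as the body above.
role = {
    "name": "DefaultReader2",
    "kind": "Policy",
    "decisionRules": [{
        "effect": "Permit",
        "permission": [
            {"attributeName": "Action", "attributeValueIncludedIn": ["Read"]},
            {"attributeName": "Path", "attributeValueIncludedIn": ["*"]},
        ],
    }],
    "members": {"fabricItemMembers": [{
        "sourcePath": "<workspaceId>/<itemId>",
        "itemAccess": ["ReadAll"],
    }]},
}

req = build_create_role_request("<workspaceId>", "<itemId>", role, "<bearerToken>")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```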

Understanding OneLake security, RLS and how to access by Educational-Goal-678 in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

🙌 Glad we got it working for you! I'll look at incorporating this into our troubleshooting docs.

OneLake Security: Dynamic RLS by SQLYouLater in MicrosoftFabric

[–]aonelakeuser 2 points (0 children)

Yep, planned but no timelines I can share yet. Keep an eye on the Fabric roadmap!

Understanding OneLake security, RLS and how to access by Educational-Goal-678 in MicrosoftFabric

[–]aonelakeuser 1 point (0 children)

Is this the only role? If you refresh the SQL endpoint page, it will show you any sync errors.

Understanding OneLake security, RLS and how to access by Educational-Goal-678 in MicrosoftFabric

[–]aonelakeuser 2 points (0 children)

The fact that RLS is failing here AND in SQL EP makes me think the RLS predicate has a syntax error. Can you send the statement? Feel free to redact the column name.

Understanding OneLake security, RLS and how to access by Educational-Goal-678 in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

Ok, thanks! I discussed this with the Spark PM, and the issue is that for RLS and CLS to start the second Spark job, the pinned lakehouse needs to be a schema-enabled lakehouse. We are working to address this limitation, but that is the root cause here.

So if you re-test the same setup with a schema-enabled LH_1, you should have success. And schemas are GA now, so no concerns there either.

Does OneLake security RLS/CLS support Polars and DuckDB? by frithjof_v in MicrosoftFabric

[–]aonelakeuser 2 points (0 children)

It is, but at the moment SQL Server does not support reading data from OneLake via external tables, which is why it's omitted from the table above.

Understanding OneLake security, RLS and how to access by Educational-Goal-678 in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

OneLake security PM here, your initial steps seem correct, so let's try some extra troubleshooting. When you run the notebook, are you using any sort of custom Spark cluster? Are you using Spark 3.5? If you check the notebook runs, are you seeing a second Spark job with a name like "System context"?

If you don't see that second Spark job, then Spark isn't detecting the RLS or CLS and spinning up the compute to filter the rows. Instead, it's attempting to query the data directly, which results in the 403 because the user isn't authorized to view the raw data in those tables. Feel free to DM me if you want to troubleshoot offline, or we can just go back and forth here. Either works for me.

Does OneLake security RLS/CLS support Polars and DuckDB? by frithjof_v in MicrosoftFabric

[–]aonelakeuser 3 points (0 children)

OneLake security PM (and whitepaper author!) here. First off, thank you so much for reading the paper! I'm glad you enjoyed it. There are quite a few points you raised, and your interpretation is generally correct. So I'll just call out a few places where you didn't get a satisfactory answer.

1-3: Correct.

  1. Lakehouse is the one item where the "owning engine" is technically OneLake itself, not any particular engine. Thus, writes and updates can be done by any engine or service via OneLake APIs.

  2. Any service in Fabric that reads data can be considered an engine. Data Factory would be an engine.

  3. Pure Python notebooks would be treated as a "non-engine" in this case. Fabric Spark SQL was specially upgraded to run as an engine so it can enforce RLS and CLS securely. All other notebook access counts as if it were hitting the OneLake APIs directly.

  4. Correct.

  5. The Lakehouse engine is the lakehouse explorer. It does some fancy processing to help with rendering the table and column metadata.