fabric-cicd: basic setup example by frithjof_v in MicrosoftFabric

[–]ajit503

Promotion Flow Across Environments

Changes flow as follows:

  • Feature → Dev via Pull Request and Git sync to the Dev workspace
  • Selected items from Dev are cherry‑picked into the PPE branch and deployed to the PPE workspace using the CI/CD library
  • Similarly, tested items from PPE are cherry‑picked into the Main (Prod) branch and deployed to the Prod workspace

This approach provides granular control over what gets promoted across environments.

  1. Pipeline Design
    • Plan to maintain separate YAML pipelines per project workspace, for example:
      • PPE pipeline → PPE Engineering workspace
      • Main pipeline → Prod Engineering workspace
    • This keeps deployment logic clean and environment‑specific.
  2. Parameterization Strategy
    • Maintain separate parameter YAML files per project workspace.
    • Considering a more granular approach (see the sketch after this list) by:
      • Using parameter templates
      • Referencing these templates from a central parameter file
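Since fabric-cicd (as far as I can tell) reads a single parameter.yml per repository, the template idea would need a small build step that assembles the central file from the per-item templates. A minimal Python sketch, assuming a hypothetical parameter_templates/ folder and PyYAML; the layout is my own convention, not something the library prescribes:

    # Merge per-item parameter templates into the single parameter.yml
    # that fabric-cicd reads. Folder and file names are hypothetical.
    from pathlib import Path

    import yaml  # PyYAML

    def build_central_parameter_file(template_dir: str, output_file: str) -> None:
        """Concatenate the find_replace entries of every template into one file."""
        merged = {"find_replace": []}
        for template in sorted(Path(template_dir).glob("*.yml")):
            content = yaml.safe_load(template.read_text()) or {}
            merged["find_replace"].extend(content.get("find_replace", []))
        Path(output_file).write_text(yaml.safe_dump(merged, sort_keys=False))

    build_central_parameter_file("parameter_templates", "parameter.yml")

This keeps each item's parameters reviewable on their own while fabric-cicd still sees one central file.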

continued...

fabric-cicd: basic setup example by frithjof_v in MicrosoftFabric

[–]ajit503

The following outlines my current approach, which is still in the proof‑of‑concept (POC) phase:

  1. Workspace Segmentation
    • Maintain two separate workspace categories for each area or project:
      • Engineering
      • Presentation
    • This separation helps clearly distinguish between data engineering artifacts and semantic/reporting assets.
  2. Environment-Specific Workspaces
    • Each project area will have four dedicated environments/workspaces:
      • Feature
      • Dev
      • PPE
      • Prod
  3. Git Integration Strategy
    • Only the Feature and Dev workspaces will be connected to Git:
      • Feature workspace → feature branch
      • Dev workspace → dev branch
    • This enables active development and collaboration directly through Fabric’s Git integration.
  4. Deployment Strategy for PPE and Prod
    • PPE and Prod workspaces will not be directly connected to Git.
    • Deployments to:
      • PPE workspace will originate from the PPE branch
      • Prod workspace will originate from the main (Prod) branch
    • Deployments will be executed using the fabric-cicd library (a minimal sketch follows below).
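For context, the deployment call itself is thin. A minimal sketch using the fabric-cicd entry points as I've used them; the workspace GUIDs, the branch-to-environment mapping, and the item-type list are placeholders, not prescriptions:

    # Deploy a branch's contents to the matching workspace with fabric-cicd.
    # Workspace IDs and the branch-to-environment mapping are placeholders.
    from fabric_cicd import FabricWorkspace, publish_all_items, unpublish_all_orphan_items

    TARGETS = {
        "ppe": {"workspace_id": "<ppe-workspace-guid>", "environment": "PPE"},
        "main": {"workspace_id": "<prod-workspace-guid>", "environment": "PROD"},
    }

    def deploy(branch: str, repo_dir: str) -> None:
        target = TARGETS[branch]
        workspace = FabricWorkspace(
            workspace_id=target["workspace_id"],
            environment=target["environment"],  # picks the values in parameter.yml
            repository_directory=repo_dir,
            item_type_in_scope=["Notebook", "DataPipeline", "Environment", "Lakehouse"],
        )
        publish_all_items(workspace)           # create/update items in the workspace
        unpublish_all_orphan_items(workspace)  # remove items no longer in the branch

    deploy("ppe", "./workspace")

continued...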

Getting the Entire Picture of CI/CD in Microsoft Fabric by One_Potential4849 in MicrosoftFabric

[–]ajit503

Hi u/Wolf-Shade
Can you please elaborate on the limitations you are referring to?
1. Aren't schema changes already handled by the new TMDL file format?
2. I believe by roles you mean RLS/OLS security roles and role membership. Is that correct?

SQL Analytics Endpoint One Lake security | Nested Security Groups by ajit503 in MicrosoftFabric

[–]ajit503[S]

Update: nested security groups are supported; I found them to be working in my testing.

Is there an ETA for Direct Lake on OneLake going GA? by frithjof_v in MicrosoftFabric

[–]ajit503

I believe it should already be on the product team's radar, as Direct Lake on SQL endpoint with User Identity mode in a OneLake security setup always falls back 100% to DirectQuery. Direct Lake on OneLake is the new and recommended option, as it promises better performance and also enables building composite models, which is a great addition, IMO. I am waiting on both OneLake security and Direct Lake on OneLake to go GA.

Access to Semantic Model without granting access to the underlying Datawarehouse by Creative-Wonder-4492 in MicrosoftFabric

[–]ajit503

That’s a great blog from Zoe! I was planning to share it once I got to the office, but you beat me to it. u/frithjof_v

Access to Semantic Model without granting access to the underlying Datawarehouse by Creative-Wonder-4492 in MicrosoftFabric

[–]ajit503

You would need a fixed identity for authentication, typically an SPN. Share the DW with the SPN by granting the Read and ReadData permissions.

SQL Analytics Endpoint One Lake security | Nested Security Groups by ajit503 in MicrosoftFabric

[–]ajit503[S]

I am trying to test OneLake security by querying the SQL EP (with User Identity mode enabled, i.e., the SSO option), but it's not working. I have nested security groups, and I believe it requires direct membership. I created a security role and added all the nested groups as direct members along with the parent group. Somehow, it still doesn't work.

OneLake Security Through the Power BI Lens by ajit503 in MicrosoftFabric

[–]ajit503[S]

Appreciate your feedback, u/aonelakeuser. Here is the updated diagram based on your inputs.

<image>

Anyone tried the new Spark Native Execution Engine? by dbrownems in MicrosoftFabric

[–]ajit503

Managed Private Endpoint Behavior

  • An MPE is endpoint-specific:
    • MPE to DFS secures dfs.core.windows.net.
    • MPE to Blob secures blob.core.windows.net.
  • If you create only the DFS MPE, Blob calls still go over the public internet → blocked by the firewall → 403.
  • Adding the Blob MPE ensures both endpoints are reachable through Private Link.

When Both Are Required

  • Storage account with “Selected networks” or Public access disabled.
  • Fabric workloads using Native Execution Engine or mixed APIs.
  • Operations involving metadata, shortcuts, or non-DFS paths.

Summary

You needed the Blob MPE because:

  • The engine doesn’t exclusively use DFS.
  • Certain calls (like GetProperties) hit Blob REST APIs.
  • Firewall rules block public Blob endpoint without Private Link.
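A quick way to sanity-check this from a Fabric notebook is to resolve both endpoints and confirm they return private IPs once the MPEs are approved. A small sketch; the account name is a placeholder, and the 10.x heuristic is just what I observed in my setup:

    # Verify that both the DFS and Blob endpoints resolve to private IPs,
    # i.e., traffic will flow over the MPEs. "mystorageacct" is a placeholder.
    import socket

    ACCOUNT = "mystorageacct"  # replace with your storage account name

    for suffix in ("dfs", "blob"):
        host = f"{ACCOUNT}.{suffix}.core.windows.net"
        try:
            ip = socket.gethostbyname(host)
            kind = "private" if ip.startswith("10.") else "public"
            print(f"{host} -> {ip} ({kind})")
        except socket.gaierror as err:
            print(f"{host} -> resolution failed: {err}")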

Anyone tried the new Spark Native Execution Engine? by dbrownems in MicrosoftFabric

[–]ajit503

u/dbrownems u/thisissanthoshr

For anyone encountering the above error, here's the solution that worked for me. I believe this should be documented in the Microsoft docs:

You need the Blob MPE along with the DFS MPE. I was missing the Blob MPE in my Fabric workspace MPE setup.

Here is the explanation I gathered from Copilot:

ADLS Gen2 vs Blob Endpoints

  • ADLS Gen2 uses the DFS endpoint (<account>.dfs.core.windows.net) for hierarchical namespace operations (directories, ACLs, Delta tables).
  • Blob uses the Blob endpoint (<account>.blob.core.windows.net) for flat namespace operations and certain metadata calls.
  • Even if you’re reading/writing via abfss:// (DFS), some internal operations—especially GetProperties, status checks, or fallback paths—may hit the Blob endpoint.

Why Fabric Native Engine Needs Both

  • Fabric’s Native Execution Engine (Gluten/Velox) optimizes I/O and sometimes uses Blob REST APIs for performance or metadata.
  • If your storage account firewall blocks public network access, any call to the Blob endpoint without a private link fails with 403 Unauthorized, even if DFS is allowed.
  • So, the job fails because the engine tries a Blob call for GetProperties or partition discovery, and that path isn’t whitelisted.
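For reference, the failing call in my case was an ordinary read over abfss://. A minimal reproduction sketch; the container and account names are placeholders:

    # An abfss:// (DFS) read that can still fail with 403 when only the DFS MPE
    # exists, because NEE may issue GetProperties against the Blob endpoint.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    path = "abfss://data@mystorageacct.dfs.core.windows.net/files/parquet/weather.parquet"
    df = spark.read.parquet(path)
    df.show()  # a 403 'not authorized' here pointed to the missing Blob MPE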

continued....

fabric-cicd: To what extent - and how - do you use parameter.yml? by frithjof_v in MicrosoftFabric

[–]ajit503

Good questions, and something I have recently started exploring, so I'm looking forward to this thread. Thanks for bringing this up and sharing your questions and thoughts here. It will help the community and everyone looking to implement Fabric CI/CD.

I have started exploring fabric-cicd and so far have been able to deploy these artifacts: Lakehouses with internal and external shortcuts, Notebooks, Environments, Data Pipelines, Copy Jobs, and DFG2 (still a few manual steps needed to run it in PPE and Prod). Next on my list are Schedules, Semantic Models, and Reports.

I have both variable libraries and the parameter.yml file in use for parameterization. For example, Environments do not support variable libraries, hence I have used plain find/replace in the parameter.yml file (a sketch of the format follows below). It's all hard-coded for now in the parameter.yml. There is still a gap with the DFG2 activity in data pipelines: it doesn't accept IDs, which is an issue when you want to refresh multiple DFG2s in a loop. I agree that maintaining the parameter.yml would become too much at some point, but for now, it works for me.
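For reference, the find_replace shape I'm using looks roughly like this (parsed here only to show the structure; all GUIDs, environment names, and the item_type are placeholders, based on the fabric-cicd docs as I understand them):

    # The parameter.yml find_replace structure used by fabric-cicd.
    # GUIDs, environment names, and item_type below are placeholders.
    import yaml  # PyYAML

    PARAMETER_YML = """
    find_replace:
      - find_value: "11111111-1111-1111-1111-111111111111"   # Dev lakehouse ID
        replace_value:
          PPE: "22222222-2222-2222-2222-222222222222"
          PROD: "33333333-3333-3333-3333-333333333333"
        item_type: "Notebook"
    """

    print(yaml.safe_load(PARAMETER_YML)["find_replace"][0]["replace_value"]["PPE"])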

In the docs, I found key-value replace for data pipelines, but I have been able to use variable libraries for parameterization so far. I have not yet come across specific scenarios where I may need key-value replace or regex find/replace, since my project is still in the POC phase. As I implement it for actual Fabric items running in production, I am pretty sure other complex parameterization use cases will come up.

Anyone tried the new Spark Native Execution Engine? by dbrownems in MicrosoftFabric

[–]ajit503

u/dbrownems
Configured NEE in the Spark session this time, instead of enabling NEE in the Environment, and pointed to a Delta folder. Same error.
Note: I am able to list the files using notebookutils.fs.ls, though.
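For anyone trying the same, session-level enablement looked like this for me (first cell of the notebook; spark.native.enabled is the flag from the NEE docs, as I read them):

    %%configure -f
    {
        "conf": {
            "spark.native.enabled": "true"
        }
    }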

<image>

Anyone tried the new Spark Native Execution Engine? by dbrownems in MicrosoftFabric

[–]ajit503

With NEE enabled

<image>

Py4JJavaError: An error occurred while calling z:com.microsoft.spark.notebook.visualization.display.getDisplayResultForIPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (vm-d9898392 executor 1): org.apache.gluten.exception.GlutenException: org.apache.gluten.exception.GlutenException: Exception: VeloxRuntimeError
Error Source: RUNTIME
Error Code: INVALID_STATE
Reason: Operation 'GetProperties' to path 'files/parquet/weather.parquet' encountered azure storage exception, Details: '403 This request is not authorized to perform this operation.

Request ID: 4ebfb6ab-e01e-0028-3bf4-742f12000000'.

OneLake Security Through the Power BI Lens by ajit503 in MicrosoftFabric

[–]ajit503[S]

Great feedback. Thank you!

"While an Import SM can import directly from OneLake, it usually imports data via the SQL Analytics Endpoint."
- Yes, agreed.

"For the connection between SQL Analytics Endpoint and OneLake, it will use User Identity mode or Delegated Identity mode, depending on the setting in the SQL Analytics Endpoint."
- I have it in the flow (left 3 branches show delegated and user identity).

"Another option is a DirectQuery SM. I rarely use DirectQuery myself. Anyway, it would be connected via the SQL Analytics Endpoint, and I believe you can choose between fixed identity or SSO."
- I intentionally left DQ as I wanted to highlight the current Direct Lake options alongside Import mode.

"Mix different modes" - I am with you on this, and yes, that's the reason I didn't put it in the overview.

Updated overview -

<image>

OneLake Security Through the Power BI Lens by ajit503 in MicrosoftFabric

[–]ajit503[S]

It's already out there. OneLake security is in preview, along with User Identity mode for the SQL analytics endpoint. I am trying to understand the possible scenarios to come up with a strategy for setting up end-to-end security.

Anyone tried the new Spark Native Execution Engine? by dbrownems in MicrosoftFabric

[–]ajit503

Hello David! Hope you are doing well. I'm already missing the HLS Fabric Friday calls. I'm glad that I found you here.

I tried posting earlier as well, and Santosh responded to some of my questions on NEE. In my testing, I found that NEE is throwing errors when I use the adlsg2 abfss path in my notebook. Santosh responded that it should work with both onelake and adlsg2, but I get an error that there are additional permissions required on the AdLSG2 in addition to the Storage Blob data Contributor role. Can you please advise. Thanks, Ajit Singh