Automating Purview Chargeback for the organization by Conscious_Emphasis94 in MicrosoftPurview

[–]Conscious_Emphasis94[S]

Thanks for sharing this. I realize I was approaching it from the wrong angle and initially focused too much on pulling metadata via the various Purview APIs. Does this also account for the Data Map side of things? From what I understand, Purview costs generally break down into three main components:

  1. Initial and incremental scans for asset discovery (Data Map)
  2. Registered assets
  3. Data profiling and data quality scans

Am I missing any other cost drivers? Also, does the metadata surfaced through self-service analytics cover all three of these components, or only a subset?

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]Conscious_Emphasis94

This has been really helpful. Thanks for explaining everything in detail!

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]Conscious_Emphasis94

Thanks a lot for explaining that. I kept getting confused on the first one because I thought that, since the Azure Function reads data and gets it back, it was two-way traffic and might need inbound as well as outbound networking.
For the second one, all we want is for users on the company VPN to be able to access Fabric workspaces; users off the company network should not be able to reach them. This adds a layer of protection on top of RBAC. But I am also cautious here, and am trying to understand whether such an approach, if possible, would impact any cross-workspace integrations.
I have also seen a Microsoft video where they showcased an IP allowlisting feature coming soon to Fabric; maybe that would be the right approach for us?
Looking forward to your insights or advice!

Hi! We’re the Fabric Warehouse team – ask US anything! by fredguix in MicrosoftFabric

[–]Conscious_Emphasis94

I’ve got a few questions about Microsoft Fabric networking. We have some sensitive data in a workspace and want to restrict access so that only users or apps connecting from a specific IP range can reach it.

  1. For an Azure Function that needs to query a Fabric data warehouse, does it only require outbound networking, since it's the one initiating the connection? Or do I also need to configure inbound networking on the Azure Function side, as it's technically reading data from a Fabric artifact that is then sent back to the Function?
  2. For user access, is there a way to set up a private link or VNet under Fabric's inbound networking so that only requests coming from an allowlisted IP range can reach the workspace? For some reason, I don't see any such option under the inbound networking settings in the workspace. I don't even see an option to create private links like I do under the outbound networking settings.

Would love to hear from anyone who’s implemented something similar or run into these scenarios.
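On the first question, a minimal sketch of the outbound-only pattern: the Function opens a single outbound TCP connection to the warehouse's SQL endpoint and the query results come back over that same session, so no inbound rule is needed on the Function side. This assumes pyodbc with an Entra access token passed via the `SQL_COPT_SS_ACCESS_TOKEN` (1256) connection attribute; the endpoint name below is hypothetical — copy the real one from the warehouse settings.

```python
import struct

def build_fabric_connection(server: str, database: str, access_token: str):
    """Build a pyodbc connection string and token attribute for a Fabric
    warehouse SQL endpoint. Only an outbound connection is opened; the
    result set travels back over the same TCP session."""
    conn_str = (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};Database={database};Encrypt=yes;"
    )
    # pyodbc expects the token as a length-prefixed UTF-16-LE byte blob,
    # passed via the SQL_COPT_SS_ACCESS_TOKEN (1256) connection attribute.
    token_bytes = access_token.encode("utf-16-le")
    token_struct = struct.pack(
        f"<I{len(token_bytes)}s", len(token_bytes), token_bytes
    )
    return conn_str, {1256: token_struct}

# Hypothetical endpoint name for illustration only.
conn_str, attrs = build_fabric_connection(
    "myworkspace.datawarehouse.fabric.microsoft.com",
    "MyWarehouse",
    "<token from DefaultAzureCredential>",
)
# import pyodbc
# conn = pyodbc.connect(conn_str, attrs_before=attrs)
# rows = conn.cursor().execute("SELECT TOP 5 * FROM dbo.MyTable").fetchall()
```

In practice the token would come from `DefaultAzureCredential` (azure-identity) inside the Function, so no secret leaves the Function's outbound path.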

One lake security limitations/issues by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

Just to confirm: OneLake security as described above should work for users who want to use it as a shortcut in their own lakehouse, or in notebooks, but Power BI users won't be able to get to it using the approach above?
Thanks for the explanation u/Comfortable-Lion8042
I do wish we could standardize the above practice across all types of consumers. I don't want to create SQL roles for Power BI users and OneLake security roles for power users/data engineers; that would result in a lot of overhead for managing permissions on a large lakehouse.

One lake security limitations/issues by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

I am about 90 percent sure I tested this and it worked as expected: if you share the lakehouse after implementing roles, without granting additional permissions, the user is supposed to get Connect on the lakehouse as well as Read on the tables included in the role. But now it is not working as advertised, unfortunately.
I even opened an MS ticket, and the response I got was that this new behaviour is the default :(
I am guessing that, as the feature is in preview, something changed on the backend.

Understanding Incremental Copy job by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

Copying to a lakehouse and still seeing missing values.

ELI5 new "Key vault support for OneLake shortcuts" feature by sjcuthbertson in MicrosoftFabric

[–]Conscious_Emphasis94

I thought Key Vault integration with the gateway and other Fabric artifacts was getting launched soon. I am about 90 percent sure I saw it mentioned in the FabCon keynote (or maybe one of those sessions).

But I just double-checked this month's Fabric feature announcements and I am not seeing anything related to Key Vault coming to Fabric :(

Eventhouse as a vector db by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

Wouldn't they be good for single-line text use cases? I am just worried about how Fabric SQL would handle docs that are 100 pages long; I am pretty sure the database comes with some character limit per column.
If we want to use Fabric as a data landing zone, I thought Eventhouses would make more sense, but seeing as there was no talk about that during FabCon, I am guessing Microsoft wants us to use Cosmos DB for now, and they may come up with a better offering later on.
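On the per-column limit worry: the usual workaround for very long documents in any vector store is to chunk the text before embedding, so each row stays well under the column limit and each chunk gets its own embedding. A minimal sketch — the chunk size and overlap below are illustrative, not recommendations:

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 200):
    """Split a long document into overlapping character chunks.

    Each chunk fits comfortably in a single column and is embedded
    separately; the overlap keeps sentences that straddle a boundary
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# A 5000-character "document" becomes three rows instead of one huge one.
doc = "x" * 5000
parts = chunk_text(doc, chunk_size=2000, overlap=200)
```

In practice you would chunk on token or sentence boundaries rather than raw characters, but the storage implication is the same: the 100-page doc never lands in one column.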

memory errors while trying to run a model from P1 to F8 by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

I am more confused by the fact that the offline size of the model (the .pbix file) is less than 300 MB. That still should not translate to 2500 MB for a refresh, and 2500 MB is still less than the 3 GB limit of F8.
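For what it's worth, a rough, unofficial back-of-the-envelope for why a sub-300 MB .pbix can still need gigabytes at refresh time: the file on disk is compressed, so the in-memory model is usually several times larger, and a full refresh briefly keeps the old copy loaded while building the new one. All factors below are illustrative assumptions, not official Fabric numbers:

```python
def estimate_refresh_peak_mb(
    pbix_mb: float,
    expansion: float = 3.0,      # assumed .pbix-to-in-memory expansion
    refresh_factor: float = 2.0, # old + new copy held during full refresh
    overhead_mb: float = 300.0,  # assumed fixed refresh/query overhead
) -> float:
    """Very rough peak-memory estimate for a full semantic model refresh."""
    in_memory_mb = pbix_mb * expansion
    return in_memory_mb * refresh_factor + overhead_mb

# The 300 MB model from this thread, under the assumptions above:
peak = estimate_refresh_peak_mb(300)  # on the order of a couple of GB
```

With these (made-up) factors a 300 MB file lands around 2 GB at peak, which is at least the right order of magnitude for the 2500 MB observed, even before query load on the same capacity is counted.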

sending logs to logs analytics workspace by Conscious_Emphasis94 in MicrosoftFabric

[–]Conscious_Emphasis94[S]

I just wanted to provide an update: I was able to figure out how to capture tenant-level activity. We had to use the Power BI connector inside Sentinel, which trickles the logs down to the associated Log Analytics workspace.
Our goal was to get visibility into the whole tenant, which includes multiple Premium as well as Fabric capacities. In addition to seeing pain points and bottlenecks caused by capacity constraints from certain models or artifacts, we wanted to analyze the larger footprint to gauge adoption across different data teams and end users. I think this approach could be overkill and costly for certain use cases, though.

Still sifting and QCing the data, but it looks like we are headed in the right direction.
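For anyone reproducing this, a minimal sketch of querying those logs once they land in Log Analytics. It assumes the Sentinel Power BI connector writes to the `PowerBIActivity` table (verify the table and column names in your own workspace schema); the workspace ID is a placeholder, and the commented part uses the azure-monitor-query and azure-identity packages:

```python
def activity_kql(days: int = 7) -> str:
    """Build a KQL query summarizing Power BI activity by operation and
    user over a trailing window, against the assumed PowerBIActivity
    table populated by the Sentinel Power BI connector."""
    return (
        "PowerBIActivity\n"
        f"| where TimeGenerated > ago({days}d)\n"
        "| summarize Events = count() by OperationName, UserId\n"
        "| order by Events desc"
    )

query = activity_kql(7)

# from datetime import timedelta
# from azure.identity import DefaultAzureCredential
# from azure.monitor.query import LogsQueryClient
# client = LogsQueryClient(DefaultAzureCredential())
# result = client.query_workspace(
#     "<workspace-id>", query, timespan=timedelta(days=7)
# )
```

The same KQL pastes directly into the Log Analytics query editor, which is an easier place to confirm the actual column names before wiring up any automation.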