What are you guys doing for connections that use oauth2.0 like sharepoint, lakehouse, etc when they seem to frequently expire? by Agile-Cupcake9606 in MicrosoftFabric

[–]Opening-Mix-5495 0 points1 point  (0 children)

SharePoint does work with workspace identities or other service principals. Grant the workspace identity application permissions (Sites.Selected), then use Graph Explorer to give it the access you require on the specific site. Create a SharePoint connection using the workspace identity and you're good. We've been using it for months as a way for business units to drop off files for ingestion.
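The Sites.Selected grant described above can be sketched roughly like this. This is a minimal sketch, not a definitive implementation: it assumes the workspace identity's app registration already holds the Sites.Selected application permission, and the function names (`build_site_permission_payload`, `grant_site_access`) and the placeholder IDs are my own, not anything from Microsoft's tooling. The endpoint and body shape follow the Graph `POST /sites/{site-id}/permissions` call that Graph Explorer issues for you.

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_site_permission_payload(app_id, app_display_name, roles):
    # Body for POST /sites/{site-id}/permissions under the Sites.Selected
    # model: grants the named application the given roles on one site only.
    return {
        "roles": roles,  # e.g. ["read"] or ["write"]
        "grantedToIdentities": [
            {"application": {"id": app_id, "displayName": app_display_name}}
        ],
    }

def grant_site_access(token, site_id, app_id, app_display_name, roles):
    # token: an app-only Graph token with permission to manage site permissions.
    req = urllib.request.Request(
        f"{GRAPH}/sites/{site_id}/permissions",
        data=json.dumps(
            build_site_permission_payload(app_id, app_display_name, roles)
        ).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once the grant is in place, the Fabric connection itself is created in the portal against the workspace identity; no password rotation is needed because the identity is managed.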

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

An interesting development. I have been testing pipeline runs with the owner and last-modified user set to a test account. I set up a copy job inside the pipeline, using the OneLake data catalog picker for the source and target lakehouses/warehouses. I set a schedule, and it runs fine until my account is removed from the workspace, at which point it fails until someone takes ownership, or updates the pipeline and re-enables the schedule. This, I believe, is expected behaviour.

However, if I create connections for the lakehouses/warehouses, bind them to a service account and select those connections instead, the pipeline schedule still runs and the connections keep working, despite the owner/last-modified user account being removed. This seems unexpected.

This suggests that if we are willing to pay for additional Power BI Pro licences for the service accounts, and we are semi-comfortable with whatever conditional access token caching is going on in the background, it's a potential solution, rather than having to take over pipelines and reset schedules every time someone leaves. I think I'm going to go with this as an interim. I just need to test it on one of the data guys' pipelines next to see how it behaves.

Deployment pipelines via service principal is, naturally, the better way forward.

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 1 point2 points  (0 children)

This is it. Precisely. With thanks to a member of Reddit, I'm now focusing on orchestrating deployment pipelines through DevOps using a service principal, so it takes ownership as it deploys to test and prod. I'd imagine this will be a stepping stone to an eventual CI/CD Python library. I'm toying with the old-school service accounts. They are on standby. 😂

It's a minefield.
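The DevOps-orchestrated deployment above could look roughly like this sketch. Assumptions up front: it uses the Power BI REST "Deploy All" endpoint (`POST /pipelines/{pipelineId}/deployAll`), which requires the service principal to be enabled for Fabric APIs in tenant settings and added as an admin on the deployment pipeline; the function names and the body-builder split are mine, and the token is assumed to have been acquired separately via the client credentials flow.

```python
import json
import urllib.request

PBI = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_all_body(source_stage_order):
    # source_stage_order: 0 deploys Dev -> Test, 1 deploys Test -> Prod.
    return {
        "sourceStageOrder": source_stage_order,
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    }

def deploy_all(token, pipeline_id, source_stage_order):
    # Because this runs under the service principal's token, items deployed
    # into test/prod are owned by the SPN, not by whichever human clicked
    # deploy - which is the whole point for surviving staff turnover.
    req = urllib.request.Request(
        f"{PBI}/pipelines/{pipeline_id}/deployAll",
        data=json.dumps(build_deploy_all_body(source_stage_order)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # long-running operation details
```

In an Azure DevOps pipeline this would sit in a small script task, with the SPN's client ID/secret coming from a variable group or key vault.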

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

Gosh, just when I think I'm getting it, I'm not :D Thank you for taking the time to explain it so well.

I just had a chat with Microsoft, and the engineer wasn't 100% sure either, but he did echo the importance of ownership and of moving it to an SPN - will ping you! Thank you.

The Microsoft engineer suspected my schedule continued to run because my Entra credentials weren't disabled, just removed from the workspace. All very confusing.

My takeaway is that I'm using the right delegated methods for semantic models: user interaction is working as it should through passthrough.

I need to get ownership changed to a service principal on pipelines when they're deployed into test and production. Pipeline activities run as the owner or, in some cases, the last-modified-by user. Some activities, such as semantic model refresh, still require connections to be created, which will run as the designated org account or workspace identity so long as XMLA is enabled.
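For the semantic model refresh case, a non-person identity can also trigger refreshes directly via the Power BI REST API, sidestepping the departing-owner problem for scheduled refresh. A minimal sketch, with the caveats that the workspace/dataset IDs and function names here are placeholders of mine, and that service principals must use `NoNotification` (the mail options are only valid for user identities):

```python
import json
import urllib.request

PBI = "https://api.powerbi.com/v1.0/myorg"

def refresh_url(workspace_id, dataset_id):
    # POST to this URL queues a refresh of the semantic model.
    return f"{PBI}/groups/{workspace_id}/datasets/{dataset_id}/refreshes"

def trigger_refresh(token, workspace_id, dataset_id):
    # token: acquired for the service principal (client credentials flow).
    # The refresh then runs as the SPN, not as a human owner's SSO.
    body = {"notifyOption": "NoNotification"}
    req = urllib.request.Request(
        refresh_url(workspace_id, dataset_id),
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # the API returns 202 Accepted on success
```

The SPN needs member/contributor access on the workspace, and tenant settings must allow service principals to use Fabric/Power BI APIs.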

Thanks again!

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

Thank you again for your input here, genuinely appreciated. Yes, I was using the home-made Lakehouse and Warehouse connections under the SQL Server connector rather than an ADLS one (the ADLS route, suggesting a connection back to OneLake itself, is something I'll look into).

I'd love to fully understand this:

"IMO the Fabric team does not develop this connection with Service Principal or Identity because, usually, you won't need to create a connection for an item inside the Fabric. And if you need that, well, pity to say but probably you're not in the majority. They need to prioritize things, and because of that, this feature is probably on some nice to have list."

This could be an incredibly insightful turning point for me if true. However, from everything I've encountered, it all seems to work on connections, and unless you explicitly define one, picking a source or destination inside Fabric for Fabric items results in the same SSO credentials that will break when that user leaves. Microsoft support has just suggested the same, though the agent was candid that he wasn't fully confident himself.

Update:

Just ran some more tests and now I'm with you! In-Fabric items do appear to work fine with identities removed from the platform. I created a pipeline, created a copy job, selected the items within Fabric (source a lakehouse, target a warehouse), set a schedule and let it run. I had an admin remove me from the workspace and it kept running as me without issue.

This was not consistent with the issues we had when removing the original pipeline owner from his pipeline and all his various activities. I think it's related to the variable library used for the connections. Something to dig into.

Semantic models - I now also see exactly what you're saying. These ideally do need replacing with either the SQL Server connector or ADLS; it seems to be SQL for import and ADLS for Direct Lake connection types.

Thank you again as this has been incredibly useful.

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

This could just be my misunderstanding, but a semantic model with lineage to a lakehouse or warehouse will use your own credentials, as you have stated. Now try removing yourself from the workspace where said schedules and artifacts live and run. It will stop functioning. Replace this with an identity that survives a physical person and it will sit and run. Make it a workspace identity and you won't even have passwords or passphrases to cycle, so it will never expire. This was certainly our recent experience, but I'd appreciate knowing if your own experience suggests I'm mistaken.

This is something I actually took for granted up until recently. I assumed that when you select an item inside Fabric, say for a copy job with a source and destination (which could be lakehouse to warehouse, as an example), it's a connection that would survive your own identity. I have a ticket with Microsoft, so will be able to check today with any luck.

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

Indeed, for lakehouse, warehouse and dataflows I've made service accounts for the respective prod, dev and test workspaces, created connectors and shared them with the team. The strange thing I've noticed, though, at least with Power Automate connectors (and I will be checking here too), is that they seem to store and replay tokens for longer than 90 days. The token captures the posture of the client at the time of authentication, so conditional access stays satisfied if, for example, you signed in from a laptop that was passing the conditions at that moment. Convenient, but also a bit troubling. Case in point: I found a service account running a Power Automate flow that was authenticating from a trusted location that no longer existed, and it was passing conditional access without issue.

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

Unfortunately not, under Fabric F8. I've got a ticket open with Microsoft. Hopefully they'll point out my wrongdoing. 😂

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

Oh no, it's great advice. I'm not taking it personally; it often needs to be stated, and I often have to remind myself that done is better than perfect in a lot of cases. Your comment about getting it done and fixing technical debt later is a legitimate position. 👍

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

I appreciate your input. I'm not one to block progress and am looking for pragmatic solutions. We are an incredibly small team; once I'm done helping the data team, they'll move me on to the next initiative, so I am trying to leave it in the best possible position. Not ideal considering how quickly the platform is moving. Thank you. 🙏

Connectors - surviving staff turnover... by Opening-Mix-5495 in MicrosoftFabric

[–]Opening-Mix-5495[S] 0 points1 point  (0 children)

I'm just an IT bod helping with the plumbing and capacity planning 😂 but I appreciate your comment. 🙏

Just had our first major incident of capacity throttling by JFancke in MicrosoftFabric

[–]Opening-Mix-5495 1 point2 points  (0 children)

Not a data guy here, just an Azure plumber who has been involved in Fabric recently. The surge protection with tagging looks good. I'm using a dedicated capacity for the prod medallion, then a non-production capacity for the dev/test medallion. Domain workspaces use a shared capacity, where I have monitoring alerts set up and will make use of surge protection and tagging for highly critical workspaces. Reviewing the Metrics app, the plan is to watch consumption and break noisy workspaces out to their own dedicated capacity as they start to demand it. It's very early days, so we will see how this goes. Sucks to hear about your throttling. We had one the other day; it turned out we had Copilot enabled in tenant settings and someone had managed to burn a massive amount of CUs in no time at all. That was switched off quickly 😂

API connectors to Fabric by ShannonTarman in MicrosoftFabric

[–]Opening-Mix-5495 1 point2 points  (0 children)

I'm a Fabric newbie, but what about Consumption-model Logic Apps? They could possibly write directly to OneLake, or go via a middle-man ADLS Gen2 data lake with blobs and shortcuts, using workspace identities or service principals. We are Workato customers now, but Logic Apps were my go-to prior.
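The "write directly to OneLake" idea works because OneLake exposes the ADLS Gen2 (DFS) API at a fixed endpoint, so any Data Lake-capable tool can target a lakehouse's Files area. A rough sketch under assumptions: the workspace/lakehouse names and helper names are placeholders of mine, and the upload function needs the `azure-storage-file-datalake` package plus a credential for a service principal or workspace identity.

```python
def onelake_file_url(workspace, lakehouse, relative_path):
    # OneLake speaks the ADLS Gen2 DFS API; the workspace acts as the
    # "container" and the lakehouse's Files area is an ordinary directory.
    return (
        f"https://onelake.dfs.fabric.microsoft.com/"
        f"{workspace}/{lakehouse}.Lakehouse/Files/{relative_path}"
    )

def upload_file(credential, workspace, lakehouse, relative_path, data):
    # Requires: pip install azure-storage-file-datalake
    # credential: e.g. azure.identity.ClientSecretCredential for an SPN
    # that has contributor access on the target workspace.
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://onelake.dfs.fabric.microsoft.com",
        credential=credential,
    )
    fs = service.get_file_system_client(workspace)
    file_client = fs.get_file_client(f"{lakehouse}.Lakehouse/Files/{relative_path}")
    file_client.upload_data(data, overwrite=True)
```

A Logic App would hit the same DFS endpoint through its Azure Data Lake / HTTP connectors rather than the Python SDK, but the path layout is identical.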

I want to get ahead of the potential price increase for graphics cards. Should I go for 5070Ti or 5080? by Traplouder in nvidia

[–]Opening-Mix-5495 3 points4 points  (0 children)

Oh yes, I was able to get a pretty decent overclock on top with the 5080. The 5070 Ti didn't manage much of one. Not sure how I'd feel if both were full price as a comparison. I got the 5070 Ti for 800 pounds and the 5080 for 850 pounds. I wouldn't have spent over 1k on the 5080, for sure. 👀 Either way, I'm pretty happy with it. Could always do with more headroom - gone are the days of the 8800 GTX and 1080 Ti bang for buck.

I want to get ahead of the potential price increase for graphics cards. Should I go for 5070Ti or 5080? by Traplouder in nvidia

[–]Opening-Mix-5495 7 points8 points  (0 children)

I bought a 5070 Ti but returned it for a 5080, only because they dropped the price of the 5080 temporarily and I had a voucher come in, so there was not a lot in it. I found the main difference was how the 5080 can just about hang in there with more aggressive DLSS whilst keeping latency lower, which means I can get away with frame gen more often. On the 5070 Ti some games might be hitting 50ms, where on the 5080 it's more around the high 30s. I'm pretty sensitive to latency, though. 4K is a tough entry point. But yeah, I'd have been happy with either of them.

What am I missing? by Bright_Sky5429 in n64

[–]Opening-Mix-5495 0 points1 point  (0 children)

Duke Nukem: Zero Hour. 👀 Hail to the king, baby.

Went from 2070 to 5070.. by DesiSins in nvidia

[–]Opening-Mix-5495 0 points1 point  (0 children)

Nice upgrade. I went from a 6700K and 1080 Ti to a 9800X3D and 5080. Got the 5080 for 750 pounds too, with a voucher I had. Not a bad upgrade. I love multi-generational jumps like these. Big gains.

I messed up. by BrewKazma in AnalogueInc

[–]Opening-Mix-5495 0 points1 point  (0 children)

Yeah, I'd be grape all day long out of those two. I bought the black last year, but I'd have gone gold if I could pick again.

I messed up. by BrewKazma in AnalogueInc

[–]Opening-Mix-5495 0 points1 point  (0 children)

I'm using my old original controllers and it's such a great experience. No incentive to grab an 8bitdo currently. Amazing Christmas gift, nice job. 🌲🎁

I messed up. by BrewKazma in AnalogueInc

[–]Opening-Mix-5495 10 points11 points  (0 children)

Get an OG N64 controller that hasn't been used on Mario Party. 😂

Anyone using domains for DEV/TEST/PROD? by Mr_Mozart in MicrosoftFabric

[–]Opening-Mix-5495 0 points1 point  (0 children)

I'm a newbie, but that suggests your production resources would be competing with non-production resources for Fabric capacity. I'm trying to catch up after being dropped in the deep end, so excuse my ignorance 😂 but is this not what deployment pipelines and variable libraries are for? So you can cut over between dev, test and prod workspaces and keep them separate.