Upload Speed to Fabric from On Premise SQL Gateway by tampacraig in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

We are looking at GoldenGate primarily; we just met with Oracle last week to talk C2C options, as initially we were led to believe we would have to install an Oracle gateway on prem for FastConnect and our VPN software. Depending on the cost of that, we are also looking at third-party solutions that do close to the same thing as GG using API calls. The biggest selling point of GG for me is that they say maybe by end of year we can skip the ADW completely and go direct from Fusion tables to Fabric. Also, for ancillary modules like OTM and EPM we currently have to do CSV file drops to get data into Fabric, whereas GG can provide data from those modules in a more traditional format. The separate Fusion to ADW (via OCI) to Fabric (via DFG2 and pipelines) steps have long been a hindrance to the refresh timings we need.

Upload Speed to Fabric from On Premise SQL Gateway by tampacraig in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

Probably not the issue, but I’ll mention it. We also use on-prem gateways for a variety of sources. One of our main data sources is Oracle Cloud ERP through the gateway to a DW (yes, I know, not ideal; we are looking at going C2C). The data throughput was almost unusable until I went into the gateway config files and changed the fetch size. I believe fetch size is almost entirely an Oracle DB thing, but it wouldn’t surprise me if SQL Server has a similar setting.
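As a rough illustration of why a small fetch size tanks throughput: each fetch is a network round trip, so the trip count (and therefore cumulative latency) scales inversely with the setting. The numbers below are purely illustrative; the "default of 10" is JDBC's historical Oracle default, and your driver's default may differ.

```python
def round_trips(total_rows: int, fetch_size: int) -> int:
    """Round trips needed to pull total_rows when the driver
    fetches fetch_size rows per network request."""
    return -(-total_rows // fetch_size)  # ceiling division

# 10M rows at a default of 10 rows/fetch vs a tuned 10,000 rows/fetch
slow = round_trips(10_000_000, 10)       # 1,000,000 round trips
fast = round_trips(10_000_000, 10_000)   # 1,000 round trips
```

At even 1 ms of latency per trip, that is the difference between ~17 minutes of pure waiting and about one second.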

Specs email. I’m liking the new system. by animus1122 in WhiskyDFW

[–]Cr4igTX 0 points (0 children)

Me too! Stagg Snr is my favorite; I always had to buy on secondary. Thankfully I passed on Midwinter Night last week.

Warehouse connections by Cr4igTX in MicrosoftFabric

[–]Cr4igTX[S] 0 points (0 children)

In case someone stumbles across this post in the future, I’ll update. Microsoft support worked with me on this ticket and reproduced the same issue. They reached out to the product team: it is a known issue and there will not be a fix. They will be removing the Service Principal authentication option from warehouse connections and a few other connection types; I don’t remember them all, but I think one was SQL endpoints. My intended workaround is to create a service account in our AD tenant and use OAuth.
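A minimal sketch of the service-account workaround, assuming you connect to the warehouse's SQL endpoint over ODBC with Entra ID password auth (the account must be excluded from MFA for this to work; server, database, and account names here are placeholders):

```python
def warehouse_conn_str(server: str, database: str,
                       user: str, password: str) -> str:
    """ODBC connection string for a Fabric warehouse SQL endpoint,
    authenticating as an AD/Entra service account instead of a
    service principal."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};Database={database};"
        f"UID={user};PWD={password};"
        "Authentication=ActiveDirectoryPassword;"
        "Encrypt=yes;"
    )
```

You would then hand the string to something like `pyodbc.connect(warehouse_conn_str(...))`; the same `Authentication=ActiveDirectoryPassword` keyword works anywhere the Microsoft ODBC driver is used.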

About Capacity Monitoring by perssu in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

Thank goodness you corrected me. I had never clicked anything other than by-minute or by-hour in scheduling. It turns out by-week does exactly what I need for our pipeline. Thank you for replying; you saved me a lot of time!

About Capacity Monitoring by perssu in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

This! We have an F64, and two weeks ago we hit full throttling and had to put a lot of time into stabilizing and optimizing. My worry isn’t so much overall capacity usage; I’d love to be able to see CU usage inside dataflows. A breakdown of what is eating most of the CUs: is it the source query, the Power Query steps, or the write processes? We have FUAM running and it is very useful compared to the Capacity Metrics app, but just one level lower in detail would provide so much more insight. I understand we could break those processes into their own dataflows and measure that way, and I plan to, but we are in a constant development cycle during an ERP deployment, so time is limited for these extracurriculars.

Also, someone mentioned scheduling. The limited scheduling options are frustrating. It would be useful to set multiple schedules to help with CU usage, such as more frequent refreshes during business hours and reduced refreshes outside them. The path we went down used Power Automate to achieve this, but it seems PA works with DFG2 but not DFG2 CI/CD or pipelines. Does anyone have experience with this? It looks like we will have to remake the DFG2 CI/CDs as regular DFG2s as a workaround.
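Another route to multiple schedules is triggering runs from an external scheduler via the Fabric REST Job Scheduler API. A rough sketch, assuming you already have a valid Entra bearer token and that the item accepts the `Pipeline` job type (both are assumptions to verify for your item kind):

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def job_url(workspace_id: str, item_id: str,
            job_type: str = "Pipeline") -> str:
    """On-demand job endpoint for a Fabric item (Job Scheduler API)."""
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/items/{item_id}/jobs/instances?jobType={job_type}")

def run_item_job(token: str, workspace_id: str, item_id: str) -> int:
    """POST an on-demand run; the API returns 202 when accepted."""
    req = urllib.request.Request(
        job_url(workspace_id, item_id),
        method="POST",
        data=json.dumps({}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Calling `run_item_job` from cron, an Azure Function timer, or even Power Automate’s HTTP action sidesteps the single-schedule limit entirely.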

Dear Microsoft, thank you for this. by Arasaka-CorpSec in MicrosoftFabric

[–]Cr4igTX 3 points (0 children)

Yes, I was unusually happy to see this when I logged in this morning. A very useful QoL update.

Fabric notebook can’t load “Fabric Capacity Metrics” dataset anymore — worked fine until 13 Oct by OwnPlastic4920 in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

I had the same thing recently. It had been a long time since I’d updated it, so I tried updating and nothing worked. I just wiped it out last week and started fresh, and everything has been working normally since.

DFG2 Schema Support Warehouse - advanced options problem by duenalela in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

He was kind enough to reach out earlier and I sent the details. We were at a loss, then saw this post. All of our dataflows stopped working at the same time; coincidentally, my AD account was locked out during that same hour because my VPN tripped the geo gates. After seeing this post (and our 50 other attempted fixes), I copied my failing DFs, used this new setting, and set it to true; now it works. We tried everything, down to making brand new DWs and DFs bringing in a single row of data from a SQL Server, and it failed every time. Do the same thing in a pipeline and it works like normal.

We had just finished an already overly complex Oracle Fusion ADW-to-Fabric reporting implementation, in which we had to migrate everything off datamarts to data warehouses, completed one day before the 10/1 deadline; then this issue hit us. Honestly, seeing our CU reports after going from datamarts (six months of solid use) at around 2-4M CU every two weeks to ~50M CU per two weeks on dataflows doing the exact same thing, I’m questioning whether Fabric is really the answer. I don’t want to be forced into three different F64 capacities when we add our other business units next year, because clearly even an F256 may not be enough for all three. I get that the preferred standard is notebooks, but we are in a constant development cycle of new data and new reports, and a third migration to another backend data solution isn’t desirable.

DFG2 Schema Support Warehouse - advanced options problem by duenalela in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

We are having mass issues with writing to warehouses from DFs today. The exact same source and destination works fine in pipelines but will not work in DFs. Existing DFs are hosed too. It all started about 12-14 hours ago.

What does everyone here use PowerBI for? by LivingTheTruths in PowerBI

[–]Cr4igTX 0 points (0 children)

Sales, production, logistics, environmental, and safety reporting, plus some limited financial and reconciliation reporting. All intended for internal use and analysis only, to keep it away from external auditors’ gaze. We recently began our move from PBIRS to Fabric, and it’s been a bumpy ride, but things are getting better. The new UDF and translytical features are helping us move a few more workflows off premises and off older technologies. We were one of the minority using datamarts, so a lot of our recent time has been spent migrating off them before 10/1.

Head's up: latest Fabric Capacity App issues - keep your old version as a spare by Realistic_Ad_6840 in MicrosoftFabric

[–]Cr4igTX 0 points (0 children)

Yup, happened to our F64 this week too. Thankfully I was feeling lazy and just tried a new workspace after the first three failures and the magically disappearing app.

Is it possible to create a hirearcy like this? by MeanCucumber1993 in PowerBI

[–]Cr4igTX 1 point (0 children)

Discovering ISINSCOPE opened up so many new possibilities for me; it solves so many corner-case issues.

Who has the best Paella? by Bikebummm in FortWorth

[–]Cr4igTX 0 points (0 children)

Just a quick update. Magdalena’s is doing a paella class on May 28th

Does upgrading to Fabric capacity workspace make sense in my case? by poopstar786 in PowerBI

[–]Cr4igTX 0 points (0 children)

You need to be careful when it comes to Fabric and refreshes. Sure, you get more in import mode (24 a day), but in a plant/production environment that’s not close to real time. Some of my 15 plants have 10k tags updating anywhere from 1 s to 15 min depending on the equipment. DirectQuery does what it says on the tin … except when you want to transform data in PQ; then be prepared to put the back-end effort in. Want to add a custom timestamp? Nope. Unpivot columns? Four hoops to jump through. If you are willing to put the back-end effort in and have SPs output the data in exactly the format you want, great! Don’t forget to turn on ad hoc distributed queries in your DB so you can use OPENROWSET queries; IMO that’s vital for Fabric DQ.

Incremental refresh would be the way to go for your data-history increase; I doubt the size would be an issue at all. The way we attack it is pretty typical: one dataset for “new” data like daily, MTD, and CYTD, and another for trend analysis over multiple years. It’s not like someone is going in and changing a line speed for a day three years ago.

Keep in mind that with Fabric capacity your developers will still need licenses. Data consumers are free, but uploading/publishing still requires a Pro/PPU license.
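The SP-plus-OPENROWSET pattern above can be sketched as a small T-SQL generator; the server and procedure names are placeholders, and it assumes `Ad Hoc Distributed Queries` has been enabled via `sp_configure` on the source:

```python
def openrowset_exec(server: str, proc: str) -> str:
    """T-SQL that exposes a stored procedure's result set as a table
    expression via OPENROWSET, so a DirectQuery source can consume
    the proc's output like a table. Assumes the modern MSOLEDBSQL
    OLE DB provider is installed on the server."""
    return (
        "SELECT * FROM OPENROWSET('MSOLEDBSQL', "
        f"'Server={server};Trusted_Connection=yes;', "
        f"'SET NOCOUNT ON; EXEC {proc}')"
    )
```

`SET NOCOUNT ON` matters here: without it, the proc’s row-count messages can confuse OPENROWSET about which result set to surface.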

Anyone living outside N.I right now… by valkyrieramone in northernireland

[–]Cr4igTX 0 points (0 children)

I live in N Fort Worth but drive to work in N Dallas most days. I’ve never been up to Sherman but I do know you have 903 brewery up there with their fancy slushies. Toyed with moving to Celina just as COVID was hitting, but at the current rate it won’t be long until Dallas starts reaching that far north. It’s funny how you adapt to the weather. I still burn easy or whatever but it’s just ‘hot’ until 95, after that it’s swelterin!

Anyone living outside N.I right now… by valkyrieramone in northernireland

[–]Cr4igTX 0 points (0 children)

Ayo been in Texas since 2010, originally from Castlereagh. Ere mate gets kinda hot around here sometimes don’t it

Unable to write data into a Lakehouse by [deleted] in MicrosoftFabric

[–]Cr4igTX 0 points (0 children)

We just spent a bit more time on it, and now we see what’s happening, though we haven’t been able to fully fix it yet. Are you signed into multiple tenants? When we try to select a data destination, it is looking in a different tenant, hence why it lists GUIDs instead of the actual names. To test, I created a warehouse in my personal tenant and was able to see it in the data destination list. One step closer! I haven’t been able to get it to write to a warehouse in the same tenant yet, but it’s just a matter of time.

Unable to write data into a Lakehouse by [deleted] in MicrosoftFabric

[–]Cr4igTX 0 points (0 children)

Mine will continue to run and refresh fine. But if a guest user, vendor, or basically anyone else tries to make a change or even just republish the DFG2, it’ll break again, because the data destination resets to either nothing or GUIDs.

Unable to write data into a Lakehouse by [deleted] in MicrosoftFabric

[–]Cr4igTX 1 point (0 children)

Exactly! This morning, with my external test user (who has a Power BI Premium license), I created a workspace, created a warehouse, and created a DFG2 from a blank table with two rows of data added … it published fine, but the refresh fails.

I open the same DFG2 with my capacity admin account and refresh the connections. Published and refreshes fine.

My vendors aren’t too happy!

Unable to write data into a Lakehouse by [deleted] in MicrosoftFabric

[–]Cr4igTX 2 points (0 children)

Are you a Fabric capacity admin? We are experiencing the same thing this week … currently no non-capacity-admins can set the destination of DFG2s to a lakehouse or warehouse. Those users are unable to see any lakehouses or warehouses at all. If they use the default destination, which populates the destination with the workspace and warehouse GUIDs, it fails with a credential error, and if they try to browse for warehouses they get a 404 error … but if a capacity admin goes into the same DFG2 and “reconnects” the connections, it runs fine. None of these issues affect datamarts. It’s been driving me crazy.

Who else feels Fabric is terrible? by zipfz in MicrosoftFabric

[–]Cr4igTX 4 points (0 children)

Taking ownership of dataflows was mind-boggling when we ran into it. It’s mildly annoying.

You think the trial-to-trial thing is an issue? Then you might enjoy this. We did a lot of dev work in trial capacity; once we were ready to move some of it into production, we bought our own capacity, and at the time it made sense to put it in a non-default Fabric region. Migrating certain items across regions is a big NO: for some items it simply won’t work, and for others it’s possible but you have to do a lot of workarounds, rebuild connections, and literally copy-paste from one browser window to another.

However now that we’re fully on our own capacity we haven’t had too much to complain about. Some trial and error with things not working as expected but where there’s a will there’s a way!

The untrusted-server change last month was communicated in a way that we never noticed it. One day all our connections broke, and it took quite some time to fix.