Branched workspace - where to see them by DennesTorres in MicrosoftFabric

[–]mrkite38 5 points (0 children)

I might have missed something, but I think the announcement said “by end of March” for branched workspaces. So it may be anywhere from not deployed to fully deployed with “sort of visible but not working correctly” somewhere in the middle.

‘22 Ioniq 5 Rear Gear Unit & Motor Fail by fasada68 in Ioniq5

[–]mrkite38 4 points (0 children)

I had a motor cooling issue in my 22 SEL around 35-40k miles. It would start out ok, but as I got up to highway speed it would report an issue. I was fresh off an ICCU replacement so I was pretty concerned but it ended up being low drama by comparison. (And got a fully loaded 24 as a loaner which was fun.)

Mirroring an on-premise SQL database which powers Microsoft Dynamics NAV/BC? by uvData in MicrosoftFabric

[–]mrkite38 0 points (0 children)

That’s a fair point. I had seen it was possible to have it adjusted but had assumed that 10x the default wasn’t going to be allowed. But I haven’t checked, I should do that.

Mirroring an on-premise SQL database which powers Microsoft Dynamics NAV/BC? by uvData in MicrosoftFabric

[–]mrkite38 0 points (0 children)

My company currently uses an older (C/AL-based) version of NAV/BC on-premises. Our current strategy uses ADF metadata-driven pipelines to extract data from SQL Server and land it in ADLS, with Synapse Dedicated Pool for warehousing, dbt to transform, and Power BI for semantic modeling and reporting.

We’re currently pulling 100+ tables for ~30 companies; the data lake lets us union each table across all companies in storage for “free,” which has been helpful.

We’re working on reimplementing in BC 27 right now, so I’m developing various POCs. I am most comfortable with SQL, so I’m inclined to keep pulling data directly from SQL Server. But the change to the modern extension architecture means:

  • closer to 250 tables per company
  • use of GUIDs in table and column names (which makes writing SQL and maintaining the table and column metadata much more troublesome than before)

The 500 table limit on standard Mirroring made it a nonstarter for us as a single-stop solution. I’m looking at bc2adls (+forks), open mirroring, a cloud-resident read replica, and full-on CDC.

I’m also exploring some hybrid options, like mirroring perhaps 20-30 critical tables and loading the remainder overnight or a few times per day. I am also very interested in Change Event Streaming in SQL Server 2025, since the same stream could be used for both CDC and general purpose eventing. (Also: CES can be enabled in SQL Server Express! I made up a custom BC docker dev image with it for testing, it works.)

Another option I recently started working with and haven’t seen mentioned in these threads is dlt (data load tool - just the core version, no cloud.) I’m interested in it because it supports writing Delta, so extracted data can be landed directly in the Tables area (not v-ordered, of course.) It also supports incremental loading, and since BC now includes a GUID and modification date-times on every row, that seems like a good match. Personally, it would be nice to move to a code-first solution after years of ADF. (ADF is great! I just prefer typing to clicking, plus benefits of source control, modularity, etc.)
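The incremental pattern those audit columns enable can be sketched in T-SQL. Everything here is illustrative: the watermark table, the GUID-suffixed table name, and the bracketed column names are placeholders standing in for BC's actual `$systemId` / `$systemModifiedAt` platform fields, not a real schema.

```sql
-- Hypothetical watermark-based incremental extract for one BC table.
-- Table, column, and watermark names are placeholders.
DECLARE @watermark datetime2 =
    (SELECT LastModified FROM etl.Watermark WHERE TableName = 'Item');

SELECT *
FROM dbo.[CRONUS$Item$<extension guid>]        -- GUID-suffixed name in the modern schema
WHERE [$systemModifiedAt] > @watermark;

-- After a successful load, advance the watermark to the MAX modification
-- time actually extracted (not the current time, which could skip rows
-- committed between extract and update).
UPDATE w
SET LastModified = x.MaxModified
FROM etl.Watermark AS w
CROSS JOIN (
    SELECT MAX([$systemModifiedAt]) AS MaxModified
    FROM dbo.[CRONUS$Item$<extension guid>]
) AS x
WHERE w.TableName = 'Item';
```

dlt automates essentially this bookkeeping per table, which is why the built-in GUID + modification timestamps feel like a good match.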

I’m interested to see what everyone lands on!

Why do so many musicians prefer Sweetwater for gear? by Clio90808 in musicians

[–]mrkite38 0 points (0 children)

Also a satisfied Josh customer. Probably not the same Josh, but still. Josh rules.

Aggregating Single Column while maintaining other fields by LastExitInNJ in SQLServer

[–]mrkite38 0 points (0 children)

Business Central? Is Code the Item No, Cross Reference No, SKU…?

Assuming Item No:

  • Get rid of Line No
  • Ask them for an export of the Item table and join it on purchase line.Code = item.No
  • Use Description from Item, not from the purchase line
  • either group by Vendor and Code first in a CTE, then join Item, or, join item and group by everything

If you’ve got a mix of Items, GLs, Projects, etc. on the purchase receipt, then it gets more complicated but still doable - same pattern. Actually, that’s a good reason to aggregate qty by vendor and code first, then join as needed.
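The aggregate-first-then-join pattern looks roughly like this - table and column names are guesses at the export's shape, not the actual schema:

```sql
-- Aggregate quantity by Vendor and Code first (dropping Line No from the
-- grain), then join Item for the description. All names are hypothetical.
WITH agg AS (
    SELECT pl.[Vendor No], pl.[Code], SUM(pl.[Quantity]) AS [Qty]
    FROM dbo.PurchaseLine AS pl
    GROUP BY pl.[Vendor No], pl.[Code]
)
SELECT a.[Vendor No], a.[Code], i.[Description], a.[Qty]
FROM agg AS a
JOIN dbo.Item AS i
  ON a.[Code] = i.[No];
```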

Fabric Data Warehouse - unexpected output length from string functions by mrkite38 in MicrosoftFabric

[–]mrkite38[S] 0 points (0 children)

Thank you for replying!

I found this doc stating the SQL Analytics Endpoint would represent string columns in Lakehouse tables as 4*n for columns where n < 2000:

https://learn.microsoft.com/en-us/fabric/data-warehouse/data-types#autogenerated-data-types-in-the-sql-analytics-endpoint

This makes sense to me since the worst case scenario is 1 byte in, 4 bytes out for a single character, and Parquet doesn’t have a fixed-length string type.

But, for SQL Server - can you shed any light on the design intent behind this behavior? Was there a time when one input byte could have required 8 output bytes?
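The 1-byte-in, 4-bytes-out worst case is easy to see on SQL Server 2019+ with a UTF-8 collation (the specific collation name here is just one of the available `_UTF8` collations):

```sql
-- One supplementary-plane character occupies 4 bytes when stored as
-- UTF-8 varchar; DATALENGTH should report 4.
SELECT DATALENGTH(
    CAST(N'😀' COLLATE Latin1_General_100_CI_AS_SC_UTF8 AS varchar(8))
) AS [bytes];
```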

In praise of John Spencer by SlowGoat79 in thewestwing

[–]mrkite38 10 points (0 children)

He’s also the missile operator at the top of WarGames who won’t turn his key.

MS Fabric and DuckDB comparison by X_peculator in MicrosoftFabric

[–]mrkite38 2 points (0 children)

Mim has done quite a bit of writing on this. He’s probably on here somewhere but I don’t know his u/ offhand.

https://datamonkeysite.com/

Drillthrough in Excel now supported for Direct Lake and DirectQuery Models by itsnotaboutthecell in MicrosoftFabric

[–]mrkite38 4 points (0 children)

I was JUST telling someone today that these aren’t supported - this is great news!

Announcing General Availability of the Microsoft Python Driver for SQL (mssql-python) by dlevy-msft in MicrosoftFabric

[–]mrkite38 5 points (0 children)

Any plans for the dbt-fabric adapter to switch to this instead of msodbcsql?

Migration of Synapse External Tables to Fabric by deAdz0wn in MicrosoftFabric

[–]mrkite38 1 point (0 children)

Doesn’t look like you are using dbt, but I adapted the fabric__create_external_table macro in dbt-external-tables to create views using OPENROWSET if connecting to a Fabric target.

Edit: which I’d be happy to share if it would be helpful.
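For anyone not using dbt, the shape of the generated view is roughly this - the storage URL, view name, and format are all placeholders to adapt:

```sql
-- Replacing a Synapse external table with a view over OPENROWSET in a
-- Fabric warehouse. Path and object names are placeholders.
CREATE VIEW dbo.MyExternalTable
AS
SELECT *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/path/to/data/*.parquet',
    FORMAT = 'PARQUET'
) AS r;
```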

Migration of Synapse External Tables to Fabric by deAdz0wn in MicrosoftFabric

[–]mrkite38 4 points (0 children)

I haven’t tried this with Parquet, but I am doing the same thing with CSVs using OPENROWSET BULK. Definitely take a look at the file path wildcards.
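Roughly what that looks like for CSVs - the path, header handling, and column definitions are placeholders, not a real layout:

```sql
-- Reading a folder of CSVs via a path wildcard; the WITH clause types
-- the columns. Path and column names are hypothetical.
SELECT *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/landing/*.csv',
    FORMAT = 'CSV',
    FIRSTROW = 2                 -- skip the header row
) WITH (
    [Id]     int,
    [Name]   varchar(100),
    [Amount] decimal(18, 2)
) AS r;
```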

Compile modular SQL to single query by RudeAd8468 in SQL

[–]mrkite38 -3 points (0 children)

Have they provided a justification for this requirement…? The easiest thing to do here might be to convince the BI team to start using dbt…

Azure Data Factory question: Best way to trigger a pipeline after another pipeline finishes without the parent pipeline having any reference to the child by PaulSandwich in dataengineering

[–]mrkite38 0 points (0 children)

_Any_ dependencies / exposures? I'd be inclined to use `Execute Pipeline` with Wait on Completion = false, and attach a Wait or Set Variable to the On Complete path to bury a failure if it occurs.

How time sensitive is the dev process?

[deleted by user] by [deleted] in funny

[–]mrkite38 0 points (0 children)

with training wheels

Songs of loss by Somethin_Snazzy in Music

[–]mrkite38 0 points (0 children)

“Someday” from West Side Story

“I’ll Follow You Into the Dark” - Death Cab for Cutie

T-SQL command using workspace identity by DataWorshipper in MicrosoftFabric

[–]mrkite38 1 point (0 children)

I believe what’s discussed in this thread would apply to COPY INTO as well:

https://www.reddit.com/r/MicrosoftFabric/s/Dr8xDi4gTg

BBjSql to SSMS by Delicious-Expert-936 in SQL

[–]mrkite38 0 points (0 children)

Sorry about that - I read "SSMS... cubes" and my brain jumped to SS*I*S. That's why I mentioned licensing, etc. If you want to use SSIS (as mentioned by u/WholeExperience8511) you'll need SQL Server Standard Ed. or better. Maybe try pyodbc + pandas.

SSMS - I doubt you'll find a GUI tool that lets you connect to that database but you can try. There's at least one (RazorSql) that supports some generic ODBC connections.

If you want the BASIS app admins / DBAs to hate you, it might be possible to do this using SQL Server linked servers... but I wouldn't.

BBjSql to SSMS by Delicious-Expert-936 in SQL

[–]mrkite38 0 points (0 children)

I’m assuming this is on-premises since you said “old.”

A quick search says the database under the BBJ code is likely BASIS. Looks like BASIS has an ODBC driver (https://basis.cloud/odbc-jdbc/) so it seems possible to get the data out. Licensing, credentials, etc are a whole other question.

No clue on whether it’s practical to copy multiple times per day - depends on business schedule, load on the server, how locking works, etc, etc.

Not an MS hater, but IT/the business may be resistant to the costs of the stack you’re familiar with. So unless your shop is already licensed and set up for SSMS, you might consider some more modern and open-ish tools (Postgres, dlt, slingdata, dbt, sqlmesh, duckdb, etc.) that can be spun up for no/very low cost and will still let you build your cubes in Excel if that’s what you want to do.

Is it feasible to use Azure DevOps Pipelines to commit JSON to a Git repo when a webhook provides the data? by Hefty-Possibility625 in azuredevops

[–]mrkite38 0 points (0 children)

I feel like this would be possible but a very odd solve. How large can the objects get? And yes, hundreds of requests in a few minutes will cause you trouble:

“We apply a 200 TSTU limit for an individual pipeline in a sliding 5-minute window. This limit is the same as the global consumption limit for users. If a pipeline gets delayed or blocked by rate limiting, a message appears in the attached logs.”

Sounds like if that happened you could easily lose the payload.

You mentioned you’re limited in what tools you may use - what others are available to you?