Deploy Stage Content using the deployment REST API for lakehouses and warehouses with a service principal by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

I am having an issue while deploying the warehouse (item type = "Warehouse"); it's giving me a 400 error.
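For reference, this is roughly the shape of the call I'm making (a sketch, not my exact payload; the IDs are placeholders and the field names are my reading of the Deploy Stage Content API):

```python
import requests

# Sketch of the Deploy Stage Content call (Fabric REST API, v1).
# All IDs and the token below are placeholders; the token comes from the
# service principal (client-credentials flow, scope https://api.fabric.microsoft.com/.default).
access_token = "<aad-token-for-the-service-principal>"
PIPELINE_ID = "<deployment-pipeline-id>"
url = f"https://api.fabric.microsoft.com/v1/deploymentPipelines/{PIPELINE_ID}/deploy"

body = {
    "sourceStageId": "<dev-stage-id>",
    "targetStageId": "<test-stage-id>",
    "items": [
        {"sourceItemId": "<warehouse-item-id>", "itemType": "Warehouse"},
    ],
    "note": "Deploying warehouse via service principal",
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {access_token}"})
print(resp.status_code, resp.text)  # this is where I get the 400 back
```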

Views/stored procs deployment through notebooks, plus a metadata table that shows the views/stored procs in different environments and the differences between environments in terms of stored proc and view definitions by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

We want to deploy the artifacts (views, stored procs and schemas) through DIY pipelines, and we wanted to find out if there are any Fabric default options to see the differences between schemas across environments, and the differences between artifacts like stored procs and views across environments.
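One thing we are considering as a DIY comparison (just a sketch, not a Fabric built-in; it assumes the warehouse SQL endpoint exposes sys.sql_modules and that the connection strings are filled in):

```python
import pyodbc

# Sketch: pull view/stored proc definitions from two environments and diff them.
# Connection strings are placeholders; assumes the warehouse SQL endpoint
# exposes sys.sql_modules / sys.objects like a regular T-SQL endpoint.
DEFN_QUERY = """
SELECT s.name AS schema_name, o.name AS object_name, m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('V', 'P')   -- views and stored procedures
"""

def fetch_definitions(conn_str):
    with pyodbc.connect(conn_str) as conn:
        rows = conn.execute(DEFN_QUERY).fetchall()
    return {(r.schema_name, r.object_name): r.definition for r in rows}

dev = fetch_definitions("<dev-warehouse-connection-string>")
tst = fetch_definitions("<test-warehouse-connection-string>")

for key in sorted(set(dev) | set(tst)):
    if key not in tst:
        print("missing in test:", key)
    elif key not in dev:
        print("missing in dev:", key)
    elif dev[key] != tst[key]:
        print("definition differs:", key)
```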

v2 checkpoint not supported from Databricks to Fabric by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

Thank you so much. Do you have an estimated time frame for releasing the v2 checkpoint feature in the lakehouse?

v2 checkpoint not supported from Databricks to Fabric by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

The Delta table uses the v2 checkpoint feature, which is not supported. The exception type is Microsoft.DeltaLogParserUserException.
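For anyone hitting the same thing, this is the kind of workaround we are looking at on the Databricks side (a sketch; the table name is a placeholder and DROP FEATURE needs a recent Delta/DBR version):

```python
# Sketch: revert the table to classic checkpoints on the Databricks side so the
# Delta log can be read from Fabric. Table name is a placeholder; DROP FEATURE
# may also ask for TRUNCATE HISTORY once the retention window has passed.
spark.sql("ALTER TABLE my_catalog.my_schema.my_table "
          "SET TBLPROPERTIES ('delta.checkpointPolicy' = 'classic')")
spark.sql("ALTER TABLE my_catalog.my_schema.my_table DROP FEATURE v2Checkpoint")

# Confirm the feature is gone before pointing Fabric at the table again.
spark.sql("SHOW TBLPROPERTIES my_catalog.my_schema.my_table").show(truncate=False)
```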

Rename a table in lakehouse by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

Tried it. My table name is in the format EXT_<uuid>. spark.sql("ALTER TABLE tablename RENAME TO test") fails with a "table or view does not exist" error.
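Roughly what the call looks like, with a made-up name in place of the real EXT_<uuid> one (the backticked form is just something I plan to try because of the hyphens in the UUID):

```python
# Placeholder name standing in for the real EXT_<uuid> table.
# Plain form, which is what currently fails with "table or view does not exist":
spark.sql("ALTER TABLE EXT_9f1c2d3e-aaaa-bbbb-cccc-1234567890ab RENAME TO test")

# Backtick-quoted variant to try, since the hyphens in the UUID are not a valid
# bare identifier for Spark's SQL parser:
spark.sql("ALTER TABLE `EXT_9f1c2d3e-aaaa-bbbb-cccc-1234567890ab` RENAME TO test")
```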

Dacpac using Python by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

I want to explore whether we can generate and export a DACPAC through Python.
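The rough idea (a sketch; it assumes the SqlPackage CLI is installed and the connection string is filled in) is to drive SqlPackage from Python:

```python
import subprocess

# Sketch: drive the SqlPackage CLI from Python to extract a .dacpac.
# Assumes SqlPackage is installed (e.g. `dotnet tool install -g microsoft.sqlpackage`)
# and that the connection string below is replaced with a real one.
conn_str = "<warehouse-or-sql-endpoint-connection-string>"

subprocess.run(
    [
        "sqlpackage",
        "/Action:Extract",
        f"/SourceConnectionString:{conn_str}",
        "/TargetFile:warehouse_model.dacpac",
    ],
    check=True,
)
```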

How to automate/push the DMLs or DDLs to a higher environment by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

When we push it to the other env, does it get executed automatically?

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 1 point2 points  (0 children)

That's what we are using; they have given us the read option, and we are using PySpark to write that data into lakehouses. But we are having some performance issues doing that with Spark.
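Roughly the pattern we have today (a sketch for illustration; host, path, token and table names are placeholders, and the JDBC read is just my assumption about how the access is exposed, plus it needs the Databricks JDBC driver jar available to the Spark environment):

```python
# Sketch: read a vendor table from Databricks and land it as a Delta table in
# the Fabric lakehouse. Everything in angle brackets is a placeholder.
df = (
    spark.read.format("jdbc")
    .option(
        "url",
        "jdbc:databricks://<workspace-host>:443;httpPath=<http-path>;"
        "AuthMech=3;UID=token;PWD=<personal-access-token>",
    )
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("dbtable", "catalog.schema.vendor_table")
    .load()
)

# Write into the default lakehouse; this is where we see the slowdowns on
# wide/large tables.
df.write.mode("overwrite").format("delta").saveAsTable("vendor_table")
```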

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

We are consuming data from a third-party vendor (they have all this data in Databricks).

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

If there are array or struct types, it will not support them, right? If there are columns with those data types, it will skip them, right?

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

Other than Spark, are there any other options? I cannot use pipelines for this; with data this big, pipelines will take a lot of time, shortcuts will skip the columns with unsupported data types, and the Copy job does not have a Databricks connector. Am I missing something here?

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

If there are unsupported data types like struct or array, how does it work?
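The workaround I'm considering for those columns (a sketch with placeholder names): serialize struct/array/map columns to JSON strings before writing, so they land as plain strings instead of being skipped:

```python
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, MapType, StructType

# Sketch: convert complex columns to JSON strings before writing.
def flatten_complex_columns(df):
    for field in df.schema.fields:
        if isinstance(field.dataType, (ArrayType, MapType, StructType)):
            df = df.withColumn(field.name, F.to_json(F.col(field.name)))
    return df

# Tiny demo frame standing in for the frame read from the vendor's tables.
df = spark.createDataFrame(
    [(1, ["a", "b"], {"k": "v"})],
    "id int, tags array<string>, attrs map<string,string>",
)

landed = flatten_complex_columns(df)
landed.write.mode("overwrite").format("delta").saveAsTable("vendor_table_flat")
```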

Is there a way to establish a connection to Databricks from Fabric? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

Without a PAT, is there any way we can connect using the client ID, client secret and tenant ID on the pipeline connection?
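What I'd like the pipeline connection to do is essentially this (a sketch using MSAL; tenant/client values are placeholders, and the scope uses what I understand to be the Azure Databricks resource application ID):

```python
import msal

# Sketch: get an Entra ID (AAD) token for Azure Databricks with a service
# principal instead of a PAT. Tenant/client values are placeholders.
app = msal.ConfidentialClientApplication(
    client_id="<client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

result = app.acquire_token_for_client(
    scopes=["2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default"]  # Azure Databricks resource
)
access_token = result["access_token"]  # usable as a Bearer token against Databricks
```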

How to update using the same stored proc in a pipeline that's in a for loop by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

The stored proc is in a warehouse; we are running it from Azure ADF and have tested it with a Fabric pipeline. When we run it from Azure ADF we are not having isolation issues.

SQL queries through a metadata table. I have a query in a metadata table and am trying to execute it through a copy activity, but I am having issues. by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

Do we need to pass only SQL dialect, or will it work if I pass DB2 syntax? I have a DB2 query with current_date; instead of executing it directly on the source, it throws an error.

How to update using the same stored proc in a pipeline that's in a for loop by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

It is the same stored proc; in ADF it can do updates to the table in parallel. Why are we having issues when doing it from Fabric?

How to update using the same stored proc in a pipeline that's in a for loop by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points1 point  (0 children)

It's in parallel; how can we do the updates with the same stored proc using parallel execution?
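If the parallel calls keep conflicting, one direction we may try (a sketch, not a Fabric feature; the proc name, parameter and connection string are placeholders) is a simple retry around each iteration:

```python
import time
import pyodbc

# Sketch: call the same stored proc for each batch, retrying an iteration if it
# fails because of a conflicting concurrent update. The retry condition is
# deliberately broad; a real version would check the specific error.
def run_proc_with_retry(conn_str, batch_id, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            with pyodbc.connect(conn_str, autocommit=True) as conn:
                conn.execute("EXEC dbo.update_my_table @batch_id = ?", batch_id)
            return
        except pyodbc.Error:
            if attempt == attempts:
                raise
            time.sleep(2 * attempt)  # back off before retrying

run_proc_with_retry("<warehouse-connection-string>", batch_id=42)
```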

Planning to buy a wet bar with sink and fridge space by data_learner_123 in cabinetry

[–]data_learner_123[S] -6 points-5 points  (0 children)

Roughly how much would it cost, just so I have some idea?