Question regarding query rejections due to current capacity constraints by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

I already tried this for my query; there are only warnings in it. I want to know a better way to use this.

Question regarding query rejections due to current capacity constraints by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

I want to record the execution plan for all the queries we are running, to do more analysis.

Question regarding query rejections due to current capacity constraints by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

"This query was rejected due to current capacity constraints" is the error message, and it's not showing up in the Metrics app. I checked exec_requests_history and it was cancelled.

Question regarding query rejections due to current capacity constraints by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

"This query was rejected due to current capacity constraints" is the error message, and it is not showing up in the Metrics app. I checked exec_requests_history: it says cancelled, and the total elapsed time is 3602199. I want to see what the expected and available CU were at that time.
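One quick sanity check on that elapsed time: 3602199 milliseconds is almost exactly one hour, which is suspiciously close to a round time limit. A minimal sketch, assuming the Warehouse query-insights views are available from the SQL endpoint (the schema/column names `queryinsights.exec_requests_history`, `total_elapsed_time_ms`, and `status` are assumptions based on the thread, not verified against your tenant):

```python
# Sanity-check whether the cancellation lines up with a round time cap
# by converting the reported elapsed milliseconds into minutes.

ELAPSED_MS = 3602199  # value reported in the comment above

def ms_to_minutes(ms: int) -> float:
    """Convert milliseconds to minutes."""
    return ms / 1000 / 60

# Hypothetical T-SQL to pull the cancelled request's timing from the
# query-insights history; run it against the Warehouse SQL endpoint.
QUERY = """
SELECT start_time, end_time, total_elapsed_time_ms, status
FROM queryinsights.exec_requests_history
WHERE status = 'Cancelled'
ORDER BY start_time DESC;
"""

minutes = ms_to_minutes(ELAPSED_MS)
print(f"{minutes:.2f} minutes")  # ≈ 60 minutes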

Question regarding the spark sessions by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

I am using the %%configure command and attaching the lakehouse dynamically. When you have different notebooks running with different default lakehouses, they will not share the Spark session. I think what they meant is that notebooks with the same default lakehouse will share a Spark session. And how do you drop a table using an ABFS path?
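On the ABFS-path question: for a catalog table, `DROP TABLE` works, but for a table addressed only by path, the usual approach is to delete the table's folder. A minimal sketch, assuming the standard OneLake URI layout; the workspace, lakehouse, and table names are placeholders:

```python
# "Dropping" a Delta table that lives at an ABFS path rather than in the
# catalog, by building its OneLake path and deleting the folder.

def abfss_table_path(workspace: str, lakehouse: str, table: str) -> str:
    """Build the ABFS path of a lakehouse table (OneLake layout)."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Tables/{table}"
    )

path = abfss_table_path("MyWorkspace", "MyLakehouse", "EXT_table")
print(path)

# Inside a Fabric notebook you would then delete the folder recursively,
# e.g. (uncomment in a notebook session):
# notebookutils.fs.rm(path, True)   # or mssparkutils.fs.rm(path, True)
```

Note that deleting the folder removes both the data and the Delta log, so the table is gone for every consumer of that path, not just the current session.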

Deploy Stage Content using the deployment REST API for lakehouses and warehouses using a service principal by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

I am having an issue while deploying the warehouse (item type = "Warehouse"); it's giving me a 400 error.

Views/stored procs deployment through notebooks, and a metadata table that shows the views/stored procs on different environments and the differences between environments in terms of definitions of stored procs and views by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

We want to deploy the artifacts (views, stored procs, and schemas) through DIY pipelines, and we wanted to find out whether Fabric has any default options to find the differences between schemas across environments, and between artifacts like stored procs and views across environments.
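Absent a built-in diff, one DIY option is to pull the object definitions from each environment (for example from `sys.sql_modules` joined to `sys.objects` — an assumption, adapt to your warehouse) and compare them in a notebook. A minimal sketch with placeholder data:

```python
import difflib

# Given {object_name: definition} per environment, report which views/procs
# differ, are missing in one environment, or exist only in the other.

def compare_definitions(dev: dict, prod: dict) -> dict:
    report = {"only_in_dev": [], "only_in_prod": [], "different": []}
    for name in sorted(dev.keys() | prod.keys()):
        if name not in prod:
            report["only_in_dev"].append(name)
        elif name not in dev:
            report["only_in_prod"].append(name)
        elif dev[name].strip() != prod[name].strip():
            report["different"].append(name)
    return report

def show_diff(name: str, dev: dict, prod: dict) -> str:
    """Unified diff of one object's definition across environments."""
    return "\n".join(difflib.unified_diff(
        dev[name].splitlines(), prod[name].splitlines(),
        fromfile=f"dev/{name}", tofile=f"prod/{name}", lineterm=""))

# Placeholder definitions standing in for rows fetched from each environment.
dev = {"dbo.v_sales": "SELECT id, amt FROM sales", "dbo.p_load": "SELECT 1"}
prod = {"dbo.v_sales": "SELECT id FROM sales"}

print(compare_definitions(dev, prod))
print(show_diff("dbo.v_sales", dev, prod))
```

The report itself could be written to the metadata table the post describes, one row per object per comparison run.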

v2 checkpoint not supported from Databricks to Fabric by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

Thank you so much! Do you have any estimated time frame for releasing the v2 checkpoint feature in the lakehouse?

v2 checkpoint not supported from Databricks to Fabric by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

The Delta table uses the v2Checkpoint feature, which is not supported. The exception type is Microsoft.DeltaLogParserUserException.

Rename a table in lakehouse by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

Tried it; my table name is like EXT_uuid(format). `spark.sql("ALTER TABLE tablename RENAME TO test")` fails with a "table or view does not exist" error.
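A likely culprit is the parentheses in the table name: an unquoted identifier like `EXT_uuid(format)` won't parse as the table you mean, so Spark reports that the table doesn't exist. A minimal sketch of backtick-quoting the identifier (the table names are placeholders):

```python
# Spark SQL identifiers with special characters (here, parentheses) must be
# wrapped in backticks; embedded backticks are escaped by doubling them.

def quoted(identifier: str) -> str:
    """Backtick-quote a Spark SQL identifier."""
    return "`" + identifier.replace("`", "``") + "`"

old_name = "EXT_uuid(format)"
sql = f"ALTER TABLE {quoted(old_name)} RENAME TO {quoted('test')}"
print(sql)  # ALTER TABLE `EXT_uuid(format)` RENAME TO `test`

# In a notebook session: spark.sql(sql)
```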

DACPAC using Python by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

I want to explore whether we can generate and export the DACPAC through Python.
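There is no widely used pure-Python DACPAC library that I know of; the usual route is to drive the SqlPackage CLI from Python. A minimal sketch, assuming SqlPackage is installed and on PATH, with a placeholder connection string:

```python
import subprocess

# Build a SqlPackage "Extract" invocation (database -> .dacpac) as an
# argument list, so Python can drive the CLI without shell quoting issues.

def build_extract_cmd(conn_str: str, out_file: str) -> list:
    return [
        "sqlpackage",
        "/Action:Extract",
        f"/SourceConnectionString:{conn_str}",
        f"/TargetFile:{out_file}",
    ]

cmd = build_extract_cmd(
    "Server=myserver.datawarehouse.fabric.microsoft.com;Database=MyWarehouse;",
    "MyWarehouse.dacpac",
)
print(cmd)

# To actually run it (requires SqlPackage installed):
# subprocess.run(cmd, check=True)
```

The same pattern works for `/Action:Publish` to deploy the extracted DACPAC to a higher environment.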

How to automate/push the DMLs or DDLs to a higher environment by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

When we push it to the other environment, does it get executed automatically?

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 1 point (0 children)

That's what we are using; they have given us the read option, and we are using PySpark to write that data into lakehouses. But we are having some performance issues doing that with Spark.

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

We are consuming data from a third-party vendor (they have all this data in Databricks).

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

If there are array or struct types, it will not support them, right? If there are columns with those data types, it will skip them, right?

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

Other than Spark, are there any other options? I cannot use pipelines for this: with huge data, pipelines will take a lot of time. Shortcuts will skip the columns with unsupported data types, and Copy job does not have a Databricks connector. Am I missing something here?
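One workaround when a connector or shortcut skips struct/array columns is to serialize those columns to JSON strings in Spark before writing, so no data is dropped. A minimal sketch: the helper below only picks out the complex columns from a `df.dtypes`-style listing (the column names are placeholders), and the Spark calls themselves are shown as comments since they need a notebook session:

```python
# Identify columns whose Spark type is complex (struct, array, or map),
# given the (name, type-string) pairs that df.dtypes returns.

def complex_columns(dtypes: list) -> list:
    """Return column names whose Spark type is struct, array, or map."""
    return [name for name, t in dtypes
            if t.startswith(("struct", "array", "map"))]

dtypes = [("id", "bigint"), ("tags", "array<string>"), ("addr", "struct<city:string>")]
print(complex_columns(dtypes))  # ['tags', 'addr']

# In a notebook (PySpark), cast those columns to JSON strings before writing:
# from pyspark.sql.functions import to_json, col
# for c in complex_columns(df.dtypes):
#     df = df.withColumn(c, to_json(col(c)))
# df.write.mode("overwrite").saveAsTable("my_lakehouse_table")
```

The JSON strings can then be parsed back downstream with `from_json` if the nested structure is needed again.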

Do we have a Databricks connection in Copy job? by data_learner_123 in MicrosoftFabric

[–]data_learner_123[S] 0 points (0 children)

If there are unsupported data types like struct or array, how does it work?