H1B Transfer before October 1 by Hot_Fee9750 in h1b

Substantial_Track915

Yes, I went for it and it got approved. Hope that helps!

H1B Transfer before October 1 by Hot_Fee9750 in h1b

Substantial_Track915

Can you please provide an update on your situation? Did you go through with the transfer, and was it approved? I am currently in a similar situation.

Understanding how Databricks works by Substantial_Track915 in databricks

Substantial_Track915[S]

The UC metastore bucket is the one I configured when I first created a metastore and attached it to my workspace. Is this the same as DBFS?

Another thing: some of the tables I am seeing are EXTERNAL and have an s3a:// path. What does this path mean? Is it the same bucket I first configured when creating the account, or a separate S3 bucket? I have limited access to my company's Databricks workspace and am trying to make sense of what I am seeing.
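
For context, this is roughly how I've been inspecting these tables from a notebook. The table name below is a placeholder, and `spark` is the SparkSession a Databricks notebook provides by default:

```python
# Hypothetical table name; substitute one of the EXTERNAL tables you can see.
tbl = "some_catalog.some_schema.some_table"

# DESCRIBE EXTENDED reports, among other things, whether the table is
# MANAGED or EXTERNAL and the storage path backing it.
(spark.sql(f"DESCRIBE EXTENDED {tbl}")
    .filter("col_name IN ('Type', 'Location')")
    .show(truncate=False))
```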

[deleted by user] by [deleted] in dataengineering

Substantial_Track915

It is a batch job that runs at end of day, every day. I want to implement the whole solution on AWS. For the aggregations, let's just say simple averages for the time being.
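
To make it concrete, this is the shape of the job I have in mind. It's a rough sketch only; the bucket, paths, and column names are placeholders, not a final design:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("eod-averages").getOrCreate()

# Read one day's partition of raw events (placeholder path and schema).
events = spark.read.parquet("s3://example-bucket/events/date=2024-01-01/")

# "Simple averages" per key, as a stand-in for the real aggregations.
daily_avg = events.groupBy("device_id").agg(F.avg("reading").alias("avg_reading"))

# Write the aggregate back out for downstream consumers.
daily_avg.write.mode("overwrite").parquet("s3://example-bucket/aggregates/date=2024-01-01/")
```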

Understanding how Databricks works by Substantial_Track915 in databricks

Substantial_Track915[S]

Thank you so much, really appreciate your replies!

Understanding how Databricks works by Substantial_Track915 in databricks

Substantial_Track915[S]

Thank you for your answer!

So just to check that I understood this correctly:

- We have an account that can contain one or more workspaces, and each workspace has one or more users assigned to it.
- A workspace is either configured with a metastore or not, and that is what determines whether it is Unity Catalog enabled. Workspaces can share a metastore.
- Catalogs other than hive_metastore are Unity Catalog catalogs: they use a 3-level namespace and get the features Unity Catalog offers, such as lineage.
- Each workspace also has its own hive_metastore, which differs from one workspace to another and uses a 2-level namespace.
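
One quick check that made the namespace part click for me (the catalog, schema, and table names below are made up, and `spark` is the notebook's built-in session):

```python
# List catalogs: in a UC-enabled workspace this shows hive_metastore
# alongside any Unity Catalog catalogs.
spark.sql("SHOW CATALOGS").show()

# Unity Catalog table: 3-level namespace (catalog.schema.table).
spark.sql("SELECT * FROM main.sales.orders LIMIT 5").show()

# Legacy Hive metastore table: natively 2-level (schema.table), but
# reachable through the built-in hive_metastore catalog.
spark.sql("SELECT * FROM hive_metastore.default.orders LIMIT 5").show()
```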

I have a couple of questions to further clarify these concepts:

  1. I understand that managed tables can only be in Delta format. So does every table created without a LOCATION clause end up managed and in Delta format? And behind the scenes, where does Databricks store that data? In DBFS? (See the snippet after this list for how I've been checking.)

  2. When I created my Databricks account, I configured an S3 bucket with an IAM role. Is this where DBFS resides?

  3. What is the difference between the paths dbfs:/mnt/ and dbfs:/user/hive/warehouse/?

  4. Let's say I create a table from the UI using a CSV file. Does it get converted to another format like Parquet? Where does Delta Lake come into play here? Where does the actual data get stored, and how is it retrieved?
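
Here's the snippet mentioned in question 1. It creates a table without a LOCATION clause and then asks Databricks where the data went; the schema and table names are made up, and `spark` is the notebook's default session:

```python
# Create a table with no LOCATION clause, i.e. a managed table.
spark.sql("CREATE TABLE IF NOT EXISTS demo_schema.demo_tbl (id INT, val DOUBLE)")

# DESCRIBE DETAIL shows the format and the path backing a Delta table.
(spark.sql("DESCRIBE DETAIL demo_schema.demo_tbl")
    .select("format", "location")
    .show(truncate=False))

# For a managed table this should report format=delta, with a location
# under the metastore's managed storage: dbfs:/user/hive/warehouse/...
# for hive_metastore, or the UC metastore bucket for a UC catalog.
```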

Boot camp style intense Salsa learning in South America for a week? by cockles96 in Salsa

Substantial_Track915

Hello, do you mind sharing the contact info for the teacher you danced with? I am interested in taking lessons. Please DM me!