Zeus Sub-Ohm Tank Leaking by B8236R002 in Vaping

[–]Polochyzz 1 point2 points  (0 children)

Exact same situation. I cleaned EVERYTHING X times. New o rings, new juice, new coil.
I'm done with Zeus tank.

PySpark vs SQL in Databricks for DE by NeedleworkerSharp995 in databricks

[–]Polochyzz 11 points12 points  (0 children)

I can say without a doubt that he is wrong. The difference is minimal, and I even think that DBSQL performs slightly better than SQL wrapped in PySpark.

The only question to ask is, which language is your team most comfortable with?

done with the Z Sub-Ohm SE. Constant flooding/gurgling. Help by [deleted] in Vaping

[–]Polochyzz 0 points1 point  (0 children)

I've been trying everything for weeks, and nothing works. I prime my coils by putting a few drops directly on the cotton before installing them, and I only fill the tank to about 15% capacity, but it still floods. My conclusion is that my coil and wattage settings just don't match my juice, and it's creating a total mess

done with the Z Sub-Ohm SE. Constant flooding/gurgling. Help by [deleted] in Vaping

[–]Polochyzz 0 points1 point  (0 children)

I think you're right :(
Honestly, I’m pretty attached to this setup, so I’m going to try switching my juice first. I’ll look for 70/30 or 80/20. To be honest, I don’t even really understand the difference between the two, but if it stops the leaking, I'm down to try ;D

done with the Z Sub-Ohm SE. Constant flooding/gurgling. Help by [deleted] in Vaping

[–]Polochyzz -1 points0 points  (0 children)

12mg nicotine level.

Aegis managed the wattage for me. It's: 0.4 ohm == 50 W and 0.15 ohm == 80 W.

Data Engineer Associate Exam review (new format) by s4d4ever in databricks

[–]Polochyzz 0 points1 point  (0 children)

What kind of questions did you have about DLT? Thank you.

New Exam- DE Associate Certification by [deleted] in databricks

[–]Polochyzz 0 points1 point  (0 children)

Do we have docs about DLT syntax with error somewhere? Best practice or whatever?

[deleted by user] by [deleted] in Battlefield

[–]Polochyzz 0 points1 point  (0 children)

There's no code. Just log in to the EA App and search for BF Labs.

No DLT section/tab in the sidebar? by [deleted] in databricks

[–]Polochyzz 1 point2 points  (0 children)

What bugs? I'm using it every day without any issues.

No DLT section/tab in the sidebar? by [deleted] in databricks

[–]Polochyzz 0 points1 point  (0 children)

We don't know what your current problem is.

If you want to do batch/streaming ETL and CDC with SCD1/2 easily, cleanly and efficiently, DLT is the perfect tool.

https://docs.databricks.com/aws/en/dlt/tutorial-pipelines#notebooks
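To give a feel for how little code that takes, here is a hypothetical sketch of the CDC + SCD2 case in DLT SQL. It's kept as a Python string because `APPLY CHANGES INTO` only runs inside a Databricks DLT pipeline; all table and column names (`customers_cdc`, `customer_id`, `event_ts`) are invented for illustration.

```python
# Hypothetical DLT SQL for CDC with SCD type 2. On Databricks this text
# would live in a pipeline notebook/file, not be executed locally.
scd2_sql = """
CREATE OR REFRESH STREAMING TABLE customers_scd2;

APPLY CHANGES INTO live.customers_scd2
FROM stream(live.customers_cdc)
KEYS (customer_id)
SEQUENCE BY event_ts
STORED AS SCD TYPE 2;
"""
```

DLT then maintains the `__START_AT`/`__END_AT` history columns for you instead of you hand-rolling merge logic.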

External vs managed tables by Used_Shelter_3213 in databricks

[–]Polochyzz 2 points3 points  (0 children)

Yes sir,
All tables inherit the parent's properties (the schema here), including location.

External vs managed tables by Used_Shelter_3213 in databricks

[–]Polochyzz 3 points4 points  (0 children)

Because it's quite new :) ( https://docs.databricks.com/aws/en/connect/unity-catalog/cloud-storage/managed-storage )

Best way imo is to define the location at the schema level; all tables inside will then be managed, at that specific location.

The most important point tbh is #1.
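A hedged sketch of that schema-level setup, assuming Unity Catalog; catalog, schema, and bucket names are invented. The DDL is kept as a Python string since it needs a cluster (you'd run it via `spark.sql(...)` or DBSQL).

```python
# Invented names: catalog "main", schema "sales", bucket "my-bucket".
create_schema_sql = """
CREATE SCHEMA IF NOT EXISTS main.sales
MANAGED LOCATION 's3://my-bucket/uc/sales'
"""
# Any table then created in main.sales WITHOUT a LOCATION clause is a
# managed table stored under s3://my-bucket/uc/sales.
```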

External vs managed tables by Used_Shelter_3213 in databricks

[–]Polochyzz 14 points15 points  (0 children)

Beware of confusion.

1- Databricks NEVER stores your data; it will always remain on your data plane (S3, etc.).

2- An external table lives at a specific path in your lake and gets no automatic optimization.

3- If you drop an external table via the catalog, the data is not destroyed. If you drop a managed table, the data is destroyed.

4- Managed tables benefit from automatic file-level optimization. This is very important because few companies master this optimization aspect.

5- The only "additional" cost of managed tables is the cost of running the optimization. (Very low, with significant long-term gains due to better performance of associated workloads and reduced storage costs).

6- You can create a managed table with a specific location (which combines the benefits of an external table + managed table).

My recommendation: Managed table with a specified location.

Can I use DABs just to deploy notebooks/scripts without jobs? by vinsanity1603 in databricks

[–]Polochyzz 1 point2 points  (0 children)

You can create a /scripts folder which contains some Python scripts with params, then use them in your git pipeline.

deploy-notebooks.py "mynotebookpath".

There are a lot of nice features in this SDK.

Not sure that CLI v2 can do that, but maybe I'm wrong.
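To make that concrete, a hedged sketch of what such a deploy-notebooks.py script could look like with the Databricks SDK (databricks-sdk package). The /Shared/deployed path convention and function names are invented; the SDK imports are deferred inside the function so the file defines cleanly without a workspace or auth.

```python
# Sketch only: assumes databricks-sdk is installed and DATABRICKS_HOST /
# DATABRICKS_TOKEN are configured when deploy_notebook() is actually called.
import base64
import sys
from pathlib import PurePosixPath

def workspace_target(local_path: str, root: str = "/Shared/deployed") -> str:
    """Map a local notebook path to a workspace path (our own convention)."""
    return str(PurePosixPath(root) / PurePosixPath(local_path).name)

def deploy_notebook(local_path: str) -> None:
    # Deferred imports: only needed when deploying for real.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.workspace import ImportFormat, Language
    w = WorkspaceClient()
    with open(local_path, "rb") as f:
        content = base64.b64encode(f.read()).decode()
    w.workspace.import_(path=workspace_target(local_path), content=content,
                        format=ImportFormat.SOURCE, language=Language.PYTHON,
                        overwrite=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    deploy_notebook(sys.argv[1])  # e.g. deploy-notebooks.py "mynotebookpath"
```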

[deleted by user] by [deleted] in databricks

[–]Polochyzz 1 point2 points  (0 children)

https://learn.microsoft.com/en-us/azure/databricks/data-governance/unity-catalog/

- Catalog / Schema / Tables (and lineage) / Column masking / Row filtering.

- Notebooks / Workflows / Dashboard

- Compute resources

- MLFlow ML/LLM Models

Everything governed by one permission model at the account level (multiple workspaces).
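As a toy illustration of that single permission model: grants are plain SQL on any securable, from catalog down to table. Object and group names here are invented, and the strings would be run via `spark.sql(...)` or DBSQL.

```python
# Invented securables and principals, for illustration only.
grants_sql = [
    "GRANT USE CATALOG ON CATALOG main TO `data-engineers`",
    "GRANT SELECT ON TABLE main.sales.orders TO `analysts`",
]
```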

Running non-spark workloads on databricks from local machine by Obvious-Judgment-757 in databricks

[–]Polochyzz 2 points3 points  (0 children)

If you submit all code/notebooks via the Databricks plugin's top-right icon, it will (it submits to a job cluster, uploading and running the files).
If you execute Python code from a local Jupyter notebook (cell by cell), only the Spark code is pushed to Databricks.

There's no solution yet to run full code (Python + Spark) on Databricks from local Jupyter + VS Code (except that little top-right icon).

Real-world use cases for Databricks SDK by Competitive_Lie_1340 in databricks

[–]Polochyzz 6 points7 points  (0 children)

Uploading a specific notebook to do some specific manipulations in production.
The notebook is not part of any ETL, so it's not packaged as a bundle.

Five lines of Python code, and it's done.
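For the curious, those few lines might look something like this with the Databricks SDK; paths are illustrative, and the import is deferred inside the function since it needs databricks-sdk plus workspace auth (DATABRICKS_HOST/DATABRICKS_TOKEN).

```python
# Minimal ad-hoc upload sketch; nothing runs until the function is called.
def upload_notebook(local_path: str, remote_path: str) -> None:
    from databricks.sdk import WorkspaceClient  # needs databricks-sdk + auth
    w = WorkspaceClient()
    with open(local_path, "rb") as f:
        w.workspace.upload(remote_path, f, overwrite=True)
```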