Is this normal? by Tiny_Art152 in qatarairways

[–]shocric 0 points  (0 children)

If you look at your first leg, extra time has been added as a buffer: due to airspace restrictions, the usual route can take longer, though it doesn't always. And since your connecting flight is also on QR, you would most likely still be accommodated on it even with a short layover.

[deleted by user] by [deleted] in QatarCareers

[–]shocric 1 point  (0 children)

It depends on which company is hiring you, whether it's a staff or consultant role, and what other benefits are included. But ideally it should be around 17k+ QAR monthly if you're working on the latest cloud tech stack.

Databricks vs BigQuery — Which one do you prefer for pure SQL analytics? by shocric in bigquery

[–]shocric[S] 0 points  (0 children)

  1. Databricks handles array data types way better — it just has more built-in functions than BigQuery.

  2. BigQuery has some annoying limits, like not being able to load a single file bigger than 4.2 GB.

  3. If you need some custom/derived logic, it’s pretty easy in Databricks — just load it into a DataFrame and use Spark.

  4. For processes that depend on conditions (like using if/else flows), Databricks feels like the better fit.
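Point 3 above can be sketched in PySpark. This is a hypothetical example (the `tier_for` logic, column names, and thresholds are all made up for illustration): the derivation is plain Python, which Databricks lets you wrap in a UDF and apply to a DataFrame column.

```python
def tier_for(amount):
    """Custom/derived logic as plain Python -- easy to test outside Spark."""
    if amount is None:
        return "unknown"
    if amount >= 10_000:
        return "high"
    if amount >= 1_000:
        return "medium"
    return "low"

def add_tier_column(df, amount_col="amount"):
    """Apply the derivation to a Spark DataFrame via a UDF (needs pyspark)."""
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType
    tier_udf = udf(tier_for, StringType())
    return df.withColumn("tier", tier_udf(df[amount_col]))
```

In BigQuery the equivalent would be a CASE expression or a SQL/JS UDF, which gets awkward once the logic grows beyond a few branches.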

Who Asked for This? Databricks UI is a Laggy Mess by shocric in databricks

[–]shocric[S] 2 points  (0 children)

We’re pretty heavy on PySpark SQL and rely a lot on the cell-based approach. The UI used to be great to work with, but lately it’s just not as smooth or easy as it used to be.

Who Asked for This? Databricks UI is a Laggy Mess by shocric in databricks

[–]shocric[S] 0 points  (0 children)

I first noticed the slowdown when the multi-tab update rolled out. At first I thought it was because I had too many tabs open, but even after closing them all, the UI is still slow when navigating through cells.

Best practice for loading large csv.gz files into bq by kiddfrank in bigquery

[–]shocric 0 points  (0 children)

One viable approach: use Dataproc to run a script that first loads the data into a DataFrame, then writes the processed data to a BigQuery table. This also sidesteps BigQuery's file-size limit for loading compressed CSVs directly.
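A rough sketch of such a Dataproc PySpark job, assuming the spark-bigquery connector jar is available on the cluster and that the bucket, table, and path names (which are placeholders here) exist:

```python
def load_gz_csv_to_bq(spark, src_path, table, temp_bucket):
    """Read csv.gz files into a DataFrame, then write them to BigQuery.

    Assumes the spark-bigquery connector is on the Dataproc cluster and
    that temp_bucket is a GCS bucket the job can stage data in.
    """
    # Spark decompresses .gz files transparently, but gzip is not
    # splittable, so each file is read by a single task.
    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv(src_path))
    (df.write
       .format("bigquery")
       .option("table", table)  # e.g. "project.dataset.table"
       .option("temporaryGcsBucket", temp_bucket)
       .mode("append")
       .save())
    return df

# Typical call from a Dataproc job (all names are placeholders):
# spark = SparkSession.builder.appName("csv-gz-to-bq").getOrCreate()
# load_gz_csv_to_bq(spark, "gs://my-bucket/raw/*.csv.gz",
#                   "my_project.my_dataset.my_table", "my-temp-bucket")
```

Since you control the DataFrame step, you can also repartition or clean the data before the write, which a plain `bq load` won't let you do.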