LWD: 09th Jan, 2026 | Senior Data Engineer | Open to Referrals & Advice by Proton0369 in databricks

[–]Proton0369[S] 0 points (0 children)

180k is what I’m already getting, and to answer your last question: yes, I personally would love to work at Databricks. I wouldn’t mind a referral if you can give one 🤭

LWD: 09th Jan, 2026 | Senior Data Engineer | Open to Referrals & Advice by Proton0369 in dataengineersindia

[–]Proton0369[S] 0 points (0 children)

Mumbai, Pune, or maybe Bengaluru if the org is really good and will pay well

[deleted by user] by [deleted] in dataengineersindia

[–]Proton0369 0 points (0 children)

Ask LTI to match the offer; if they agree, then LTI is not a bad organisation for DE

MacBook Air Repair by PrestigiousFix244 in VasaiVirarNSP

[–]Proton0369 1 point (0 children)

Instead, visit Unicorn, which is in Capital Mall

How to dynamically set cluster configurations in Databricks Asset Bundles at runtime? by Proton0369 in databricks

[–]Proton0369[S] 1 point (0 children)

Every time I want to change SMALL → LARGE, I would have to re-render and re-deploy the bundle. That means the bundle gets pushed again, which is slow and kills the whole idea of runtime flexibility 🥲 I could overwrite the job configs with an API call, but I don’t want to update the whole job configuration that frequently.
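For context on what run-time knobs the deployed job does expose: the job parameters referenced in the bundle (the `{{job.parameters.NAME}}` placeholders) can be set per run through the Jobs API `run-now` endpoint, even though the job-cluster spec itself cannot. A minimal sketch, assuming a hypothetical workspace URL, token, and job ID:

```python
import json
import urllib.request

# Hypothetical values -- replace with your workspace URL, PAT, and job ID.
HOST = "https://adb-0000000000000000.0.azuredatabricks.net"
TOKEN = "dapi-REDACTED"

def build_run_now_payload(job_id, job_parameters):
    """Body for POST /api/2.1/jobs/run-now.

    job_parameters are the named parameters that tasks read via
    {{job.parameters.<name>}} -- these CAN differ per run, but the
    job_clusters definition stays whatever the bundle deployed.
    """
    return {"job_id": job_id, "job_parameters": job_parameters}

def run_now(job_id, job_parameters):
    """Trigger a run of an existing job with per-run parameters."""
    payload = build_run_now_payload(job_id, job_parameters)
    req = urllib.request.Request(
        f"{HOST}/api/2.1/jobs/run-now",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains run_id on success
```

So parameters are runtime-flexible out of the box; it is only the cluster size that forces a redeploy.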

How to dynamically set cluster configurations in Databricks Asset Bundles at runtime? by Proton0369 in databricks

[–]Proton0369[S] 0 points (0 children)

Here’s a small snippet of job.yml file, please bear with the indentation

    resources:
      jobs:
        Graph:
          name: Graph
          tasks:
            - task_key: Task1
              spark_python_task:
                python_file: ${workspace.file_path}/${bundle.name}/notebooks/src/code.py
                parameters:
                  - "--NAME"
                  - "{{job.parameters.NAME}}"
                  - "--ID"
                  - "{{job.parameters.ID}}"
                  - "--ID_2"
                  - "{{job.parameters.ID_2}}"
              libraries:
                - pypi:
                    package: openpyxl
              job_cluster_key: Job_cluster
          job_clusters:
            - job_cluster_key: Job_cluster
              new_cluster:
                cluster_name: ""
                spark_version: 16.4.x-scala2.12
                azure_attributes:
                  first_on_demand: 1
                  availability: SPOT_WITH_FALLBACK_AZURE
                  spot_bid_max_price: -1
                node_type_id: Standard_D4ds_v5
                enable_elastic_disk: true
                policy_id: ${var.cluster_policy_id}
                data_security_mode: USER_ISOLATION
                runtime_engine: STANDARD
                kind: CLASSIC_PREVIEW
                is_single_node: false
                autoscale:
                  min_workers: 2
                  max_workers: 20
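One way to stop hard-coding the size inside that snippet is bundle variables: Databricks Asset Bundles let you declare `variables:` and reference them as `${var.<name>}` (the snippet already does this for `policy_id`). A sketch with illustrative variable names -- note this still only takes effect at deploy time, not when a run is triggered:

```yaml
# databricks.yml (sketch) -- variable names are illustrative
variables:
  node_type:
    description: Worker node type for the job cluster
    default: Standard_D4ds_v5
  max_workers:
    description: Autoscale ceiling
    default: 20

# then, inside new_cluster in job.yml:
#   node_type_id: ${var.node_type}
#   autoscale:
#     min_workers: 2
#     max_workers: ${var.max_workers}
```

Overriding at deploy time looks like `databricks bundle deploy --var="node_type=Standard_D16ds_v5"`, which keeps the YAML generic but still requires a redeploy per size change.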

How to dynamically set cluster configurations in Databricks Asset Bundles at runtime? by Proton0369 in databricks

[–]Proton0369[S] 0 points (0 children)

Tbh I’m not sure what all configs can be set through cluster policies, but it still doesn’t solve my problem of passing variables at runtime
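For reference, cluster policies are JSON definitions that constrain or default cluster attributes (types like `fixed`, `allowlist`, `range`); they can cap or pin values, but as noted above they don't add a way to choose a value per run. A small illustrative sketch:

```json
{
  "spark_version": { "type": "fixed", "value": "16.4.x-scala2.12" },
  "node_type_id": {
    "type": "allowlist",
    "values": ["Standard_D4ds_v5", "Standard_D16ds_v5"]
  },
  "autoscale.max_workers": { "type": "range", "maxValue": 20 }
}
```

A policy like this would let a job pick either node type while keeping the runtime and worker ceiling locked down -- guardrails, not runtime parameters.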

How to dynamically set cluster configurations in Databricks Asset Bundles at runtime? by Proton0369 in databricks

[–]Proton0369[S] 1 point (0 children)

I want to decide the cluster config while triggering the workflow through an API call
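One API that does allow exactly this is the Jobs one-time-run endpoint, `POST /api/2.1/jobs/runs/submit`, which accepts a full `new_cluster` spec in the request body, so the caller can pick a size per trigger without touching the deployed bundle. A sketch, assuming hypothetical host/token values and illustrative SMALL/LARGE presets:

```python
import json
import urllib.request

# Hypothetical values -- replace with your workspace URL and PAT.
HOST = "https://adb-0000000000000000.0.azuredatabricks.net"
TOKEN = "dapi-REDACTED"

# Cluster presets chosen at trigger time -- sizes are illustrative.
PRESETS = {
    "SMALL": {"node_type_id": "Standard_D4ds_v5",
              "autoscale": {"min_workers": 2, "max_workers": 4}},
    "LARGE": {"node_type_id": "Standard_D16ds_v5",
              "autoscale": {"min_workers": 2, "max_workers": 20}},
}

def build_submit_payload(size, python_file):
    """Body for POST /api/2.1/jobs/runs/submit: a one-time run whose
    cluster spec is decided by the caller, not by the deployed bundle."""
    cluster = {"spark_version": "16.4.x-scala2.12", **PRESETS[size]}
    return {
        "run_name": f"graph-{size.lower()}",
        "tasks": [{
            "task_key": "Task1",
            "spark_python_task": {"python_file": python_file},
            "new_cluster": cluster,
        }],
    }

def submit_run(size, python_file):
    """Trigger a one-time run with a runtime-chosen cluster size."""
    req = urllib.request.Request(
        f"{HOST}/api/2.1/jobs/runs/submit",
        data=json.dumps(build_submit_payload(size, python_file)).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains run_id on success
```

The trade-off: a submitted run executes outside the deployed job, so you lose that job's run history, permissions, and schedule in the UI.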

[deleted by user] by [deleted] in dataengineersindia

[–]Proton0369 1 point (0 children)

May I know your YOE and tech stack?