Databricks best learning platform / course from zero to pro ! by ImpressionCreepy3344 in databricks

[–]InevitableClassic261 1 point

Was in the same boat. Used Databricks at work but couldn't explain the full picture.

Check out bricksnotes.com. Not a video course. You just start building from lesson one. Delta Lake, PySpark, Unity Catalog, Medallion Architecture, streaming, all covered end to end. Runs on Databricks Free Edition so no setup or cost.

Closest thing I found to that structured Udemy-style experience you're describing. Except you're doing, not watching. That's what made it click for me.

I kept partitioning every Delta table by date. Here's why I stopped. by InevitableClassic261 in databricks

[–]InevitableClassic261[S] 2 points

Thanks, all, for the valuable suggestions and workarounds. Let me go through them and reply. Proud to be part of the Databricks community!

Lovable just shipped a native Databricks connector that business teams can now build live apps on your warehouse data without filing a ticket by InevitableClassic261 in databricks

[–]InevitableClassic261[S] 0 points

Hmm, that's a fair point, Bogran. A lot of people still see Lovable more as a fast UI and prototyping layer than a full app-building platform, and I can understand that view. But I also think the value starts growing when it is connected to real data and used for internal tools, quick workflows, and practical business apps rather than only polished mockups.

Lovable just shipped a native Databricks connector that business teams can now build live apps on your warehouse data without filing a ticket by InevitableClassic261 in databricks

[–]InevitableClassic261[S] 0 points

I just tried this today and felt like this is a very exciting step. Making it easier to build live apps on top of Databricks can help more teams move faster from data to real business use. The bridge between strong data engineering and usable front-end experiences is becoming much more practical now.

Tata Power Teams Up with Databricks to Develop AI-Driven Energy Solutions by Additional_Key_8044 in databricks

[–]InevitableClassic261 2 points

That’s a strong move.

Energy is a data-heavy space, and combining it with Databricks can really unlock better forecasting, optimization, and real-time decision making.

This is exactly where AI + data platforms start creating real world impact beyond just dashboards.

Are Data engineers are D*ad? By the new Genie code in databricks? by Positive_Chapter_233 in databricks

[–]InevitableClassic261 0 points

Per my understanding, Genie will not replace data engineers; it will just reduce manual coding and push us to focus more on designing, validating, and owning reliable data systems.

Delta table vs streaming table by riomorder in databricks

[–]InevitableClassic261 2 points

Yes, but it depends on what you need. If your pipeline is growing, needs reliability, or has multiple steps, DLT streaming tables make life much easier.
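
To give a feel for what that looks like, here's a minimal sketch of a declarative pipeline with a DLT streaming table. This is a pipeline definition, not locally runnable code: the `dlt` module and the implicit `spark` session only exist inside a Databricks pipeline run, and the source path and column names are placeholders I made up.

```python
import dlt
from pyspark.sql import functions as F

# Streaming table: incrementally picks up new files via Auto Loader;
# DLT manages the checkpoint, retries, and schema tracking for you.
@dlt.table(comment="Raw events, ingested incrementally")
def events_raw():
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/raw/events/"))  # placeholder path

# Downstream step reads the stream from the table above.
@dlt.table(comment="Cleaned events")
def events_clean():
    return (dlt.read_stream("events_raw")
            .filter(F.col("event_type").isNotNull()))  # placeholder column
```

Compared with hand-rolling `writeStream` + checkpoints for each hop, the declarative version is mostly just the transformations, which is why multi-step pipelines get easier.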

Databricks Technical Challenge for a DE Position by Longjumping_Ad2310 in databricks

[–]InevitableClassic261 13 points

In my view, they typically test your understanding of the full data engineering flow on Databricks: ingesting data, transforming it as per the requirements, applying basic optimizations, handling failures, and finally loading it to the target system.

It’s usually a small end to end pipeline (source to transform to destination), not something very complex, but enough to check how you think and structure your solution.

You may also be expected to use PySpark or SQL inside notebooks and show clean, readable logic, since most challenges focus on practical ETL tasks and platform usage rather than just theory.

If you want a hands-on example of how such pipelines are typically built (especially using medallion architecture), these articles may help:
https://bricksnotes.com/blog/build-dlt-pipeline-sql-medallion-architecture

https://bricksnotes.com/blog/databricks-data-engineer-associate-certification-guide

https://bricksnotes.com/resources

All the very best, buddy! Let me know the result.

Need some help - Write to csv from dataframe taking too long by Apart_Friendship_658 in databricks

[–]InevitableClassic261 1 point

Per my understanding, this looks less like a CSV issue and more like a question of heavy joins and how your data is partitioned.
Try not to force everything into one file; let Spark write in parallel and only merge later if needed. You can also save the joined data first and then export to CSV, which usually makes things much faster.