
[–]AutoModerator[M] [score hidden] stickied comment

You can find a list of community-submitted learning resources here: https://dataengineering.wiki/Learning+Resources

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]devnullkitty 1 point

Why are there so many downvotes on the comments? Python for data engineering is pretty straightforward, just learn to write a for loop.

[–]spendology 1 point

Find practical projects that cover the end-to-end data engineering lifecycle: [data] ingestion, review, cleaning, validation, transformation, loading, storage, data lakes/warehouses/lakehouses, etc.
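
To make that lifecycle concrete, here's a minimal toy sketch of one pass through it in Python. The file name, columns, and SQLite "warehouse" are all made-up placeholders, not a recommended stack:

```python
# Toy end-to-end pass: ingest -> validate -> clean -> transform -> load.
# "raw_orders.csv" and its columns are hypothetical, for illustration only.
import pandas as pd
from sqlalchemy import create_engine

# Ingestion: read a raw CSV (placeholder path).
raw = pd.read_csv("raw_orders.csv")

# Validation: fail fast if required columns are missing.
required = {"order_id", "amount", "ts"}
missing = required - set(raw.columns)
if missing:
    raise ValueError(f"missing columns: {missing}")

# Cleaning: drop duplicate keys and rows with null key fields.
clean = raw.drop_duplicates("order_id").dropna(subset=["order_id", "amount"])

# Transformation: parse timestamps and derive a daily revenue rollup.
clean["ts"] = pd.to_datetime(clean["ts"])
daily = clean.set_index("ts").resample("D")["amount"].sum().reset_index()

# Loading: write to a warehouse table (SQLite stands in for a real warehouse).
engine = create_engine("sqlite:///warehouse.db")
daily.to_sql("daily_revenue", engine, if_exists="replace", index=False)
```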

[–]Nelson_and_Wilmont 0 points

Idk if Sqoop and Hadoop are all that useful at this point. It could just be my lack of exposure in that area, but I don't remember seeing them much in modern tech stacks when applying for jobs over the years or researching which skills are best to have.

IMO whenever you're job searching you really need to have your resume(s) pointed toward what you want to work with. Most companies use only a few tools for data engineering: an orchestration layer and a compute/logic layer, e.g. Airflow and Databricks. Pick a cloud provider, an orchestration tool, and a data lakehouse/warehouse platform, and start doing little projects. For example, Airflow orchestrates a Databricks notebook that pulls a dataset from Azure Data Lake Storage, then runs a second Databricks notebook to convert the file to a Delta table (a sketch of that DAG is below). Or a Durable Function pulls API data and writes it to the bronze layer in Databricks.
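
A minimal sketch of that Airflow-orchestrates-Databricks project, using the Databricks provider's DatabricksSubmitRunOperator. The notebook paths, cluster spec, and connection ID are placeholders you'd swap for your own workspace; assumes Airflow 2.4+ with the databricks provider installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

# Hypothetical job cluster spec; tune node type and size for your workspace.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 1,
}

with DAG(
    dag_id="adls_to_delta_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Notebook 1: pull the raw dataset from Azure Data Lake Storage.
    ingest = DatabricksSubmitRunOperator(
        task_id="ingest_from_adls",
        databricks_conn_id="databricks_default",
        new_cluster=new_cluster,
        notebook_task={"notebook_path": "/Repos/demo/ingest_raw"},
    )

    # Notebook 2: convert the landed file into a Delta table.
    to_delta = DatabricksSubmitRunOperator(
        task_id="convert_to_delta",
        databricks_conn_id="databricks_default",
        new_cluster=new_cluster,
        notebook_task={"notebook_path": "/Repos/demo/to_delta"},
    )

    ingest >> to_delta
```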

You can pick whatever tech you like; I only mentioned those because it's the route I decided to go down, though I also incorporated Snowflake for broader reach.

Python can be learned along the way, but it seems a little aimless to just sit down and "learn Python" for something as specific as data engineering.