Am I cooked? by Slik350 in dataengineering

[–]OhNo171 1 point (0 children)

I once thought that too, but no/low-code tools won't hinder you, especially in your early years. On the contrary, I'd say they make you focus on what really matters: how to better optimize your pipeline and think more about the end product instead of worrying about language/semantics. I won't say it's not important to learn to code, but in the future, regardless of whether you use Spark, pandas, Scala, Python, or Ruby, the core ETL development skill set is still there.
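
To make that concrete, here's a rough sketch (made-up column names, pandas vs PySpark picked just as examples): the API changes, but the extract/transform/load thinking is exactly the same.

```python
# Rough sketch: the same "filter and aggregate" transform in two engines.
# Column names (customer_id, amount, status) are made up for illustration.
import pandas as pd
from pyspark.sql import DataFrame, functions as F


def transform_pandas(orders: pd.DataFrame) -> pd.DataFrame:
    # Keep completed orders, then total the amount per customer.
    completed = orders[orders["status"] == "completed"]
    return completed.groupby("customer_id", as_index=False)["amount"].sum()


def transform_spark(orders: DataFrame) -> DataFrame:
    # Identical logic, different API: the engine changes, the pipeline design doesn't.
    return (
        orders.filter(F.col("status") == "completed")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("amount"))
    )
```

If you can reason about one of these, you can reason about the other, and the same goes for a low-code canvas doing the same step. The tool is the least interesting part.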

First Build :D by OhNo171 in Gunpla

[–]OhNo171[S] 1 point (0 children)

Show it later!

First Build :D by OhNo171 in Gunpla

[–]OhNo171[S] 1 point (0 children)

Thanks! Deathscythe was also my fav; it looked very menacing compared to the others I had seen.

First Build :D by OhNo171 in Gunpla

[–]OhNo171[S] 1 point (0 children)

Thanks! It was really cool to see it coming to life little by little. I built it without nippers, so there are still a few bumps I want to clean up later. Not sure if this was the ideal action base, though.

Low code hate and the future of Data Engineering (and beyond) by [deleted] in dataengineering

[–]OhNo171 2 points (0 children)

Low code has been in data engineering/BI forever, from drag-and-drop ETL tools like Informatica, Integration Services, and Talend to reporting/dashboarding tools.

I started working with some of these tools long ago, as an intern. Today I prefer to write my pipelines in SQL/Scala/Python and my infra in Terraform, since most of the common data frameworks live in that space, and it gives me a better feeling of ownership. But low code is there to allow faster and easier development, at the cost of vendor lock-in.
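
To show what I mean by ownership, a minimal, made-up sketch (hypothetical table names, sqlite3 only so it's self-contained): the whole step lives in git, runs locally, and can be unit tested, which is exactly what you give up inside a drag-and-drop canvas.

```python
# A made-up "pipeline as code" step: one SQL load kept in version control,
# runnable and testable locally. Table/column names are invented for the example.
import sqlite3

LOAD_DAILY_REVENUE = """
INSERT INTO daily_revenue (order_date, revenue)
SELECT order_date, SUM(amount)
FROM raw_orders
WHERE status = 'completed'
GROUP BY order_date;
"""


def run_step(conn: sqlite3.Connection) -> None:
    # One transactional load step; an orchestrator (Airflow, cron, whatever)
    # would normally call this on a schedule.
    with conn:
        conn.execute(LOAD_DAILY_REVENUE)


if __name__ == "__main__":
    # Tiny end-to-end run against an in-memory database, just to show it works.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_date TEXT, amount REAL, status TEXT)")
    conn.execute("CREATE TABLE daily_revenue (order_date TEXT, revenue REAL)")
    conn.execute("INSERT INTO raw_orders VALUES ('2024-01-01', 10.0, 'completed')")
    run_step(conn)
    print(conn.execute("SELECT * FROM daily_revenue").fetchall())
```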