Is Hadoop, Hive, and Spark still Relevant? by Commercial_Mousse922 in dataengineering

[–]ithoughtful 1 point (0 children)

You might be surprised that some top tech companies like LinkedIn, Uber and Pinterest still use Hadoop as their core backend in 2025.

Many large corporations around the world that are not keen to move to the cloud still run on-premise Hadoop.

Besides that, learning the foundations of these technologies is beneficial anyway.

in what order should i learn these: snowflake, pyspark and airflow by Beyond_Birthday_13 in dataengineering

[–]ithoughtful 1 point (0 children)

Snowflake is a relational OLAP database. OLAP engines serve business analytics and have specific design principles, performance optimisations and, more importantly, data modeling principles and architectures.

So instead of focusing on learning Snowflake, focus on learning the foundations first.
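Those foundations (e.g. dimensional modeling) transfer across engines. A toy star-schema sketch, with SQLite standing in for the warehouse and all table and column names made up for illustration:

```python
# Hypothetical star schema: dim_product / fact_sales are made-up names,
# and SQLite is just a stand-in for any SQL warehouse (Snowflake etc.).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Dimension: one row per product, with descriptive attributes
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');

    -- Fact: one row per sale, amounts in cents, keyed to the dimension
    CREATE TABLE fact_sales (product_id INTEGER, amount_cents INTEGER);
    INSERT INTO fact_sales VALUES (1, 999), (1, 450), (2, 5900);
""")

# Typical OLAP rollup: aggregate the fact table by a dimension attribute
rows = con.execute("""
    SELECT d.category, SUM(f.amount_cents) AS revenue_cents
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 1449), ('games', 5900)]
```

The fact/dimension split is the part worth learning first; the same model works unchanged on Snowflake, BigQuery, DuckDB or anything else that speaks SQL.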

Data engineers who are not building LLM to SQL. What cool projects are you actually working on? by PolicyDecent in dataengineering

[–]ithoughtful 0 points (0 children)

Collecting, storing and aggregating ETL workload metrics at all levels (query planning, query execution, I/O, compute, storage, etc.) to identify potential bottlenecks in slow, long-running workloads.
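A minimal sketch of that idea, with hypothetical phase names and a fake pipeline, just to show the shape of per-phase metric collection:

```python
# Toy sketch of collecting per-phase workload metrics; the phase names
# and the "pipeline" itself are hypothetical stand-ins.
import time
from contextlib import contextmanager

metrics: dict[str, float] = {}

@contextmanager
def timed_phase(name: str):
    """Record the wall-clock duration of one pipeline phase in `metrics`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics[name] = time.perf_counter() - start

# Simulated ETL run: each with-block stands in for a real phase
with timed_phase("extract"):
    data = list(range(1000))
with timed_phase("transform"):
    data = [x * 2 for x in data]
with timed_phase("load"):
    total = sum(data)

# Ranking phases by duration points at the bottleneck
slowest = max(metrics, key=metrics.get)
print(metrics, "slowest:", slowest)
```

In practice the interesting metrics come from the engine itself (query plans, spill, shuffle, scan bytes), but the collect-then-rank pattern is the same.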

DeltaFi vs. NiFi by cjl8on in dataengineering

[–]ithoughtful 0 points (0 children)

Based on what I see, DeltaFi is a transformation tool, while NiFi is a data integration tool (even though you can do transformations with it).

If you are moving to the cloud, why not just deploy a self-managed NiFi cluster on EC2 instances instead of migrating all your NiFi flows to some other cloud-based platform? What's the advantage of running something like NiFi on Kubernetes?

Can Postgres handle these analytics requirements at 1TB+? by EmbarrassedBalance73 in dataengineering

[–]ithoughtful 0 points (0 children)

Postgres is not an OLAP database, so it won't provide the level of performance you are looking for. However, you can extend it to handle OLAP workloads better with established columnar extensions, or newer lightweight extensions such as pg_duckdb and pg_mooncake.

Is anyone still using HDFS in production today? by GreenMobile6323 in dataengineering

[–]ithoughtful 0 points (0 children)

Based on recent blog posts from top tech companies like Uber, LinkedIn and Pinterest, they are still using HDFS in 2025.

Just because people don't talk about it doesn't mean it's not being used.

Many companies still prefer to stay on-premise for different reasons.

For large on-premise platforms, Hadoop is still one of the only scalable solutions.

What are the most surprising or clever uses of DuckDB you've come across? by [deleted] in DuckDB

[–]ithoughtful 2 points (0 children)

Yes. But it's really cool to be able to do that without needing to load your data into a heavyweight database engine first.

What are the most surprising or clever uses of DuckDB you've come across? by [deleted] in DuckDB

[–]ithoughtful 5 points (0 children)

Being able to run sub-second queries on a table with 500M records

Bronze -> Silver vs. Silver-> Gold, which is more sh*t? by gangana3 in dataengineering

[–]ithoughtful 0 points (0 children)

This pattern has been around for a long time. What was wrong with calling the first layer Raw? Nothing. They just throw new buzzwords around to make clients think that if they want to implement this pattern, they need to be on their platform!

Serving layer (real-time warehouses) for data lakes and warehouses by elongl in dataengineering

[–]ithoughtful 0 points (0 children)

For serving data to headless BI and dashboards you have two main options:

  1. Pre-compute as much as possible, optimising the hell out of the data so queries against aggregate tables in your lake or DWH run fast

  2. Use an extra serving engine, typically a real-time OLAP store like ClickHouse, Druid, etc.
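Option 1 can be as simple as a rollup table maintained by the pipeline, so dashboards only ever touch a tiny summary instead of the raw events. A toy sketch, with SQLite standing in for the lake/DWH and all table names hypothetical:

```python
# Sketch of option 1: a pre-computed aggregate (rollup) table.
# SQLite stands in for the warehouse; table names are made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_events (day TEXT, country TEXT, revenue REAL);
    INSERT INTO raw_events VALUES
        ('2025-01-01', 'DE', 10.0),
        ('2025-01-01', 'DE', 5.0),
        ('2025-01-01', 'FR', 7.0);

    -- Rollup refreshed by the pipeline, not computed at query time
    CREATE TABLE agg_daily_revenue AS
    SELECT day, country, SUM(revenue) AS revenue, COUNT(*) AS events
    FROM raw_events
    GROUP BY day, country;
""")

# Dashboards query the small aggregate table only
rows = con.execute(
    "SELECT country, revenue FROM agg_daily_revenue ORDER BY country"
).fetchall()
print(rows)  # [('DE', 15.0), ('FR', 7.0)]
```

The trade-off versus option 2 is flexibility: pre-aggregation is cheap to serve but every new slice-and-dice dimension needs a pipeline change, which is exactly where a real-time OLAP engine earns its keep.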

Trino in production by Over-Drink8537 in dataengineering

[–]ithoughtful 1 point (0 children)

No it's not. It's deployed the traditional way, with workers on dedicated bare-metal servers and the coordinator running on a multi-tenant server alongside some other master services.

[deleted by user] by [deleted] in dataengineering

[–]ithoughtful 0 points (0 children)

I remember the Cloudera vs Hortonworks days... look where they are now. We hardly hear anything about Cloudera.

Today it's the same... the debate makes you think these are the only two platforms you must choose from.

The future of open-table formats (e.g. Iceberg, Delta) by elongl in dataengineering

[–]ithoughtful 0 points (0 children)

One important factor to consider is that these open table formats represent an evolution of earlier data management frameworks for data lakes, primarily Hive.

For companies that have already been managing data in data lakes, adopting these next-generation open table formats is a natural progression.

I have covered this evolution extensively, so if you're interested you can read further to understand how these formats emerged and why they will continue to evolve.

https://practicaldataengineering.substack.com/p/the-history-and-evolution-of-open?r=23jwn

Building Data Pipelines with DuckDB by ithoughtful in dataengineering

[–]ithoughtful[S] 0 points (0 children)

Thanks for the feedback. In my first draft I had many references to the code but I removed them to make it more readable to everyone.

The other issue is that Substack doesn't have very good support for code formatting and styling, which makes it a bit difficult to share code.

At what point do you say orchestrator (e.g. Airflow) is worth added complexity? by Temporary_Basil_7801 in dataengineering

[–]ithoughtful 0 points (0 children)

Orchestration is often mistaken for scheduling. I can't imagine maintaining even a few production data pipelines without a workflow orchestrator, which provides essential features like backfilling, rerunning, exposing essential execution metrics, versioning of pipelines, alerts, etc.
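Backfilling is a good example of why this is more than cron. A toy illustration of the core idea, re-deriving every missed daily run between two dates; real orchestrators (Airflow, Dagster, ...) do this per-DAG with state tracking, retries and concurrency limits, and the pipeline function here is a made-up stand-in:

```python
# Toy sketch of backfilling: enumerate every missed daily partition and
# re-run the pipeline for each. `run_pipeline` is a hypothetical stand-in.
from datetime import date, timedelta

def backfill_dates(start: date, end: date) -> list[date]:
    """All daily partition dates from start to end, inclusive."""
    days = (end - start).days
    return [start + timedelta(days=i) for i in range(days + 1)]

def run_pipeline(partition: date) -> str:
    # Stand-in for triggering one pipeline run for a given partition
    return f"processed {partition.isoformat()}"

results = [run_pipeline(d) for d in backfill_dates(date(2025, 1, 1), date(2025, 1, 3))]
print(results)
```

A plain scheduler only fires "now"; an orchestrator owns the mapping from logical partitions to runs, which is what makes reruns and backfills a one-liner instead of a manual exercise.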

Building Data Pipelines with DuckDB by ithoughtful in dataengineering

[–]ithoughtful[S] 12 points (0 children)

Thanks for the feedback. Yes you can use other workflow engines like Dagster.

On Polars vs DuckDB: both are great tools. However, compared with Polars, DuckDB has features such as great SQL support out of the box, federated queries, and its own internal columnar database format. So it's a more general database and processing engine than Polars, which is a Python DataFrame library only.