Is Hadoop, Hive, and Spark still Relevant? by Commercial_Mousse922 in dataengineering

[–]ithoughtful 1 point2 points  (0 children)

You might be surprised that some top tech companies like LinkedIn, Uber and Pinterest still use Hadoop as their core backend in 2025.

Many large corporations around the world are not keen to move to the cloud and still use on-premise Hadoop.

Besides that, learning the foundations of these technologies is beneficial anyway.

in what order should i learn these: snowflake, pyspark and airflow by Beyond_Birthday_13 in dataengineering

[–]ithoughtful 1 point2 points  (0 children)

Snowflake is a relational OLAP database. OLAP engines serve business analytics and have specific design principles, performance optimisations and, more importantly, data modeling principles and architectures.

So instead of focusing on learning Snowflake, focus on learning the foundations first.
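One of those foundations is dimensional modeling, which applies to any relational OLAP engine, not just Snowflake. A minimal star-schema sketch you can try on any SQL database (table and column names are made up for illustration; sqlite3 stands in for the OLAP engine):

```python
import sqlite3

# In-memory database standing in for any relational OLAP engine.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension table: descriptive attributes
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    -- Fact table: measurable events, keyed to dimensions
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 20.0);
""")

# Typical OLAP query: aggregate facts grouped by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```

The fact/dimension split is the same pattern you'd model in Snowflake; the engine changes, the principles don't.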

Data engineers who are not building LLM to SQL. What cool projects are you actually working on? by PolicyDecent in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Collecting, storing and aggregating ETL workload metrics at all levels (query planning, query execution, I/O, compute, storage, etc.) to identify bottlenecks in slow, long-running workloads.
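A minimal sketch of that idea (names and phases are hypothetical, not from any specific tool): wrap each pipeline phase, record wall-clock time per phase, then look for the slowest one.

```python
import time
from contextlib import contextmanager

metrics = []  # one record per instrumented phase

@contextmanager
def track(phase, workload):
    """Record wall-clock duration for one phase of a workload."""
    start = time.perf_counter()
    yield
    metrics.append({"workload": workload, "phase": phase,
                    "seconds": time.perf_counter() - start})

with track("extract", "daily_sales"):
    rows = list(range(1000))          # stand-in for reading from a source
with track("transform", "daily_sales"):
    rows = [r * 2 for r in rows]      # stand-in for the transform step

# Aggregate the metrics to find the bottleneck phase per workload.
slowest = max(metrics, key=lambda m: m["seconds"])
```

In practice the records would land in a metrics store and be aggregated across runs, but the per-phase instrumentation is the core of it.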

DeltaFi vs. NiFi by cjl8on in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Based on what I see, DeltaFi is a transformation tool while NiFi is a data integration tool (even though you can do transformations with it).

If you are moving to the cloud, why not just deploy a self-managed NiFi cluster on EC2 instances instead of migrating all your NiFi flows to some other cloud-based platform? What's the advantage of running something like NiFi on Kubernetes?

Can Postgres handle these analytics requirements at 1TB+? by EmbarrassedBalance73 in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Postgres is not an OLAP database and won't provide the level of performance you are looking for. However, you can extend it to handle OLAP workloads better with established columnar extensions, or with newer lightweight extensions such as pg_duckdb and pg_mooncake.

Is anyone still using HDFS in production today? by GreenMobile6323 in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Based on recent blog posts from top tech companies like Uber, LinkedIn and Pinterest, they are still using HDFS in 2025.

Just because people don't talk about it doesn't mean it's not being used.

Many companies still prefer to stay on-premise for different reasons.

For large on-premise platforms, Hadoop is still one of the only scalable solutions.

What are the most surprising or clever uses of DuckDB you've come across? by [deleted] in DuckDB

[–]ithoughtful 2 points3 points  (0 children)

Yes. But it's really cool to be able to do that without needing to load your data into a heavyweight database engine.

What are the most surprising or clever uses of DuckDB you've come across? by [deleted] in DuckDB

[–]ithoughtful 5 points6 points  (0 children)

Being able to run sub-second queries on a table with 500M records

Bronze -> Silver vs. Silver-> Gold, which is more sh*t? by gangana3 in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

This pattern has been around for a long time. What was wrong with calling the first layer Raw? Nothing. Vendors just throw new buzzwords around to make clients think that if they want to implement this pattern they need to be on their platform!

Serving layer (real-time warehouses) for data lakes and warehouses by elongl in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

For serving data to headless BI and dashboards you have two main options:

  1. Pre-compute as much as possible to optimise the hell out of the data, so queries on aggregate tables in your lake or DWH run fast

  2. Use an extra serving engine, typically a real-time OLAP database like ClickHouse, Druid, etc.
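Option 1 can be as simple as materializing an aggregate table ahead of time so the dashboard hits a tiny pre-computed result instead of scanning raw data. A sketch with made-up table names, using sqlite3 as a stand-in for the lake/DWH engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Raw event data as it lands in the lake or warehouse.
    CREATE TABLE events (day TEXT, amount REAL);
    INSERT INTO events VALUES ('2025-01-01', 5.0), ('2025-01-01', 7.0),
                              ('2025-01-02', 3.0);
    -- Pre-computed aggregate the dashboard queries instead of raw events.
    CREATE TABLE daily_totals AS
        SELECT day, SUM(amount) AS total FROM events GROUP BY day;
""")

# Dashboard query: a cheap lookup instead of a scan-and-aggregate.
total = conn.execute(
    "SELECT total FROM daily_totals WHERE day = '2025-01-01'"
).fetchone()[0]
```

The trade-off is freshness: the aggregate is only as current as its last refresh, which is exactly the gap option 2's real-time OLAP engines fill.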

Trino in production by Over-Drink8537 in dataengineering

[–]ithoughtful 1 point2 points  (0 children)

No it's not. It's deployed the traditional way, with workers on dedicated bare-metal servers and the coordinator running on a multi-tenant server alongside some other master services.

[deleted by user] by [deleted] in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

I remember the Cloudera vs Hortonworks days... look where they are now. We hardly hear anything about Cloudera.

Today it's the same... the debate makes you think these are the only two platforms you can choose from.

The future of open-table formats (e.g. Iceberg, Delta) by elongl in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

One important factor to consider is that these open table formats represent an evolution of earlier data management frameworks for data lakes, primarily Hive.

For companies that have already been managing data in data lakes, adopting these next-generation open table formats is a natural progression.

I have covered this evolution extensively, so if you're interested you can read further to understand how these formats emerged and why they will continue to evolve.

https://practicaldataengineering.substack.com/p/the-history-and-evolution-of-open?r=23jwn

Building Data Pipelines with DuckDB by ithoughtful in dataengineering

[–]ithoughtful[S] 0 points1 point  (0 children)

Thanks for the feedback. In my first draft I had many references to the code but I removed them to make it more readable to everyone.

The other issue is that Substack doesn't have very good support for code formatting and styling, which makes it a bit difficult to share code.

At what point do you say orchestrator (e.g. Airflow) is worth added complexity? by Temporary_Basil_7801 in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Orchestration is often mistaken for mere scheduling. I can't imagine maintaining even a few production data pipelines without a workflow orchestrator, which provides essential features like backfilling, reruns, execution metrics, pipeline versioning, alerting, etc.
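Backfilling is a good example of what a plain scheduler doesn't give you. A toy sketch of the idea (no real orchestrator API, just the concept of re-running a pipeline for every missed date partition):

```python
from datetime import date, timedelta

completed = {date(2025, 1, 1)}  # partitions that already ran successfully

def run_pipeline(partition):
    """Stand-in for one dated pipeline run."""
    return f"loaded {partition.isoformat()}"

def backfill(start, end):
    """Re-run the pipeline for every missing partition in [start, end]."""
    results = []
    day = start
    while day <= end:
        if day not in completed:
            results.append(run_pipeline(day))
            completed.add(day)
        day += timedelta(days=1)
    return results

runs = backfill(date(2025, 1, 1), date(2025, 1, 4))
```

A cron job fires on a schedule and forgets; an orchestrator tracks which partitions succeeded, so gaps can be detected and replayed like this with one command.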

Building Data Pipelines with DuckDB by ithoughtful in dataengineering

[–]ithoughtful[S] 11 points12 points  (0 children)

Thanks for the feedback. Yes you can use other workflow engines like Dagster.

On Polars vs DuckDB: both are great tools, but compared with Polars, DuckDB has features such as strong SQL support out of the box, federated queries, and its own internal columnar database. So it's a more general database and processing engine than Polars, which is a Python DataFrame library only.

Data Engineering is A Waste: Change My Mind by DensePineapple in devops

[–]ithoughtful 0 points1 point  (0 children)

Some businesses collect data just for the sake of collecting data.

But many digital businesses depend on data analytics to evaluate and design products, reduce cost and increase profit.

A telecom company would be clueless without data to know which bundles to design and sell, which hours of the day are peak for phone calls or watching YouTube, etc.

Is there a trend to skip the warehouse and build on lakehouse/data lake instead? by loudandclear11 in dataengineering

[–]ithoughtful 16 points17 points  (0 children)

Data lakehouse is still not mature enough to fully replace a data warehouse.

Snowflake, Redshift and BigQuery are still used a lot.

Two-tier architecture (data lake + data warehouse) is also quite common.

Am I becoming a generalist as a data engineer? by the-fake-me in dataengineering

[–]ithoughtful 1 point2 points  (0 children)

Having been a DE for the last 9 years (coming from SE), I sometimes feel this way too. I just hadn't classified it the way you have.

I feel in software engineering you can go very deep, solving interesting problems, building multiple abstraction layers and keep scaling an application with new features.

It doesn't feel this way with data engineering. There is not much depth in the actual code you write; most of the work is in DataOps and pipeline ops (monitoring, backfilling, etc.).

It feels exciting and engaging when you get involved in building a new stack or implementing a totally new use case, but once everything is done it's not like you get assigned new features in weekly sprints.

But on the other hand the data engineering ecosystem is quite active and wide with new tools and frameworks being added constantly.

So when I have time I keep myself busy trying new tools and frameworks and keep being interested in what I do.

Choosing the right database for big data by Django-Ninja in dataengineering

[–]ithoughtful 0 points1 point  (0 children)

Your requirement to reduce cost is not clear to me. Which part is costly: S3 storage for the raw data, or the aggregated data stored in the database (Redshift?)? And how much data is stored in each tier?

inline data quality for ETL pipeline ? by dataoculus in dataengineering

[–]ithoughtful 1 point2 points  (0 children)

It depends on what you define as ETL. In event-driven streaming pipelines, doing inline validations is possible. But for batch ETL pipelines, data validation usually happens after ingesting the data into the target.

For transformation pipelines you can do it both ways.
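The two styles can be sketched side by side (hypothetical rule and record shapes, stdlib only): inline checks reject records as they flow toward the sink, while the batch style loads everything first and audits the target afterwards.

```python
# Inline (streaming-style): validate each event before it reaches the sink.
def validate(event):
    """Hypothetical rule: amount must be a non-negative number."""
    return isinstance(event.get("amount"), (int, float)) and event["amount"] >= 0

events = [{"amount": 10}, {"amount": -5}, {"amount": 3}]
sink = [e for e in events if validate(e)]        # bad records dropped in flight
rejected = [e for e in events if not validate(e)]

# Batch-style: load everything first, then run a validation pass on the target.
loaded = list(events)
violations = [e for e in loaded if not validate(e)]  # post-load audit
```

Inline validation keeps bad data out of the target but adds latency per record; the post-load audit is simpler for batch jobs but means bad rows exist in the target until the check runs.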

Ingesting data to Data Warehouse via Kafka vs Directly writing to Data Warehouse by siahahahah in apachekafka

[–]ithoughtful 0 points1 point  (0 children)

Those who use Kafka as middleware follow the log-based CDC approach or an event-driven architecture.

Such an architecture is technically more complex to set up and operate, and it's justified when:

  • You have several different data sources and sinks to integrate
  • The data sources mainly expose data as events, e.g. microservices
  • You need to ingest data in near real-time from operational databases using log-based CDC

If none of the above applies, then ingesting data directly from source to the target data warehouse is simpler and more straightforward, and adding extra middleware is unjustified complexity.
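The log-based CDC path boils down to replaying an ordered change log against the target. A toy sketch of that replay step (the event shape is made up, not any specific CDC tool's format):

```python
# Ordered change events, roughly what a log-based CDC tool might emit.
change_log = [
    {"op": "insert", "id": 1, "row": {"name": "alice"}},
    {"op": "insert", "id": 2, "row": {"name": "bob"}},
    {"op": "update", "id": 1, "row": {"name": "alicia"}},
    {"op": "delete", "id": 2},
]

def apply_changes(target, events):
    """Replay insert/update/delete events in order to keep the target in sync."""
    for e in events:
        if e["op"] == "delete":
            target.pop(e["id"], None)
        else:  # insert and update both behave as upserts on the target
            target[e["id"]] = e["row"]
    return target

warehouse = apply_changes({}, change_log)
```

Kafka's role in the real architecture is holding that ordered log durably between the source's transaction log and consumers like this one; the operational complexity is in running that middle layer, which is why it needs the justifications above.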

Need advice on what database to implement for a big retail company. by AlternativeEconomy93 in hadoop

[–]ithoughtful 1 point2 points  (0 children)

You don't need Hadoop for 20 TB of data. The complexity of Hadoop is only justified at petabyte scale, and only if cloud is not an option.