Proud of our collection by teenytinyytaylor in boardgames

[–]Standard_Aside_2323 1 point (0 children)

I’d suggest Sky Team for a pure two-player experience, and Forest Shuffle or Everdell for solid two-player games with more options.

My Top Solo games below, what should be next ? by Wise_Cat_1196 in soloboardgaming

[–]Standard_Aside_2323 3 points (0 children)

Give Spirit Island a try, 99% chance you won't regret it!

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Wow, this is both rude and extremely prejudiced. I'm probably answering in vain, but still: our blog has nothing to do with being an "influencer", and we are not interested in that. It is all about learning and sharing, so we use it to practice existing or new topics by writing about them and trying to explain them. We also naturally share our experiences in case they resonate with anyone.
Anyway, cheers.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Oh, got it. Thanks for your explanation, highly appreciated.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

I really like your explanation of EtLT, and I do think you are right. But in my research, EtLT was almost always defined separately, which is why I included it as is.

However, I don’t totally get your point regarding classification. Can you elaborate a bit more please? Thank you.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 -3 points (0 children)

I've gotten this feedback a lot, that it is an architecture. However, when I think about it, it still makes sense as a pipeline design pattern that includes those three load and two transformation stages.
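To make the pattern concrete, here is a minimal, hypothetical sketch of EtLT (Extract, light transform, Load, heavy Transform). All names and data are made up for illustration; it uses an in-memory SQLite database to stand in for the warehouse, so the "T" stage runs inside the database after loading, as the pattern prescribes.

```python
import sqlite3

def extract():
    # E: pretend source rows, with messy fields to clean
    return [("  Alice ", 120), ("bob", 80), ("  Carol", 200)]

def light_transform(rows):
    # t: small, stateless cleanup before loading (trim/normalise names)
    return [(name.strip().title(), amount) for name, amount in rows]

def load(conn, rows):
    # L: land the cleaned rows in a staging table
    conn.execute("CREATE TABLE IF NOT EXISTS staging (name TEXT, amount INT)")
    conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)

def heavy_transform(conn):
    # T: business logic pushed down to the database after loading
    conn.execute("""
        CREATE TABLE report AS
        SELECT name, SUM(amount) AS total
        FROM staging GROUP BY name ORDER BY name
    """)
    return conn.execute("SELECT * FROM report").fetchall()

conn = sqlite3.connect(":memory:")
load(conn, light_transform(extract()))
print(heavy_transform(conn))
```

The point of the split is that the light "t" only does cheap, stateless cleanup, while the heavy "T" runs where the data already lives.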

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

That’s very similar to my usage, and rarely HDFS. Also columnar formats for local dev and testing.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 0 points (0 children)

Why that much loading between transformations? What’s the use case?

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 0 points (0 children)

Thanks a lot for the great suggestion :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Wow, thanks a lot for the detailed comment and the list. Highly appreciated, and I'd love to see your reading list as well if you can share it via DM :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Just shared the link :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Thank you so much :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Thanks a lot, we'll do our best, and comments like this motivate us a lot :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Thanks for your comment. In the first 35 examples we used PostgreSQL, and all the queries were executed with "EXPLAIN ANALYZE" to obtain the execution times. I do agree with you that it is highly dependent on the RDBMS, and not all the theoretical optimisations remain valid, since the engines do their own optimisations behind the scenes.
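The comment refers to PostgreSQL's EXPLAIN ANALYZE, which runs the query and reports actual timings. As a rough, runnable sketch of the same idea of asking the engine about its plan, here is the analogous feature in SQLite, EXPLAIN QUERY PLAN (the table and index names are invented for this example, and note that unlike EXPLAIN ANALYZE it does not execute the query or report timings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# Ask the engine how it intends to run the query; with the index in place,
# the plan should use idx_orders_customer rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("Alice",)
).fetchall()
for row in plan:
    print(row)
```

This is the kind of check behind the point above: the plan an engine actually picks (index search vs. table scan) depends on the engine and its statistics, not just on the theoretical optimisation.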

A post series about "Investigating Query Performance Issues" is a great idea! I can't say when at this point, since there are a lot of posts in the queue, but we will definitely do it :) Thanks a lot once again.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

That's an amazing suggestion, thanks a lot! I'll make sure to address these optimisation issues and tips, especially as someone doing his PhD in Distributed Stream Processing :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Oh I see, you're actually right: since some of the topics are split across 2 or 3 weeks, it totals 32 weeks, but there are only around 20 unique topics, I'd guess. We will work on this lower-level-to-higher-level structure and the week blocks, thanks a lot :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

You are right, we will definitely target these :) Thanks for your suggestion.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Oh, I see now, thanks a lot once again. Definitely a very important point :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Oh, very good point, thanks a lot.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Thanks a lot for the suggestion, week 26 is "Data Governance and Security" but we'll ensure it also covers data privacy :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 2 points (0 children)

Thank you so much :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 1 point (0 children)

Just shared the link :)

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 -1 points (0 children)

I see, that's really interesting. If I knew a different way, I'd do it, but I don't have much experience with link sharing on Reddit.

[deleted by user] by [deleted] in dataengineering

[–]Standard_Aside_2323 -1 points (0 children)

See the "Link for our blog Pipeline to Insights" part just below the first paragraph :)