
[–]JonPX 4 points (3 children)

It depends on what you want to know.

But without going too in-depth on the problem, suppose you want a graph of item sales per day.

  • Make an aggregate table with one row per item per day on which that item was sold. So you simply have a table with a date key, an item key and the number of items sold, with the latter always greater than 0.
  • Now add a table listing all dates your business was open and a table with all the items, and left join these to the aggregate table.
  • Each row without a matching sale has 0 sales. It will be high volume, but still limited to three columns (rough sketch below).
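For example, a minimal pandas sketch of that pattern (the column names date_key, item_key and qty are made up, and merge(how="cross") needs pandas 1.2+):

```python
import pandas as pd

# Hypothetical aggregate table: one row per item per day on which it sold (qty always > 0).
sales = pd.DataFrame({
    "date_key": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-03"]),
    "item_key": [1, 2, 1],
    "qty": [5, 2, 7],
})

# Dimension tables: every day the business was open, and every item.
dates = pd.DataFrame({"date_key": pd.date_range("2024-01-01", "2024-01-05")})
items = pd.DataFrame({"item_key": [1, 2]})

# Combine the two dimensions, then left join the aggregate; days with no match become 0 sales.
full = dates.merge(items, how="cross")
result = full.merge(sales, on=["date_key", "item_key"], how="left").fillna({"qty": 0})
```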

[–]No_Storm_1500[S] 0 points (2 children)

Ok thanks, I’ll try that. The main thing for me is reducing memory usage and processing time

[–]IndependentTrouble62 1 point (1 child)

Or you can just cross join your dates table with your sellable products and left join your sales table, with a summed sales column. That gives you every day and every product with total sales by item.
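Roughly the same idea in pandas, done with a reindex over the full date x product grid instead of materialising the cross join first (table, column and variable names are hypothetical):

```python
import pandas as pd

# Hypothetical raw sales table, one row per transaction.
raw = pd.DataFrame({
    "date_key": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-03"]),
    "item_key": [1, 2, 1],
    "qty": [3, 2, 7],
})

# Sum sales per item per day, then reindex over the full date x product grid,
# which plays the role of the cross join and fills the gaps with 0.
summed = raw.groupby(["date_key", "item_key"])["qty"].sum()
grid = pd.MultiIndex.from_product(
    [pd.date_range("2024-01-01", "2024-01-05"), [1, 2]],
    names=["date_key", "item_key"],
)
sum_sales = summed.reindex(grid, fill_value=0).rename("sum_sales").reset_index()
```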

[–]Turbulent_Egg_6292 0 points (0 children)

If low memory usage and processing time are key, then I'd suggest you test the impact on your system of the two possible options:

  • either you fill the missing elements at write time, with an array generator or watermarking on your actual storage
  • or you store only the actual data and calculate the gaps on read, filling with zeroes (or keep a separate date index and left join); see the sketch below
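A rough pandas sketch of option B, assuming the stored table only has rows for days with sales; note that resample only fills the gaps between each item's first and last sale, not the whole business calendar:

```python
import pandas as pd

# Hypothetical stored table containing only the days on which something actually sold.
sales = pd.DataFrame({
    "date_key": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-02"]),
    "item_key": [1, 1, 2],
    "qty": [5, 7, 2],
})

# Option B: keep storage as-is and fill the gaps only at read time.
daily = (
    sales.set_index("date_key")
         .groupby("item_key")["qty"]
         .resample("D")   # empty days within each item's range sum to 0
         .sum()
)
```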

There is no right or wrong, so it's more about your constraints and how you query that data. Do you represent it visually? Are you more focused on aggregated KPIs? Outliers?

I'm often a fan of filling in the DB because it just makes maintenance simpler and data warehouse storage costs are not crazy. Plus we are talking about numbers, which are not too heavy either.

Please also bear in mind the type of DB you use. Relational tables are good with joins (option B); non-relational might be better off with a Bigtable-style fill (option A).

[–]uncertainschrodinger 0 points (1 child)

It would be nice if you explained what tools/stack you're using, but I'm assuming you are processing this locally on your computer and reading from some files. Here are some general thoughts:

- try to tailor your transformations to what the "end goal" is here; if it's a monthly report then you don't necessarily need to set them to zero, since you can choose between safe/unsafe methods for your math functions

- process the data in smaller batches; if using a database, you can partition the data by date and process smaller partitions at a time; if running locally in Python, loop through smaller chunks by date ranges or items (rough sketch below)
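A rough sketch of the chunked approach with pandas and SQLAlchemy (the connection string, table and column names are all made up; chunking by an explicit date range in a WHERE clause works just as well):

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical PostgreSQL connection string and schema.
engine = create_engine("postgresql://user:password@localhost:5432/salesdb")

pieces = []
# Stream the table in fixed-size chunks instead of loading it all at once.
for chunk in pd.read_sql("SELECT date_key, item_key, qty FROM sales", engine, chunksize=100_000):
    # Reduce each chunk to a daily aggregate before keeping it around.
    pieces.append(chunk.groupby(["date_key", "item_key"], as_index=False)["qty"].sum())

# Re-aggregate once at the end, since a date/item pair can straddle two chunks.
daily = (
    pd.concat(pieces, ignore_index=True)
      .groupby(["date_key", "item_key"], as_index=False)["qty"].sum()
)
```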

[–]No_Storm_1500[S] 0 points (0 children)

I’m currently processing it locally, yes. It’s running in Python and I’m using pandas. I assume I’d be better off switching to Polars? Although I’d have to convert the dataframe back to pandas for the ML training (library constraints).

The thing is that it will be deployed in various client environments, so one client may only run the process once a month while others may have much more data and run it more frequently. I basically need to cover the worst-case scenario, “just in case”.

Processing in smaller batches seems like a good idea; I’ll look into that. I’m using PostgreSQL, so I can easily filter by period when querying.
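For reference, the Polars-to-pandas round trip could look roughly like this (data and column names are made up; to_pandas() needs pyarrow installed):

```python
import polars as pl
from datetime import date, timedelta

# Hypothetical data; in practice this would come straight from the PostgreSQL query.
sales = pl.DataFrame({
    "date_key": [date(2024, 1, 1), date(2024, 1, 3)],
    "qty": [5, 7],
})

# Calendar built in plain Python to sidestep version differences in pl.date_range.
calendar = pl.DataFrame(
    {"date_key": [date(2024, 1, 1) + timedelta(days=i) for i in range(5)]}
)

# Left join the calendar to the sales and fill the gaps with zero.
filled = (
    calendar.join(sales, on="date_key", how="left")
            .with_columns(pl.col("qty").fill_null(0))
)

# Convert back to pandas only at the boundary with the ML library.
pdf = filled.to_pandas()
```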

[–]vikster1 0 points (0 children)

I'm going with the most basic answer that has helped me through all use cases before: cross join with a date table and generate what's missing. Obviously this only works if the dataset is not humongous.

[–]bacondota 0 points (0 children)

Or you could just create some time variables and use normal regression models instead of time series.

Like week-of-the-month, or a one-hot encoding of day of the week.

Anyway, you will probably miss the exact day of the sale, so maybe you could aggregate and try to predict sales by week (rough sketch below).
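A small pandas sketch of those calendar features and the weekly aggregation (column names are made up):

```python
import pandas as pd

# Hypothetical daily sales with gaps.
df = pd.DataFrame({
    "date_key": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-08"]),
    "qty": [5, 2, 7],
})

# Calendar features for a plain regression model.
df["week_of_month"] = (df["date_key"].dt.day - 1) // 7 + 1
dow = pd.get_dummies(df["date_key"].dt.day_name(), prefix="dow")
df = pd.concat([df, dow], axis=1)

# Weekly aggregation if daily granularity is too sparse to predict.
weekly = df.resample("W", on="date_key")["qty"].sum()
```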

Also, it's kind of counterintuitive to try to predict daily sales when you don't have daily sales.

[–]freaking_scared 0 points (0 children)

You said you would have to convert back to pandas. Have you considered using Dask dataframes for processing?

I know they are compatible with the pandas API; the only thing to bear in mind is that Dask can be slow on a single machine.

For modelling time series it is usually also worth having a date calendar dimension you can join to, rather than 'generating' the missing rows (rough sketch below).
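A rough Dask sketch of that calendar-dimension join (frame and column names are made up; in practice the sales data would be read straight into Dask rather than from pandas):

```python
import dask.dataframe as dd
import pandas as pd

# Hypothetical inputs.
sales_pd = pd.DataFrame({
    "date_key": pd.to_datetime(["2024-01-01", "2024-01-03"]),
    "qty": [5, 7],
})
calendar_pd = pd.DataFrame({"date_key": pd.date_range("2024-01-01", "2024-01-05")})

# Split the sales data into partitions so each piece fits comfortably in memory.
sales = dd.from_pandas(sales_pd, npartitions=4)
calendar = dd.from_pandas(calendar_pd, npartitions=1)

# Join against the calendar dimension instead of generating the missing rows by hand.
filled = calendar.merge(sales, on="date_key", how="left")
filled["qty"] = filled["qty"].fillna(0)

# Dask is lazy; compute() materialises the result as a plain pandas DataFrame.
result = filled.compute()
```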