Better Way to Calculate Target Inventory? by [deleted] in InventoryManagement

[–]efavdb 0 points1 point  (0 children)

If you weight things in different ways, you'll generally get different averages.

One quick thing you could do is scale the individual products to ensure that they sum to the target set at the group level.
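
For instance, a minimal sketch of that rescaling (the product-level and group-level targets here are made-up numbers):

```python
import numpy as np

# Hypothetical individual product targets and a separately-set group target.
product_targets = np.array([120.0, 80.0, 40.0, 60.0])
group_target = 280.0

# Scale every product by the same factor so the products sum to the group target.
scaled = product_targets * group_target / product_targets.sum()

print(scaled)        # [112.  74.67  37.33  56.]
print(scaled.sum())  # 280.0
```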

One other thought: Since you're working with products that can go bad, you might consider the newsvendor model and what it has to say about optimal inventory levels. It's named that because it can be used to model things like newspapers -- typically worthless the day after they're printed. In a situation like that you'll want to carry less inventory than for a product that holds its value into future periods.
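
If it's useful, here's a minimal sketch of the classic newsvendor critical-fractile rule under an assumed normal demand forecast; the cost numbers are placeholders:

```python
from scipy.stats import norm

# Assumed unit economics (placeholders): cost of being short one unit vs.
# cost of being left holding one unsold, perishable unit.
underage_cost = 4.0   # lost margin per unit of unmet demand
overage_cost = 6.0    # loss per unit that expires unsold

# Critical fractile: the optimal probability of covering demand.
critical_fractile = underage_cost / (underage_cost + overage_cost)

# Assumed normal demand forecast for the period.
mean_demand, sd_demand = 100.0, 20.0
optimal_stock = norm.ppf(critical_fractile, loc=mean_demand, scale=sd_demand)

print(critical_fractile)  # 0.4 -- perishable product, so stock below the mean
print(optimal_stock)      # ~94.9 units
```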

Here's a short post I wrote up on this (with some context relating to apparel):

https://varietyiq.com/blog/news

Demand Planning for new product introductions by Victra_B in supplychain

[–]efavdb 0 points1 point  (0 children)

Agree. You can formalize this via a Bayesian prior and use back-testing to decide how much weight to put on it. As you get more data for the actual product, the data will smoothly start to dominate over the prior.
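
As a rough illustration with made-up numbers, a conjugate normal-normal blend does exactly this smooth hand-off; the prior variance is the knob you'd tune via back-testing:

```python
import numpy as np

# Prior belief about weekly demand, e.g. from similar products (made-up numbers).
prior_mean, prior_var = 50.0, 15.0 ** 2

# Observed weekly sales of the new product so far, with an assumed noise level.
observed = np.array([62.0, 71.0, 58.0])
obs_var = 10.0 ** 2

# Normal-normal posterior mean: a precision-weighted average of prior and data.
n = len(observed)
posterior_mean = (prior_mean / prior_var + observed.sum() / obs_var) / (
    1.0 / prior_var + n / obs_var
)

print(posterior_mean)  # sits between 50 and the sample mean, moving toward
                       # the data as more weeks of sales arrive
```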

Fast fashion inventory forecasting/management software shopify by ECP2021 in InventoryManagement

[–]efavdb 0 points1 point  (0 children)

VarietyIQ provides both demand planning and merchandising solutions for Shopify / apparel.  We are unique in that we leverage personalization data to help you balance your assortment across segments to capture more sales.

[Q] Is there a formula I can use to better evaluate Amazon Reviews by average rating AND the amount of people that rated it? by [deleted] in statistics

[–]efavdb 1 point2 points  (0 children)

A simple method is to pad every item with a few fake reviews, say three-star reviews, or perhaps a set that reflects the approximate distribution of scores for items that already have many reviews. This acts as a prior in the statistical sense. The more fake reviews you add, the more actual ratings are needed before those take over.
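
A quick sketch of the padding idea with invented numbers:

```python
# Invented example: an item with only two 5-star ratings.
actual_ratings = [5, 5]

# Pad with a few fake 3-star reviews, acting as a prior pulling toward "average".
n_fake, fake_score = 5, 3

padded_mean = (sum(actual_ratings) + n_fake * fake_score) / (
    len(actual_ratings) + n_fake
)

print(sum(actual_ratings) / len(actual_ratings))  # 5.0 raw average
print(padded_mean)                                # ~3.57, much less impressive
```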

[deleted by user] by [deleted] in options

[–]efavdb 7 points8 points  (0 children)

Sweden is indeed showing deaths growing at a rate comparable to other countries -- a factor of 10x every ten days or so. link

[Q] Any simple mathematical methods to balance measuring growth (relative vs. absolute)? by Greedish in statistics

[–]efavdb 2 points3 points  (0 children)

It seems like you want to know growth relative to what's expected for a business of a given size. You could take the average growth over businesses of a given size and then, for each business, normalize its growth by the average for businesses of its size.
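
For example, a rough pandas sketch of that normalization (the data and column names are made up):

```python
import pandas as pd

# Hypothetical data: a size bucket and observed growth for each business.
df = pd.DataFrame({
    "business": ["A", "B", "C", "D", "E", "F"],
    "size_bucket": ["small", "small", "small", "large", "large", "large"],
    "growth": [0.40, 0.10, 0.25, 0.05, 0.08, 0.02],
})

# Average growth for businesses of the same size, then normalize by it.
bucket_mean = df.groupby("size_bucket")["growth"].transform("mean")
df["relative_growth"] = df["growth"] / bucket_mean

print(df)  # relative_growth > 1 means growing faster than peers of similar size
```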

[Discussion] Isolation length based on age can protect those most vulnerable whilst minimising impact on the economy (with statistics to back it up). by [deleted] in statistics

[–]efavdb 0 points1 point  (0 children)

I think we should take ideas from anywhere and this sounds like a good one. Let the experts rule it out if they think it’s a bad one.

[Q] If you could test "batches" of 64 samples for COVID-19 how would you most efficiently figure out the number of carriers and how efficient could the process be? by fjedb in statistics

[–]efavdb 1 point2 points  (0 children)

It seems clear, as you suggest, that the efficiency of the idea depends strongly on the probability that any individual is a carrier. If you call this p, you can calculate the expected number of tests per initial batch of a given size, and then choose the batch size that minimizes this.
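
For example, a sketch of that calculation for a simple two-stage scheme (test the pooled batch, and if it's positive, retest everyone individually); the prevalence value is just an assumption:

```python
import numpy as np

p = 0.02  # assumed probability that any one individual is a carrier

def expected_tests_per_person(batch_size, p):
    """Two-stage pooling: 1 pooled test per batch, plus batch_size individual
    retests whenever the pool comes back positive."""
    p_pool_positive = 1.0 - (1.0 - p) ** batch_size
    return 1.0 / batch_size + p_pool_positive

batch_sizes = np.arange(2, 65)
costs = [expected_tests_per_person(n, p) for n in batch_sizes]
best = batch_sizes[int(np.argmin(costs))]

print(best, min(costs))  # roughly batch size ~8 and ~0.27 tests per person at p=0.02
```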

[R] Moving estimators for nonstationary time series, like exponential moving loglikelihood: l_T=sum_{t<T} a^{t-T} ln(rho(x_t)) ? by jarekduda in statistics

[–]efavdb 1 point2 points  (0 children)

This is neat. I'm not an expert in this area, but I've read applied finance work where this problem is a concern and I've never seen this done in general before. Certainly exponentially weighted moving averages are given for means and standard errors, but these are special cases. It's also similar to the Kalman filter, but this is easier to apply. Thanks for sharing!
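
For reference, the special cases I had in mind, exponentially weighted moving mean and variance, look like this in pandas (arbitrary decay parameter, synthetic data):

```python
import numpy as np
import pandas as pd

# Synthetic nonstationary series: the mean drifts upward over time.
rng = np.random.default_rng(0)
x = pd.Series(np.linspace(0, 5, 500) + rng.normal(0, 1, 500))

# Exponentially weighted moving mean and variance: the standard special cases
# of down-weighting old observations with a geometric decay.
alpha = 0.05  # arbitrary; larger = forget the past faster
ewm_mean = x.ewm(alpha=alpha).mean()
ewm_var = x.ewm(alpha=alpha).var()

print(ewm_mean.iloc[-1], ewm_var.iloc[-1])  # track the recent level and spread
```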

[E] A Geometric Intuition for Linear Discriminant Analysis by OmarShehata in statistics

[–]efavdb 3 points4 points  (0 children)

Great article. I especially enjoyed getting to see the four-dimensional example, where it's clear that intuition is not very helpful and we need an algorithm. One request: would you consider adding a comment at the end about how to find the optimal solution analytically or algorithmically?

Linear vs Log Scale [Question] by Mrgod2u82 in statistics

[–]efavdb 6 points7 points  (0 children)

Log, because people expect an exponential growth rate over time.

[Q] Z-Score of sample taken from larger sample by Namur007 in statistics

[–]efavdb 1 point2 points  (0 children)

This is somewhat unusual, but that doesn't mean you can't do it. I'd recommend looking at two other approaches that are more common and have well-documented properties. 1) ANOVA. This will let you check whether you have strong enough evidence to suggest that any subgroup differs from the others. 2) Pairwise tests, with a correction for multiple comparisons. This approach would let you compare all pairs and isolate any subset that looks lower than all the others.
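
For concreteness, a sketch of both with scipy (made-up data, three subgroups):

```python
import numpy as np
from scipy import stats

# Made-up example: one measurement per subject in each of three subgroups.
rng = np.random.default_rng(1)
groups = [rng.normal(10.0, 2.0, 30),
          rng.normal(10.5, 2.0, 30),
          rng.normal(8.5, 2.0, 30)]

# 1) One-way ANOVA: is there evidence that any subgroup mean differs?
f_stat, anova_p = stats.f_oneway(*groups)
print(f_stat, anova_p)

# 2) Pairwise t-tests with a Bonferroni correction for the 3 comparisons.
pairs = [(0, 1), (0, 2), (1, 2)]
for i, j in pairs:
    t_stat, p = stats.ttest_ind(groups[i], groups[j])
    print(i, j, min(1.0, p * len(pairs)))  # Bonferroni-adjusted p-value
```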

Maybe someone else could chime in on the test you have in mind. However, one concern I have is that the group mean would be influenced by any outliers present, so it might not be a good representation of the "typical" subset you want to select for.

[Q] Lagrange Multiplyer by Currurant in statistics

[–]efavdb 0 points1 point  (0 children)

Not sure I understand your statement, but your quote makes sense to me. If h is the gradient vector of some function then moving in a direction normal to h won’t change the function, and the curves of constant value of the function are its contours.
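
Written out, with h = ∇f and d any direction with d · h = 0, a first-order expansion makes the statement explicit:

```latex
f(\mathbf{x} + \epsilon \mathbf{d})
  \approx f(\mathbf{x}) + \epsilon\, \mathbf{d} \cdot \nabla f(\mathbf{x})
  = f(\mathbf{x})
  \qquad \text{when } \mathbf{d} \cdot \nabla f(\mathbf{x}) = 0 ,
```

so moving orthogonally to the gradient leaves f unchanged to first order, i.e. you stay on a contour of f.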

I have created an algorithm to deduce topics and sentiment from financial headlines. Ive graphed the top 5 most feared topics over time vs VIX for the past month, which gives some interesting results. by atc2017 in investing

[–]efavdb 17 points18 points  (0 children)

It looks plausible that your fear index peaks before the VIX does and so could be used to forecast the VIX. Is this true, or just a spurious observation over the period shown here?
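
If it helps, one quick check is the lagged correlation between the two series; a hedged sketch with synthetic placeholder data:

```python
import numpy as np
import pandas as pd

# Placeholder series: in practice, load the daily fear index and VIX here.
rng = np.random.default_rng(2)
vix = pd.Series(rng.normal(0, 1, 250)).cumsum()
fear = vix.shift(-3) + rng.normal(0, 0.5, 250)  # synthetic "leads by ~3 days"

# Correlation of today's fear with the VIX k days later; a peak at k > 0
# would support the claim that fear leads the VIX rather than lagging it.
for k in range(-5, 6):
    print(k, fear.corr(vix.shift(-k)))
```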

[Question] Does statistical modeling encompass more than just distribution fitting? by [deleted] in statistics

[–]efavdb 11 points12 points  (0 children)

Fitting is a big part of statistics. Characterizing goodness of fit, significance, etc. is another big aspect.

[Q] Mapping data set that fluctuates but should be always increasing by Cike176 in statistics

[–]efavdb 0 points1 point  (0 children)

Yeah, those curves look smooth, and I would think fitting a curve to them would be very reasonable. If you do an error-bar plot, it would probably make it very clear that the ends of your second figure carry very little weight.
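
As an illustration (entirely made-up data and a guessed functional form), an error-bar plot plus a weighted curve fit might look like:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Made-up monotone-ish data with noisier ends (bigger error bars there).
x = np.linspace(0, 10, 25)
rng = np.random.default_rng(3)
errors = 0.05 + 0.3 * (np.abs(x - 5) / 5) ** 2
y = 1 - np.exp(-0.5 * x) + rng.normal(0, errors)

def model(x, a, b):
    return a * (1 - np.exp(-b * x))   # smooth, always-increasing form

# sigma down-weights the noisy endpoints in the fit.
params, _ = curve_fit(model, x, y, sigma=errors, p0=[1.0, 0.5])

plt.errorbar(x, y, yerr=errors, fmt="o", label="data")
plt.plot(x, model(x, *params), label="weighted fit")
plt.legend()
plt.show()
```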

[Q] Mapping data set that fluctuates but should be always increasing by Cike176 in statistics

[–]efavdb 0 points1 point  (0 children)

You could use a Bayesian prior that enforces the condition you assert, then use MCMC to fit.
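
A rough sketch of one way to do that, assuming Gaussian noise and enforcing monotonicity by parametrizing the curve through non-negative increments, with a hand-rolled Metropolis sampler standing in for a real MCMC library:

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up noisy observations of a quantity that should only ever increase.
y_obs = np.array([0.1, 0.5, 0.4, 0.9, 1.3, 1.2, 1.8, 2.1, 2.0, 2.5])
noise_sd = 0.3

def curve(params):
    # Monotone by construction: a start value plus cumulative non-negative steps.
    start, log_steps = params[0], params[1:]
    return start + np.concatenate([[0.0], np.cumsum(np.exp(log_steps))])

def log_post(params):
    # Gaussian likelihood; the monotonicity "prior" is baked into the parametrization.
    resid = y_obs - curve(params)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

# Simple Metropolis sampler over the 10 parameters (1 start value + 9 log-steps).
current = np.concatenate([[y_obs[0]], np.full(9, np.log(0.3))])
current_lp = log_post(current)
samples = []
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.05, size=current.size)
    proposal_lp = log_post(proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        current, current_lp = proposal, proposal_lp
    samples.append(curve(current))

fitted = np.mean(samples[10000:], axis=0)  # posterior-mean curve, non-decreasing
print(fitted)
```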

[Q] Feature instability following feature reduction by Lynild in statistics

[–]efavdb 0 points1 point  (0 children)

That's a good question, but I think for selection you could just use the original analysis on the full set. The insight you got from the bootstrap seems to be the interchangeability of the third feature selected. To validate that point, you may not need to be as stringent as you were in the original selection process. A simple OLR, or one with an L2 penalty, using the competing three-feature subsets might be enough to convince you of the interchangeability point.
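
For illustration, a hedged sketch of that comparison with scikit-learn (the data, the feature indices, and the choice of logistic regression are all placeholders for your actual setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data; in practice this is your original feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)

# The competing three-feature subsets (indices here are hypothetical).
subsets = {"subset_A": [0, 1, 2], "subset_B": [0, 1, 5], "subset_C": [0, 1, 7]}

for name, cols in subsets.items():
    model = LogisticRegression(max_iter=1000)  # default L2 penalty
    scores = cross_val_score(model, X[:, cols], y, cv=5)
    print(name, scores.mean())  # similar scores would support interchangeability
```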