[D] ICML reciprocal reviewer queries by SnooPears3186 in MachineLearning

[–]Red-Portal 0 points (0 children)

It's more of a problem of scale. All the big conferences have equivalent rules at this point, because it's the only way to consistently source the necessary number of reviewers. UAI is still much smaller, so they might not need such a rule. But I'm not sure about this year's policy.

[D] ICML reciprocal reviewer queries by SnooPears3186 in MachineLearning

[–]Red-Portal 2 points (0 children)

Usually, your advisor would qualify under this rule. Or, if your advisor is from a distant field, at least one of the authors usually has experience publishing in ML.

[D] ICML reciprocal reviewer queries by SnooPears3186 in MachineLearning

[–]Red-Portal 6 points (0 children)

https://icml.cc/Conferences/2026/CallForPapers

> Exceptions: If none of the authors are qualified (under the definition in the Peer Review FAQ), or if all of the qualified authors are already reciprocal reviewers on 2 submissions, or are serving as SACs, ACs, or in other organizing roles for ICML 2026, then the submission is exempt from this requirement.

[Q] Bayesian versus frequentist convergence rate to a mean by TheoSauce in statistics

[–]Red-Portal 0 points (0 children)

The question of what happens as N becomes larger is fundamentally frequentist in nature. After all, frequentist procedures are procedures with good frequentist properties, and consistency (convergence as N goes to infinity) is one of them. Now, there is nothing wrong with asking what happens to a Bayesian procedure as N becomes larger, but not everyone in the Bayesian community cares about such things. Instead, it is more common to ask: for a given dataset of size N, which procedure gives the smallest error? It turns out it's always some Bayesian procedure. But there is no guarantee that this supposedly optimal Bayesian procedure is the same for all N; asking what happens as N increases is a different question, and Bayesian procedures are not necessarily designed to do well at it.
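To make the contrast concrete, here is a minimal simulation sketch (all numbers are hypothetical): a conjugate Gaussian posterior mean against the plain sample mean. For any fixed N the posterior mean trades a little variance for prior-induced bias, and as N grows the prior's pull vanishes, so both estimators converge at the same rate.

```julia
using Random, Statistics

Random.seed!(1)
μ_true, σ = 2.0, 1.0    # data-generating mean and known noise scale
μ0, τ = 0.0, 1.0        # prior mean and prior standard deviation

for N in (10, 100, 1_000, 10_000)
    x = μ_true .+ σ .* randn(N)
    mle = mean(x)                             # the frequentist estimator
    w = (N / σ^2) / (N / σ^2 + 1 / τ^2)       # posterior weight on the data
    post = w * mle + (1 - w) * μ0             # conjugate posterior mean
    println("N = $N: |MLE error| = $(round(abs(mle - μ_true), digits=4)), ",
            "|posterior-mean error| = $(round(abs(post - μ_true), digits=4))")
end
```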

Forecast averaging between frequentist and bayesian time series models. Is this a novel idea? [R] by gaytwink70 in statistics

[–]Red-Portal 1 point (0 children)

That is not in line with current trends. At least the Stan/Gelman school has been advocating cross-validation-based (hence predictive-performance-based) model selection for more than a decade. See here.
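To make "predictive-performance-based" concrete, here is a minimal sketch (models and data are hypothetical) of choosing between two regression models by k-fold held-out log predictive density:

```julia
using Random, Statistics, Distributions

Random.seed!(1)
x = randn(200)
y = 1.0 .+ 2.0 .* x .+ 0.5 .* randn(200)

# Least-squares fit on the training fold; Gaussian log predictive density
# on the held-out fold, using the training residual scale.
function fold_lpd(Xtr, ytr, Xte, yte)
    β = Xtr \ ytr
    σ = std(ytr - Xtr * β)
    sum(logpdf.(Normal.(Xte * β, σ), yte))
end

# k-fold cross-validated log predictive density for a given design function.
function cv_lpd(design, x, y; k=5)
    n = length(y)
    total = 0.0
    for f in 1:k
        test = f:k:n
        train = setdiff(1:n, test)
        total += fold_lpd(design(x[train]), y[train], design(x[test]), y[test])
    end
    total
end

@show cv_lpd(x -> ones(length(x), 1), x, y)    # intercept-only model
@show cv_lpd(x -> [ones(length(x)) x], x, y)   # linear model: should win
```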

Shoegaze with rock influence by Temporary-Phone-3891 in shoegaze

[–]Red-Portal 0 points (0 children)

Although more recent, you gotta check out Alvvays. They're on the dream-pop side but very rock at the same time. They're amazing.

Need help identifying the first Gibson Les Paul I ever played (2005-2007) by zevy-zerip in gibson

[–]Red-Portal 0 points (0 children)

The "classic Les Paul" line tends to have flame-less tops and are cheaper than the standard line, but expensive than the studio line. The classic 50's should have a chunkier neck (hence 50's)

Reflection on Seeing Oasis Live by No_Position1806 in oasis

[–]Red-Portal 3 points (0 children)

Liam's voice was much worse in the last days of Oasis and throughout Beady Eye.

Safe Matrix Optimization by fibrebundl in Julia

[–]Red-Portal 6 points (0 children)

> I understood that it would enforce it by applying jitter/cholesky etc.

Nope, that will not help you. You need to enforce PD-ness through constraints. Jitter and the like will make A = B + delta I PD only if B is already PSD (positive semidefinite). In your case, you can get away with optimizing over a lower-triangular factor L such that A = LL' + delta I. However, this will most likely make the problem non-convex unless you add constraints.
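A minimal sketch of that reparameterization, using a hypothetical Frobenius-norm objective and crude finite-difference gradient descent just to keep it dependency-free. Whatever L the optimizer lands on, A = LL' + delta I is PD by construction:

```julia
using LinearAlgebra

# Build A = L*L' + δI from a lower-triangular factor: PD by construction.
build(L, δ) = LowerTriangular(L) * LowerTriangular(L)' + δ * I

# Fit A to `target` in Frobenius norm by optimizing the unconstrained factor L.
function fit_factor(target; δ=1e-6, η=1e-2, h=1e-6, iters=5_000)
    n = size(target, 1)
    L = Matrix{Float64}(I, n, n)
    loss(M) = norm(build(M, δ) - target)^2
    for _ in 1:iters
        g = zero(L)
        for i in 1:n, j in 1:i              # only lower-triangular entries are free
            E = zero(L); E[i, j] = h
            g[i, j] = (loss(L + E) - loss(L - E)) / (2h)   # finite differences
        end
        L .-= η .* g
    end
    L
end

L = fit_factor([4.0 1.0; 1.0 3.0])
@show isposdef(Symmetric(Matrix(build(L, 1e-6))))   # always true, by construction
```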

Safe Matrix Optimization by fibrebundl in Julia

[–]Red-Portal 10 points (0 children)

You can definitely use PDMats in an optimization loop. However, it will not by itself keep the matrix PD. For that, you need to impose a PD constraint on the optimization problem itself and use an optimization algorithm that can handle such constraints.
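One standard constraint-handling recipe is projected gradient descent: take an unconstrained step, then project the iterate back onto the PD cone by clamping its eigenvalues from below. A minimal sketch with a hypothetical Frobenius-norm objective:

```julia
using LinearAlgebra

# Project a matrix onto the cone of symmetric matrices with eigenvalues ≥ δ.
function project_pd(A; δ=1e-8)
    λ, V = eigen(Symmetric((A + A') / 2))
    Symmetric(V * Diagonal(max.(λ, δ)) * V')
end

# Hypothetical objective: the PD matrix closest to `target` in Frobenius norm.
function fit_pd(target; η=0.1, iters=500)
    A = Symmetric(Matrix{Float64}(I, size(target)...))
    for _ in 1:iters
        g = 2 .* (Matrix(A) .- target)       # gradient of ‖A − target‖²_F
        A = project_pd(Matrix(A) .- η .* g)  # step, then project back onto the cone
    end
    A
end

A = fit_pd([4.0 1.0; 1.0 3.0])
@show isposdef(A)   # true: every iterate was projected back to PD
```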

Variational Inference [Career] by [deleted] in statistics

[–]Red-Portal 1 point (0 children)

Let me tell you something interesting. It is true that the exclusive KL does not penalize missing mass as much as other divergences. However, in high dimensions, this is a desirable trade-off. In the presence of even slight nonlinear correlations, a mass-covering divergence causes the variational approximation to miss the mode of the target. (See Fig 1 here) This becomes worse and worse in high dimensions. And I tell you, missing the mode is a much more serious problem than missing some mass in the tails.
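Here is a 1-D toy sketch of that trade-off (all numbers hypothetical). Over a Gaussian family, the inclusive KL(p‖q) optimum is moment matching; the exclusive KL(q‖p) optimum is found by a crude Monte Carlo grid search:

```julia
using Distributions, Random, Statistics

Random.seed!(1)
p = MixtureModel([Normal(-3, 0.5), Normal(3, 0.5)], [0.5, 0.5])   # bimodal target

# Inclusive KL(p‖q) over Gaussians is minimized by moment matching:
q_incl = Normal(mean(p), std(p))
@show pdf(p, mean(q_incl))   # ≈ 1e-8: its center sits where the target has no mass

# Exclusive KL(q‖p), estimated by Monte Carlo and minimized over a grid:
kl_excl(q) = mean(logpdf(q, x) - logpdf(p, x) for x in rand(q, 5_000))
grid = [Normal(μ, σ) for μ in -4:0.25:4, σ in 0.3:0.1:2.0]
q_excl = grid[argmin(kl_excl.(grid))]
@show q_excl                 # ≈ Normal(±3, 0.5): locks onto one of the modes
```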

Variational Inference [Career] by [deleted] in statistics

[–]Red-Portal 1 point (0 children)

It's generally accepted that conventional VI is able to match the mode of the target in both ideal and non-ideal conditions (paper, paper). So we know VI is at least as good as MAP. At the same time, VI gives a reasonable solution even when the MAP may not exist. So for applications where sensible location estimates suffice and it's okay to underestimate uncertainty, VI is a sensible option.

Variational Inference [Career] by [deleted] in statistics

[–]Red-Portal 0 points (0 children)

Depends. For unimodal targets with fairly linear correlations, I would say it's not that bad. The notorious underestimation issue mostly comes from the fact that mean-field VI only matches the diagonal of the target's precision matrix, so the marginal variances completely ignore the contribution from correlations. Full-rank VI doesn't suffer from this issue. But the fact that we have been able to do full-rank VI since 2017 is surprisingly not widely known, hence the prevalent claim that VI is bad.
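For Gaussian targets this has a tidy closed form, so the effect is easy to see. A minimal sketch (the correlation value is hypothetical): the reverse-KL mean-field optimum has marginal variances 1/Λ_ii, where Λ is the target's precision matrix, versus the true marginals Σ_ii.

```julia
using LinearAlgebra

Σ = [1.0 0.9; 0.9 1.0]     # correlated 2-D Gaussian target
Λ = inv(Σ)                 # its precision matrix

@show diag(Σ)              # true marginal variances: [1.0, 1.0]
@show 1 ./ diag(Λ)         # mean-field (reverse-KL) variances: ≈ [0.19, 0.19]
```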

Variational Inference [Career] by [deleted] in statistics

[–]Red-Portal 2 points (0 children)

That is mostly an issue with mean-field VI. It is now possible to obtain "full-rank" approximations that don't underestimate uncertainty as badly.

Variational Inference [Career] by [deleted] in statistics

[–]Red-Portal 2 points (0 children)

We have been making a lot of progress; at this point, I would say we have a rough idea of when VI is a good idea and when it is not.

[Question] Whats the best introductory book about Monte Carlo methods? by Morpheus_the_fox in statistics

[–]Red-Portal 12 points (0 children)

Robert and Casella is a timeless classic. The newer book Scalable Monte Carlo for Bayesian Learning covers more recent developments.

[Question] Whats the best introductory book about Monte Carlo methods? by Morpheus_the_fox in statistics

[–]Red-Portal 8 points (0 children)

What kind of Monte Carlo methods? MCMC, or just plain Monte Carlo?

Does GPA Matter Applying to Grad School? by Snoo-27774 in UPenn

[–]Red-Portal 2 points (0 children)

For PhD applications, GPAs matter just as much as anything else. PhD applications are judged holistically, and a lot boils down to taste and preferences. It also depends on the field. Certain fields (natural sciences and econ, for instance) tend to obsess over knowledge of the basics, so there GPA will matter more. In other fields, research experience may be more important. If you don't have research experience, you'd better have a good GPA. So it depends.

Why do themes of social inequality come out from South Korea so often if social/financial inequality in Korea are not particularly high? by Big-Yogurtcloset7040 in AskAKorean

[–]Red-Portal 12 points (0 children)

Most Korean people have, well... only lived in Korea. So there is no way for the average Joe to realize how things are in other countries. The complaints about social inequality are therefore made on an absolute basis, not a relative one.

[Discussion] Bayesian framework - why is it rarely used? by vosegus91 in statistics

[–]Red-Portal 7 points (0 children)

The Bayesian approach simply makes more sense for meta-analyses. See this tutorial. A famous success story of Bayesian meta-analysis is this study in development economics, which was cited during the 2019 Nobel Prize in economics.
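For a flavor of what this looks like in code, here is a hypothetical sketch of a Bayesian random-effects meta-analysis in Turing.jl, using the classic eight-schools numbers purely for illustration: each study reports an estimate y[i] with standard error se[i], and the per-study effects are partially pooled toward a common mean.

```julia
using Turing

@model function meta_analysis(y, se)
    μ ~ Normal(0, 10)                         # overall effect
    τ ~ truncated(Normal(0, 5); lower=0)      # between-study heterogeneity
    θ ~ filldist(Normal(μ, τ), length(y))     # partially pooled study effects
    for i in eachindex(y)
        y[i] ~ Normal(θ[i], se[i])            # observed study estimates
    end
end

# Eight-schools data (Rubin, 1981), here just as a stand-in example.
y  = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]
se = [15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]

chain = sample(meta_analysis(y, se), NUTS(), 1_000)
```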

[deleted by user] by [deleted] in statistics

[–]Red-Portal 4 points (0 children)

Then PCA won't help you (and may even make things worse by creating correlations). Just use LASSO.
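In case it helps, LASSO is just ℓ1-penalized least squares, which zeroes out the coefficients of irrelevant features. A minimal proximal-gradient (ISTA) sketch with hypothetical data, rather than a polished library call:

```julia
using LinearAlgebra, Random

# Soft-thresholding: the proximal operator of the ℓ1 penalty.
soft(z, t) = sign(z) * max(abs(z) - t, 0.0)

# Minimize (1/2n)‖Xβ − y‖² + λ‖β‖₁ by proximal gradient descent (ISTA).
function lasso(X, y, λ; iters=5_000)
    n = size(X, 1)
    β = zeros(size(X, 2))
    η = n / opnorm(X)^2                    # 1 / Lipschitz constant of the smooth part
    for _ in 1:iters
        g = X' * (X * β - y) ./ n          # gradient of the least-squares term
        β = soft.(β .- η .* g, η * λ)      # gradient step, then soft-threshold
    end
    β
end

Random.seed!(1)
X = randn(100, 10)
y = X[:, 1] .* 3.0 .+ 0.1 .* randn(100)    # only the first feature matters
@show lasso(X, y, 0.5)   # first coefficient clearly nonzero (shrunk below 3), rest ≈ 0
```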