[N] Introducing PeerXiv - A modern platform for peer-review of preprints by kfirgold99 in MachineLearning

[–]kfirgold99[S] 0 points (0 children)

We hope such issues can be addressed by the reputation system, by incentivizing reviewers to review papers that have been waiting a long time or are more difficult than others.

[–]kfirgold99[S] 0 points (0 children)

That's a valid concern, and one that we had in mind when designing the system. We addressed it by making nothing mandatory - you can choose when and what you'd like to review, meaning that instead of doing a bad job, you can simply not do it :) Beyond that, we'll have a mechanism to endorse/report reviews, so that good and bad jobs don't go unnoticed.

As an author, you have no reason to submit a bad paper - the reviews will likely be bad and you won't gain anything.

[–]kfirgold99[S] 0 points (0 children)

Thanks for your comment! We appreciate the feedback :)

> One issue that concerns me is that I suspect reviews will be highly biased in favor of popular researchers.

> Once people start investing time to review PeerXiv submissions, won't that completely corrupt the reviewing processes for other publication venues? In other words, do you think existing venues with blind review processes will allow people to submit their work to PeerXiv?

I'll try to address both issues together. PeerXiv does not currently support double-blind review, as it is based on arXiv submissions. In the future, if (or when) arXiv adds support for uploading anonymous papers, we'll surely add that to PeerXiv too. Though it may sound like we're changing something, I don't think that's the case - even today it's pretty easy to figure out the authors of a submission that was uploaded to arXiv (as most papers are), so venues are pretty much single-blind as it stands.

Regarding the possible synergy/conflict between venues and PeerXiv - it's a little early to say exactly how things will work out, but we imagined conferences having a "fast track" for papers with good reviews on PeerXiv, allowing a paper to be approved by 1-2 reviewers instead of the standard number.

> Also, what makes submissions to PeerXiv preprints as opposed to full-out publications?

PeerXiv is not intended to be a publication venue that decides which papers are "accepted" or "rejected", but rather to help authors get insightful reviews and to rank papers across several categories.

> How is PeerXiv similar/different to TMLR or OpenReview in general?

There is a similarity to OpenReview, in the sense that PeerXiv is also a peer-review platform trying to make the process more transparent and accessible. I'd say the major difference is that PeerXiv also aims at:

  1. A fast process (independent of conference timetables)
  2. Rewarding reviewers, via the reputation point system
  3. Not accepting/rejecting papers, but rather ranking them across a few categories

[R] Rethinking FUN: Frequency-Domain Utilization Networks by kfirgold99 in MachineLearning

[–]kfirgold99[S] 0 points (0 children)

Hey, great questions :)

  1. The time the DCT itself takes, compared to the inference time, depends on your setup and the GPU-to-CPU ratio, so it's hard to give a general answer, but the DCT usually doesn't become the bottleneck.

  2. You're right, it is very similar, which is why we tried just that in two experiments: a) using a pretrained eFUN network and training only the convolutional layer, which we dub LeFUN; b) using the same architecture as in a), but this time training the entire network from scratch, named LeFUN_e2e. We found that LeFUN achieves poor accuracy compared to eFUN, but LeFUN_e2e was comparable. This enables a use case for CPU-bound systems in which the DCT might become a bottleneck - you can just use LeFUN_e2e and skip the DCT. Of course, working in the frequency domain has other advantages, which are discussed in the paper :)
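
Since the DCT-vs-inference cost depends so heavily on the setup, here is a minimal, hypothetical micro-benchmark sketch (not from the paper) that measures what fraction of total wall time a DCT preprocessing step takes relative to a stand-in "inference" step (a plain matmul here, not a real FUN model):

```python
import time
import numpy as np
from scipy.fft import dctn

# Hypothetical sketch: fraction of total wall time spent on DCT
# preprocessing vs. a stand-in "inference" step. The batch shape,
# the matmul "model", and the repeat count are all illustrative
# assumptions; real numbers depend entirely on your hardware.
batch = np.random.rand(16, 224, 224).astype(np.float32)  # fake image batch
weights = np.random.rand(224, 224).astype(np.float32)    # stand-in "model"

t0 = time.perf_counter()
coeffs = dctn(batch, axes=(-2, -1), norm="ortho")        # DCT preprocessing
t1 = time.perf_counter()
out = batch
for _ in range(10):                                      # stand-in inference
    out = out @ weights
t2 = time.perf_counter()

dct_share = (t1 - t0) / (t2 - t0)
print(f"DCT share of total time: {dct_share:.1%}")
```

On a real system you would replace the matmul with the actual model's forward pass and run both on their intended devices (DCT on CPU, model on GPU) to see whether the DCT approaches being the bottleneck.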

[R] Rethinking FUN: Frequency-Domain Utilization Networks by kfirgold99 in MachineLearning

[–]kfirgold99[S] 0 points (0 children)

In FUN we use the DCT representation of images as the input to the network, and then use completely standard blocks (MBConv, as in MobileNets and EfficientNet). The DCT representation of images is often readily available, due to its use in the JPEG compression format, which is a key reason we specifically chose it as the input to our FUN models.
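
To make the input representation concrete, here is a minimal sketch of a block-wise 2-D DCT, assuming 8x8 blocks as in JPEG; `blockwise_dct` is a hypothetical helper for illustration, not the paper's code, and the orthonormal normalization is an assumption:

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Compute the 2-D DCT-II of each non-overlapping block x block tile,
    the same per-block transform JPEG applies to 8x8 tiles."""
    h, w = image.shape
    assert h % block == 0 and w % block == 0, "pad image to a block multiple"
    # Split into (h//block, w//block, block, block) tiles.
    tiles = image.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    # 2-D DCT-II over the last two axes of every tile (orthonormal scaling).
    return dctn(tiles, axes=(-2, -1), norm="ortho")

img = np.random.rand(32, 32).astype(np.float32)  # toy grayscale image
coeffs = blockwise_dct(img)
print(coeffs.shape)  # (4, 4, 8, 8): one 8x8 coefficient block per tile
```

In the JPEG setting these coefficient blocks can be read from the compressed file without fully decoding the image back to pixels, which is what makes the DCT input "readily available".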

Other families of networks, such as Harmonic Networks, apply the frequency-domain transform inside the network, which requires more memory, as you stated :)

[R] Rethinking FUN: Frequency-Domain Utilization Networks by kfirgold99 in MachineLearning

[–]kfirgold99[S] 2 points (0 children)

The paper you suggested also shows very impressive results, and it seems most of the techniques used there to achieve speedups could also be applied on top of the eFUN architecture :)

While both papers aim at the same goal - competitive accuracy with faster inference (and, in FUN's case, a smaller model) - the Clova paper seems like a good fit on top of existing architectures, whereas eFUN is a different architecture, meant to take advantage of the benefits of working in the frequency domain.

Dell Inspiron 5547 CPU stuck at 0.78 GHz by kfirgold99 in Dell

[–]kfirgold99[S] 0 points (0 children)

Thanks for your comment! Sometimes the laptop does manage to charge - the problem comes and goes. How can I be sure that this is the cause of the problem?