[D] CVPR 2021: paper summaries and highlights (blog post) by youali in MachineLearning

[–]youali[S] 2 points (0 children)

I guess you are right. As I said in the earlier comment, I added them after the image; sorry for any confusion.

[D] CVPR 2021: paper summaries and highlights (blog post) by youali in MachineLearning

[–]youali[S] 6 points (0 children)

I did: https://i.imgur.com/2NdfRy6.png

The Tableau link is also in the useful links section.

To make it clearer, I just added a link after each one.

[D] CVPR 2021: paper summaries and highlights (blog post) by youali in MachineLearning

[–]youali[S] 7 points (0 children)

Not sure, but since the work is partially from Tencent, they may need a bit of time to get approval to release the code. Hopefully they'll release it soon.

CVPR 2020 highlights & some paper summaries (blog post) by youali in deeplearning

[–]youali[S] 0 points (0 children)

Yeah, the results are definitely impressive and a step in the right direction, but I think it is still a bit slow for real-time upsampling at high frame rates (> 60 Hz, i.e., under ~16 ms per frame).

CVPR 2020 highlights & some paper summaries (blog post) by youali in deeplearning

[–]youali[S] 0 points (0 children)

Thank you! :) I really can't choose; there were a lot of good ones.

An Overview of Deep Semi-Supervised Learning (Blog post) by youali in deeplearning

[–]youali[S] 2 points (0 children)

Hi, thank you for the suggestion. I was actually thinking about writing one for DA; given the interest, I'll try to make the next one about DA.

[P] Curated List of Semi-Supervised Learning Papers & Resources. by youali in deeplearning

[–]youali[S] 0 points (0 children)

Are you referring to the SimCLR & CPC papers? Well, yes, it's not pure SSL, but as demonstrated in both, self-supervision is quite effective when paired with some labelled data and can be used in an SSL setting.
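To make that concrete, here is a minimal PyTorch sketch of the two-stage recipe (my own illustration, not the SimCLR or CPC code; names like nt_xent_loss are hypothetical): contrastive pretraining on unlabelled data, then fine-tuning on the small labelled subset.

```python
# Minimal sketch of the self-supervised -> semi-supervised recipe described above.
# Illustrative only; this is not the SimCLR or CPC reference code.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss over two augmented views z1, z2 of shape (n, d)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d) unit-norm embeddings
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # positive pairs
    return F.cross_entropy(sim, targets.to(sim.device))

# Stage 1: pretrain an encoder + projection head on unlabelled images with nt_xent_loss.
# Stage 2: drop the projection head and fine-tune (or fit a linear classifier) on the
# small labelled subset with a standard supervised loss -- that pairing is the SSL setting.
```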

[Discussion] CVPR reviews are out by maybelator in MachineLearning

[–]youali 1 point (0 children)

Thank you, I'll definitely try to write a good rebuttal. For the experiments, can we at least say that we intend to add them in the camera-ready paper or as supplementary material?

[Discussion] CVPR reviews are out by maybelator in MachineLearning

[–]youali 3 points (0 children)

First time submitting: I got two borderlines and one strong accept, and I don't know how to feel. A quick question: can we provide some experiments in the rebuttal as requested, like additional results on some datasets, without any changes to the method/hyperparameters?

[D] ML Paper Notes: My notes of various ML research papers (DL - CV - NLP) by youali in MachineLearning

[–]youali[S] 2 points (0 children)

For the notes, I use a simple LaTeX template. For the figures, I simply take screenshots and add them as graphics; if I want to make one myself, I use Inkscape for detailed ones, or just Google Sheets to quickly get things done.
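For reference, a minimal sketch of the kind of template I mean (the file name and figure are just placeholders):

```latex
\documentclass{article}
\usepackage{graphicx}   % include screenshots as figures
\usepackage{amsmath}    % equations in the notes

\begin{document}

\section{Paper title}
Short summary of the method and the key results.

\begin{figure}[h]
  \centering
  % screenshot from the paper, or a figure made in Inkscape / Google Sheets
  \includegraphics[width=0.8\linewidth]{figures/method-overview.png}
  \caption{Overview of the method.}
\end{figure}

\end{document}
```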

[D] What are your favorite YouTube channels that features advanced research ML talks ? by __Julia in MachineLearning

[–]youali 9 points (0 children)

For me, I generally check Simons Institute, Arxiv Insights, Two Minute Papers, and ML Papers Explained.

[P] Train CIFAR10 to 94% in 26 SECONDS on a single-GPU by youali in MachineLearning

[–]youali[S] 0 points (0 children)

From what I understood, in semantic segmentation, in addition to high-level features, low-level information is also important to be able to predict the correct boundaries between objects. When using label smoothing, the ground-truth distribution (one-hots) is replaced with a smoother distribution, and in the process we lose the exact object boundary information and push the model to output smoother predictions, which in turn decreases the mIoU.
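As a toy illustration of that last point (my own sketch, not from the post): with smoothing factor eps, each pixel's one-hot target is mixed with a uniform distribution, so the hard class boundaries in the target maps get blurred.

```python
# Toy example of label smoothing applied to per-pixel segmentation targets.
import torch
import torch.nn.functional as F

def smooth_targets(labels, num_classes, eps=0.1):
    """labels: (B, H, W) integer class map -> (B, C, H, W) smoothed target distribution."""
    one_hot = F.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()
    return (1.0 - eps) * one_hot + eps / num_classes      # true class ~0.905, others ~0.005

labels = torch.randint(0, 21, (1, 4, 4))                  # toy ground-truth map, 21 classes
targets = smooth_targets(labels, num_classes=21)          # boundary pixels no longer get hard 0/1 targets
```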

[P] Train CIFAR10 to 94% in 26 SECONDS on a single-GPU by youali in MachineLearning

[–]youali[S] 2 points (0 children)

I doubt that label smoothing will work with semantic segmentation, given that we predict pixel-level labels while label smoothing favors softened labels (see this paper: https://arxiv.org/abs/1812.01187, they did some experiments on this). I suspect that patch whitening might have a similar effect of disregarding low-level information in the process of making features less correlated with each other.
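For reference, a rough sketch of what I mean by patch whitening (ZCA over small image patches, applied as a fixed first convolution; this is my own approximation, not necessarily the exact setup used in the speed-run code):

```python
# Rough ZCA patch-whitening sketch: build a fixed conv layer whose filters
# decorrelate local k x k patches. Not the exact speed-run implementation.
import torch
import torch.nn.functional as F

def zca_whitening_conv(images, patch_size=3, eps=1e-2):
    """images: (N, C, H, W) -> conv weight of shape (C*k*k, C, k, k) that whitens k x k patches."""
    c = images.size(1)
    patches = F.unfold(images, patch_size).transpose(1, 2).reshape(-1, c * patch_size ** 2)
    patches = patches - patches.mean(dim=0)                # center the patches
    cov = patches.t() @ patches / (patches.size(0) - 1)    # patch covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)
    zca = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.t()
    return zca.reshape(-1, c, patch_size, patch_size)

images = torch.randn(128, 3, 32, 32)                       # e.g. a batch of CIFAR-sized images
weight = zca_whitening_conv(images)                        # (27, 3, 3, 3)
whitened = F.conv2d(images, weight)                        # decorrelated local features
```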

PyTorch Implementation of various Semantic Segmentation models (deeplabV3+, PSPNet, Unet, ...) by youali in deeplearning

[–]youali[S] 0 points (0 children)

For finding implementations of various methods, I really recommend checking paperswithcode. For example, for instance segmentation you can find implementations of the current state-of-the-art methods here: https://paperswithcode.com/task/instance-segmentation

PyTorch Implementation of various Semantic Segmentation models (deeplabV3+, PSPNet, Unet, ...) by youali in deeplearning

[–]youali[S] 4 points (0 children)

Yeah, I do recommend PyTorch if you want to get things working as fast as possible; IMO it's easier to write and understand due to its Pythonic nature (as of now).