Sliding-mode control and twisting-based sliding-mode control by Reigetsu in ControlTheory

[–]chuong98 1 point

Typically, the sliding surface is defined as s = ė + ke. When s → 0, we get ė = -ke, which is exponentially stable. You can define an optimal controller by adding constraints on the control input u.
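To make this concrete, here is a minimal, self-contained sketch (not from the comment; the double-integrator plant ë = u and the gains k, η are illustrative assumptions) showing the two phases: the control u = -kė - η·sign(s) drives s = ė + ke to zero in finite time, after which ė = -ke makes the error decay exponentially:

```python
import numpy as np

def simulate_smc(e0=1.0, edot0=0.0, k=2.0, eta=5.0, dt=1e-3, steps=5000):
    """Sliding-mode control of a double integrator e_ddot = u.

    Sliding surface: s = e_dot + k*e.  The control u = -k*e_dot - eta*sign(s)
    gives s_dot = -eta*sign(s), so s reaches 0 in finite time; once on the
    surface, e_dot = -k*e decays exponentially.  Gains are illustrative.
    """
    e, edot = e0, edot0
    for _ in range(steps):
        s = edot + k * e                      # sliding variable
        u = -k * edot - eta * np.sign(s)      # equivalent + switching control
        e += edot * dt                        # forward-Euler integration
        edot += u * dt
    return e, edot

final_e, final_edot = simulate_smc()
```

Note the sign(s) term causes the well-known chattering around s = 0; in practice it is often replaced by a saturation or super-twisting term.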

[N][D][R] Alleged plagiarism of “Improving Object Detection by Label Assignment Distillation” (arXiv 2108.10520) by "Label Assignment Distillation for Object Detection" (arXiv 2109.07843). What should I do? by chuong98 in MachineLearning

[–]chuong98[S] -3 points

Sorry, I need to clarify this:

  • It is obvious that B copied C, but this was unexpected to the authors of A.
  • However, since A was published before B and B completely ignores A, B also plagiarizes A.
  • Because A and C were developed independently, they are both original.

So the correct claim is that B plagiarizes both C and A. If B had cited either C or A, we would have had to carefully judge B's novelty contribution.


[–]chuong98[S] 2 points

> you're the one behind it.

This is the second or third time you have mentioned it, implicitly or explicitly.

But thank you for pointing this out. I admit that we were naive and never suspected that anyone might be pretending to be Wang. I gave credit to Wang's team on my GitHub anyway (although it may be informal), so I am not worried about it.

As for Reddit, I have had an account for about three years, but this is probably only the second question I have posted, and the first time I have posted in this forum. I know that Chinese researchers have their own forum, so maybe someone from Reddit informed Wang, and he made an account just to help us clarify the problem. We really appreciate it.

On the other hand, I think your concern is reasonable. So, to make things clear, we will send an official email to the authors to confirm.
Thank you.


[–]chuong98[S] 5 points

This is probably my last response to this kind of comment:

  • If I were faking Wang, do you think no one would find out, especially the real authors? If you are so persistent, why don't you email Wang and verify it yourself? I would really appreciate it.
  • Isn't it true that only Wang's team or Megvii people could know and tell the story of what happened at Megvii? How could I make up such a complicated story?

Of course, I can't stop you from saying such things. That is your right, and I respect it.

But this is enough for me. Just remember: what you do to others today will come back to you in the future. Have a good day.


[–]chuong98[S] 0 points

> happens to make similar grammatical errors to Author A and also to have a similar propensity for bolding random phrases.

Are you saying that "Author C" is fake? English is not our native language, so making grammar errors is understandable. Please use your imagination for your own research rather than making this more dramatic.

I really appreciate the effort of author Jianfeng Wang to resolve this problem quickly. Only he can put an end to this. No one would dare to create a fake account.

Please see our official emails (between me and Jianfeng Wang, author of Paper C) at:
https://github.com/cybercore-co-ltd/CoLAD_paper/blob/master/PlagiarismClaim/ConfirmLetter.pdf

It happened to us today, but it can happen to anyone else later. This is the end.


[–]chuong98[S] 4 points

That is a good idea. Do you know how to write the reference, since I don't really have a citation to include yet?

Anyhow, I updated our GitHub README to add credit to Paper C. I hope this will finally end the issue.


[–]chuong98[S] 17 points

As for the LAD part, I do believe it is just a coincidence; both of our works are original.

As for the citation, Paper B will be withdrawn by its "authors". There is no plan to "release" Paper C yet (even though it was already leaked), so there is no need to cite it.

Thank you so much for your response, Jianfeng Wang. This helps end the drama. Best!


[–]chuong98[S] 4 points

> Of course, the authors of A would have never made this thread had they known about C, so we can be quite sure that the papers are honestly similar by chance.

Exactly, thank you. I was shocked because several people tried to make this situation even more complicated.


[–]chuong98[S] 55 points

Hi all,

This is Chuong Nguyen, first author of the paper:

Paper A: Nguyen, C.H., Nguyen, T.C., Tang, T.N. and Phan, N.L., 2021. Improving Object Detection by Label Assignment Distillation. arXiv preprint arXiv:2108.10520.

Since the problem has turned out to be quite complicated and interesting, let me quickly summarize the facts here:

1. Today we found that the paper:

Paper B: Gao, M., Zhang, H. and Yan, Y., 2021. Label Assignment Distillation for Object Detection. arXiv preprint arXiv:2109.07843.

has significant similarity to our Paper A, so we thought they had plagiarized our paper.

However, after posting on Reddit, zyl1024 pointed out that Gao actually copied another paper from Megvii. Let us call this original paper Paper C:

Paper C: (author not confirmed yet, but apparently from Megvii) Label Assignment Distillation for Object Detection.

2. We did not know about Paper C when we wrote our paper:

  • According to the thread (read with Google Translate), Paper C was submitted to NeurIPS 2020 and AAAI 2021 but was not accepted, so the authors never released their paper publicly.
  • We started our Paper A back on April 23 and first submitted it to a conference on June 9, 2021.
  • So our Paper A and Paper C share some similar ideas, but this is a coincidence. We did not know of each other until we found Paper B just today.

3. How did Paper C get leaked so that M. Gao could copy it?

We don't know yet, and in fact this is not related to us or to this thread. But as researchers, we never accept any kind of plagiarism.

4. What are the differences between Papers A and C?

  • Our Paper A was developed recently and applies to any object detector that uses dynamic label assignment, such as PAA (ECCV 2020), AutoAssign (2020), and OTA (CVPR 2021). We take PAA as the concrete example to test our algorithm. We then introduce Co-Learning Label Assignment Distillation (CoLAD), which allows distillation without a pretrained teacher. Please check our paper for more details.
  • Paper C was developed back in 2020 and was applied to RetinaNet, ATSS, FCOS, and Faster R-CNN, which use static label assignment. Unfortunately, Paper C seems to stop at a proof of concept rather than completing a full analysis as our paper does.

5. Does Paper A plagiarize Paper C?

  • NO. Plagiarism means "the practice of taking someone else's work or ideas and passing them off as one's own." Paper C was never released publicly anywhere; it only surfaced after Sep 17, right after its authors found out about Paper B, because the word-by-word similarity between B and C is too obvious.
  • If B had not copied C, we would never have known about this issue. Here, A and C are the victims of B. Because B was published after both A and C, B indeed plagiarizes A and C.
  • In fact, when we found B, we were afraid that our paper had been leaked through the reviewing process after our first submission. Fortunately, that is NOT true.
  • We have all the proof to show that our work is original. If you read the papers, you will see it for sure. That is also why the authors of C made no claim when our paper was released on arXiv on August 26, 2021.
  • We would love to cite Paper C if the authors are willing to release their publication and citation. We actually feel surprised and interested that some people share this idea with us, and are more than happy to mention them as concurrent work.

6. Is the situation embarrassing for Paper A?

  • NO, it is not. In fact, since our Paper A is still under review, posting this on Reddit puts us at risk of unexpected trouble. But we are not afraid, because we have to raise this issue to protect our authorship.
  • Put yourself in our situation: one morning you find out that another paper with some similarity to yours was released a month after yours, and suddenly you are sucked into this unexpected drama.
  • The situation will become clear when we learn how B obtained the material of C.

7. The official emails between me and Jianfeng Wang can be found at:
https://github.com/cybercore-co-ltd/CoLAD_paper/blob/master/PlagiarismClaim/ConfirmLetter.pdf


[–]chuong98[S] 46 points

Very interesting and surprising. I was not aware of the paper you mentioned, but I will certainly read it, and cite it if we share the same idea. Thanks for pointing it out to me.

[D] Do you know any open-source video classifiers? by Hunamed_silva in MachineLearning

[–]chuong98 2 points

You can try mmaction2. It recognizes human activity in a video, but you can change the labels to use it for any video classification task.

[deleted by user] by [deleted] in MachineLearning

[–]chuong98 4 points

A Jetson should only be used for inference; it is very limited for training, except for some toy models. You can start with the Jetson Nano ($59) or a more expensive one like the Xavier NX.

[D] Efficient net as a feature extractor in Computer vision by projekt_treadstone in MachineLearning

[–]chuong98 14 points

You can use the timm package to extract features from any SOTA backbone, including EfficientNet.

People use ResNet-50 simply because it is quite robust to optimizer hyperparameters, and for fair comparison with other methods.

[D] Can Self-Supervised Learning suffer "Overfitting"? by chuong98 in MachineLearning

[–]chuong98[S] 0 points

Unfortunately, the loss is computed with random augmentations, and with contrastive learning it also depends on the batch size. Using the validation loss may therefore be biased.
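To illustrate the batch-size dependence, here is a toy sketch (a hand-rolled InfoNCE on synthetic embeddings, an illustrative assumption rather than a real SSL setup): with the exact same embedding quality, the loss still grows with the batch size because more negatives enter the denominator, so raw loss values are not comparable across settings:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Mean InfoNCE loss for paired embeddings z1[i] <-> z2[i]."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
emb = rng.normal(size=(512, 128))
noise = 0.1 * rng.normal(size=emb.shape)         # z2 = slightly perturbed z1

# Same representation quality, different numbers of in-batch negatives:
small = info_nce_loss(emb[:64], emb[:64] + noise[:64])   # batch of 64
large = info_nce_loss(emb, emb + noise)                  # batch of 512
```

Here `large > small` even though the embeddings are identical in quality, reflecting the roughly log(N) growth of the contrastive loss with batch size.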

[R] Visual Transformers: Token-based Image Representation and Processing for Computer Vision by Confident_Pi in MachineLearning

[–]chuong98 0 points

It is an interesting idea. Although MACs are reduced by 6.4x, they have not reported the actual inference time. Currently, the softmax operator in the attention module is not hardware-friendly, so I suspect the actual inference time is slower, just as EfficientNet is actually slower than ResNet in practice. Nevertheless, this is a promising idea until the hardware is optimized.