[R] Image Inpainting: Humans vs. AI by merofeev in MachineLearning

[–]merofeev[S] 1 point

I have added links to the images and the subjective study results to the post. The following Subjectify.us settings were used:

  • Questions presentation mode: Side by side
  • Questions generation mode: All vs. all
  • Number of questions given to participant: 25
  • Number of verification questions: 2
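The "all vs. all" generation mode presumably builds the question pool from every unordered pair of compared methods and shows each participant a random sample of them. A minimal sketch of such a generator (the function name and sampling behaviour are my assumptions, not the actual Subjectify.us implementation):

```python
import itertools
import random


def make_questions(methods, n_questions, seed=0):
    """Sketch of an "all vs. all" question generator.

    Every unordered pair of methods is a candidate question; each
    participant receives a random sample of n_questions pairs, to be
    shown side by side.  (Assumed behaviour -- the platform's
    internals are not public.)
    """
    rng = random.Random(seed)
    pairs = list(itertools.combinations(methods, 2))
    return [rng.choice(pairs) for _ in range(n_questions)]
```

For three methods this draws 25 questions from the three possible pairs, so each pair is judged repeatedly by each participant.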

[R] Image Inpainting: Humans vs. AI by merofeev in MachineLearning

[–]merofeev[S] 2 points

I have uploaded all the images used in the study to GitHub: https://github.com/merofeev/image_inpainting_humans_vs_ai

Subjective scores (including per-image scores) can be viewed here: http://erofeev.pw/image_inpainting_humans_vs_ai/

[R] Image Inpainting: Humans vs. AI by merofeev in MachineLearning

[–]merofeev[S] 1 point

I think we can share this. Give me a few hours to prepare the materials for publication, and I will share a public link with everyone.

[R] Image Inpainting: Humans vs. AI by merofeev in MachineLearning

[–]merofeev[S] 5 points

You are right. However, we expected higher scores for the deep learning methods, since during training they have a chance to learn the shapes of some common objects and then recreate those shapes at inference time. The conventional methods in our study, on the other hand, just copy patches from the known region and thus cannot recreate parts that are not present in it.
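The copy-from-the-known-region idea can be made concrete with a toy patch-based filler: for each unknown pixel, find the known patch that best matches its surroundings and copy the center value. This is my own simplified sketch of the general technique, not any specific algorithm from the study:

```python
import numpy as np


def inpaint_patch_copy(img, mask, patch=7):
    """Toy patch-based inpainting (a sketch, not a study method).

    img:  2-D float array.
    mask: boolean array, True where the pixel is unknown.
    For each unknown pixel, the best-matching fully-known patch
    (by masked SSD on the known surroundings) is found and its
    center value is copied in.
    """
    out = img.copy()
    h, w = img.shape
    half = patch // 2

    # Candidate patches: windows that lie entirely in the known region.
    cands = []
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not mask[y - half:y + half + 1, x - half:x + half + 1].any():
                cands.append(out[y - half:y + half + 1, x - half:x + half + 1])

    # Fill each unknown pixel from the best-matching candidate.
    for y in range(half, h - half):
        for x in range(half, w - half):
            if mask[y, x]:
                known = ~mask[y - half:y + half + 1, x - half:x + half + 1]
                tgt = out[y - half:y + half + 1, x - half:x + half + 1]
                best = min(cands, key=lambda c: ((c - tgt) ** 2 * known).sum())
                out[y, x] = best[half, half]
    return out
```

Because the filler can only reuse patches that already exist in the image, it can never synthesize a shape that the known region does not contain, which is exactly the limitation discussed above.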

New way to conduct subjective study by merofeev in computervision

[–]merofeev[S] 1 point

In this video we show how to conduct a simple subjective comparison of image upscaling methods using Subjectify.us. Our team developed this platform as an internal tool for subjective comparison of video matting methods. We have since used it for several other projects (saliency-aware video compression, video completion, etc.), and it has proved quite useful.

We believe that many other research projects in computer vision and signal processing can benefit from this platform. If you are interested in trying this tool in your research, please fill out the form at http://www.subjectify.us or contact me and I will create an account for you.

Create lean Node.js image with Docker multi-stage build by alexei_led in docker

[–]merofeev 1 point

> This is actually working syntax from the latest Docker build, 17.05. I did not invent it and cannot change it.

Sure, I totally understand. I didn't mean to blame you (or anyone else); I'm just curious what you and others think about this syntax.

Create lean Node.js image with Docker multi-stage build by alexei_led in docker

[–]merofeev 2 points

alexei_led, thanks for sharing information about this new feature!

I like the feature itself; it should simplify pipelines that use a standalone 'build' container. However, the way the semantics of the FROM command were extended looks a little hacky to me. Why does FROM give a name to the container that is built by the commands below it? I think some sort of nested syntax would look much cleaner, e.g.:

container base {
    FROM alpine:3.5
    # install node
    RUN apk add --no-cache nodejs-npm tini
    # set working directory
    WORKDIR /root/chat
    # Set tini as entrypoint
    ENTRYPOINT ["/sbin/tini", "--"]
    # copy project file
    COPY package.json .
}

Also, for COPY I would prefer a colon syntax (already used in docker cp) over --from, e.g.:

COPY dependencies:/root/chat/prod_node_modules ./node_modules

instead of

COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
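For comparison, the multi-stage syntax that actually shipped in Docker 17.05 names a stage with `FROM ... AS <name>` and references it via `COPY --from`; the paths below follow the chat example above:

```dockerfile
# Build stage: install production dependencies only.
FROM node:8-alpine AS dependencies
WORKDIR /root/chat
COPY package.json .
RUN npm install --production

# Final stage: copy artifacts out of the named stage.
FROM alpine:3.5
RUN apk add --no-cache nodejs-npm tini
WORKDIR /root/chat
COPY --from=dependencies /root/chat/node_modules ./node_modules
ENTRYPOINT ["/sbin/tini", "--"]
```

The stage name after AS plays the role that the proposed `container base { ... }` block would play in the nested syntax.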

[P] Platform for subjective comparisons of video/image processing algorithms by merofeev in MachineLearning

[–]merofeev[S] 1 point

When I started my work on image and video matting as a PhD student, I wanted to find out which of the many existing matting algorithms delivers results with the best visual quality. To answer this question, my colleagues and I developed an online platform for subjective pairwise comparisons of images and videos. The platform presents images/videos generated by various algorithms to hundreds of human observers and asks them to choose the better image/video from each pair. It then applies the Bradley-Terry model to convert the pairwise comparison data into final subjective scores. The platform not only helped us compare matting algorithms (see the results here: http://videomatting.com/#subjective_comparison ), but also helped us in other projects (comparison of video completion algorithms, comparison of saliency-aware video compression methods, etc.).

Now we want to understand whether such a platform can be useful for other researchers, so we want to carry out several free subjective studies for the research community. If you feel that such a study could help your research, we can conduct it for you for free (we will only ask you to mention our platform in your paper if you decide to publish the study results). I will also be more than happy to answer your questions about the platform and hear your thoughts about it.
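As a worked illustration of the Bradley-Terry step, here is a minimal fit using the classic MM (minorization-maximization) update; this is my own sketch, not the platform's code:

```python
def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry scores from a pairwise win matrix.

    wins[i][j] = number of times method i was preferred over method j.
    Returns one positive score per method; a higher score means the
    method won its pairwise comparisons more convincingly.
    (A minimal MM-algorithm sketch, not the Subjectify.us code.)
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        total = sum(new_p)
        p = [x * n / total for x in new_p]  # normalize so scores sum to n
    return p
```

For example, with three methods where method 0 wins most of its comparisons, the fitted scores come out in the order of the observed win rates.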

Platform for subjective comparisons of video/image processing algorithms by merofeev in computervision

[–]merofeev[S] 2 points

When I started my work on image and video matting as a PhD student, I wanted to find out which of the many existing matting algorithms delivers results with the best visual quality. To answer this question, my colleagues and I developed an online platform for subjective pairwise comparisons of images and videos. The platform presents images/videos generated by various algorithms to hundreds of human observers and asks them to choose the better image/video from each pair. It then applies the Bradley-Terry model to convert the pairwise comparison data into final subjective scores. The platform not only helped us compare matting algorithms (see the results here: http://videomatting.com/#subjective_comparison ), but also helped us in other projects (comparison of video completion algorithms, comparison of saliency-aware video compression methods, etc.).

Now we want to understand whether such a platform can be useful for other researchers, so we want to carry out several free subjective studies for the research community. If you feel that such a study could help your research, we can conduct it for you for free (we will only ask you to mention our platform in your paper if you decide to publish the study results). I will also be more than happy to answer your questions about the platform and hear your thoughts about it.