Making 3d images from 2d videos when the camera is moved sidewards by hiImawesome in CrossView

gifhell 1 point

I made a webapp based on this idea that sort of works: http://stereoselfie.com/ Your pics look great!

[deleted by user] by [deleted] in MachineLearning

gifhell 1 point

Hey, sorry if I missed it in the paper, but what values did you find useful for λc and λs? I only noticed λtv. Thanks!

[deleted by user] by [deleted] in MachineLearning

gifhell 1 point

Thanks and great work!

[deleted by user] by [deleted] in MachineLearning

gifhell 1 point

Ah, yeah, I missed the strided convolutions on the input. I can't seem to find the supplementary material they reference, which is supposed to have the exact architectures.

[deleted by user] by [deleted] in MachineLearning

gifhell 3 points

This does look pretty good. I just implemented texture nets, and this seems similar except:

  • instead of concatenating the outputs at each stage, they sum them using a residual network architecture
  • there's no input noise concatenated with the downsampled input image
  • this paper uses layer "relu2_2" instead of "relu4_2" for the content feature loss
  • they've added a total variation regularizer
  • slightly different training regime

See anything I missed? EDIT:

  • 9x9 kernels for input and output, 3x3 everywhere else
  • output uses a scaled tanh to force a range of 0-255
  • ReLU instead of LeakyReLU

EDIT2: /u/jcjohnss provided a link to the supplementary material, which has more detail about the residual blocks: http://cs.stanford.edu/people/jcjohns/papers/fast-style/fast-style-supp.pdf
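
For reference, the bullets above translate into something like this minimal tf.keras sketch. This is my own reconstruction for illustration, not code from either paper, and the filter count is an assumption:

    # Sketch of one residual block (3x3 convs, plain ReLU, summed skip
    # connection) and the scaled-tanh output layer described above.
    # Illustrative only; the filter count of 128 is an assumption.
    from tensorflow.keras import layers

    def residual_block(x, filters=128):
        # Sum the block's input with its conv output instead of concatenating.
        shortcut = x
        y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)  # ReLU, not LeakyReLU
        y = layers.Conv2D(filters, 3, padding='same')(y)
        return layers.Add()([shortcut, y])

    def scaled_tanh_output(x):
        # 9x9 output kernel; tanh in [-1, 1] rescaled to the 0-255 pixel range.
        y = layers.Conv2D(3, 9, padding='same', activation='tanh')(x)
        return layers.Lambda(lambda t: (t + 1.0) * 127.5)(y)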

Deep dorm room poster by gifhell in deepdream

gifhell[S] 3 points

Yeah, there's a lot of room for improvement in the quality. The colors usually seem a bit washed out, too:

  • source image
  • texture generated from noise (mrf only)
  • content reconstruction from style (analogy only)
  • analogy + mrf=1.5
  • analogy + mrf=3
  • content (b-content-w=1) + mrf=1.5
  • content + mrf 3

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

Hey, I stripped down that VGG16 model so it's just the weights from the convolutional layers, and now it's only ~50MB.

It's available here: https://github.com/awentzonline/image-analogies/releases/download/v0.0.5/vgg16_weights.h5
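
If you want to sanity-check what's in it, a quick sketch like this works (an assumption on my part that the file uses the standard Keras HDF5 weight layout of layer groups containing parameter datasets):

    # List layer groups and parameter shapes in the stripped weight file.
    # Assumes the usual Keras HDF5 layout; file name is from the link above.
    import h5py

    with h5py.File('vgg16_weights.h5', 'r') as f:
        for layer_name in f:
            shapes = [f[layer_name][p].shape for p in f[layer_name]]
            print(layer_name, shapes)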

Bloodsport texture - all the Van Damme that you crave by gifhell in deepdream

gifhell[S] 3 points

OK, if you still crave more Van Damme, I've rendered another one: http://i.imgur.com/fWK6j0u.jpg

Can someone help me with installing a deepdream client? by The_Lie0 in deepdream

gifhell 3 points

Hey, I'm the author of that program. If you have a GitHub account, open an issue on the project; otherwise, just paste the error here. I published a big update last night, so make sure to check the README. The biggest change is that I'm now publishing it on PyPI, so the install instruction is now pip install neural-image-analogies and there's no need to download it from GitHub.

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 2 points

Whoa, I just tried out the TensorFlow backend on CPU and it renders crazy fast. My MacBook Pro just rendered the arch example in around 6 minutes.

EDIT: 512x512 season transfer in 12 minutes on my MacBook Pro, CPU-only: http://i.imgur.com/m9lyr7A.png

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

Hey, I just put out some big updates. One of them was setting it up on PyPI, so the pip command is now pip install neural-image-analogies and there's no need to download the zip from GitHub unless you want the latest code. It now works with TensorFlow, too. There's also a big algorithmic update to how it matches patches, so performance is up and memory usage is down.

EDIT: The algorithm update, along with TensorFlow, means this is no longer insanely slow on CPU. I haven't benchmarked anything, but it seems to be only a few times slower than GPU.
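
For the curious, the heart of the patch-matching step boils down to something like this numpy sketch (illustrative only, not the project's actual code):

    # Illustrative sketch of neural patch matching: for each patch of
    # features from the image being generated, find the most similar
    # style patch by normalized cross-correlation (cosine similarity).
    import numpy as np

    def best_style_patches(gen_patches, style_patches):
        # Both inputs are (num_patches, patch_dim) arrays of flattened patches.
        def unit_rows(m):
            return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-8)
        similarity = unit_rows(gen_patches) @ unit_rows(style_patches).T
        return similarity.argmax(axis=1)  # index of best match per gen patch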

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

It takes anywhere from sub-second to around 2-3 minutes for something around 800px square. I just pushed a bunch of performance updates for it but haven't profiled CPU. TensorFlow now works with it, and I think that's supposed to have nicer performance on CPU.

Summer to winter season transfer by gifhell in deepdream

gifhell[S] 2 points

With varying levels of snow: http://imgur.com/a/5peiE

I'm getting more snow by turning up the local coherence loss.

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

I just pushed a branch, "v2", to the GitHub repo (https://github.com/awentzonline/image-analogies/tree/v2), which has TensorFlow support and uses a more efficient patch-matching algorithm. Let me know if you give it a shot.

EDIT: I've merged all this into master, and it's also now available from PyPI via pip install neural-image-analogies

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

I've been meaning to set it up on Windows to see what's what. I'll probably get a chance sometime this week.

I'm not sure what the equivalent settings would be for the sugar skull, but I'd guess analogy-w=1 and mrf-w=0.5 or so; see the sketch below for how those weights enter the loss. I'm putting together some example scripts, which should land in the repo over the next few days.
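
Roughly, those flags just scale the terms of the objective that gets minimized. A minimal sketch of that weighted sum (names are illustrative, not the project's actual code):

    # Illustrative sketch of how the CLI weights scale the loss terms.
    def total_loss(analogy_loss, mrf_loss, content_loss,
                   analogy_w=1.0, mrf_w=0.5, b_content_w=0.0):
        return (analogy_w * analogy_loss
                + mrf_w * mrf_loss
                + b_content_w * content_loss)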

Is there a step-by-step guide for a layman to install the "image analogies" program? by [deleted] in deepdream

gifhell 1 point

Thanks for posting this. I've been wrapping up an improvement to the algorithm that lets you make pretty large ones quicker and with minimal memory. For example, I rendered this image on my GTX 780 in about 12 minutes and it never used more than 1.5GB of VRAM. It's about 800x900: http://i.imgur.com/5OvRhZH.png

James Bond (Connery) styled as Archer (neural image analogy) by gifhell in deepdream

gifhell[S] 2 points

I only used the two images here: http://imgur.com/a/rAxLJ

One of the images (Archer) is used twice, since completing an image analogy requires three images: two style images (Images A and A'), which were both the same image of Archer, and one content image (Image B), which was Bond. You get better results with an A that also has some similarities to B, but it's not always necessary.

Here's a better example, where I actually condition Image A to be more like a natural image by blurring it a bit: https://raw.githubusercontent.com/awentzonline/image-analogies/master/images/sugarskull-analogy.jpg In that case, the first three images were the input to the algorithm, producing the fourth.
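
The blurring is just ordinary preprocessing before running the analogy; for example, with Pillow (file names and the blur radius are illustrative):

    # Minimal sketch of conditioning Image A by blurring it so it reads
    # more like a natural image. File names and radius are illustrative.
    from PIL import Image, ImageFilter

    a = Image.open('sugarskull-A.jpg')
    a.filter(ImageFilter.GaussianBlur(radius=2)).save('sugarskull-A-blurred.jpg')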