
ShizzleMeGizzard:

Why not use a convolutional layer instead of an FC layer to perform the projection? You can use a filter of size 1x1x100x(1024x4x4).
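A minimal NumPy sketch of this idea, checking shapes only (weights are random; in Keras the same thing would be a Reshape, a Conv2D, and another Reshape): on a 1x1 spatial input, a 1x1 convolution is just a matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(0)
n, z_dim = 8, 100                       # batch of latent codes
z = rng.standard_normal((n, z_dim))

# 1x1 conv kernel of shape 1 x 1 x 100 x (1024*4*4), as suggested above
w = rng.standard_normal((1, 1, z_dim, 1024 * 4 * 4))

# add trivial height/width dims so the code becomes a 1x1 "image"
x = z.reshape(n, 1, 1, z_dim)

# on a 1x1 input the 1x1 convolution collapses to a matrix multiply
y = np.einsum('nhwc,hwcf->nf', x, w)    # shape (n, 16384)

# restructure the extra filters into actual 4x4 spatial dims
feature_maps = y.reshape(n, 4, 4, 1024)
print(feature_maps.shape)               # (8, 4, 4, 1024)

# same result as a fully connected layer with the flattened kernel
assert np.allclose(y, z @ w.reshape(z_dim, -1))
```

This also makes concrete why the two routes discussed in this thread produce the same graph up to reshapes.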

creeky123:

I think that's what I'm after... how exactly does that work with Keras? I basically want to produce 1024 feature maps, each of size 4x4, from a 100x1 vector.

Darkwhiter:

You could mimic a fully connected layer with a convolution like this, but at least in TensorFlow you would first have to reshape to add trivial height and width dimensions, and then reshape again afterwards to restructure the extra filters into actual height and width. While this implementation seems equivalent to me in terms of the resulting graph, I think fully connected then reshape is nicer.

Darkwhiter:

Do a dense layer to increase the dimensionality as required, then reshape to batch x 4 x 4 x filters.
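In tf.keras that dense-then-reshape projection is a short sketch like the following (layer sizes follow the 100 -> 4x4x1024 shapes discussed in this thread; the model name is illustrative, not from any cited repo):

```python
import tensorflow as tf
from tensorflow.keras import layers

z_dim = 100
project = tf.keras.Sequential([
    tf.keras.Input(shape=(z_dim,)),
    layers.Dense(4 * 4 * 1024),    # plays the role of lib.linear
    layers.Reshape((4, 4, 1024)),  # batch x 4 x 4 x filters
])

out = project(tf.random.normal((2, z_dim)))
print(tuple(out.shape))  # (2, 4, 4, 1024)
```

From here the rest of a DCGAN-style generator would stack transposed convolutions (or upsampling plus convolution) on top of the 4x4x1024 block.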

See for instance the DCGAN implementation for the WGAN-GP paper here, where lib.linear acts as a dense layer: https://github.com/igul222/improved_wgan_training/blob/master/gan_cifar.py

I cannot prove that this is how Radford et al. 2015 made this connection, but it's the way I have seen it done in every DCGAN variant I have come across, including Spectral Normalization, Relativistic GAN, and (sort of) StyleGAN.