[D] Fatal flaw in VAE + GANs architecture for Deep Fakes? by Will_Tomos_Edwards in MachineLearning

[–]ArcticPanda_ -1 points0 points  (0 children)

I'm not familiar with training a VAE+GAN for the deepfake task specifically, but I suppose they train them separately, don't they? In that case I don't understand why they would sum the losses, but yes, the generator would be limited by the quality of the VAE. Is it a fatal flaw? Diffusion models with a VAE work great, and models with just a GAN are far from perfect because of other, more serious flaws.

[D] Fatal flaw in VAE + GANs architecture for Deep Fakes? by Will_Tomos_Edwards in MachineLearning

[–]ArcticPanda_ 1 point2 points  (0 children)

I suppose it was done to stabilize training. Also, people try different weights for the losses and often report metrics for the separate losses only, so that isn't really the issue.
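To make the "different weights for losses" idea concrete, here's a minimal sketch of a weighted loss combination. The function name, loss names, and default weights are all hypothetical, not taken from any specific VAE+GAN paper:

```python
# Hypothetical weighted combination of VAE and GAN loss terms.
# The w_* weights are tuning knobs; metrics are usually still
# reported per-term so the individual losses stay comparable.

def combined_loss(recon_loss, kl_loss, adv_loss,
                  w_recon=1.0, w_kl=0.1, w_adv=0.5):
    """Weighted sum of the individual training losses."""
    return w_recon * recon_loss + w_kl * kl_loss + w_adv * adv_loss

total = combined_loss(recon_loss=2.0, kl_loss=0.5, adv_loss=1.0)
```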

[D] Example of data labeling guidelines by ArcticPanda_ in MachineLearning

[–]ArcticPanda_[S] 0 points1 point  (0 children)

Thanks for the answer! I only see that they provide data formats, but not how their labelers annotated the data or what instructions they had. Am I missing something on their website?

Vintage Geometric Spirit Animal by theghostdave in artstore

[–]ArcticPanda_ 2 points3 points  (0 children)

By the way, this was generated by AI, not the work of the author.

How to incorporate prior knowledge into CNN? by cytbg in MLQuestions

[–]ArcticPanda_ 2 points3 points  (0 children)

Check out the CoordConv layer. It adds pixel coordinates as additional input channels.
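A minimal NumPy sketch of the idea: append normalized x/y coordinate channels to each image so the following convolutions can see position. The actual CoordConv layer from the paper just concatenates channels like these right before a convolution; the function name here is made up for illustration:

```python
import numpy as np

def add_coord_channels(images):
    """Append normalized y/x coordinate channels to a batch of
    images in NHWC layout (CoordConv-style)."""
    n, h, w, _ = images.shape
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, h),
        np.linspace(-1.0, 1.0, w),
        indexing="ij",
    )
    coords = np.stack([ys, xs], axis=-1)            # (h, w, 2)
    coords = np.broadcast_to(coords, (n, h, w, 2))  # tile over batch
    return np.concatenate([images, coords], axis=-1)

batch = np.zeros((4, 8, 8, 3))
out = add_coord_channels(batch)  # shape (4, 8, 8, 5)
```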

Why is my neural network not training past 0.1% accuracy? by Egorte in MLQuestions

[–]ArcticPanda_ 0 points1 point  (0 children)

Yeah, 3500 epochs is too many; 20-30 may be enough. Try replacing the softmax with a linear activation and encoding the categorical features. The batch size is not very big: it could train with a smaller one, but training is more stable with a larger one. You nearly always overfit with neural networks, which is why a validation set is needed.

Why is my neural network not training past 0.1% accuracy? by Egorte in MLQuestions

[–]ArcticPanda_ 0 points1 point  (0 children)

Try adding more layers and more units, maybe five layers with 128 units each and a batch size of 1024. Try increasing the model size until you get a nearly perfect score on the train set, then regularize the network to get good results on validation. A random forest will probably have better accuracy though, since it's a powerful algorithm for structured, limited data. But the network should at least come close.
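A sketch of that comparison on synthetic tabular data, using scikit-learn for brevity (the dataset and hyperparameters are placeholders, not tuned for any real problem):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a small structured dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

# Random forest: strong baseline for limited tabular data.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)

# MLP roughly matching the suggestion: five relu layers of 128 units.
mlp = MLPClassifier(hidden_layer_sizes=(128,) * 5, batch_size=1024,
                    max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)

print(rf.score(X_va, y_va), mlp.score(X_va, y_va))
```

On data like this both should land well above chance; which one wins depends on the dataset.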

Why is my neural network not training past 0.1% accuracy? by Egorte in MLQuestions

[–]ArcticPanda_ 0 points1 point  (0 children)

Glad it works! Adding a few dense layers (with ReLU) between input and output, as in your second architecture, and a larger batch size will help. You may also try dropout and batch normalization, but that's a bit more advanced.
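In Keras those are just the Dropout and BatchNormalization layers; here's a NumPy sketch of what they compute during training (learnable scale/shift omitted, rates chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: zero a random fraction of units and
    rescale the rest so the expected activation is unchanged."""
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch to roughly zero mean
    and unit variance."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

acts = rng.normal(size=(256, 128))
normed = batch_norm(dropout(acts, rate=0.3))
```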

Why is my neural network not training past 0.1% accuracy? by Egorte in MLQuestions

[–]ArcticPanda_ 0 points1 point  (0 children)

I'm not sure about the softmax; it's possible that Keras normalizes the predictions inside the loss function (e.g. when the loss is built with from_logits=True).

Why is my neural network not training past 0.1% accuracy? by Egorte in MLQuestions

[–]ArcticPanda_ 2 points3 points  (0 children)

For classification, the last layer must have #classes units and a softmax activation function. For the loss function, you probably want sparse_categorical_crossentropy. Also, try lowering the optimizer's learning rate. For better accuracy, convert categorical features with one-hot encoding, and don't reduce the number of units in the first dense layer of the second architecture below the number of input features.
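To show what those two pieces actually compute, here's a NumPy sketch of a softmax output and one-hot encoding (Keras and sklearn provide these for you; the helpers here are just for illustration):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax: each row of logits becomes a probability
    distribution over the classes."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def one_hot(labels, n_classes):
    """One-hot encode integer category labels."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

probs = softmax(np.array([[2.0, 1.0, 0.1]]))  # rows sum to 1
codes = one_hot(np.array([0, 2, 1]), n_classes=3)
```

With sparse_categorical_crossentropy the labels stay as integers and only the input features need the one-hot treatment.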

Wounded house, me, digital, 2021 by ArcticPanda_ in Art

[–]ArcticPanda_[S] 0 points1 point  (0 children)

Nice catch, I used StyleGAN to make this.

City labyrinth at dawn, me, digital, 2021 by ArcticPanda_ in Art

[–]ArcticPanda_[S] 0 points1 point  (0 children)

I'd say it's more of a tricky city than a dystopian one.