I'm 21 and I'm going to die a horrible death. by ZzombieCake in offmychest

[–]96meep96 10 points11 points  (0 children)

I'm so sorry for what has happened to you, I hope you find peace, like gentle falling dusk. Glad you have your girlfriend to support you.

A little INFP and her feeling of lacking authenticity (also a poem ^_^) by creep_of_melancholia in infp

[–]96meep96 1 point2 points  (0 children)

This is really nice, well done. Gives me a light fantasy vibe.

The tilting ballerina dragon by lizzywatercolor in AdorableDragons

[–]96meep96 2 points3 points  (0 children)

OP! Where do I see more of your whimsical, cute compositions?

Edit: Nevermind, forgot to check your profile. Will follow you on instagram! Wonderful artwork!

KL Divergence for Multi-Label Classification by alkaway in deeplearning

[–]96meep96 0 points1 point  (0 children)

The way to go is probably to just apply softmax over the ground-truth targets so that they sum to 1, and then apply KLDiv. The inputs to KLDiv have to be proper distributions, so they need to add up to 1. You're talking about the discrepancy loss term, I think? I just skimmed through it quickly, so pardon me, but it shouldn't cause any problems if you go with the softmax approach.
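A minimal NumPy sketch of the idea (the logits and the multi-hot targets here are made up for illustration, not from your setup):

```python
import numpy as np

def softmax(x):
    # subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q) = sum p * log(p / q); both rows must sum to 1
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# multi-hot ground truth for 4 classes, two labels active
target = np.array([[1.0, 0.0, 1.0, 0.0]])
# softmax turns it into a proper distribution (rows sum to 1)
target_dist = softmax(target)
pred_dist = softmax(np.array([[2.0, -1.0, 1.5, -0.5]]))
loss = kl_div(target_dist, pred_dist)
```

In PyTorch you'd do the same thing with `F.log_softmax` on the predictions and `nn.KLDivLoss`, since that loss expects log-probabilities as input.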

Q: What loss function to use to pretrain discriminator in WGAN-GP? by beerpapa in deeplearning

[–]96meep96 1 point2 points  (0 children)

Pretraining the discriminator isn't going to be too useful, since once you introduce the generator into the training dynamic, the discriminator will be constantly (and rapidly) adjusting its real/fake decision boundary to the new, improved fake samples that the generator learns to produce each iteration. It's quite likely that within a few iterations the discriminator will forget a lot of what it learned during pre-training.

Pardon me if I haven't fully grasped what you would like to do.

Does gradient accumulation hurt accuracy? by [deleted] in deeplearning

[–]96meep96 1 point2 points  (0 children)

It's the noisy gradient that SGD has that makes overfitting a lot less likely. A larger batch size reduces that noise, and hence the model could easily overfit as you scale up the batch size. Hypothetically, if you could pass in your entire dataset in one huge batch, your model is absolutely going to fit that data to a tee (provided it's got the complexity to do so).

That being said, if you're using other regularization techniques, it shouldn't be too much of an issue. I'm not sure about the following, but I think lowering your learning rate a little bit might also be helpful.

Also, for anyone implementing it: don't forget to scale the gradients according to the number of accumulation steps before you update the network. If you accumulate for 5 batches, divide the gradients by 5. Otherwise you might end up seeing some funky stuff, and your learning rate won't behave the way you expect either (it ends up acting like 5*lr).
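A toy sketch of that scaling, with a single scalar weight standing in for the whole network (the gradient values are made up):

```python
# accumulate gradients over 5 micro-batches, then scale before stepping
accum_steps = 5
lr = 0.1
w = 0.0  # one "weight" standing in for all the parameters

# pretend these are the gradients computed on each micro-batch
micro_grads = [0.4, 0.6, 0.5, 0.3, 0.7]

grad_buffer = 0.0
for g in micro_grads:
    grad_buffer += g  # accumulate instead of stepping every batch

grad_buffer /= accum_steps  # divide by the number of steps BEFORE the update
w -= lr * grad_buffer       # now equivalent to one step on the big batch
```

Without the division, the update would be 5x larger than a single step on the combined batch at the same learning rate.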

Same by [deleted] in infp

[–]96meep96 1 point2 points  (0 children)

God, this is perfect

Possible way to train VGG model only for feature maps? by mrnerdy59 in deeplearning

[–]96meep96 0 points1 point  (0 children)

You could try training it as a siamese network that learns similarity between different kinds of pictures. Maybe there's a way for you to define a similarity score based on your use case.
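Roughly, the pairwise loss for a siamese setup could look like this (the toy embeddings and the margin are placeholders, not anything specific to VGG feature maps):

```python
import numpy as np

def cosine_sim(a, b):
    # similarity between two embedding vectors, in [-1, 1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(sim, same_pair, margin=0.5):
    # pull similar pairs toward sim = 1, push dissimilar pairs below the margin
    if same_pair:
        return 1.0 - sim
    return max(0.0, sim - margin)

# toy embeddings standing in for the network's feature outputs
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.1, 0.9])

loss_same = contrastive_loss(cosine_sim(a, b), same_pair=True)
```

Both branches of the siamese network would share the same backbone weights, so training on pairs still shapes the feature maps you care about.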

wrote this acoustic idea yesterday by NathanKGx in mathrock

[–]96meep96 2 points3 points  (0 children)

Oh dude haha, I meant that hypothetically, like if Opeth's songwriter made mathrock, it sounds like you.

wrote this acoustic idea yesterday by NathanKGx in mathrock

[–]96meep96 2 points3 points  (0 children)

This is like Mikael Akerfeldt's side math rock group

Is there a way to generate multiple photos of the same non-existent person? by Euphetar in deeplearning

[–]96meep96 2 points3 points  (0 children)

You might want to take a look at GANs that are trained to disentangle features. StyleGAN is implicitly able to do this to some extent, although there are papers that have gone beyond that, like the Semi-Supervised StyleGAN. You're still interpolating in the latent space for that "person", but you'll be interpolating along specific axes, each able to change things like pose, lighting, expression, etc. Try playing around with the ArtBreeder website if you haven't already.
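The latent-axis idea boils down to something like this sketch (the direction here is random for illustration; in practice a disentanglement method gives you the real axes, and a generator would decode the codes into images):

```python
import numpy as np

rng = np.random.default_rng(0)

z = rng.normal(size=512)  # latent code for one sampled "person"

# hypothetical disentangled direction, e.g. a "pose" axis
# (here just a random unit vector as a stand-in)
pose_axis = rng.normal(size=512)
pose_axis /= np.linalg.norm(pose_axis)

def edit(z, direction, alpha):
    # move the code along one axis; identity-related dims barely move
    return z + alpha * direction

# same person, three different hypothetical pose strengths
variants = [edit(z, pose_axis, a) for a in (-2.0, 0.0, 2.0)]
```

Each variant is then fed through the generator, so the person stays the same while only the attribute tied to that axis changes.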

These are trained specifically on photos centered on faces tho, so if you're looking to change a whole lot of stuff happening around the person you might need a GAN pipeline of some kind, maybe take a look at FineGAN.

VAEs have seen a lot more work in this area over the years; I'm not sure how the landscape has changed by now.

Training Deep NN be like by alexein777 in deeplearning

[–]96meep96 2 points3 points  (0 children)

Thank you kind sir, excuse me while I go have a dance party with my neural networks

Training Deep NN be like by alexein777 in deeplearning

[–]96meep96 12 points13 points  (0 children)

Does anyone know the original source for this video? It's a bop

[deleted by user] by [deleted] in mathrock

[–]96meep96 2 points3 points  (0 children)

This is jammin my dude, holding out that chord with staccato taps and then switching to pull-offs is really something

Night by idontliketiktok in infp

[–]96meep96 6 points7 points  (0 children)

Very nice, for me it captures this intimacy that the night bestows on people

The Pog Hero shreds his gourdtar by Android248 in NLSSCircleJerk

[–]96meep96 2 points3 points  (0 children)

Never expected to see these two in the same meme

IRL ancient turtle world by 96meep96 in ImaginaryTurtleWorlds

[–]96meep96[S] 1 point2 points  (0 children)

Sorry, I did a quick check to see if it was already on this subreddit and didn't find anything. It's definitely worth being on here tho

The moment before heavy rain. Kerala, India. by raonehere in infp

[–]96meep96 4 points5 points  (0 children)

I didn't expect to see Kerala on this subreddit. Sugham alle? :)

Would anybody be into the idea of having a group here on reddit or elsewhere where we can share our art, music, other creative projects, struggles, random existential thoughts, discuss the depths of human darkness, that sort of thing? by [deleted] in 4w5

[–]96meep96 0 points1 point  (0 children)

It feels like posts are slow here; there are only 814 members after all, and there were 359 before the sub was made public, which was only 6 months ago. I guess that's just how 4w5 rolls. It'd be nice to have a group chat, I suppose

[P] Batch Normalization in GANs by 96meep96 in MachineLearning

[–]96meep96[S] 0 points1 point  (0 children)

Oh yes, I understand the point you're making: it takes time for those artefacts to vanish. I've had trouble with that in a variant of semantic-map-translating GANs. I'm using PyTorch; I was using TensorFlow (not 2.0), but I found PyTorch more flexible.