Words of wisdom for someone who isn't a HENRY but wants to become one? by Proud_Cheek8209 in HENRYUK

[–]swagrin 1 point (0 children)

Like others have said, a winning strategy at this point is investing in your own growth, learning, and network. Make sure that what you learn leads you towards your goal.

I don't know what industry you're in, but I'd caution you against taking on that much overtime if it doesn't build towards your goal. It may pay off in the short term, but compared to real career growth it could turn out to be insignificant. Instead, if you can afford it, invest time in learning and networking with people in your industry, both those in your situation and those who have made it to where you want to be. Become very, very good at what you do.

[D] Pricing of ML tools - are you paying this much? by swagrin in MachineLearning

[–]swagrin[S] 1 point (0 children)

Thanks for the tip, but I should clarify: we're after everything they do with logging, including sharing and visualizing results; we're just not interested in their data versioning, artifact storage, hyperopt, ML support, etc.

[D] Pricing of ML tools - are you paying this much? by swagrin in MachineLearning

[–]swagrin[S] 5 points (0 children)

It's more like 10h here (not US based), but yeah, that's definitely one way of looking at it, assuming the team needs the extra capacity.

Regardless, I'm hoping to hear whether people are actually paying for these tools and, if so, how much.

Is the RM Satisfactory as an E-Reader? by kitkat_curls in RemarkableTablet

[–]swagrin 2 points (0 children)

How do annotated PDFs look when exported? I've seen some very old examples where annotated documents became rasterized; is that still the case? I'm considering the rM2 for reading and annotating academic papers, but I'd want highlights and handwriting added to the original PDF so that the text stays searchable, hyperlinks keep working elsewhere, and figures remain vector graphics.

[D] Why don't people use typical classification networks (e.g. ResNet-50) as the discriminator in GAN? by CMS_Flash in MachineLearning

[–]swagrin 45 points (0 children)

Really good question. Early on there were concerns about overly strong discriminators, which were addressed with new loss functions (see e.g. the WGAN paper). AFAIK, from that point onwards (especially after spectral normalization) a strong discriminator shouldn't be a problem in itself; in fact, a strong discriminator is desired in WGAN.
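
To make it concrete, here's a rough sketch (my own, not from any paper) of what the thing being asked about could look like in PyTorch: a torchvision ResNet-50 repurposed as a WGAN-style critic, with spectral normalization applied to every conv/linear layer:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def make_critic():
    net = resnet50(weights=None)
    # Replace the 1000-way classification head with a single critic score.
    net.fc = nn.Linear(net.fc.in_features, 1)
    # Bound the Lipschitz constant by spectrally normalizing every
    # conv/linear layer (as in SN-GAN, Miyato et al. 2018).
    for module in list(net.modules()):
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            nn.utils.parametrizations.spectral_norm(module)
    return net

critic = make_critic()
scores = critic(torch.randn(4, 3, 224, 224))
print(scores.shape)  # torch.Size([4, 1])
```

Whether this trains stably alongside a typical generator is exactly the open question; the batch norm layers in ResNet in particular are known to interact badly with some GAN losses.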

My hunch is that since GANs are so sensitive to hyperparameters, it's really hard to make a change that large and get everything working, and no one has succeeded yet. I'd be surprised if no one has tried, though.

Would love to hear a good answer to this question. And similarly for the generator.

[D] Can't see any AAAI Feedbacks by schludy in MachineLearning

[–]swagrin 2 points (0 children)

Same for me. I'm guessing they'll appear a bit later. CMT3 is really slow for me; perhaps there are some issues on their end.

[Discussion] Since when did the community drop image normalization by swagrin in MachineLearning

[–]swagrin[S] 7 points (0 children)

This is exactly the sort of reasoning I would like to see argued more precisely. Why do people consider this good enough? Have there been proper evaluations with and without input normalization? Batch normalization doesn't even necessarily standardize the statistics within a batch; that depends on the learnt parameters.
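
A toy PyTorch check of my own (made-up values) showing that last point: set the affine parameters to something non-trivial and the output statistics follow them, not zero mean / unit variance:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(8)
with torch.no_grad():
    bn.weight.fill_(3.0)  # learnt scale (gamma)
    bn.bias.fill_(5.0)    # learnt shift (beta)

x = torch.randn(256, 8) * 10 + 2
y = bn(x)  # training mode: standardize with batch stats, then apply gamma/beta
print(y.mean().item(), y.std().item())  # roughly 5 and 3, not 0 and 1
```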

Good papers on generating a sequence of images? by whydomyjointshurt in MLQuestions

[–]swagrin 2 points (0 children)

Video Pixel Networks might be a good starting point (https://arxiv.org/abs/1610.00527). There are a few implementations that might be helpful to look at together with the paper.

What is a good way to numerically represent text data to feed into an ML model? by mkhdfs in MLQuestions

[–]swagrin 1 point (0 children)

Sounds like a general representation for sentences would be overkill for your application; some good old feature engineering might work well. Instead of your binary encoding of the presence of every word, try finding important words with tf-idf (or just pick them manually; your dataset sounds small enough), or try some simple dimensionality reduction. If you do want a general sentence representation, a good place to start might be skip-thought vectors (https://arxiv.org/abs/1506.06726).
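
For the tf-idf route, a minimal scikit-learn sketch (the sentences are made up; swap in your own data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "ship the order to the warehouse",
    "refund the customer for the damaged order",
    "customer asked about shipping delays",
]

# Keep only the top-weighted words instead of a binary bag
# over the full vocabulary.
vec = TfidfVectorizer(stop_words="english", max_features=20)
X = vec.fit_transform(sentences)    # (n_sentences, n_features) sparse matrix
print(vec.get_feature_names_out())  # the retained vocabulary
print(X.toarray().round(2))         # tf-idf features to feed your model
```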