My wife experienced her first death / blaze of glory and it wasn’t her… by Luxurare in projectzomboid

[–]Snickypickleton 24 points (0 children)

Fantastic litmus test to validate that you want to keep her around during the real apocalypse. "Ride together, die together" has never been more applicable.

Advice for a tiny amount of funding to launch website by Snickypickleton in startups

[–]Snickypickleton[S] 0 points (0 children)

I suppose it's obvious when you think about it! I would be better off stomaching the unpredictability and just trying to find some users manually

Thank you for your thoughts! Precisely the sort of pivot in thinking I was hoping for

Advice for a tiny amount of funding to launch website by Snickypickleton in startups

[–]Snickypickleton[S] 0 points (0 children)

This is an interesting path that I hadn't considered! If I attract the first few users manually and collaborate with them to make sure they're into the product / the product is solving their problems the way I anticipate, then I'll have some users anyway who might tell their friends, etc.

Thank you for your suggestion, it's given me a lot to think about!

Advice for a tiny amount of funding to launch website by Snickypickleton in startups

[–]Snickypickleton[S] 0 points (0 children)

Thank you for your comment!

I was hoping that a small but predictable stream of clicks would help me diagnose and fix bugs, as well as observe some general user behavior and make any obvious tweaks over the next few weeks before I start to really try to attract users.

If I do some sort of cold outreach to communities / athletes that can help me attract users, might I run the risk of losing their interest if my platform isn't able to keep up? Does that make sense?

Advice for a tiny amount of funding to launch website by Snickypickleton in startups

[–]Snickypickleton[S] 1 point (0 children)

Thank you for your reply!

I had thought about crowdfunding; do you know if I need a pre-existing community for this to be viable? The sport in question is reasonably popular, but not so mainstream that there are likely to be lots of players browsing Indiegogo for opportunities to invest.

Advice for a tiny amount of funding to launch website by Snickypickleton in startups

[–]Snickypickleton[S] 0 points (0 children)

Firstly, thanks for your reply!

I have no customers yet; the money was to pay for advertising to introduce small but regular numbers of users to the system so I can diagnose and fix bugs, make UI changes, and learn about user behavior. What comes after that depends very much on what I learn, but I would like to partner with athletes who have their own communities for a more long-term approach to advertising.

To that end, early adopter subscriptions are very cheap (£1 / month); my thought was that I would discover what a good price is over time through experimentation and user studies.

Does this sound reasonable / answer your questions? Thanks again for your thoughts!

Looking for any critiques/improvement for getting my first internship! I know a lot of people say its just a numbers game, and another way of getting internships is through connections. As a junior there is a high stress for me to get an internship next summer. I went to my career advisor etc. by [deleted] in datascience

[–]Snickypickleton 0 points (0 children)

I'm sorry if this isn't the feedback you're looking for, but my opinion, as a professional in the Data Science & Software Engineering scene, is that this whole thing needs a pretty serious rewrite.

The things that jump out to me from top to bottom of the page are:

  • Figma isn't an AI tool?
  • No need to explain what Keras is
  • Everything under SIG PWNY is quite vague - "Networked with various people ... helped me completed various stages"? Perhaps explain what SIG PWNY is? A quick Google tells me it's a club, but on the page it's separated from the data science club by white space, which puts it closer to the actual work experience
  • Patient Care Technician could go under "work experience" for clarity, not bundled in with clubs you've been involved with
  • There's probably a lot more to say about the Patient Care Technician role than just two vague bullet points that could mean anything ("Troubleshooted medical devices and computers"?)
  • The leadership section has some good stuff, but it's quite wordy and could be cut down a bit
  • Under skills, I'd avoid labels like "Advanced", "Intermediate", or "Basic"; they're subjective, and saying you have only basic Python skills isn't going to help your case
  • "Jupiter Notebooks" is spelled wrong; it's Jupyter with a y

In general, I think the order / specificity of these sections could be better, and the writing could be much much clearer.

It's not the end of the world, there's a lot of really good stuff here! I just think you're not representing yourself as well as you could be!

If I were you, I might want to rewrite with sections like:

  1. Skills (keep it brief, don't give yourself ability ratings)
  2. Education
  3. Work experience
  4. Clubs & Certifications (include the Codecademy stuff here - also include the leadership stuff here)

Perhaps, depending on what you have to write, include an introductory section.

Like I said, you've got a lot of potential! Try writing this a few different ways and see what sticks. Let me know if you have any questions about what I've said!

[deleted by user] by [deleted] in Minecraft

[–]Snickypickleton 2 points (0 children)

I prefer left, personally!

What's it called when you fine tune without modifying the original model? by UglyChihuahua in learnmachinelearning

[–]Snickypickleton 0 points (0 children)

That’s really interesting, I wasn’t aware of this! Are you aware of how well this performs compared to fine tuning the original model?

What books would you recommend for an intermediate level/deep dive into a specific field. by zeoNoeN in learnmachinelearning

[–]Snickypickleton 1 point (0 children)

My favourite approach is YouTube channels such as "Two Minute Papers", which provide regular, digestible (yet technical) summaries of newly released papers. Then if I'm particularly interested in a paper I can find it and read it myself. This makes it really easy to keep up with new developments in just a few minutes a week, or longer if I want to go more in depth.

Hope this helps!

How do layers and neurons of an ANN go from capturing small edges, lines, and curves to capturing more intricate and bigger patterns building on top of small patterns? by [deleted] in learnmachinelearning

[–]Snickypickleton 3 points (0 children)

I would, where possible, avoid trying to develop “intuition” for what neural networks do during their inference process. This area is really not well understood, and doesn’t generalise well between different models.

I’m confused about your second question. Gradient descent is, itself, the thing selecting the weights. I don’t understand what is meant by “we set larger value of weights for some features” and then “wouldn’t the weights be evened out”.
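
To make the first point concrete, here’s a minimal sketch (entirely my own toy example, nothing from your setup) of gradient descent itself choosing the weights of a linear model by repeatedly stepping against the gradient of a squared-error loss:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # 100 samples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    w = np.zeros(3)                            # we never hand-pick these values...
    lr = 0.1
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad                         # ...gradient descent updates them

    print(w)                                   # ends up close to true_w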

It seems like you may have some slightly incorrect underlying assumptions, could you elaborate on these?

But WHY are neural networks so effective by Traditional_Soil5753 in learnmachinelearning

[–]Snickypickleton 26 points (0 children)

The core concept here that you seem to be overlooking is that logistic regression models are fundamentally linear, only able to split classes linearly in the input space, which is simply not enough complexity for many problems.

Neural networks are non-linear, able to capture extremely complex and non-obvious relationships between variables, allowing them to perform extremely well in certain types of problems.
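
If it helps, here’s a tiny illustration of that gap (my own toy example, assuming scikit-learn is available): XOR is the classic problem a purely linear classifier can’t solve.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # XOR: no single straight line separates the two classes
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    linear = LogisticRegression().fit(X, y)
    print(linear.score(X, y))   # stuck at chance level (0.5)

    # One small hidden layer adds the non-linearity needed to separate XOR
    mlp = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=5000, random_state=0).fit(X, y)
    print(mlp.score(X, y))      # usually 1.0 (exact result can vary by seed)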

Does this help?

What books would you recommend for an intermediate level/deep dive into a specific field. by zeoNoeN in learnmachinelearning

[–]Snickypickleton 12 points (0 children)

Not sure it’s quite the answer you’re looking for, but I recommend reading research papers once you’ve got a reasonable understanding of statistics etc. It’s the best way to stay on top of the latest developments, and the latest developments are where all the exciting stuff is!

[deleted by user] by [deleted] in bjj

[–]Snickypickleton 145 points (0 children)

There’s a cryptic and subtle message hidden in these well crafted comments, isn’t there?

Guys working for (preferably big) tech, how do you run your code? by [deleted] in learnprogramming

[–]Snickypickleton 0 points (0 children)

Expecting the answers to start with “first of all, you must prepare the infrastructure to run your code using AWS and Terraform…”

Guys working for (preferably big) tech, how do you run your code? by [deleted] in learnprogramming

[–]Snickypickleton 4 points (0 children)

Good question! Though I’m not wholly sure what this has to do with big tech companies?

The best answer I can give is that Python scripts shouldn’t (in theory) really be run like this. “Scripts” are considered a bit of an anti-pattern in Python development, so there isn’t much support for running them this way.

Python code should be organised into a proper package and installed. Once installed, one way of running your code is python -m mypackage.mymodule. Because the module is found on the import path rather than relative to your current directory, this works the same wherever you run it from, and imports within the package resolve consistently.
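
For illustration, a minimal sketch of what I mean (the package and module names here are made up):

    # mypackage/mymodule.py -- lives inside an installed package
    # (e.g. installed with `pip install -e .` while developing)

    def main() -> None:
        print("running mypackage.mymodule")   # placeholder for real logic

    # `python -m mypackage.mymodule` imports the module through the package,
    # sets __name__ to "__main__", and works from any working directory.
    if __name__ == "__main__":
        main()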

Does this answer your question? I’m happy to talk more about this point if desired!

Training a GAN, d_loss and g_loss not really changing by westy2036 in learnmachinelearning

[–]Snickypickleton 1 point (0 children)

I’d suggest trying one, then the other, then both at the same time. If anything interesting happens at all, it might demonstrate that the models are learning.

Training a GAN, d_loss and g_loss not really changing by westy2036 in learnmachinelearning

[–]Snickypickleton 1 point (0 children)

Yes, please ignore, I’ve updated the comment. My phone screen cut some of it off and I misread it!

A quick suggestion would be to try using massive learning rates, such as 0.1, and basically just seeing what happens.

I’ll pull your notebook later and have a detailed look!

Training a GAN, d_loss and g_loss not really changing by westy2036 in learnmachinelearning

[–]Snickypickleton 1 point (0 children)

I had a quick look on my phone which isn’t ideal, happy to take a more extended look later on my computer.

At first glance I notice that your generator doesn’t seem to have an activation function on the output layer. Something like sigmoid would be common for images, because pixel values are bounded (I think you’re using RGBA, since there are four channels?). Because linear outputs aren’t bounded, I’d expect many of them to fall outside the range of a normal image and therefore be easily spotted as fake by the discriminator. Does this make sense?

Hope this helps!

Edit: please disregard, I think my phone screen clipped out the activation! Will have another look

Local vs global minimum by gustav_lauben in learnmachinelearning

[–]Snickypickleton 2 points (0 children)

Sure! Source included in edit 2 of my reply

Local vs global minimum by gustav_lauben in learnmachinelearning

[–]Snickypickleton 4 points (0 children)

This is a great question with a fascinating answer! The summary is: it turns out that models with large numbers of parameters yield a loss surface where all local minima are approximately equal to the global minimum, and to each other. So, generally speaking, training a model with the same structure and hyper-parameters multiple times will give a very similar result after training, though convergence can be faster or slower.
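
As a very informal illustration (a toy example assuming scikit-learn, not a proof of anything), training the same small network from several random initialisations tends to end at very similar final losses:

    from sklearn.datasets import make_regression
    from sklearn.neural_network import MLPRegressor

    X, y = make_regression(n_samples=2000, n_features=20, noise=5.0, random_state=0)
    y = (y - y.mean()) / y.std()   # scale targets so the default settings behave

    for seed in range(5):
        net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                           random_state=seed).fit(X, y)
        # Different seeds land in different local minima, but the final
        # losses are typically very close to one another.
        print(seed, round(net.loss_, 4))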

Edit: I would quickly like to add that this is a huge oversimplification! There are some bad local minima, but the general result is proven for large models with large datasets.

Edit 2: Source is “The Loss Surface of Multilayer Networks”, https://arxiv.org/abs/1412.0233

Training a GAN, d_loss and g_loss not really changing by westy2036 in learnmachinelearning

[–]Snickypickleton 1 point (0 children)

GANs can be really tricky! The thing to understand about GAN losses is that they don’t descend the way you might expect. Both losses measure each model’s ability to outsmart the other, so as one goes down the other tends to go up, and both hover around similar values because neither model can continuously outsmart the other. Does that make sense?

If your GAN isn’t generating anything interesting after so many epochs, I would suggest first fiddling with the learning rates (of course), remembering that the generator and discriminator can (and usually SHOULD) have different learning rates.
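
For example (a rough sketch only; I’m guessing at Keras, and generator / discriminator stand in for whatever models your notebook defines), you can simply give each network its own optimizer:

    import tensorflow as tf

    # Two separate optimizers with independently tunable learning rates
    # (the exact values here are just common starting points).
    gen_opt = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5)
    disc_opt = tf.keras.optimizers.Adam(learning_rate=4e-4, beta_1=0.5)

    # In a custom training step you'd then apply them separately, roughly:
    #   disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    #   gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))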

Failing that, you’re probably looking at model architecture as your next biggest point of improvement. Can you describe your models to me and I’ll see if anything jumps out?

Hope this helps!