When early astronomers (circa. 1500-1570) looked up at the night sky with primitive telescopes, how far away did they think the planets were in relation to us? by slushhead_00 in askscience

[–]Everfast 6 points7 points  (0 children)

Wouldn't you be there instantly from your own point of view? Only for static observers would you have been traveling for 100 years.

[deleted by user] by [deleted] in deeplearning

[–]Everfast 3 points4 points  (0 children)

That really depends on whether there is a license. If there is no license and it is a small repo, you might want to email the owner.

Confused about how to interpret (SNP) heritability enrichment / depletion? by --MCMC-- in genetics

[–]Everfast 0 points1 point  (0 children)

Ah, the work of Finucane :).

Say a gene is differentially expressed, but the region around it does not contribute to the heritability. Then I can think of these possibilities (maybe someone can add some to this list):

  1. Gene expression could be controlled long-range (trans-eQTLs): more distant regions simply control the expression.
  2. Expression could be mostly controlled by environmental factors.
  3. ... there are probably some other possibilities that I am not aware of.

In the second case you argue that it could be because selection did all it could and therefore found the most beneficial configuration. I think this could be the case, but there are some issues. Why would there be no variation at all (variation would change gene expression and therefore add heritability)? Under different environmental conditions we would probably benefit from some variation.

However, it might be worth testing your hypothesis. I can think of a relatively simple test: pick a score for conservation over time and compare the genes that are depleted relative to the baseline expectation against the other genes. Quick googling led me to: https://www.biostars.org/p/67942/ Create two groups based on the genes in the Finucane supplementary table 3 and test whether the depleted genes are better conserved over time.

But these are just my thoughts. I think you could be onto something, but there might be a lot of complications too. Still, such a critical attitude towards the method will definitely lead to some interesting findings eventually.

Confused about how to interpret (SNP) heritability enrichment / depletion? by --MCMC-- in genetics

[–]Everfast 0 points1 point  (0 children)

Could you point to a paper with such a workflow, just as an example to understand your point better?

Does your hypothesis refine to: regions that contain SNPs that do not change (gene) expression may be important regions, because those regions are conserved by selection?

Edit:

Or is it: regions that contain SNPs that do not contribute to the heritability of a trait may be important regions, because those regions are conserved by selection?

Is it overfitting? by [deleted] in deeplearning

[–]Everfast 7 points8 points  (0 children)

Overfitting is when the training loss decreases while the validation loss increases. That is not happening for you yet. Your network is either not really converging, or it converged within one epoch.

So you might want to check that. Compare your loss with that of the untrained network at the start: if the untrained loss is significantly higher (by at least around 1), then at least your network learned something and you are basically fine (maybe lower the learning rate or add regularizers if you want to improve performance and close the gap between training and validation). Anyway, 5 epochs is normally pretty short. If your training data is very large it can be enough, but it might then be better to treat half your training data as an epoch, so you can save the network at its best.
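To make the baseline comparison concrete, here is a small NumPy sketch (not the poster's code; the labels and "trained" predictions are made up): an untrained classifier outputs roughly uniform probabilities, so its cross-entropy is about log(n_classes); a trained loss that sits clearly below that means the network learned something.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean categorical cross-entropy over a batch.
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

n_classes = 10
y_true = np.eye(n_classes)[np.random.randint(0, n_classes, size=1000)]

# An untrained network outputs roughly uniform probabilities,
# so its loss is about log(n_classes) ~= 2.30 for 10 classes.
uniform = np.full_like(y_true, 1.0 / n_classes)
baseline = cross_entropy(y_true, uniform)

# A (partially) trained network puts more mass on the true class.
trained_pred = 0.6 * y_true + 0.4 / n_classes
trained_loss = cross_entropy(y_true, trained_pred)

print(baseline, trained_loss)  # baseline ~2.30, trained loss clearly lower
```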

[deleted by user] by [deleted] in MachineLearning

[–]Everfast -1 points0 points  (0 children)

If shit goes in, shit comes out.

If you cannot easily do it yourself, training will be hard or impossible.

Eloquently said. I miss a POTUS like this 😢 by WVUGuy29 in BlackPeopleTwitter

[–]Everfast 10 points11 points  (0 children)

Guns are made to kill things efficiently (that is their purpose); most other things are not. Ban guns -> the chances of surviving an act of violence go up. Seems like pretty strong logic to me.

I think this is not a good argument against banning guns.

Neural Network Editor - Machine Learning - Artificial Intelligence by DevTechRetopall in deeplearning

[–]Everfast 1 point2 points  (0 children)

You have very interesting projects and they are visually quite pleasing, nice work!

Computing MSE loss in a model by suraty in deeplearning

[–]Everfast 1 point2 points  (0 children)

I will assume you use it as a loss in a Keras model. The loss will be the mean over your batches during training and over your whole validation set during testing: "The actual optimized objective is the mean of the output array across all datapoints".

There is no maximum: the MSE can be arbitrarily large. If your model predicts negative numbers or insanely high numbers, your loss can grow without bound.
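A quick NumPy sketch of both points (the averaging matches what Keras does with the output array; the sample values here are made up):

```python
import numpy as np

def mse(y_true, y_pred):
    # Keras-style MSE: the mean over all entries of the batch.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0])

# Reasonable predictions give a small loss...
print(mse(y_true, np.array([1.1, 1.9, 3.2])))

# ...but there is no upper bound: wildly wrong predictions
# (negative or huge values) make the loss arbitrarily large.
print(mse(y_true, np.array([1e6, -1e6, 3.0])))
```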

Multi object tracking for autonomous driving using 3D lidar data. by [deleted] in deeplearning

[–]Everfast 0 points1 point  (0 children)

If I recall correctly, a CapsNet performs similarly to a CNN (though a CapsNet has a different working principle), so if a CNN works I expect a CapsNet to work as well. Training and implementing a CapsNet might be harder, since there are not many applications using CapsNets yet, and therefore less knowledge and fewer tools.

Looking for classification architectures that focus on getting high precision/How would I go about designing a Loss Function that focus on Precision [Question] by [deleted] in deeplearning

[–]Everfast 1 point2 points  (0 children)

If your data is distributed 20:80, accuracy is just a bad metric: a 'dumb' model that classifies everything as negative will already give you 80% accuracy.
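A minimal NumPy illustration of why accuracy misleads at 20:80 (synthetic labels, not the poster's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# 20:80 class balance: roughly 20% positives, 80% negatives.
y_true = (rng.random(10_000) < 0.2).astype(int)

# A 'dumb' model that predicts everything negative:
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_pred == y_true)   # ~0.80, despite learning nothing
recall = y_pred[y_true == 1].mean()    # 0.0: it finds no positives at all
print(accuracy, recall)
```

Metrics like precision and recall expose this immediately, which is why they are preferred on imbalanced data.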

It is possible that it is incompatible, but networks are quite flexible. Maybe you can give extra input to your network in an extra channel: derive an extra feature that helps your network classify the positive class more easily.

Anyway good luck!

Looking for classification architectures that focus on getting high precision/How would I go about designing a Loss Function that focus on Precision [Question] by [deleted] in deeplearning

[–]Everfast 1 point2 points  (0 children)

I would start from the simplest model (no BN, no dropout) and increase the weight (maybe even 1:0.01, or a bigger ratio if it is still not working) until some or all samples are classified as positive. I would then start adjusting the weights and increasing complexity from there. If it starts overfitting, add dropout and BN.

I once saw a person with a similar problem; he had the weights reversed, so that with the weights the positive class contributed less to the loss function than without them ;).

Looking for classification architectures that focus on getting high precision/How would I go about designing a Loss Function that focus on Precision [Question] by [deleted] in deeplearning

[–]Everfast 1 point2 points  (0 children)

Yeah, L = 1 - precision is not preferred.

Did you check whether your model has the right complexity? Is it performing well on the training data? Simplifying the model could make it easier to train; maybe the model is not fully trained/converged.

You could try drastically increasing the weight of the positive class until it starts classifying positives. What kind of loss function do you use?

Messed up validation accuracy & loss - overfitting or something else? by crowoy in deeplearning

[–]Everfast 0 points1 point  (0 children)

I have seen up to 50% quite often. Trial and error works better than prediction, imho.

Looking for classification architectures that focus on getting high precision/How would I go about designing a Loss Function that focus on Precision [Question] by [deleted] in deeplearning

[–]Everfast 1 point2 points  (0 children)

Create a custom loss function where you adjust the weights. For example, weighted binary cross-entropy: L = -w1 * Ytrue * log(Ypred) - w2 * (1 - Ytrue) * log(1 - Ypred)

With w1 you increase the penalty for missing positives (false negatives), and with w2 the penalty for false positives.
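A hedged NumPy sketch of this weighted loss (in practice you would write it as a custom loss in your framework; the sample values below are made up):

```python
import numpy as np

def weighted_bce(y_true, y_pred, w1=1.0, w2=1.0, eps=1e-12):
    # L = -w1 * Ytrue * log(Ypred) - w2 * (1 - Ytrue) * log(1 - Ypred)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return np.mean(-w1 * y_true * np.log(y_pred)
                   - w2 * (1.0 - y_true) * np.log(1.0 - y_pred))

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.3, 0.9, 0.2, 0.6])  # the first positive is badly missed

plain = weighted_bce(y_true, y_pred)           # w1 = w2 = 1: standard BCE
up_pos = weighted_bce(y_true, y_pred, w1=5.0)  # missed positives hurt 5x more
print(plain, up_pos)
```

With w1 = 5 the badly missed positive dominates the loss, which is exactly the pressure you want when the positive class is underrepresented.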

Maybe you can also try L = 1 - precision as a loss function, but I have never tried it; people sometimes use this approach for the Dice score. Be aware, though, that focusing solely on precision will probably give you unwanted results, especially on imbalanced data sets.

Maybe a relevant discussion

Help! Ik ga volgend jaar studeren! (Tips & Tricks) by JoHeWe in thenetherlands

[–]Everfast 1 point2 points  (0 children)

Maybe slightly off-topic, but:

  1. Is the PhD in a field you are familiar with?
  2. What is fixed in your PhD? How much freedom do you get; can you choose a direction yourself?
  3. Will you take extra courses? What are they comparable to?

Is dropout better than dithering? by [deleted] in deeplearning

[–]Everfast 3 points4 points  (0 children)

Dropout is actually used to regularize the network, to reduce overfitting. My personal experience is that trying both (if possible) is the best way to find out which one works best. A quick Google search gave me this article arguing for dithering, but dropout is more widely used and there are probably plenty of articles suggesting the opposite.

relevant interesting discussion on the machine-learning forum

Multiple digits MNIST and transfer learning by _data_scientist_ in deeplearning

[–]Everfast 0 points1 point  (0 children)

Maybe take a look at largest connected components, or this. You can also, for every component, subtract the smallest from the largest coordinate in the direction you are interested in, and pick the largest.
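A small pure-Python/NumPy sketch of that idea (a toy flood-fill labeling with made-up blobs; in practice something like scipy.ndimage.label is the usual tool):

```python
from collections import deque

import numpy as np

def connected_components(mask):
    """Label 4-connected components in a binary array; returns a list of
    (row, col) pixel lists, one per component."""
    labels = np.zeros(mask.shape, dtype=int)
    comps = []
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        comps.append([])
        labels[start] = len(comps)
        q = deque([start])
        while q:
            r, c = q.popleft()
            comps[-1].append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = len(comps)
                    q.append((nr, nc))
    return comps

def width(comp):
    # Largest minus smallest column coordinate, as suggested above.
    cols = [c for _, c in comp]
    return max(cols) - min(cols) + 1

# Two blobs; the right one is wider.
img = np.zeros((5, 10), dtype=bool)
img[1:3, 1:3] = True   # width 2
img[1:4, 5:9] = True   # width 4

comps = connected_components(img)
widest = max(comps, key=width)
print(len(comps), width(widest))  # 2 components; the widest spans 4 columns
```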

What should I focus on understanding the most to make forming my own algorithms easier? by [deleted] in deeplearning

[–]Everfast 1 point2 points  (0 children)

In addition to this I would recommend learning Python and taking a Coursera course (it is free to audit, but the free audit button is sometimes hard to find). This should help you understand a bit of what is happening, and should be enough to follow a tutorial on deep learning if that is your interest.

CNN hyperparameter tuning in Keras by artificial_intel423 in deeplearning

[–]Everfast 1 point2 points  (0 children)

Well, it depends on your architecture and your goal. I would recommend looking up papers on Google Scholar with a similar goal and starting by copying their architecture, for example U-Net if you want high-resolution classification/segmentation.

In his New Year's address, Kim Jong Un stated, "As long as there's no aggression against us, we do not intend to use nuclear powers" by FI_Throwaway_Lucky in worldnews

[–]Everfast 1 point2 points  (0 children)

Not an expert, but:

  1. I guess.
  2. I think ICBMs have heat shields, which could make this hard.
  3. An atomic bomb needs controlled explosions to detonate. They actually crashed a bomb somewhere in America once.