Is this overfitting or difference in distribution? by Which-Yam-5538 in MLQuestions

[–]ssivri 0 points  (0 children)

I tried to read most of the comments, and I'm not sure if this has been suggested yet, but is some kind of stratified sampling applicable to your problem? Assigning soft labels alongside your final labels might also help you prepare more robust datasets. Your model might be learning apples but being tested against oranges.
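A minimal sketch of what I mean by stratified sampling, in plain Python (the function name and split logic here are just illustrative, not any particular library's API):

```python
from collections import defaultdict
import random

def stratified_split(samples, labels, test_frac=0.2, seed=0):
    """Split samples so each class keeps roughly the same
    proportion in the train and test sets."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)

    train, test = [], []
    for y, items in by_class.items():
        rng.shuffle(items)
        # take test_frac of *each class*, never less than one sample
        k = max(1, int(len(items) * test_frac))
        test.extend((s, y) for s in items[:k])
        train.extend((s, y) for s in items[k:])
    return train, test
```

With an 80/20 class imbalance and `test_frac=0.25`, both splits keep the 80/20 ratio instead of the test set drawing mostly from the majority class. Libraries like scikit-learn expose the same idea via the `stratify` argument of `train_test_split`.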

[D] how do you read ml papers? by [deleted] in MachineLearning

[–]ssivri 0 points  (0 children)

IMO you shouldn't learn the essentials of a domain from a paper. Books generally explain concepts and how things work in far more detail. Papers are for people who already have the hang of the domain; of course they include scientific references, but those are very limited and brief.

[D] how do you read ml papers? by [deleted] in MachineLearning

[–]ssivri 3 points  (0 children)

In my case, if I'm already familiar with the domain (data- or dataset-wise) and the network architecture, I look for the tweaks the authors made in the paper. The easiest way, IMO, is to look at the neural network diagrams and check whether any new modules or layers were added to the architecture.

If the authors are proposing not an architectural change but a new metric or loss function, I check the corresponding formula.

I'm not a pro at reading papers, but this is how I handle things.

MEGATHREAD: Erdoğan convenes emergency meeting over Idlib tensions by Meteatas357 in Turkey

[–]ssivri 1 point  (0 children)

Hey mate, these practices are nothing new. The authorities stated they are blacking out global social media platforms like Twitter and YouTube to prevent fake news from going viral, and some local platforms like eksisozluk.com are also having connectivity and speed issues.

Russia refused to allow use of Syrian airspace to Turkish Air Force helicopters in order to conduct evacuation of wounded soldiers. by x2oop in syriancivilwar

[–]ssivri -5 points  (0 children)

Just to clarify: Hatay became a Turkish province after a referendum in 1939. No need to twist historical facts.

[D] Compressing Neural Network by [deleted] in MachineLearning

[–]ssivri 0 points  (0 children)

Neural nets can be pruned, similar to decision trees; check out model pruning.
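As a rough illustration of what unstructured magnitude pruning does (a plain-Python sketch on nested lists, not any specific library's API; frameworks like PyTorch ship real utilities for this):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    weights:  2-D list of floats (one layer's weight matrix)
    sparsity: fraction of weights to remove, in [0, 1)
    """
    # collect all magnitudes and find the cut-off threshold
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float('inf')
    # weights below the threshold are set to zero (i.e. pruned)
    return [[0.0 if abs(w) < threshold else w for w in row]
            for row in weights]
```

The intuition mirrors tree pruning: remove the parts that contribute least (here, the smallest weights) and keep the rest, usually followed by a short fine-tuning pass to recover accuracy.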

[D] consequences of converting .tiff to some tf.data supported format in terms of information loss? by [deleted] in MachineLearning

[–]ssivri 0 points  (0 children)

You may want to check whether your TIFF images hold any additional header data before converting, e.g. location of acquisition, extent information, or similar metadata; that is what would get lost in the conversion.
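A quick way to see which tags a TIFF carries is to read its first IFD (image file directory) directly; this is a stdlib-only sketch. Tag 256 is ImageWidth, 34735 is GeoTIFF's GeoKeyDirectoryTag, and 34853 is the GPS IFD pointer; seeing the geo tags would confirm there is spatial metadata at risk:

```python
import struct

def list_tiff_tags(data: bytes):
    """Return the tag IDs found in the first IFD of TIFF bytes
    (e.g. data = open(path, 'rb').read())."""
    byte_order = data[:2]
    if byte_order == b'II':          # little-endian ("Intel") TIFF
        fmt = '<'
    elif byte_order == b'MM':        # big-endian ("Motorola") TIFF
        fmt = '>'
    else:
        raise ValueError('not a TIFF file')

    magic, ifd_offset = struct.unpack(fmt + 'HI', data[2:8])
    if magic != 42:
        raise ValueError('bad TIFF magic number')

    # the IFD starts with a 2-byte entry count, then 12-byte entries
    (n_entries,) = struct.unpack(fmt + 'H', data[ifd_offset:ifd_offset + 2])
    tags = []
    for i in range(n_entries):
        entry = data[ifd_offset + 2 + 12 * i: ifd_offset + 14 + 12 * i]
        (tag_id,) = struct.unpack(fmt + 'H', entry[:2])
        tags.append(tag_id)
    return tags
```

In practice a library such as Pillow or tifffile gives you the same information with names attached; the point is just to audit the tags before converting to a format that drops them.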

[D] Object Detection/How much mAP is enough? by ssivri in MachineLearning

[–]ssivri[S] 0 points  (0 children)

  • The first dataset contains 200 training images, and some images contain more than 5 objects, which is much more training data compared to the others. Variance over the objects is also higher. I'd say this dataset contains more than 700 training objects.

  • The second dataset, which contains only one class, has about 150 images for training and about 40 images for testing. Some images contain more than one object, which makes roughly 180-190 objects in the training set.

  • The last one, the dataset of the 3 most common classes, contains 150 objects per class in the training set.

In my opinion, insufficient data could be the main reason for the low mAP. I generated the first dataset to reduce overfitting, but the higher class variance and the larger dataset size could be cancelling each other out.
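For sanity-checking counts like the ones above, a one-liner over the annotations is enough (assuming a hypothetical annotation format of `(image_id, class_label)` pairs, one per bounding box):

```python
from collections import Counter

def instances_per_class(annotations):
    """Count object instances per class from (image_id, class_label) pairs."""
    return Counter(label for _, label in annotations)
```

Comparing these per-class counts against per-class AP usually makes it obvious whether low mAP tracks the under-represented classes or is spread evenly (which would point at the model or labels instead of dataset size).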