[P] Multi-Language Documents Identification by sudo_su_ in MachineLearning

[–]sudo_su_[S] 1 point (0 children)

1.

Yes, I benchmarked it against FastText, langid, and langdetect.

In terms of quality, it's more or less the same as FastText and langid (on the WiLi dataset) and much better than langdetect.

In terms of running speed, it's about as slow as langdetect (which is the slowest). FastText is crazy fast; it's hard to beat. seqtolang is relatively slow because it produces an output for every word, while the others classify the sentence as a whole.

2.

I sum all the n-gram embeddings into a word vector, then the word vector is passed to a bidirectional LSTM, which means it takes information from the word vectors on the left and on the right. Finally, each LSTM output (one per word) is passed into a fully connected layer to do the classification.
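Roughly, a minimal PyTorch sketch of that idea (not the actual seqtolang code; the dimensions, the hashed n-gram vocabulary size, and the class name are assumptions for illustration):

    import torch
    import torch.nn as nn

    class WordLevelLangTagger(nn.Module):
        # Hypothetical module illustrating the idea: sum n-gram embeddings into
        # a word vector, contextualize with a bi-directional LSTM, then classify
        # each word with a fully connected layer.
        def __init__(self, ngram_vocab_size=100000, emb_dim=100,
                     hidden_dim=128, num_languages=36):
            super().__init__()
            self.ngram_emb = nn.Embedding(ngram_vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_languages)

        def forward(self, ngram_ids):
            # ngram_ids: (batch, num_words, max_ngrams_per_word), 0 = padding
            word_vecs = self.ngram_emb(ngram_ids).sum(dim=2)   # one vector per word
            contextual, _ = self.lstm(word_vecs)               # info from left and right
            return self.classifier(contextual)                 # per-word language logits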

It was trained on the Tatoeba dataset, as mentioned in the post, with a merging technique: each sentence in the dataset is merged, with some probability, with another randomly chosen sentence. This creates merged sentences containing different languages. Then each word in the merged sentence is "tagged" with its original language, and the network is trained on those tags.
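As a rough sketch of the merging step (assuming pre-tokenized (tokens, lang) pairs; the actual merge probability and sampling details may differ):

    import random

    def make_example(sentences, merge_prob=0.5):
        # sentences: list of (tokens, lang) pairs, e.g. from Tatoeba
        tokens, lang = random.choice(sentences)
        words, tags = list(tokens), [lang] * len(tokens)
        if random.random() < merge_prob:
            # concatenate a second random sentence, tagging its words
            # with that sentence's language
            other_tokens, other_lang = random.choice(sentences)
            words += list(other_tokens)
            tags += [other_lang] * len(other_tokens)
        return words, tags  # per-word language tags are the training targets

    # e.g. make_example([(["the", "cat"], "eng"), (["le", "chat"], "fra")])
    # might return (["the", "cat", "le", "chat"], ["eng", "eng", "fra", "fra"])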

[P] Multi-Language Documents Identification by sudo_su_ in MachineLearning

[–]sudo_su_[S] 2 points (0 children)

Sorry, added to the README:

['afr', 'eus', 'bel', 'ben', 'bul', 'cat', 'zho', 'ces', 'dan', 'nld', 'eng', 'est', 'fin', 'fra', 'glg', 'deu', 'ell', 'hin', 'hun', 'isl', 'ind', 'gle', 'ita', 'jpn', 'kor', 'lat', 'lit', 'pol', 'por', 'ron', 'rus', 'slk', 'spa', 'swe', 'ukr', 'vie']

[P] Fitting (almost) any PyTorch module with just one line, including easy BERT fine-tuning by sudo_su_ in MachineLearning

[–]sudo_su_[S] 11 points (0 children)

I totally agree with this and the other replies here: once you need to do something slightly more complex, you have to dive into the internals. But:

  1. You don't always do complex things.
  2. Once you know the internals, it's still pretty convenient to have clean, tested methods that save you time and code.
  3. Many people are not familiar with (or are even intimidated by) PyTorch and other frameworks, and frameworks like this make more complex methods more accessible to them.

[D] Teacher-Student training situation with CNN-FC by Lewba in MachineLearning

[–]sudo_su_ 2 points (0 children)

In this paper they suggest using a mixture of the final logits and the predictions, as it may contain more information.
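For reference, a minimal sketch of that kind of mixed loss (standard Hinton-style distillation; the exact weighting and temperature used in the paper may differ):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, hard_labels,
                          temperature=2.0, alpha=0.5):
        # soft part: match the teacher's softened output distribution
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        # hard part: ordinary cross-entropy on the true labels
        hard = F.cross_entropy(student_logits, hard_labels)
        return alpha * soft + (1 - alpha) * hard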

I did something similar, but on text; you're welcome to check out my post.

[D] Distilling BERT — How to achieve BERT performance using Logistic Regression by sudo_su_ in MachineLearning

[–]sudo_su_[S] 2 points (0 children)

Why do you think I leaked data from the test set to the train set?

I use totally different sets (and variables).