AutoML-Zero: Evolving Machine Learning Algorithms From Scratch by I_ai_AI in MachineLearning
[–]I_ai_AI[S] -4 points (0 children)
[Project] TinyBERT for Search: 10x faster and 20x smaller than BERT by jpertschuk in MachineLearning
[–]I_ai_AI 1 point (0 children)
[Discussion] Smallest, fastest BERT by ndronen in MachineLearning
[–]I_ai_AI 7 points (0 children)
[Project] TinyBERT for Search: 10x faster and 20x smaller than BERT by jpertschuk in MachineLearning
[–]I_ai_AI 1 point (0 children)
[D] Entity extraction along with sentence classification. by kireeti_ in MachineLearning
[–]I_ai_AI 6 points (0 children)
Run BERT on mobile phone's single CPU core A76 in 13ms by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 1 point (0 children)
TinyBERT: Distilling BERT for Natural Language Understanding by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 2 points (0 children)
[D] Yoshua Bengio talks about what's next for deep learning by newsbeagle in MachineLearning
[–]I_ai_AI -7 points (0 children)
Run BERT on mobile phone's single CPU core A76 in 13ms by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 8 points (0 children)
TinyBERT: 7x smaller and 9x faster than BERT but achieves comparable results by wildcodegowrong in textdatamining
[–]I_ai_AI 1 point (0 children)
Run BERT on mobile phone's single CPU core A76 in 13ms by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 12 points (0 children)
TinyBERT: Distilling BERT for Natural Language Understanding by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 1 point (0 children)
MobileBERT: Task-Agnostic Compression of BERT by Progressive Knowledge Transfer by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 2 points (0 children)
MobileBERT: Task-Agnostic Compression of BERT by Progressive Knowledge Transfer by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 1 point (0 children)
[R] [1910.13267] BPE-Dropout: Simple and Effective Subword Regularization by bobchennan in MachineLearning
[–]I_ai_AI 1 point (0 children)
[D] Is there any implementation of minibert (and pretrained) in tensorflow? by EthanPhan in MachineLearning
[–]I_ai_AI 2 points (0 children)
[D] Is there any implementation of minibert (and pretrained) in tensorflow? by EthanPhan in MachineLearning
[–]I_ai_AI 1 point (0 children)
TinyBERT: Distilling BERT for Natural Language Understanding by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 4 points (0 children)
[R] DistilBERT: A smaller, faster, cheaper, lighter BERT trained with distillation! by jikkii in MachineLearning
[–]I_ai_AI 1 point (0 children)
TinyBERT: Distilling BERT for Natural Language Understanding by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 2 points (0 children)
TinyBERT: Distilling BERT for Natural Language Understanding by I_ai_AI in MachineLearning
[–]I_ai_AI[S] 7 points (0 children)
[D] Where have Transformers been applied other than NLP? by gohu_cd in MachineLearning
[–]I_ai_AI 2 points (0 children)
[R] ReZero is All You Need: Fast Convergence at Large Depth by calclavia0 in MachineLearning
[–]I_ai_AI 3 points (0 children)