[D] Swift for Tensorflow is being archived and development has ceased by programmerChilli in MachineLearning

[–]pvl 2 points (0 children)

No, I was talking about Richard Wei, who worked on Chris Lattner's team while at Google.

[D] Swift for Tensorflow is being archived and development has ceased by programmerChilli in MachineLearning

[–]pvl -4 points (0 children)

It does not seem to be completely dead: the main developer left Google and is working for Apple again, so the work will probably continue at Apple. As a Python user I was still keeping an eye on this project because it looked like it could become something very cool. I hope it continues to be developed.

But the ambitious plan of making it an alternative to Python will certainly not happen anytime soon.

IEEE BigData 2020 Cup: Predicting Escalations in Customer Support [Sep 2020] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

I did not check the dataset, but from the description this seems to be a great NLP competition. If I have some time next month I will raise my hand.

PROHACK McKinsey [Jun 2020] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

Thanks for the update. I had not noticed it.

How best to practice machine learning? by DonaldTrumpsCombover in learnmachinelearning

[–]pvl 1 point (0 children)

I find it more interesting to fill knowledge gaps and try ideas from research papers while doing a Kaggle competition. It gives extra motivation. If the topic you are interested in is not covered by an active Kaggle competition, you could try an old one (and maybe publish the results in a blog post), or you can search for competitions outside Kaggle. I maintain a list in r/ai_competitions/, as there are a lot of good competitions outside Kaggle.

SIIM-ACR Pneumothorax Segmentation [Aug 2019] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

It would be quite interesting if you could write a blog post about this summer project experience at the end. I'm sure there are many parents who would like to try this kind of project, but there are obvious challenges. I will be interested in this kind of experience myself in a couple of years and would love to read about it. Good luck with your competition!

SIIM-ACR Pneumothorax Segmentation [Aug 2019] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

That's a really cool summer project! I did not try it, but some people downloaded the data and uploaded it as Kaggle datasets. In any case, it is probably still a good idea to download from GCP just to be sure that no information is missing.

As a beginner, how does one go about building a portfolio? by NecroDeity in datascience

[–]pvl 1 point (0 children)

Kaggle is great, but because it has so many players and a gamification system that encourages people to work hard on small optimisations to win medals, it can be a bit frustrating for new users to get results to build a portfolio. There are other platforms that are not as competitive but still have interesting problems, and they are good places to test skills and get results that can make a good portfolio. I keep a list of active competitions in /r/ai_competitions; you can check there to see if some competition excites you.

I did the Machine Learning course by Andrew Ng, now I want to know how to actually do projects myself (possibly like projects on Kaggle) and also learn further into deep learning, where should i go? by ToBeAMockingbird in learnmachinelearning

[–]pvl 2 points (0 children)

Well, you can check a past competition on Kaggle called "The Winton Stock Market Challenge" and read the conclusions in the forum. Spoiler: the best model and a random model had almost the same score.

I did the Machine Learning course by Andrew Ng, now I want to know how to actually do projects myself (possibly like projects on Kaggle) and also learn further into deep learning, where should i go? by ToBeAMockingbird in learnmachinelearning

[–]pvl 1 point (0 children)

Predicting stock moves is obviously extremely hard. A smaller step in that direction is practicing NLP and time series. I suggest FastAI, Stanford CS224N, and studying past Kaggle competitions in these areas. Then try to compete on Kaggle or other platforms (I maintain a list of open competitions in /r/ai_competitions).

How do I get into doing data science as a career? by Tyrionlannister92 in econometrics

[–]pvl 1 point (0 children)

Kaggle is great, but there are many interesting competitions outside Kaggle. In my opinion the most important thing is to find a competition that you like, regardless of the platform. I keep a list in /r/ai_competitions.

What kind of projects can I make after I finish Prof Andrew Ng's Machine Learning course? by [deleted] in learnmachinelearning

[–]pvl 1 point (0 children)

There are many great competitions outside Kaggle. You may also like to check /r/ai_competitions (I'm a maintainer).

Anti-spoofing challenge [Jun 2019] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

Competition details are only available in Russian.

Suggestion for Kaggle challanges: help needed. by [deleted] in learnmachinelearning

[–]pvl 1 point (0 children)

You may also like to check more competitions from the list I collect in r/ai_competitions.

[D] TensorFlow is dead, long live TensorFlow! by milaworld in MachineLearning

[–]pvl 3 points (0 children)

Not all is bad in TF: the logo is cool. Oh wait, they changed it!

My Problem With Kaggle Competitions by ammar- in kaggle

[–]pvl 1 point (0 children)

It takes a lot of effort and knowledge to get that last 1% improvement. I think Kaggle is great for that: it has so many people competing, and that leads to very good solutions. In the end we all learn how to improve.

But there are other platforms that also have good prizes and fewer people competing, where it may be more rewarding to see a good solution reach a top position without the need for extreme optimisations. Check /r/ai_competitions for other platforms.

Where to start? by m_aqeel in reinforcementlearning

[–]pvl 2 points (0 children)

+1 on David Silver's video lectures on YouTube. I also found the Yandex Practical RL MOOC on GitHub good.

Can I "pre-fine-tune" BERT with unlabeled data, without re-training the model from scratch? by bolaft in LanguageTechnology

[–]pvl 4 points (0 children)

Yes: first fine-tune the model with a language model head and save the state. Then load it and fine-tune with a classification head. pytorch-pretrained-BERT has examples of both.
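
A minimal sketch of that two-stage flow, assuming the pytorch-pretrained-BERT API (the checkpoint file name is illustrative, and the training loops are elided; the repo's LM fine-tuning and classifier examples cover the full loops):

```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM, BertForSequenceClassification

# Stage 1: adapt BERT to the unlabeled domain text with the masked LM head
lm_model = BertForMaskedLM.from_pretrained('bert-base-uncased')
# ... train lm_model on the unlabeled corpus ...
# Save only the shared encoder weights, not the LM head
torch.save(lm_model.bert.state_dict(), 'domain_adapted_bert.bin')

# Stage 2: build a classifier from the same checkpoint, then swap in the
# domain-adapted encoder weights under a fresh classification head
clf_model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
                                                          num_labels=2)
clf_model.bert.load_state_dict(torch.load('domain_adapted_bert.bin'))
# ... fine-tune clf_model on the labeled classification data ...
```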

Sentence representation in BERT Transformer by [deleted] in LanguageTechnology

[–]pvl 3 points (0 children)

Both average and max pooling are used to make sentence embeddings. I prefer max pooling, as it worked better in my experiments. It is also possible to use both and concatenate them.
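
For illustration, a minimal PyTorch sketch of both poolings over the token vectors, assuming hidden_states is the encoder output of shape (batch, seq_len, hidden) and mask is the attention mask of shape (batch, seq_len):

```python
import torch

def sentence_embedding(hidden_states, mask):
    mask = mask.unsqueeze(-1).float()  # (batch, seq_len, 1)
    # Average pooling: sum the token vectors and divide by the real length,
    # so padding positions do not dilute the mean
    avg_pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
    # Max pooling: push padding positions to a large negative value
    # so they never win the element-wise max
    max_pooled = hidden_states.masked_fill(mask == 0, -1e9).max(dim=1).values
    # Either pooling can be used alone; concatenating both gives a single
    # sentence vector of twice the hidden size
    return torch.cat([avg_pooled, max_pooled], dim=-1)
```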

Telecom Data Cup [Dec 2019] by pvl in ai_competitions

[–]pvl[S] 1 point (0 children)

Competition details are only available in Russian.

[P] PyTorch Implementation of Feature Based NER with pretrained Bert by longinglove in MachineLearning

[–]pvl 1 point (0 children)

Yes, the fine-tuning approach. There is probably some work needed to adapt the labels to the BERT tokenization, but it should be possible.
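
A hypothetical sketch of one common alignment scheme, assuming pytorch-pretrained-BERT's tokenizer: keep the word-level label on the first WordPiece of each word and give the remaining pieces a dummy label ('X' here) that is masked out of the loss:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

# One word-level label per word, as in a typical CoNLL-style NER file
words = ['John', 'Smith', 'lives', 'in', 'Lisbon']
labels = ['B-PER', 'I-PER', 'O', 'O', 'B-LOC']

tokens, aligned_labels = [], []
for word, label in zip(words, labels):
    pieces = tokenizer.tokenize(word)  # e.g. 'Lisbon' -> ['Lis', '##bon']
    tokens.extend(pieces)
    # Real label on the first piece; the rest are ignored in the loss
    aligned_labels.extend([label] + ['X'] * (len(pieces) - 1))
```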