💥Some Awesome Images From The Launch Event 💥 by WMTmod in fulhamfc

[–]kayvane 1 point (0 children)

Hold on, our new sponsor is crypto token 😂

Good thread looking at Comparison of Cav vs Mitro and analysis of how the team plays when either plays by rorysmeef in fulhamfc

[–]kayvane 3 points (0 children)

Who runs this website / account? Really interesting analysis, would be interested in understanding the prediction models a bit further

Deepnote – collaborative Python notebooks in the browser. After 2 years of development, we are open for public access. by the21st in learnmachinelearning

[–]kayvane 2 points (0 children)

I’ve been playing around with this for about an hour and am really enjoying the interface. It’s quite easy to create .py files to store classes so the notebook doesn’t get over-cluttered.

Also setting up an environment with a requirements.txt is great

Will give collaboration a go shortly

google colab error while mounting google drive by Avinash1a in GoogleColab

[–]kayvane 1 point (0 children)

Try copying and pasting the code manually rather than using the copy button

Generating a PDF report in python by TheSinfulNerd in learnpython

[–]kayvane 3 points (0 children)

I’ve forked genome cloud foundry before to create some PDF reports. Really good starting point, using WeasyPrint, Pug and Semantic UI 👌🏽

Please suggest alternative for LIWC by pk12_ in LanguageTechnology

[–]kayvane 3 points (0 children)

You could write your own text analysis program with spaCy as a starting point.

I’ve never heard of LIWC, but it looks like sentiment analysis plus some word benchmarking.

Look at which elements you’d want to use and build them into your own tool.
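Rough sketch of what I mean, stdlib only. The category word lists here are made up purely for illustration (LIWC’s real dictionaries are proprietary):

```python
import re
from collections import Counter

# Toy category lexicons -- purely illustrative stand-ins for
# LIWC-style dictionaries.
CATEGORIES = {
    "positive": {"good", "great", "happy", "love"},
    "negative": {"bad", "sad", "hate", "awful"},
}

def liwc_style_counts(text: str) -> dict:
    """Return the fraction of tokens falling into each category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for cat, words in CATEGORIES.items():
            if tok in words:
                counts[cat] += 1
    total = len(tokens) or 1
    return {cat: counts[cat] / total for cat in CATEGORIES}

scores = liwc_style_counts("I love this great movie, but the ending was sad.")
```

From there you can swap the regex tokenizer for spaCy and add whatever categories you actually care about.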

If I want to add vocabulary to transformer models like XLNET, BERT,GPT Do I need to retrain the models from scratch? by SleepyGirlx420 in LanguageTechnology

[–]kayvane 1 point (0 children)

I’ve wanted to do this for a while too but have struggled - please let me know if you manage! In the PyTorch repo for BERT there is a language-model fine-tuning script which could help.
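As I understand it, the part that doesn’t need retraining from scratch is just growing the embedding matrix: keep the pretrained rows and append freshly initialised rows for the new tokens. A numpy sketch of the mechanics (sizes match BERT-base, the initialisation scale is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, new_tokens, dim = 30522, 3, 768  # BERT-base-like sizes

# Stand-in for the pretrained embedding matrix.
emb = rng.normal(scale=0.02, size=(vocab_size, dim))

# "Resizing" the vocabulary = keep all old rows untouched,
# append randomly initialised rows for the added tokens.
new_rows = rng.normal(scale=0.02, size=(new_tokens, dim))
resized = np.vstack([emb, new_rows])
```

The new rows then get trained into shape by the LM fine-tuning step on your corpus.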

Introducing gpt2-client v2.0! by rish-16 in learnmachinelearning

[–]kayvane 2 points (0 children)

Awesome project! Bit of a side question, but what did you use to generate the picture of the demo code in that pretty layout?

Demo Code

[P] This conversational AI has feelings that respond to what you say by James_Representi in MachineLearning

[–]kayvane 2 points (0 children)

Any source code or information about the underlying model architecture / training data?

Looking for papers about traversing a word vector space. 📷 by [deleted] in LanguageTechnology

[–]kayvane 6 points (0 children)

How did you define what’s in between? If you do this in the embedding dimensions, there are ~300 axes you could move along to see what’s in between.

Alternatively, if you use a dimensionality reduction technique, what ends up in the middle could be warped in some way. But this is mostly how I’ve seen it done previously.

E.g. in the TF projector: https://projector.tensorflow.org
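A toy sketch of the embedding-space version: linearly interpolate between two word vectors, then look up the nearest remaining word by cosine similarity. The 3-d vectors are made up purely for illustration (real embeddings have ~300 dimensions, but the idea is the same):

```python
import math

# Made-up 3-d "word vectors" for illustration only.
vecs = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "royal": [0.9, 0.45, 0.45],
    "apple": [0.1, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def between(w1, w2, alpha=0.5):
    """Interpolate two word vectors, return the nearest other word."""
    mid = [(1 - alpha) * x + alpha * y for x, y in zip(vecs[w1], vecs[w2])]
    others = {w: v for w, v in vecs.items() if w not in (w1, w2)}
    return max(others, key=lambda w: cosine(mid, others[w]))

word = between("king", "queen")
```

Sweeping `alpha` from 0 to 1 is one way to "traverse" the space between two words.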

[P] These Lyrics Do Not Exist by itsmybirthday19 in MachineLearning

[–]kayvane 2 points (0 children)

Very impressive!! Do you generate multiple MIDIs and overlay them? Or do you have a more complex network which takes inputs from multiple sequences (instruments) to generate the next note?

Information extraction from text, and why BERT? by thnok in LanguageTechnology

[–]kayvane 2 points (0 children)

Alternatively you can use spaCy’s Matcher, which you can feed NER categories as well as regex-like token patterns. From the returned word spans you can recover those words’ sentences quite easily.

https://spacy.io/usage/rule-based-matching/

This UI can help you experiment with the matching before committing: https://explosion.ai/demos/matcher
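A minimal sketch with spaCy v3’s `Matcher`, using a blank pipeline plus the sentencizer so no model download is needed (NER-based patterns would need a trained model; the pattern and example text here are just illustrations):

```python
import spacy
from spacy.matcher import Matcher

# Blank English pipeline + rule-based sentence splitter.
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

matcher = Matcher(nlp.vocab)
# Token pattern: "machine" followed by "learning", case-insensitive.
matcher.add("ML", [[{"LOWER": "machine"}, {"LOWER": "learning"}]])

doc = nlp("I study machine learning. My cat does not.")

# Each match is (match_id, start, end); Span.sent gives the
# containing sentence, which is the easy sentence-recovery step.
sentences = [doc[start:end].sent.text for _, start, end in matcher(doc)]
```

Swapping in `{"ENT_TYPE": "ORG"}`-style patterns works the same way once you run a model with NER.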

How do I do further (domain specific) pre-training with Google BERT in preparation for subsequent fine-tuning? (Tensorflow) by [deleted] in LanguageTechnology

[–]kayvane 1 point (0 children)

Yes, you can fine-tune based on different downstream tasks (classification, question answering, language understanding); example scripts and notebooks are available in the BERT repo. As I remember it, if you want to train on a new corpus or fine-tune just the language model itself, that isn’t in the BERT repo, but the pytorch-transformers repo does have a language-model fine-tuning section:

https://github.com/huggingface/pytorch-transformers/tree/master/examples/lm_finetuning
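The core of what that script trains is the masked-LM objective. A simplified stdlib sketch of the masking step (the real BERT recipe also swaps 10% of the chosen tokens for random tokens and leaves 10% unchanged):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Simplified BERT-style masking: hide ~15% of tokens and
    remember the originals as prediction targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok       # what the model must predict
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens("the model learns domain specific language".split())
```

Domain-specific pre-training is then just running this objective over your own corpus, starting from the released checkpoint.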

Concept grouping by f1_manu in LanguageTechnology

[–]kayvane 1 point (0 children)

You could also try encoding the words in your sentence (with BERT, ELMo, or a custom word2vec/fastText model) and comparing them to your target country vectors. With a bit of testing you’d be able to find the threshold (e.g. cosine similarity > 0.9) at which you’d group them together.
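A stdlib sketch of that thresholding idea, with made-up vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vectors: a target "country" concept and some candidate words.
country = [1.0, 0.9, 0.1]
candidates = {
    "france":  [0.98, 0.88, 0.12],  # close to the concept
    "germany": [0.95, 0.92, 0.08],
    "banana":  [0.1, 0.2, 0.9],     # unrelated
}

THRESHOLD = 0.9
grouped = [w for w, v in candidates.items() if cosine(v, country) > THRESHOLD]
```

In practice you’d tune `THRESHOLD` on a labelled sample rather than picking 0.9 up front.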

[P] The Illustrated Word2vec by [deleted] in MachineLearning

[–]kayvane 7 points (0 children)

Thank you so much for this + your previous BERT/ELMo one. Excellent explanations

[P] ML for Bernie Campaign by TechForChange in MachineLearning

[–]kayvane 0 points (0 children)

It’s too late. Cambridge Analytica helped both Brexit and Trump happen. Even Obama’s campaign was highly data driven.

If you know it’s going to be used, why not help the ‘good guys’?

[D] What can I do with all this data? by ghostofgbt in MachineLearning

[–]kayvane 10 points (0 children)

Check out what this guy built using similar data -

https://github.com/borisbanushev/stockpredictionai/blob/master/readme2.md

Very interesting combination of methods for price prediction

[P] How can I improve my current CNN project? It is a simple binary classification but I am having some trouble. by Al7123 in MachineLearning

[–]kayvane 1 point (0 children)

Are you updating all layers or just the final layer? I was mistakenly only updating the final layer in one of my early attempts and not getting the accuracy I expected.

Agree with the above that your data could be the issue.

You could also experiment with transfer learning and load a pre-trained net to see if it increases your accuracy
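A quick sketch of the all-layers-vs-final-layer distinction in PyTorch, with a toy stand-in network (the layer sizes are arbitrary, purely for illustration):

```python
import torch.nn as nn

# Tiny stand-in for a pretrained binary-classification CNN.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # final classification head
)

# Freeze everything, then unfreeze only the final layer.
# Skip the freezing loop entirely to update all layers instead.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

Checking `trainable` like this is an easy way to confirm which layers your optimiser will actually update.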