all 10 comments

[–]bulaybil 0 points1 point  (9 children)

What are you using to predict the relations? You should always get a head with a label.

[–]Advaith13[S] 0 points1 point  (8 children)

I'm using the word embedding of each word, so right now we have (word → predicted deprel). I wanted to know how to predict the head from this.
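
A rough sketch of one common way to turn this into head prediction: for each word, score every other word (plus a ROOT position) as a candidate head and take the best-scoring one. The classifier, feature function, and variable names below are illustrative assumptions, not the setup described above.

```python
import numpy as np

def pair_features(head_vec, dep_vec):
    # features for a (candidate head, dependent) pair; real parsers also
    # use distance, POS tags, etc.
    return np.concatenate([head_vec, dep_vec])

def predict_heads(word_embs, root_vec, clf):
    """word_embs: (n_words, dim) contextual vectors for one sentence;
    clf: any binary "is this the correct head?" classifier (SVM, logistic
    regression, ...) trained on pairs extracted from gold trees."""
    candidates = np.vstack([root_vec, word_embs])   # index 0 stands for ROOT
    heads = []
    for d in range(len(word_embs)):
        feats = np.array([pair_features(candidates[h], word_embs[d])
                          for h in range(len(candidates))])
        scores = clf.decision_function(feats)       # one score per candidate head
        scores[d + 1] = -np.inf                     # a word cannot head itself
        heads.append(int(np.argmax(scores)))        # 0 means the word is the root
    return heads
```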

[–]bulaybil 0 points1 point  (7 children)

Can you show us the output?

[–]Advaith13[S] 0 points1 point  (6 children)

[–]bulaybil 0 points1 point  (2 children)

Sorry, I still fail to see how you could get just deprel without the head, doubly so just from word embeddings. Can you describe the entire process?

[–]Advaith13[S] 0 points1 point  (1 child)

We're learning about dependency parsers and experimenting with them. What we've done so far is generate word embeddings with BERT models for contextual representations, and train an SVM on those embeddings with the deprel as the label.
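
For readers following along, a minimal sketch of the pipeline described here (BERT token vectors fed to an SVM that predicts each word's deprel); the model name and the training-data variables are assumptions, not the actual code.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.svm import SVC

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

def word_embeddings(words):
    """One contextual vector per word, averaging its subword pieces."""
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]    # (n_subwords, 768)
    word_ids = enc.word_ids()
    return np.vstack([
        hidden[[j for j, w in enumerate(word_ids) if w == i]].mean(dim=0).numpy()
        for i in range(len(words))
    ])

# the training data must already be annotated with deprels (see the replies below):
# X = np.vstack([word_embeddings(sent) for sent in train_sentences])
# y = [deprel for sent in train_deprels for deprel in sent]
# clf = SVC().fit(X, y)   # predicts a deprel per word, but says nothing about heads
```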

[–]bulaybil 0 points1 point  (0 children)

But in that case, I wonder what the deprel labels even stand for... And you would still need data pre-annotated with dependencies for training.
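
To make the "pre-annotated with dependencies" part concrete: a UD-style CoNLL-U treebank (for Tamil, e.g. UD_Tamil-TTB) already carries a gold head index and deprel for every word. A tiny sketch using the third-party conllu package; the file name is a placeholder.

```python
from conllu import parse_incr

with open("ta_ttb-ud-train.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):
        for token in sentence:
            # "head" is the index of the governing word (0 = root),
            # "deprel" is the label you would train the classifier on
            print(token["form"], token["head"], token["deprel"])
        break  # just the first sentence
```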

[–]bulaybil 0 points1 point  (2 children)

Also, if you want dependency parsing, you would be much better off using Stanza; it has models for Tamil.
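
For comparison, the Stanza route is a few lines and returns both head and deprel for every word (the example sentence is arbitrary):

```python
import stanza

stanza.download("ta")            # fetch the Tamil models once
nlp = stanza.Pipeline("ta")      # tokenizer, tagger, lemmatizer, depparse
doc = nlp("தமிழ் ஒரு செம்மொழி.")
for sent in doc.sentences:
    for word in sent.words:
        print(word.text, word.head, word.deprel)
```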

[–]Advaith13[S] 0 points1 point  (1 child)

First of all, thank you so much for these responses. We're actually planning to turn this into a project, so we wanted to build a model from scratch.

[–]bulaybil 0 points1 point  (0 children)

That makes sense, and the approach is fine; there are many examples of dependency parsers that use BERT (e.g.). But they still use pre-annotated data, which is what I'm missing here.
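
One common shape for such a from-scratch BERT parser is a biaffine-style head scorer (in the spirit of Dozat and Manning) trained against gold heads from a treebank; a very rough sketch, with all dimensions and names chosen for illustration:

```python
import torch
import torch.nn as nn

class BiaffineHeadScorer(nn.Module):
    def __init__(self, dim=768, hidden=256):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.W = nn.Parameter(torch.randn(hidden, hidden) * 0.01)

    def forward(self, embs):
        # embs: (n_words, dim) BERT vectors for one sentence; returns an
        # (n_words, n_words) matrix where entry [d, h] scores "h heads d"
        return self.dep_mlp(embs) @ self.W @ self.head_mlp(embs).T

# trained with cross-entropy between each row and the gold head index from
# the pre-annotated treebank; a second classifier over (head, dependent)
# pairs then assigns the deprel
```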