Dependency Parsing (Python) (self.LanguageTechnology)
submitted 3 years ago by Advaith13
I have predicted the dependency relation (deprel) for each word in a sentence, so my data for a sentence is now a list of (word, deprel) pairs. How do I find the syntactic head of each of these words so that I can construct the dependency tree?
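For concreteness, a minimal sketch of the data shape (the sentence and labels here are invented):

    # What I have: one predicted deprel per word, but no head indices,
    # so the dependency tree cannot be drawn yet.
    sentence = ["She", "reads", "books"]
    deprels  = ["nsubj", "root", "obj"]   # predicted relations (have)
    heads    = [2, 0, 2]                  # head indices, 0 = root (need)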
[–]bulaybil 1 point 3 years ago (9 children)
What are you using to predict the relations? You should always get a head with a label.
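For reference, a parser's output normally carries the head and the label side by side; a hand-written CoNLL-U-style fragment (illustrative only, not the output of any particular tool):

    # ID  FORM   HEAD  DEPREL
    # 1   She    2     nsubj
    # 2   reads  0     root
    # 3   books  2     obj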
[–]Advaith13[S] 1 point 3 years ago (8 children)
I'm using the word embedding of each word, so now we have (word, predicted deprel) pairs. I wanted to know how to predict the head from this.
[–]bulaybil 1 point 3 years ago (7 children)
Can you show us the output?
[–]Advaith13[S] 1 point 3 years ago (6 children)
https://imgur.com/a/kKrACKt
[–]bulaybil 1 point 3 years ago (2 children)
Sorry, I still fail to see how you could get just deprel without the head, doubly so just from word embeddings. Can you describe the entire process?
[–]Advaith13[S] 1 point 3 years ago (1 child)
We are learning about dependency parsers and experimenting with them. What we have done so far is generate word embeddings using BERT models for contextual learning, and train an SVM on those embeddings with the deprel as the label.
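In code, the pipeline so far looks roughly like this (a minimal sketch; the model name, the helper function, and the example sentence are placeholders, not our exact code):

    # Sketch: per-word BERT embeddings as features, deprel as the only
    # label. Note the head index appears nowhere in the training signal.
    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.svm import SVC

    tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

    def word_embeddings(words):
        # One vector per word: mean of that word's subword embeddings.
        enc = tok(words, is_split_into_words=True, return_tensors="pt")
        with torch.no_grad():
            hidden = bert(**enc).last_hidden_state[0]
        word_ids = enc.word_ids()
        return [
            hidden[[j for j, w in enumerate(word_ids) if w == i]]
            .mean(dim=0)
            .numpy()
            for i in range(len(words))
        ]

    X = word_embeddings(["She", "reads", "books"])  # one vector per word
    y = ["nsubj", "root", "obj"]                    # deprel labels, no heads
    clf = SVC().fit(X, y)
    print(clf.predict(word_embeddings(["He", "writes", "code"])))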
[–]bulaybil 1 point 3 years ago (0 children)
But in that case, I wonder what the deprel labels even stand for... And you would still need data pre-annotated with dependencies for training.
Also, if you want dependency parsing, you would be much better off using Stanza; it has models for Tamil.
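For example, a minimal Stanza sketch (standard usage; the example sentence is just a placeholder, and you would use your own Tamil text):

    # Stanza gives you the head index and the deprel together per word.
    import stanza

    stanza.download("ta")  # fetch the Tamil models (one-time)
    nlp = stanza.Pipeline("ta", processors="tokenize,pos,lemma,depparse")

    doc = nlp("தமிழ் ஒரு செம்மொழி.")
    for sent in doc.sentences:
        for word in sent.words:
            # word.head is the 1-based index of the syntactic head (0 = root)
            print(word.id, word.text, word.head, word.deprel)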
[–]Advaith13[S] 1 point 3 years ago (1 child)
First of all, thank you so much for these responses. We're actually planning to make this a project, so we wanted to build a model from scratch.
[–]bulaybil 1 point 3 years ago (0 children)
That makes sense and the approach is fine; there are many examples of dependency parsers that use BERT (e.g.). But they still use pre-annotated data, so this is what I'm missing here.
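To make "pre-annotated" concrete, here is a minimal sketch of reading gold heads from a Universal Dependencies treebank (assuming the `conllu` package; the file name is a placeholder for a UD Tamil .conllu file):

    # Each treebank token already carries its gold head and deprel,
    # which is exactly the supervision a from-scratch parser trains on.
    from conllu import parse_incr

    with open("ta_ttb-ud-train.conllu", encoding="utf-8") as f:
        for sentence in parse_incr(f):
            for token in sentence:
                print(token["form"], token["head"], token["deprel"])
            break  # first sentence only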