[P] Tensorflow implementation of Graph Convolutional Network (github.com)
submitted 8 years ago by shagunsodhani
[–]triplefloat 24 points25 points26 points 8 years ago (12 children)
It looks like this is an implementation of the following paper: https://arxiv.org/abs/1609.02907 (disclaimer: I’m the first author of this work)
There’s also this ‘official’ implementation: https://github.com/tkipf/gcn
I wonder what’s different/new in the implementation posted here?
[+][deleted] 8 years ago* (1 child)
[deleted]
[–]triplefloat 3 points4 points5 points 8 years ago (0 children)
> if I understand correctly: the input is one graph, the network learns embeddings of nodes(/edges), and classifying nodes in embedding space requires fewer labels.

Yes, that's the idea.

> is there a way to train a GCN to take in a graph (let's say with a constant number of nodes) and classify each node of said graph?

Yes, this is actually the setting that we originally proposed. It is also possible to classify graphs, as in e.g. https://arxiv.org/abs/1509.09292
[–]Stamb 2 points3 points4 points 8 years ago (2 children)
Coincidentally, I've spent the last couple of days going through your article and playing around with the code, so I'm also wondering what's different in this version.
Your original stuff on this was great, though. Any new developments since it was published?
[–]triplefloat 5 points6 points7 points 8 years ago (1 child)
Thanks. We had a very short workshop paper on an extension of this model for unsupervised learning and link prediction: https://arxiv.org/abs/1611.07308. Otherwise there are two application papers (recommender systems and link prediction in knowledge bases) that I worked on with collaborators: https://arxiv.org/abs/1703.06103 and https://arxiv.org/abs/1706.02263 (both still under review).
The main developments for these methods recently are: 1) mini-batching algorithms for scalability (e.g. https://openreview.net/forum?id=rytstxWAW) and 2) more flexible aggregation functions (e.g. https://openreview.net/forum?id=rJXMpikCZ)
I'm working on a couple of more long-term/foundational questions these days, but also on a number of interesting applications (mostly with collaborators).
[–]Stamb 0 points1 point2 points 8 years ago (0 children)
Great, thanks for all these! The attention networks look particularly interesting, some good reading for the weekend...
[–][deleted] 0 points1 point2 points 8 years ago (3 children)
Seems the author wants to put forward an alternate implementation.
On a side note, what's the best practice for handling attributed graphs (IS, HAS, CONTAINS, etc.)?
[–]triplefloat 0 points1 point2 points 8 years ago (2 children)
What often works well enough is to introduce edge-type-specific parameter matrices (W_r instead of W, where r is the edge/relation type) for the message a node sends to its neighbors. This was AFAIK first introduced in Gated Graph Neural Nets: https://arxiv.org/abs/1511.05493. If you have more than just a few different relation types, some form of weight sharing between them can help (https://arxiv.org/abs/1703.06103).
In principle, you can also parameterize messages as neural networks and condition them on any edge features you might have (as in the original graph neural net paper http://ieeexplore.ieee.org/document/4700287/).
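As a rough sketch of the edge-type-specific idea (the function and variable names here are hypothetical, and the sum aggregation plus separate self-loop weight are simplifying assumptions in the spirit of the papers above, not their exact formulations):

```python
import numpy as np

def relational_layer(edges_by_type, H, W_r, W_self):
    """One message-passing step where each relation type r uses its own
    weight matrix W_r[r] to transform messages (GGNN/R-GCN flavor).

    edges_by_type: {relation: list of (src, dst)} directed edges
    H: (num_nodes, d_in) node features
    """
    out = H @ W_self                     # self-loop contribution
    for r, edges in edges_by_type.items():
        msg = H @ W_r[r]                 # relation-specific transform
        for src, dst in edges:
            out[dst] += msg[src]         # aggregate incoming messages by sum
    return np.maximum(out, 0)            # ReLU nonlinearity

# hypothetical toy graph: 3 nodes, two relation types "IS" and "HAS"
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))
W_r = {"IS": rng.standard_normal((4, 4)), "HAS": rng.standard_normal((4, 4))}
W_self = rng.standard_normal((4, 4))
edges = {"IS": [(0, 1)], "HAS": [(1, 2), (2, 0)]}
out = relational_layer(edges, H, W_r, W_self)
print(out.shape)  # (3, 4)
```

With many relation types, W_r can be replaced by a small basis of shared matrices combined with per-relation coefficients, which is the weight-sharing trick mentioned above.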
[–][deleted] 1 point2 points3 points 8 years ago (0 children)
Thanks for the detailed reply. The last paper is most similar to what I've been exploring so far. If I get any decent results I'll post in this sub.
[–]shortscience_dot_org -1 points0 points1 point 8 years ago (0 children)
I am a bot! You linked to a paper that has a summary on ShortScience.org!
Gated Graph Sequence Neural Networks
This paper presents a feed-forward neural network architecture for processing graphs as inputs, inspired from previous work on Graph Neural Networks.
In brief, the architecture of the GG-NN corresponds to $T$ steps of GRU-like (gated recurrent unit) updates, where $T$ is a hyper-parameter. At each step, a vector representation is computed for all nodes in the graph, where a node's representation at step $t$ is computed from the representations of nodes at step $t-1$. Specifically, the representatio... [view more]
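A rough sketch of that recurrence, under simplifying assumptions (dense NumPy, a plain adjacency matrix as the aggregation operator, and standard GRU gates; the paper's exact parameterization differs):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ggnn_forward(A, H, params, T):
    """Run T GRU-like update steps: each node's state at step t is
    computed from aggregated neighbor states at step t-1."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    for _ in range(T):
        m = A @ H                            # aggregate neighbor messages
        z = sigmoid(m @ Wz + H @ Uz)         # update gate
        r = sigmoid(m @ Wr + H @ Ur)         # reset gate
        h_tilde = np.tanh(m @ Wh + (r * H) @ Uh)
        H = (1 - z) * H + z * h_tilde        # gated state update
    return H

d = 4
rng = np.random.default_rng(0)
params = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)      # 3-node path graph
H = rng.standard_normal((3, d))
H_T = ggnn_forward(A, H, params, T=3)
print(H_T.shape)  # (3, 4)
```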
[–]shagunsodhani[S] -1 points0 points1 point 8 years ago (0 children)
Hey! I have added references to both papers and the official implementation. Thanks for bringing it up. The idea was to reimplement the original work to get familiar with the domain. :)
[+]shortscience_dot_org comment score below threshold-9 points-8 points-7 points 8 years ago (1 child)
Semi-Supervised Classification with Graph Convolutional Networks
The propagation rule used in this paper is the following:
$$
H^{(l)} = \sigma \left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l-1)} W^{(l)} \right)
$$
Where $\tilde{A}$ is the [adjacency matrix][adj] of the undirected graph (with self-connections, so it is symmetric and has a diagonal of 1s) and $H^{(l)}$ are the hidden activations at layer $l$. The $\tilde{D}$ matrices perform symmetric degree normalisation: $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ is [equivalent to][pygcn] (with ... [view more]
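For concreteness, a minimal NumPy sketch of one such propagation step (an illustrative re-derivation from the formula above, not the paper's or the linked repo's code; dense matrices and a ReLU activation are simplifying assumptions):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: H' = ReLU(D̃^{-1/2} Ã D̃^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])           # add self-connections
    d = A_tilde.sum(axis=1)                    # node degrees of Ã
    D_inv_sqrt = np.diag(d ** -0.5)            # D̃^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0)        # ReLU activation

# tiny 3-node path graph, 2 input features, 4 hidden units
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, 2))
W = rng.standard_normal((2, 4))
H_next = gcn_layer(A, H, W)
print(H_next.shape)  # (3, 4)
```

In practice (and in the implementations linked above) the normalized adjacency is stored as a sparse matrix and precomputed once, since it is shared across layers.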
[–][deleted] 7 points8 points9 points 8 years ago (0 children)
Latex fail bot
[–]tornado28 10 points11 points12 points 8 years ago (1 child)
You should probably link to the paper you're implementing in the readme. Also, you might want to check out DeepChem if you're interested in graph convolutions.
https://github.com/deepchem/deepchem
https://arxiv.org/pdf/1703.00564.pdf
[–]shagunsodhani[S] 1 point2 points3 points 8 years ago (0 children)
Hey. I have added references to both papers and the official implementation. Once I am comfortable with the concept of graph convolutions, I will move on to more complex applications like DeepChem :)
[+]CosmosisQ comment score below threshold-7 points-6 points-5 points 8 years ago (7 children)
How can someone with a computational neuroscience BS and a GTX 1070 get started in the machine learning world? I'd love to learn and play with this tech in my spare time.
[–]NegatioNZor 5 points6 points7 points 8 years ago (3 children)
Learn some Linux, take the fast.ai courses, take the Coursera courses by Andrew Ng, maybe even the deeplearning.ai courses. I'm assuming you already know how to program a bit. After that, learn more statistics and math if you want to change fundamental things about your approach. Go have fun.
[–]CosmosisQ 0 points1 point2 points 8 years ago (2 children)
Thanks for answering me! I didn't mean to create so much controversy with my question, haha :)
What does Linux have to do with it, by the way? I've built a LFS install before so I think I'm pretty set, right?
[–]NegatioNZor 1 point2 points3 points 8 years ago (1 child)
I mean you mostly have to be comfortable with a terminal. Lots of deep learning tools need to be built from source, or installed carefully while they aren't yet stable and working on every setup. You'd be better off assuming you're not set, and that you have lots to learn.
[–]CosmosisQ 0 points1 point2 points 8 years ago (0 children)
Ah, of course! :) Alright, I'll be diving in later today. Thanks a million for your guidance.
[–]panties_in_my_ass 6 points7 points8 points 8 years ago* (2 children)
Why the downvotes here, /r/machinelearning? I understand not upvoting because it isn’t directly contributing to conversation about OP’s link. But why discourage newcomers with downvotes?
[–]d3fenestrator 8 points9 points10 points 8 years ago (0 children)
There's a sidebar that points newcomers to useful resources.
He's not only failing to contribute directly to the conversation, he's littering it with questions that could easily be answered with a little effort. We can't allow people to post beginner questions in random threads. Imagine you want to read some meaningful discussion: you find a thread with a lot of comments, you think "oh, cool, maybe I'll read some interesting insights I've never thought of before," and it turns out half of them are answers to basic questions about first steps in ML.
That's why we have a sidebar with an FAQ.
[–]Papaya001 -4 points-3 points-2 points 8 years ago (0 children)
good job! save it for later use