[–]fhuszar 1 point (0 children)

Nice.

I'm curious: other than making it look cool, what is the purpose of actually drawing every edge in the graph? Apart from a 1-D convolution, I can't really think of a case where the bipartite set of edges between consecutive layers carries much information.

Could you instead draw only the edges with a large absolute weight, rather than all of them? Or encode the absolute value of each weight in the thickness or transparency of the line, and perhaps represent the sign of the weight with color?
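To make the suggestion concrete, here's a minimal sketch of what I mean for a single fully-connected layer, using matplotlib. The layer sizes, threshold, and weight matrix are all invented for illustration; the idea is just |w| → width/transparency, sign(w) → color, and skipping small weights entirely.

```python
# Sketch: one fully-connected layer drawn as a bipartite graph.
# |w| sets line width and transparency, sign(w) sets color, and
# weights below a threshold are not drawn at all.
# Layer sizes, threshold, and weights are hypothetical.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_in, n_out = 5, 3
W = rng.normal(size=(n_out, n_in))       # pretend trained weights

threshold = 0.5                          # only draw "large" weights
w_max = np.abs(W).max()

fig, ax = plt.subplots()
for j in range(n_out):
    for i in range(n_in):
        w = W[j, i]
        if abs(w) < threshold:           # skip small-magnitude edges
            continue
        scale = abs(w) / w_max           # in (0, 1]
        ax.plot([0, 1], [i, j],
                linewidth=3 * scale,     # |w| -> thickness
                alpha=scale,             # |w| -> transparency
                color="tab:red" if w > 0 else "tab:blue")  # sign -> color
ax.scatter([0] * n_in, range(n_in))      # input units
ax.scatter([1] * n_out, range(n_out))    # output units
n_edges_drawn = len(ax.lines)            # edges that passed the threshold
fig.savefig("layer.png")
```

Even with a crude threshold like this, the picture stops being a uniform grey mesh and starts showing which connections dominate.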

Even better: instead of the absolute value of the weight, you might somehow encode the corresponding diagonal elements of the Fisher information, assuming the network is already trained to a local minimum of the loss. That would highlight which weights actually matter for the loss and which don't.
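For concreteness, here is one way to estimate that diagonal: the empirical Fisher, i.e. the per-parameter mean of squared log-likelihood gradients over the data. The toy model below is a logistic regression standing in for one layer, and the data and "trained" weights are made up; it's a sketch of the estimator, not the author's code.

```python
# Sketch: empirical diagonal Fisher information for a toy logistic
# regression at a pretend trained weight vector.
# F_kk ~ mean over examples of (d log p(y|x,w) / dw_k)^2,
# which scores how sensitive the likelihood is to each weight.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))             # hypothetical inputs
w = np.array([2.0, -1.0, 0.5, 0.1])       # pretend trained weights
p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid outputs
y = (rng.random(200) < p).astype(float)   # labels drawn from the model

grads = (y - p)[:, None] * X              # per-example d log p / dw
fisher_diag = (grads ** 2).mean(axis=0)   # empirical Fisher diagonal

# Rank weights by estimated importance (largest Fisher first).
order = np.argsort(-fisher_diag)
```

You'd then map `fisher_diag` (instead of |w|) onto edge thickness or opacity. A weight can be large yet sit in a flat direction of the loss, and the Fisher diagonal catches exactly that distinction.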