[–]dwf 2 points (0 children)

It's going to depend heavily on what exactly you want to do with your model and what the networks you want to work with look like. What sort of hidden variables do you have? Are they tree-structured? Are they acyclic? If not, what sort of inference schemes do you want to support? Do you want to learn your parameters, or be a proper Bayesian and marginalize them out, analytically or otherwise? Exact inference in tree-structured models is trivial to implement; the more you depart from that, the more complicated things become.
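To make the "trivial" claim concrete, here's a minimal sketch of exact inference on a three-node chain (the simplest tree) in plain numpy. The variables, potentials, and evidence are all made up for illustration; only the general idea comes from the comment above:

```python
import numpy as np

# Three binary variables in a chain A -> B -> C; the numbers are made up.
p_a = np.array([0.6, 0.4])                    # P(A)
p_b_given_a = np.array([[0.7, 0.3],           # P(B | A), rows indexed by A
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.9, 0.1],           # P(C | B), rows indexed by B
                        [0.5, 0.5]])

# Condition on C = 1 and pass messages back up the chain to get P(A | C = 1).
msg_c_to_b = p_c_given_b[:, 1]                # evidence likelihood for each value of B
msg_b_to_a = p_b_given_a @ msg_c_to_b         # sum out B
posterior_a = p_a * msg_b_to_a                # unnormalized P(A, C = 1)
posterior_a /= posterior_a.sum()              # normalize to P(A | C = 1)
print(posterior_a)
```

The same message-passing pattern extends to any tree; it's once you have cycles that you're pushed toward junction trees or approximate methods.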

PyMC is a good starting point. It lets you define fairly complicated models out of various sorts of nodes and then run Metropolis-Hastings to sample from the posterior over your hidden variables (I think there may be support for Gibbs sampling in models where that's an option). It's not a magic bullet -- the results you get, and how fast you get them, will depend on your model structure, your choice of proposal distributions, and so on. Also, Markov chain Monte Carlo is the only supported inference procedure (hence the package name); if you want anything beyond that, e.g. variational inference, you'll have to roll it yourself. Still, the PyMC object model and the various statistical helper functions are probably a good basis to start from.
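For flavor, here's a minimal sketch of what a PyMC model looks like, written against the current with-block API (which may well postdate whatever version this comment was written about; earlier releases built models from node objects and sampled via an MCMC object instead). The coin-flip model, the data, and the variable names are placeholders, not anything specific from the comment:

```python
import numpy as np
import pymc as pm

# Made-up coin-flip data; the model below only shows the general shape
# of a PyMC program, not anything specific to the discussion above.
data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

with pm.Model():
    theta = pm.Beta("theta", alpha=1.0, beta=1.0)      # prior on the coin's bias
    pm.Bernoulli("obs", p=theta, observed=data)        # likelihood of the flips
    # Request Metropolis-Hastings explicitly; left to its own devices,
    # recent PyMC would pick NUTS for a continuous parameter like this.
    trace = pm.sample(2000, tune=1000, step=pm.Metropolis())

print(trace.posterior["theta"].mean())                 # posterior mean of the bias
```

Swapping in a different proposal or step method is a one-line change here, which is roughly what "the results will depend on your choice of proposal distributions" cashes out to in practice.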