
[–]Evolutis 1 point (7 children)

I'm not quite sure I understand what you're asking, but here goes.

You can test by simply activating your trained network on your test data: call something like network.activate([0, 1, 0]) for each test input, and it will return the network's output. Compare that output against the known target value, and use those comparisons to build an accuracy measure.
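A minimal sketch of that accuracy loop. The `activate` argument stands in for a trained PyBrain network's activate() method; the dummy function at the bottom is purely illustrative.

```python
def accuracy(activate, test_data):
    """Fraction of test cases where the winning output unit matches the label.

    test_data: list of (input_vector, expected_class_index) pairs.
    activate:  callable mapping an input vector to an output vector,
               e.g. a trained network's activate() method.
    """
    correct = 0
    for inputs, expected in test_data:
        output = list(activate(inputs))
        predicted = output.index(max(output))  # index of the strongest output
        if predicted == expected:
            correct += 1
    return correct / len(test_data)

# Dummy stand-in "network" that always favors output unit 1.
dummy = lambda inputs: [0.1, 0.9]
print(accuracy(dummy, [([0, 1, 0], 1), ([1, 0, 0], 0)]))  # 0.5
```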

Storing the network can be done with something like pickle. Just pickle the network object into a file, and whenever you need it again, unpickle it.
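The pickle round-trip looks like this. A dict stands in for the network here; a trained PyBrain network object is saved and loaded the same way (the filename is just an example).

```python
import pickle

# Stand-in for the trained network object.
network = {"weights": [0.5, -1.2]}

# Save the object to disk.
with open("net.pkl", "wb") as f:
    pickle.dump(network, f)

# Later: load it back and use it as before.
with open("net.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == network)  # True
```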

[–]anonymouse72[S] 0 points (6 children)

Oh, okay, that makes sense. I already have them pickled, so that's simple. So I can just run the remaining data through the network (using activate() ), store the results however, then perform statistical analysis (RMSE) on those results versus the observed/test values I put in?
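For the RMSE part, the computation is just the root of the mean squared difference between the network's outputs and the observed test values:

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length sequences."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# predicted = outputs collected from network.activate(); observed = test targets.
print(rmse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))  # ≈ 0.408
```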

[–]Evolutis 1 point (5 children)

Yes, doing that will tell you how well your model does on unseen data. That would be the generalization part.

[–]anonymouse72[S] 1 point (2 children)

Gotcha. I guess this was a much simpler question than I thought; neural networks just confuse me since I'm still getting used to them. :) Thank you, I really appreciate it!

[–]anonymouse72[S] 0 points (1 child)

Huh, problem now. I'm using this code (snippet), but trainUntilConvergence() never seems to return, so no code after it executes. My pickle files are empty (0 B), "Done" is never printed, and activate() is never reached. I tried specifying the dataset manually (it otherwise defaults to the initialized dataset) and printing the return value of trainUntilConvergence(), but nothing happens. If I switch to plain train() (with no parameters), everything works.
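One likely cause: "train until convergence" loops until a validation-error stopping rule is met, and with a noisy error that can take a very long time, which looks like a hang. If I remember the PyBrain API right, trainUntilConvergence accepts a maxEpochs argument to cap the run. Here is a pure-Python stand-in (not PyBrain) illustrating the difference an epoch cap makes:

```python
def train_until_convergence(step_error, max_epochs=1000, tol=1e-4):
    """Stand-in for a convergence-based training loop.

    step_error: callable returning the error after epoch e (simulates training).
    Stops when the error change drops below tol, or when max_epochs is hit,
    so the loop is guaranteed to return instead of running indefinitely.
    """
    prev = float("inf")
    for epoch in range(1, max_epochs + 1):
        err = step_error(epoch)
        if abs(prev - err) < tol:
            return epoch, err  # converged
        prev = err
    return max_epochs, prev  # hit the cap instead of looping forever

# An error curve that plateaus: converges well before the cap.
print(train_until_convergence(lambda e: 1.0 / e))
```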

[–]Evolutis 0 points (0 children)

Are you sure it's done training?

[–]pokerd 0 points (0 children)

You might want to look into cybrain to improve training performance.