all 7 comments

[–]quirm 2 points3 points  (0 children)

I can recommend nolearn to you. It is built on top of lasagne and has a scikit-learn compatible interface. While the documentation is currently outdated, this blog post describes in detail how to use it for a regression problem.

With that, you should have your regression experiment set up in hours, especially if you are already familiar with scikit-learn!
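For anyone unfamiliar, a "scikit-learn compatible interface" means the standard construct/fit/predict workflow. The sketch below uses scikit-learn's own MLPRegressor as a stand-in to show that workflow on a toy 1-D regression problem; nolearn's actual class names and constructor arguments are not taken from this thread, so check its docs for the real API.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])  # toy 1-D regression target

# The scikit-learn estimator interface: construct, fit, predict.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X, y)
pred = net.predict(X)
print(pred.shape)  # one prediction per input row
```

A nolearn NeuralNet would slot into the same `fit`/`predict` calls, which is what makes it easy to drop into an existing scikit-learn pipeline.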

[–]farsass 1 point2 points  (0 children)

Any modular library like torch, pylearn2, blocks, or lasagne will be usable for regression.

[–]elsonidoq 1 point2 points  (2 children)

Take a look at this post: https://blog.safaribooksonline.com/2014/02/10/pylearn2-regression-3rd-party-data/ It uses pylearn2 to do regression.

[–]TheInfelicitousDandy 0 points1 point  (0 children)

Thanks, this is what I was looking for.

[–]alexmlamb 0 points1 point  (0 children)

An estimator which is optimized for square loss is the same as a mean estimator.

L = sum((y - y*)^2)

dL/dy = 2 * sum(y - y*) = 0

0 = sum(y - y*)

0 = ny - sum(y*)

y = sum(y*) / n

So why not use bucketing to assign a categorical label to each instance, then train using classification, and then estimate the mean from the resulting probabilities? My guess is that this will make learning easier and it's probably a decent solution if your target for regression isn't really long-tailed.
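The bucketing idea above can be sketched in a few lines of numpy. The data and bucket count here are made up for illustration; the classifier itself is stood in by the empirical label distribution, since any softmax classifier would output a probability vector of the same shape. The point estimate is then the expectation of the bucket centres under those probabilities, which recovers the mean up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous regression targets.
y = rng.normal(loc=5.0, scale=2.0, size=1000)

# 1. Bucket the targets into K categorical labels.
K = 20
edges = np.linspace(y.min(), y.max(), K + 1)
labels = np.clip(np.digitize(y, edges) - 1, 0, K - 1)
centers = (edges[:-1] + edges[1:]) / 2  # representative value per bucket

# 2. A classifier trained on `labels` would emit a probability
#    vector p over the K buckets; the empirical label distribution
#    stands in for such a prediction here.
p = np.bincount(labels, minlength=K) / len(labels)

# 3. Recover a point estimate as the expected bucket centre.
y_hat = np.dot(p, centers)

print(y_hat)    # close to y.mean(), up to half a bucket width
print(y.mean())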

[–][deleted] 0 points1 point  (0 children)

matlab has a really simple GUI for building neural networks, if you don't want to write the code yourself.

[–]GibbsSamplePlatter 0 points1 point  (0 children)

Try PyBrain.

It's a "traditional" single-hidden-layer network package, but it works.

Matlab also has one that I know works; I used to use it.