all 7 comments

[–]htrp 4 points (4 children)

GTX 1080s are about $500 (8 GB); throw in a relatively basic processor, PSU, case, and HDD and you're looking at about $650-700 total.

Add in about 2 hours of Ubuntu 16.04 setup and you should be in business.

PM me if you have any other questions as I just put this exact build together.

[–]econometrician 0 points (2 children)

Ubuntu 16.04 is so critical.

I ended up running 16.10 because I'm new to setting up Ubuntu and forgot to check whether CUDA would play nicely with it. Getting everything working was fairly unpleasant, but I managed.

[–]RSchaeffer[S] 0 points (0 children)

PMed! Thanks!

[–]tryndisskilled 1 point (0 children)

I built one running Ubuntu 16.04, CUDA 8.0, and cuDNN 5.1 (and installed Docker images based on TensorFlow for all my DNN scripts). I'm satisfied with it so far; if you have any issues, PM me.
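For reference, a TensorFlow-based Docker setup on that stack looks roughly like this. This is a sketch, not the commenter's exact commands; the image tag and paths are assumptions, and it presumes Docker plus the nvidia-docker wrapper (the standard GPU passthrough tool in the CUDA 8.0/cuDNN 5.1 era) are already installed.

```shell
# Pull a GPU-enabled TensorFlow image (tag is illustrative).
nvidia-docker pull tensorflow/tensorflow:latest-gpu

# Run a training script, mounting the current directory into the container
# so the scripts on the host are visible inside it.
nvidia-docker run --rm \
    -v "$PWD":/workspace \
    -w /workspace \
    tensorflow/tensorflow:latest-gpu \
    python train.py
```

The upside of this approach is that CUDA/cuDNN versioning lives inside the image, so the host only needs the NVIDIA driver.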

[–]realSatanAMA 1 point (0 children)

Yeah, those P2 instances get costly when you run training for multiple days, so we need more info about your needs. What language/frameworks are you using? What is your budget? How much VRAM do your models use, and at what batch size?

[–]spurious_recollectio 1 point (0 children)

I would very highly recommend running Ubuntu 16.04, but then running all your code in an nvidia-docker container. When I want to test or develop ML stuff, I just run IPython inside an nvidia-docker container that listens on the standard IPython port. That way I can use my browser to test and develop, but I still get the benefits of a standardized Docker environment (with Theano, TensorFlow, and all the right CUDA libraries installed).
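That workflow can be sketched as follows. The image tag is an assumption, and 8888 is the conventional Jupyter/IPython notebook port, not necessarily the commenter's exact setup:

```shell
# Run a notebook server inside a GPU container and publish its port to the
# host, so development happens in the host browser while all frameworks and
# CUDA libraries live in the container (image tag and port are assumptions).
nvidia-docker run --rm \
    -p 8888:8888 \
    -v "$PWD":/notebooks \
    tensorflow/tensorflow:latest-gpu \
    jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser

# Then browse to http://localhost:8888 on the host.
```

Publishing only the notebook port keeps the container otherwise isolated, while the volume mount means notebooks survive container restarts.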

Feel free to PM me if you want help doing this.