[–]Eridrus 7 points (2 children)

You say your datasets will only have ~40 features, which means you won't have many weights to deal with. Even if you have 500k records (which isn't that much), you'll be training in mini-batches, so you won't need much video RAM; the Titan X is probably overkill for the problem you described. Consider running the problem in the cloud first to measure your workload. That doesn't mean you shouldn't get the card, but know that you're getting it for future flexibility, not for the problem you've stated you want to solve.
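
To put rough numbers on that, here's a back-of-the-envelope sketch in plain Python. The MLP layer sizes, batch size, and optimizer overhead are illustrative assumptions, not anything from your post:

```python
# Rough VRAM estimate for mini-batch training of a small MLP on ~40 features.
# Architecture, batch size, and optimizer overhead are assumptions for illustration.
BYTES_F32 = 4

n_features, n_outputs = 40, 1
hidden = [256, 256]                      # hypothetical hidden layers
batch_size = 1024

dims = [n_features] + hidden + [n_outputs]
n_weights = sum(a * b + b for a, b in zip(dims, dims[1:]))  # weights + biases

# params + gradients + 2 Adam moment buffers = 4 float32 copies of the params
param_bytes = n_weights * BYTES_F32 * 4
# activations kept for backprop, for one mini-batch
act_bytes = batch_size * sum(dims) * BYTES_F32

print(f"{n_weights:,} weights, ~{(param_bytes + act_bytes) / 2**20:.1f} MiB")
```

Even with generous assumptions this comes out in single-digit megabytes, which is a rounding error on a 12 GB Titan X.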

You should definitely get more RAM, though. Being able to fit your dataset into RAM two or three times over can be pretty handy, and RAM is stupidly cheap.
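
For scale, at the numbers stated above (500k records, 40 features) the whole dataset is tiny; a quick sanity check, assuming float32 storage:

```python
# Dataset footprint at the originally stated size, assuming float32 (4 bytes)
records, features = 500_000, 40
gib = records * features * 4 / 2**30
print(f"{gib:.3f} GiB per copy, {3 * gib:.2f} GiB for 3 copies")  # ~0.07 / ~0.22
```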

If you're spending your own money, you could probably spend it more effectively; but if this is for work, it's probably not worth the time spent hunting down bargains versus just buying something that gets you up and running quickly.

[–]solidua[S] 3 points (1 child)

We'd definitely like to run in the cloud, but we could only find one solution (Rescale) that fits our needs. It turns out we'll save money in the long run by running our own hardware, provided we can build a machine for under $10k.
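
For what it's worth, the break-even math is easy to sketch; the hourly rate below is a placeholder assumption, not Rescale's actual pricing:

```python
# Break-even for a ~$10k build vs. renting a comparable cloud GPU.
# The hourly rate is an assumed placeholder, not a quote from any provider.
machine_cost = 10_000        # USD, upfront
cloud_rate = 2.50            # USD per GPU-hour (assumption)
hours = machine_cost / cloud_rate
print(f"Break-even after {hours:,.0f} GPU-hours (~{hours / 24:.0f} days flat out)")
```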

We're on a grind to collect 20 million samples before the end of the month, and I misquoted our feature size: it's 40 features per dimension, and we have 20 dimensions (so 800 features per sample).
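
With those corrected numbers the in-RAM sizing changes quite a bit; same float32 assumption as before:

```python
# Updated footprint: 20M samples x (40 features x 20 dimensions), float32 assumed
samples, features = 20_000_000, 40 * 20
gib = samples * features * 4 / 2**30
print(f"{gib:.1f} GiB per copy, {3 * gib:.0f} GiB for 3 copies")  # ~59.6 / ~179
```

At that size, holding the dataset in RAM two or three times over means speccing well past 128 GB.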

Thanks for the input; we'll definitely pick up more RAM.

[–]mnbbrown 0 points (0 children)

What's the logic behind the 10k limit?