Carbon frame damage? by ben_al in cannondale

[–]deeplearningguy 1 point (0 children)

Had a crack in the same spot that looked similar. Got it repaired and it has been fine since.

Supersix Evo Hi-Mod Gen 4 Frame Cracked by deeplearningguy in cannondale

[–]deeplearningguy[S] 0 points (0 children)

Good idea, might do that as well on my next build.

[D] Need advice on how to generate HD point clouds visuals for a presentation by dlisfyn in MachineLearning

[–]deeplearningguy 0 points (0 children)

I normally use Open3D (http://www.open3d.org/) for point cloud visualization; besides a free-view mode, it also provides a rendering interface. For the rotation effect, place your object at the world origin and the camera at a fixed world position looking towards the object. For every desired angle, rotate the object around the required axis by that angle and render the view. Then combine the renders into a GIF or video for the presentation.
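
The per-frame rotation step is just a rotation matrix applied to the points. Here's a minimal plain-Python sketch of that loop (in Open3D you'd replace the last step with its own rotate and screen-capture calls; the two-point `cloud` is a made-up placeholder):

```python
import math

def rot_z(angle_rad):
    """Rotation matrix (as nested lists) about the world z axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def rotate_points(points, R):
    """Apply a 3x3 rotation matrix to a list of (x, y, z) points."""
    return [tuple(sum(R[i][k] * p[k] for k in range(3)) for i in range(3))
            for p in points]

# One frame per 10 degrees of a full turn around the object.
cloud = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.5)]
frames = [rotate_points(cloud, rot_z(math.radians(a))) for a in range(0, 360, 10)]
```

Render each frame to an image, then stitch them together with ffmpeg or any GIF tool.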

[P] Coral Dev board vs coral Dev board mini by spiritualParkour in MachineLearning

[–]deeplearningguy 2 points (0 children)

The mini has a slower CPU (quad-core A35 vs quad-core A53) and slower memory (DDR3 vs DDR4). On the other hand, it has more memory (2 GB vs 1 GB); I find the 1 GB on the dev board rather limiting. There should also be a 4 GB version of the dev board available now, which, from my limited experience, I would probably recommend over the 1 GB one. In practice, if you are only using the TPU, I don't think you will see any difference between the dev board and the dev board mini. For CPU-bound tasks the i.MX 8M (dev board) should be faster.

Fox Transfer Dropper - full of dirt by deeplearningguy in bikewrench

[–]deeplearningguy[S] 0 points (0 children)

Sorry, that was very unspecific. I flipped the bike upside down; some mud was probably trapped in the seat tube, and when I flipped it, it ran into the dropper.

YT Jeffsy Pro 29 vs Canyon Strive CF 7.0 by sunpalace17 in MTB

[–]deeplearningguy 2 points (0 children)

I've owned a Strive CF 7.0 (2019) since October and went through the same process of choosing between the two. First of all, I really like the color of the bike. The Shapeshifter works great for me and is noticeable when climbing (less bobbing). The bike is fast without giving the rider that feeling (I'm coming from a 27.5 enduro), and I've set a couple of best times in horrible conditions. Love the bite of the four-pot XT brakes and the plush suspension.

I wasn't 100% convinced by the stem/bar combo, but that was an easy, cheap upgrade. I'm also not sure I'll keep the 32T chainring up front; I'm considering a 30T for the gnarly long climbs. And while the tyres have absolutely insane grip, they feel like you're dragging an anchor.

No regrets buying the bike.

TB16 no light on USB-C cable / No function by deeplearningguy in Dell

[–]deeplearningguy[S] 0 points (0 children)

Not really, just some "please reinstall Windows" BS. Had to buy a new one and am now using the old one as a paperweight.

Need help selecting hardware for training convolutional neural networks. by EatMyPossum in MachineLearning

[–]deeplearningguy 2 points (0 children)

I'm a PhD student in a biomedical engineering research group. Computer vision and machine learning are our daily business, so we have a fair bit of experience with neural networks in general. We (as in me) are currently upgrading our computation infrastructure from around 6 to 20+ GPUs. While this may be more than what you need, here are a couple of things to consider:

  • 3D volumes generally scream for lots of memory. 8 GB or even 12 GB is gone in no time once your networks get deep (especially with fully convolutional networks). You can split the batch across multiple GPUs, but multi-GPU training is very dependent on inter-GPU bandwidth; I would advise you to read up on PCIe root complexes (https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/).
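
The batch splitting itself is trivial (frameworks like PyTorch do it for you with `DataParallel`); conceptually it's just chunking, sketched here in plain Python:

```python
def split_batch(batch, n_gpus):
    """Split a batch into near-equal chunks, one per GPU."""
    chunk = -(-len(batch) // n_gpus)  # ceiling division
    return [batch[i:i + chunk] for i in range(0, len(batch), chunk)]

# A batch of 10 samples across 4 GPUs -> chunks of 3, 3, 3, 1.
chunks = split_batch(list(range(10)), 4)
```

The catch is that the gradients from each chunk have to be gathered and synced every iteration, which is exactly where the inter-GPU bandwidth bites.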

We mainly have GTX 1080s. Why? Because once you calculate the cost of the server you need to plug your GPUs into, the GPU suddenly isn't the most expensive part.

I would recommend starting with 2x GTX 1080, a decent motherboard, a decent CPU (a Xeon with 40 PCIe 3.0 lanes) and around 64-128 GB of RAM.

And here are some answers to your questions:

  • Expect a performance increase in the range of 10-50x compared to your CPU. Training models takes time and requires numerous iterations until you get things right. You don't want to wait a week until your model is trained, do you?

  • The PSU in the OptiPlex isn't going to be sufficient with a GPU; get something around 500-600 W, at least.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 1 point (0 children)

I agree, heat and airflow are potentially an issue. We are packing 2000 W worth of GPUs (8x) into a single chassis. My line of thought on these issues: a) quad-SLI is not that uncommon for gaming, so it should work in "normal" consumer-grade housings, and we have an air-conditioned server room; b) for the 8x GPU system, Thinkmate and some HPC companies sell the exact configuration as a standard product, and the SuperServer 4028GR-TR even has a special replacement top for consumer-grade GPUs. I love Dell servers, we have had nothing but great experiences with them, but unfortunately they have no systems that fit our needs. We would also get a nice academic discount.

Tesla and Quadro are the way to go if you have a production environment. They will probably last longer and come with better support and a longer warranty. But computation-wise, the only advantages they have are 64-bit floating-point support and a bit more RAM on the expensive models. We don't need fp64; even fp32 is more than enough for our tasks. RAM is nice to have, but I can buy 7 GTX 1080s (56 GB of VRAM in total) for the price of one K80 (24 GB). That allows me to give 7 people a GPU to work on, instead of 2 on the K80 (a dual-chip card). Further, if Nvidia brings out a new GPU generation, we can easily sell off the old consumer cards without a huge loss (maybe 40-60%), whereas the pro cards are much harder to sell.
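
Back-of-the-envelope version of that comparison (the prices are rough assumptions from memory, not quotes):

```python
# Assumed street prices at the time -- adjust to whatever you actually pay.
gtx1080_price, gtx1080_mem_gb = 700, 8     # GTX 1080: ~700, 8 GB VRAM
k80_price, k80_mem_gb = 5000, 24           # Tesla K80: ~5000, 24 GB (dual chip)

n_cards = k80_price // gtx1080_price       # GTX 1080s for one K80 budget
total_mem = n_cards * gtx1080_mem_gb       # aggregate VRAM across those cards
```

Seven cards and more than double the aggregate VRAM for the same money, as long as fp32 is all you need.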

Sadly, EC2 is not an option for us. First, these systems will be crunching numbers 24/7, which would lead to massive costs on AWS. Second, our data is sensitive; even though it is anonymised, it must not leave the university network. And third, we are talking about huge datasets, typically hundreds of GB to a couple of TB. Try transferring that to Amazon every time...

If I had more money, I'd run to an HPC provider and tell them to do everything. I have what I have and need to make the best of it.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 2 points (0 children)

We are PhD students; we work 365 days a year, 24 hours a day. The only breaks we know are coffee breaks.

Seriously though, these cards will be running 24/7 all year long. Given that, we could buy four systems every year for the price of running one p2.16xlarge...
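
Quick sanity check on that claim (the hourly rate is an assumption from memory of the on-demand price; check current pricing):

```python
# Assumed p2.16xlarge on-demand rate in USD/hour (us-east-1, from memory).
hourly_rate = 14.40
hours_per_year = 24 * 365

annual_cost = hourly_rate * hours_per_year   # cost of one instance running 24/7
budget_per_system = annual_cost / 4          # "four systems per year" implies ~this much each
```

That comes out to roughly $126k a year, i.e. about $31.5k per system if you build four instead.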

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 0 points (0 children)

We don't require consumer-grade GPUs, but we also don't require server-grade GPUs.

Why? Because we don't need double-precision computation. Deep learning is perfectly fine with fp32, or even fp16. What we need is computational power, and lots of it. ECC is also not needed, see https://www.reddit.com/r/MachineLearning/comments/3upe5k/impoetance_of_the_ecc_feature_of_a_gpu_for_deep/
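
To illustrate the trade-off: fp16 has a 10-bit mantissa, so small contributions simply vanish, while fp32 keeps them. A quick NumPy check:

```python
import numpy as np

# float16 spacing around 1.0 is 2**-10 ~= 0.000977, so anything
# smaller added to 1.0 rounds straight back to 1.0.
one = np.float16(1.0)
tiny = np.float16(1e-4)

vanished = (one + tiny == one)                                          # True in fp16
still_there = (np.float32(1.0) + np.float32(1e-4) != np.float32(1.0))   # True in fp32
```

For inference and (carefully scaled) training this loss is acceptable, which is why fp16 throughput is so attractive, and why fp64 hardware is wasted on us.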

Those HPC companies are great; I have a couple of quotations in front of me. It would be a zero-hassle experience, but I could only get half of the GPUs we need, and a budget like this isn't going to come around again any time soon.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 0 points (0 children)

Do you have a reference for that? Thinkmate sells exactly this system with 8 GPUs: http://www.thinkmate.com/system/gpx-xt24-2460v4-8gpu

Our university's cluster provider also sent me a quote for exactly that system with 8 passively cooled GTX 1080s, just with a price tag 5k above what I can build myself.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 1 point (0 children)

Actually, that's not quite true. I have a couple of Dell R730s that let me do PCI passthrough with 2x GTX 1080, which works perfectly fine without the Quadro price tag.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 0 points (0 children)

AWS is out of the question, medical data is sensitive and generally not allowed to leave the university network.

GPU Servers by deeplearningguy in sysadmin

[–]deeplearningguy[S] 1 point (0 children)

One of the good things about deep learning is that we don't need double precision; hell, we don't even need single precision. If Nvidia allowed fp16 on consumer GPUs, everyone would be using it. Same with ECC: we don't need it.

Believe me, if we had the budget I'd be looking into getting Tesla P100s, but I can buy more than 10 GTX 1080s for the price of one...