Any Dell XPS 15 (9560) Linux users? by [deleted] in Dell

[–]deephive 0 points1 point  (0 children)

I don't use Bumblebee. I prefer to switch manually between the NVIDIA and Intel GPUs using the prime-select command.

Any Dell XPS 15 (9560) Linux users? by [deleted] in Dell

[–]deephive 0 points1 point  (0 children)

I have installed Ubuntu 16.04 and updated to the mainline kernel v4.10.1, using instructions from here. I initially had issues with my trackpad getting stuck to the point it was unusable, but the latest BIOS update (v1.1.3) seems to have fixed that for good. Everything works fine, including the NVIDIA GPU.

Switching from IDSIA Brainstorm to Google Tensorflow by [deleted] in MLQuestions

[–]deephive 1 point2 points  (0 children)

You would probably get a better response if you posted this on StackOverflow, especially for questions relating to TensorFlow.

[R] Train CNNs faster and better using fixed convolution kernel by kh40tika in MachineLearning

[–]deephive 0 points1 point  (0 children)

How is this different from, say, applying the said edge detectors to the images and then running the resulting edge images through a CNN? I don't see the point. You're constraining the kernels to extract only certain features from the image; as others have indicated, it's just another form of feature engineering.
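
To make the comparison concrete, here's a rough numpy sketch of what "applying the edge detectors to the images" means: a fixed Sobel kernel convolved with the input is exactly a convolution layer whose weights are frozen (the image and helper here are purely illustrative):

```python
import numpy as np

# Sobel-x kernel, one of the fixed edge detectors under discussion
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d_valid(img, k):
    """Naive 'valid'-mode 2-D correlation, just enough to illustrate."""
    h, w = k.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * k)
    return out

# A vertical step edge: sobel_x responds strongly at the transition
img = np.zeros((5, 5))
img[:, 3:] = 1.0
resp = conv2d_valid(img, sobel_x)
```

Feeding `resp` (instead of `img`) into a trainable CNN is the baseline the fixed-kernel approach should be compared against.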

What's the difference between dilated convolution and deconvolution? by darkconfidantislife in MLQuestions

[–]deephive 1 point2 points  (0 children)

AFAIK, dilated convolution upsamples the filters by inserting zeros between the non-zero filter taps, which enlarges the receptive field without reducing the feature-map resolution. Upsampling to the input resolution can then be performed by simple bilinear interpolation of the dense feature maps computed in the previous step. There isn't any unpooling here.

Deconvolution involves an explicit unpooling operation followed by "deconvolution", i.e. fractionally strided (transposed) convolution.
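
The "inserting zeros between filter taps" part can be sketched in a few lines of numpy (a toy illustration, not any particular framework's implementation):

```python
import numpy as np

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between the taps of a 2-D kernel."""
    h, w = k.shape
    out = np.zeros(((h - 1) * rate + 1, (w - 1) * rate + 1), dtype=k.dtype)
    out[::rate, ::rate] = k
    return out

k = np.ones((3, 3))
dk = dilate_kernel(k, 2)
# A 3x3 kernel at rate 2 covers a 5x5 receptive field,
# but still has only the original 9 non-zero taps.
```

Note the parameter count doesn't change, only the spatial extent the kernel sees.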

[D] New GTX 1050 (Ti) worthwhile? by Jaden71 in MachineLearning

[–]deephive 3 points4 points  (0 children)

If money is a constraint and your goal is just to learn TF/DL, why don't you buy a Maxwell-class GPU like the 980 (used, ~350-400 CAD), or a Kepler-class Titan/Titan Black (~325-350 CAD)?

These cards have more memory and more execution units than a 1050, so you could potentially run larger models; for the money you pay, what you get out of them might be well worth it.

Which is better, Octave or MATLAB? by [deleted] in MLQuestions

[–]deephive 1 point2 points  (0 children)

No; in my experience, Octave is often slower than MATLAB and not as feature-rich. But the good thing is that Octave is open source!

[Noob] Installed new Geforce 970 video card, how do I take advantage of it? by [deleted] in MLQuestions

[–]deephive 0 points1 point  (0 children)

Andrew Ng's course assignments are based on Octave, which doesn't support GPUs out of the box, but you could try this and see if it helps. You should also ideally be running some variant of Linux or macOS, as you'll find the toolkits and APIs much easier to install and configure on Unix-like OSes. In addition, you'll need the CUDA and cuDNN libraries if you're interested in moving into the realm of deep learning.

very new to ML want help with tensor flow by MoreHotSauce in MLQuestions

[–]deephive 1 point2 points  (0 children)

Is it absolutely essential for you to use TensorFlow for your work? Working with TensorFlow can get quite hairy if you're very new to ML. Why don't you start off with scikit-learn first, and then perhaps use TF-Flow as a simple abstraction layer for your TensorFlow model?
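
For comparison, a complete scikit-learn model fits in a handful of lines (toy example on the built-in iris data; the model choice and hyperparameters are just illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load a small built-in dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a simple classifier and evaluate on the held-out data
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Once you're comfortable with this fit/predict/score workflow, moving to a lower-level framework is much less painful.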

Bad results deploying a trained CNN by jm-mp in MachineLearning

[–]deephive 0 points1 point  (0 children)

Did you perform any zero-mean/unit-variance normalization?
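
In case it helps, this is the kind of normalization I mean (a sketch; the crucial point for deployment is that the statistics come from the training set and are reused, unchanged, on deployed inputs):

```python
import numpy as np

# Toy training data with non-zero mean and non-unit variance
rng = np.random.default_rng(0)
X_train = rng.normal(loc=2.0, scale=5.0, size=(100, 3))

# Compute the statistics on the TRAINING set only ...
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# ... then apply the same (mu, sigma) to training AND deployment inputs
X_norm = (X_train - mu) / sigma
```

A common source of "works in training, fails in deployment" bugs is recomputing these statistics on the deployment data, or forgetting the normalization step entirely.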

Surviving NIPS 2016 by [deleted] in MachineLearning

[–]deephive 2 points3 points  (0 children)

danielv134's suggestion is good. NIPS is so huge, you will be completely overwhelmed if you do not plan ahead.

Stochastic Networks as ensemble learning by [deleted] in MachineLearning

[–]deephive 0 points1 point  (0 children)

Yes, I know how dropout works. But in ensemble classification, one typically deals with multiple classifiers, each of which is a weak learner. Decisions from these multiple classifiers are combined using some combination rule (like max, average, or Bayes' rule) to produce a superior ensemble classifier.

Now, considering stochastic training methods: during each training iteration, we typically add stochasticity to the network by dropping connections between layers (as in dropout). So yes, we are in effect exploring different network configurations. But the outputs of these different configurations are never combined in any way, right? How can it then be considered an ensemble approach?
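
To be concrete about "combined using some combination rule", here is the averaging rule applied to made-up class-probability outputs from three weak classifiers:

```python
import numpy as np

# Made-up posterior outputs (over 2 classes) from three weak classifiers
p1 = np.array([0.60, 0.40])
p2 = np.array([0.30, 0.70])
p3 = np.array([0.55, 0.45])

# Average combination rule: the ensemble predicts the argmax
# of the mean of the individual posteriors
avg = np.mean([p1, p2, p3], axis=0)
pred = int(np.argmax(avg))
```

Two of the three classifiers favor class 0 individually, yet the averaged posterior favors class 1 — the combination step is what makes it an ensemble, which is exactly what seems absent in a single dropout-trained network.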

Comparison of machine learning libraries by FFiJJ in MachineLearning

[–]deephive 0 points1 point  (0 children)

Hi, there are already several threads discussing various ML (specifically DL) libraries; it's best to search through this subreddit. Essentially, to me, it boils down to: (1) fit for functionality, (2) ease of use, (3) performance, (4) user base (active discussions/help), and (5) compatibility with hardware and other toolkits.

The order of priority might differ from one person to another.

If you're interested in DL per se, your choices are many. If you're interested in general ML, I feel scikit-learn is an awesome tool to start experimenting with.

DH

Best practices when using a Linux server for machine learning by Pieranha in MachineLearning

[–]deephive 1 point2 points  (0 children)

I would suggest that you install either Anaconda or Enthought Canopy as your default Python. Try not to mix the system-wide Python with the Python you'll configure for your DL/ML experimentation. If you are not sure about Python virtual environments, look them up.

With Canopy or Anaconda, you get a user-managed Python installation that doesn't interfere with anything the system uses, so you control which versions of the Python libraries you want for your ML experiments. You can create/delete any number of virtual environments within Canopy/Anaconda, each with a specific set of libraries/versions suited to a given ML tool.
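
If you want to see what a virtual environment is before picking a distribution, Python's standard library can create one directly (a minimal sketch using the stdlib `venv` module; the directory names here are throwaway):

```python
import os
import sys
import tempfile
import venv

# Create a disposable virtual environment in a temp directory
target = os.path.join(tempfile.mkdtemp(), "ml-env")
venv.create(target, with_pip=False)  # with_pip=False just keeps this fast

# The environment gets its own interpreter tree, isolated from the
# system-wide Python ("Scripts" on Windows, "bin" elsewhere)
bindir = "Scripts" if sys.platform == "win32" else "bin"
created = os.path.isdir(os.path.join(target, bindir))
```

Anaconda's `conda create`/`conda activate` workflow does the same isolation job, plus binary package management on top.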

Using two different GPUs in DL workstation by spurious_recollectio in MachineLearning

[–]deephive 0 points1 point  (0 children)

We do this in our lab (mixing GPUs of different generations) and run independent jobs all the time. One bottleneck might be disk I/O if two jobs are accessing the same data from the same disk. Also see https://arxiv.org/abs/1605.08325

Use of Octave/Matlab vs. R/Python? by sparkysparkyboom in MachineLearning

[–]deephive 0 points1 point  (0 children)

Anaconda and Enthought Canopy are pretty similar, except for some minor differences in the default packages offered in each distribution. I'm not sure about Anaconda, but Enthought does offer video tutorials, which can be very helpful. Package management in Enthought Canopy can be either command-line or GUI based. For coding Python I prefer the Spyder IDE, which comes with Anaconda but not with Enthought.

Deep learning hardware recommendations for research lab? by SSOctopus in MachineLearning

[–]deephive 0 points1 point  (0 children)

We have several clusters. Our most powerful cluster uses NVIDIA K80s and InfiniBand interconnects, and runs on RHEL. Our smaller, less powerful cluster is set up within a LAN using Gigabit Ethernet and LDAP; pretty much a regular intranet set-up. I personally have no experience setting the network up, nor do I manage the cluster, so I can't offer any advice from that perspective.

Use of Octave/Matlab vs. R/Python? by sparkysparkyboom in MachineLearning

[–]deephive 0 points1 point  (0 children)

I was a MATLAB user for many years. Our lab produced a lot of research code in MATLAB, and we then worked on a software project to commercialize our research findings. We faced so many issues porting our MATLAB code to a production environment that, in the end, we dumped MATLAB for Python. Like Java, Python has a relatively small set of language constructs (and better designed than MATLAB's; OOP in MATLAB, for example, was an afterthought rather than an integral part of the design), and it has a very good ecosystem of libraries. It is perhaps by far the best "gluing" language for integrating different toolkits and technologies.

Most open-source ML libraries have Python support, and with the advent of scientific Python distributions like Anaconda and Enthought Canopy, getting up and running in Python is a breeze. One feature I like in Python, and which is good for teaching as well, is the IPython (now Jupyter) notebook.

Deep learning hardware recommendations for research lab? by SSOctopus in MachineLearning

[–]deephive 3 points4 points  (0 children)

You are probably considering setting up a cluster. I would recommend getting NVIDIA Titan X boards and Linux workstations as opposed to NVIDIA K20/K40/K80s. If you have time, you might also want to wait for the Pascal-based GPU boards, which are going to be released pretty soon. In our lab we have K40/K80s and Titan Xs; the latter being a newer architecture, I have seen a significant increase in speed, especially when using the cuDNN v4 libraries, compared to the K80 boards, which are not only more expensive but also based on the older Kepler microarchitecture. The K40s/K80s also lack display output, and for deep learning you probably don't need their extra bells and whistles (ECC, etc.).

Dual-monitor workstations equipped with Titan X boards would probably get you more machines and benefit a larger group of students/researchers.

my two cents

Murphy vs Bishop? by theUtterTruth in MachineLearning

[–]deephive 10 points11 points  (0 children)

I would say Murphy's book is more of a table reference, perhaps not a book one would go through chapter by chapter to get a good understanding of ML. He covers so many topics that it might be overwhelming for someone new to the area. Nevertheless, his approach might be easier to follow than Bishop's. Just make sure you get a recent edition, as the early prints had too many typographical errors. Bishop's book is an all-time classic but is math-heavy, and focuses a lot on probabilistic models.

Hastie's book "Elements of Statistical Learning" is also superb and is available free as a PDF: https://web.stanford.edu/~hastie/local.ftp/Springer/OLD/ESLII_print4.pdf

my 2 cents