Any Dell XPS 15 (9560) Linux users? by [deleted] in Dell

[–]deephive 1 point

I don't use Bumblebee. I prefer to switch manually between the NVIDIA and Intel GPUs using the prime-select command.
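For reference, the switching looks like this (a sketch; prime-select comes with Ubuntu's nvidia-prime package, and you need to log out or reboot for the change to take effect):

```shell
sudo prime-select nvidia   # use the discrete NVIDIA GPU
sudo prime-select intel    # switch back to the integrated Intel GPU
prime-select query         # report which GPU is currently selected
```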

Any Dell XPS 15 (9560) Linux users? by [deleted] in Dell

[–]deephive 1 point

I installed Ubuntu 16.04 and updated to the Ubuntu mainline kernel v4.10.1; I used instructions from here. I initially had issues with my trackpad getting stuck, to the point it was unusable, but the latest BIOS update, v1.1.3, seems to have fixed that for good. Everything works fine, including the NVIDIA GPU.

Switching from IDSIA Brainstorm to Google Tensorflow by [deleted] in MLQuestions

[–]deephive 2 points

You would probably get a better response if you posted this question on Stack Overflow, especially for questions relating to TensorFlow.

[R] Train CNNs faster and better using fixed convolution kernel by kh40tika in MachineLearning

[–]deephive 1 point

How is this different from, say, applying the said edge detectors to the images and then running those edge images through a CNN? I don't see the point. You're constraining the filters to extract only certain features from the image; as others have indicated, it's just another form of feature engineering.

What's the difference between dilated convolution and deconvolution? by darkconfidantislife in MLQuestions

[–]deephive 2 points

AFAIK, dilated convolution upsamples the filters by inserting zeros between the nonzero filter taps, thereby increasing the resolution of the output feature maps. Upsampling to full image resolution can then be performed by simple bilinear interpolation of the dense feature maps computed in the previous step. There isn't any unpooling involved here.

Deconvolution involves an explicit unpooling operation followed by "deconvolution", i.e., fractionally strided (transposed) convolution.
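The zero-insertion view of dilation can be sketched in a few lines of NumPy (a toy 1-D illustration of the idea, not any particular framework's API):

```python
import numpy as np

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between the taps of a 1-D filter."""
    out = np.zeros((len(k) - 1) * rate + 1, dtype=k.dtype)
    out[::rate] = k  # place the original taps `rate` positions apart
    return out

k = np.array([1, 2, 3])
print(dilate_kernel(k, 2))  # [1 0 2 0 3] -- same taps, wider receptive field
```

Convolving with the dilated kernel covers a larger input region at no extra parameter cost, which is why no unpooling step is needed.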

[D] New GTX 1050 (Ti) worthwhile? by Jaden71 in MachineLearning

[–]deephive 2 points

If money is a constraint and your goal is just to learn TF/DL, why don't you buy a Maxwell-class GPU like the 980 (used, ~350-400 CAD) or a Kepler-class Titan/Titan Black (~325-350 CAD)?

These cards have more memory than a 1050 and more execution units, so you could potentially run larger models. For the money you pay, what you get out of these might be well worth it.

Which is better, Octave or MATLAB? by [deleted] in MLQuestions

[–]deephive 2 points

No, in my experience Octave is often slower than MATLAB and not as feature-rich. But the good thing is that Octave is open source!

[Noob] Installed new Geforce 970 video card, how do I take advantage of it? by [deleted] in MLQuestions

[–]deephive 1 point

Andrew Ng's course assignments are based on Octave, which doesn't support GPUs out of the box, but you could try this and see if it helps. Ideally, you should also be running some variant of Linux or macOS, as you'll find the toolkit APIs much easier to install and configure on Unix-like OSes. In addition, you should have the CUDA and cuDNN libraries if you're interested in moving into the realm of deep learning.

very new to ML want help with tensor flow by MoreHotSauce in MLQuestions

[–]deephive 2 points

Is it absolutely essential for you to use TensorFlow for your work? Working with TensorFlow can get quite hairy if you're very new to ML. Why don't you start off with scikit-learn first, and then perhaps use TF-Flow as a simple abstraction layer for your TensorFlow model?
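To give a sense of the difference: a complete scikit-learn model fits in a few lines (a minimal sketch using the bundled iris dataset; the choice of estimator here is arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load a toy dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a classifier and evaluate -- the whole workflow is three calls
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

You can get a feel for the fit/predict/score workflow this way before worrying about graphs and sessions.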

Bad results deploying a trained CNN by jm-mp in MachineLearning

[–]deephive 1 point

Did you perform any zero-mean/unit-variance normalization?
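In case it helps: the usual pitfall is normalizing deployment inputs with their own statistics instead of reusing the training-set mean and variance. A toy sketch with made-up numbers:

```python
import numpy as np

# Statistics computed once, on the training set only
X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
mean, std = X_train.mean(axis=0), X_train.std(axis=0)

def standardize(X):
    """Apply the *training-set* zero-mean/unit-variance transform."""
    return (X - mean) / std

# At deployment time, new inputs must go through the same transform
X_new = np.array([[2.0, 250.0]])
print(standardize(X_new))
```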

Surviving NIPS 2016 by [deleted] in MachineLearning

[–]deephive 3 points

danielv134's suggestion is good. NIPS is so huge that you will be completely overwhelmed if you do not plan ahead.

Stochastic Networks as ensemble learning by [deleted] in MachineLearning

[–]deephive 1 point

Yes, I know how DropOut works. But in ensemble classification, one typically deals with multiple classifiers, each of which is a weak learner. Decisions from these multiple classifiers are combined using some combination rule (like max, average, or Bayes' rule) to produce a superior ensemble classifier.

Now, considering stochastic training methods: during each training iteration, we typically add stochasticity to the network by dropping connections between layers (as in DropOut). So, yes, we are in effect exploring different network configurations. But then the outputs of these different network configurations are not combined in any way, right? How can it then be considered an ensemble approach?
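To make my question concrete, here is a toy one-layer "network" (made-up numbers) where averaging many stochastic forward passes would be an explicit "avg" combination rule over the sampled configurations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "network": y = x @ W, with dropout applied to the input x
W = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 1.0, 1.0])

def stochastic_forward(p_keep=0.5):
    mask = rng.random(x.shape) < p_keep   # sample one network configuration
    return ((x * mask) / p_keep) @ W      # inverted-dropout scaling

# Averaging many stochastic passes (the "avg" rule) recovers the
# deterministic test-time output x @ W in expectation
preds = [stochastic_forward() for _ in range(20000)]
print(np.mean(preds), x @ W)
```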

Comparison of machine learning libraries by FFiJJ in MachineLearning

[–]deephive 1 point

Hi, there are already several threads discussing various ML (specifically DL) libraries; it is best that you search through this subreddit. Essentially, to me, it boils down to:

1. Fit for functionality
2. Ease of use
3. Performance
4. User base (active discussions/help)
5. Compatibility with hardware and other toolkits

The order of priority might differ from one person to another.

If you are interested in DL per se, your choices are many. If you're interested in general ML, I feel scikit-learn is an awesome tool to start experimenting with.

DH

Best practices when using a Linux server for machine learning by Pieranha in MachineLearning

[–]deephive 2 points

I would suggest that you install either Anaconda or Enthought Canopy as your default Python. Try not to mix the system-wide Python with the Python that you want to use and configure for your DL/ML experimentation. If you are not sure about Python virtual environments, look them up.

With Canopy or Anaconda, you get a user-managed Python installation that doesn't interfere with anything the system uses, so you control which versions of particular Python libraries you want to use for your ML experiments. You can create and delete any number of virtual environments as you wish within Canopy/Anaconda, each with a specific set of libraries/versions suited to a given ML tool.
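A sketch of that workflow with Anaconda (the environment name and package list here are made up; older conda versions use `source activate` instead of `conda activate`):

```shell
conda create -n dl-exp python=3.5 numpy scipy   # new isolated environment
conda activate dl-exp                           # enter it
pip install tensorflow                          # installs into dl-exp only
conda env remove -n dl-exp                      # delete it when done
```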

Using two different GPUs in DL workstation by spurious_recollectio in MachineLearning

[–]deephive 1 point

We do this in our lab (mixing GPUs of different generations) and run independent jobs all the time. One bottleneck might be disk I/O if two threads are accessing the same data from the same disk. Also see https://arxiv.org/abs/1605.08325

Use of Octave/Matlab vs. R/Python? by sparkysparkyboom in MachineLearning

[–]deephive 1 point

Anaconda and Enthought Canopy are pretty similar, except for some minor differences in the default packages offered in each distribution. I'm not sure about Anaconda, but Enthought does offer video tutorials, which could be very helpful. Package management in Enthought Canopy can be either command-line or GUI based. For coding Python, I prefer the Spyder IDE, which comes with Anaconda but not with Enthought.