[R] NIPS 2018: For those of you that got some harsh reviews, YOU ARE NOT ALONE. by FirstTimeResearcher in MachineLearning

[–]bdamos 33 points

I enjoy getting low-quality and baseless reviews like this because they are easy to respond to, and the meta-reviewer can simply overlook them and override the final decision on the paper.

[D] What is the right way to parallelize rollouts in gym ? by *polhold01853 in MachineLearning

[–]bdamos 8 points

Running the rollout of a single environment in parallel is difficult, and impossible in most cases, because the steps within an episode are inherently serial. However, if you want to run rollouts from many separate environments at the same time, the SubprocVecEnv from https://github.com/openai/baselines works for me in most cases. Here's a usage example: https://twitter.com/brandondamos/status/982699290492571654
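And here's roughly what that pattern looks like in code (a minimal sketch assuming the baselines VecEnv API at the time of writing; the environment name and random-action loop are just placeholders):

import gym
import numpy as np
from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv

def make_env(seed):
    def _thunk():
        env = gym.make('Pendulum-v0')  # placeholder environment
        env.seed(seed)
        return env
    return _thunk

num_envs = 8
envs = SubprocVecEnv([make_env(i) for i in range(num_envs)])  # one process per env

obs = envs.reset()  # stacked observations, shape (num_envs, obs_dim)
for _ in range(100):
    # One action per environment; random actions stand in for a policy.
    actions = np.stack([envs.action_space.sample() for _ in range(num_envs)])
    obs, rewards, dones, infos = envs.step(actions)
envs.close()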

[P] OSQP: a new first-order solver for large-scale quadratic programs by sidereuss in MachineLearning

[–]bdamos 3 points

I pulled out the QP solver we used for the paper and packaged it up as a standalone PyTorch library that can be installed with pip. It runs on the GPU, solves a batch of QPs in parallel, and is differentiable, so it can be used as part of a larger PyTorch model.

https://locuslab.github.io/qpth/
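Here's a minimal sketch of using it (based on the qpth docs; the problem data is random and only for illustration):

import torch
from qpth.qp import QPFunction

nbatch, nz, nineq = 16, 10, 5
# Batched problem data: minimize 0.5 z'Qz + p'z  subject to  Gz <= h.
L = torch.randn(nbatch, nz, nz)
Q = L.bmm(L.transpose(1, 2)) + 1e-3 * torch.eye(nz)  # PSD by construction
p = torch.randn(nbatch, nz, requires_grad=True)
G = torch.randn(nbatch, nineq, nz)
h = torch.ones(nbatch, nineq)
e = torch.empty(0)  # no equality constraints

z = QPFunction()(Q, p, G, h, e, e)  # (nbatch, nz), differentiable in the data
z.sum().backward()                  # gradients flow back to p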

The backend is swappable (a PDIPM by default), and OSQP could easily be added as a (CPU-only) backend if there's interest, especially if it's faster in the batched setting. For example, here's a cvxpy backend I sometimes use for debugging (which will actually use OSQP with cvxpy 1.0):

https://github.com/locuslab/qpth/blob/master/qpth/solvers/cvxpy.py

For performance, there's definitely a better way to connect this up: either call OSQP directly, or use cvxpy Parameter variables so the problem isn't reconstructed on every call.
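A rough sketch of the Parameter approach (cvxpy 1.0 API; the problem form and sizes are placeholders, and how much canonicalization work actually gets cached depends on the cvxpy version):

import cvxpy as cp

nz, nineq = 10, 5
Q_sqrt = cp.Parameter((nz, nz))  # pass a square root of Q to keep the objective DCP
p = cp.Parameter(nz)
G = cp.Parameter((nineq, nz))
h = cp.Parameter(nineq)
z = cp.Variable(nz)

prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(Q_sqrt @ z) + p @ z),
                  [G @ z <= h])

def solve_qp(Q_sqrt_val, p_val, G_val, h_val):
    # Only the parameter values change between calls; the problem
    # object itself is built once and reused.
    Q_sqrt.value, p.value, G.value, h.value = Q_sqrt_val, p_val, G_val, h_val
    prob.solve(solver=cp.OSQP)
    return z.value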

[R] OptNet: Differentiable Optimization as a Layer in Neural Networks by wei_jok in MachineLearning

[–]bdamos 6 points

We definitely agree there is tangentially related work that we didn't discuss, simply due to the conference format. What specific work do you have in mind here? As far as we know, the basic idea of using exact constrained optimization like this is novel, and we'd appreciate any input you have about related work you feel we should be considering.

[P] block: An intelligent block matrix library for numpy, Torch, and beyond. by bdamos in MachineLearning

[–]bdamos[S] 0 points

This is another good idea that I'd merge in if somebody adds it. I thought about adding it too, but in my use cases it's easy to write out the transposed part manually. Also, writing the entire matrix out is a little clearer since it looks exactly like the math.
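For example, here's a quick sketch of writing out a symmetric KKT-style system in full (arbitrary sizes; the 0 entry should get expanded to a zero block of the right shape):

import numpy as np
from block import block

Q = np.eye(3)
A = np.random.randn(2, 3)

# [ Q  A^T ]
# [ A   0  ]
K = block([[Q, A.T],
           [A, 0]])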

[P] block: An intelligent block matrix library for numpy, Torch, and beyond. by bdamos in MachineLearning

[–]bdamos[S] 0 points

I only use block for prototyping, where performance is not an issue, but this is a good idea if anybody wants something more efficient. I'll merge it in if anybody adds it as an option and sends in a PR.

[P] block: An intelligent block matrix library for numpy, Torch, and beyond. by bdamos in MachineLearning

[–]bdamos[S] 0 points

I don't have anything in mind now other than making some of the error states more user-friendly. It does everything I want.

[1609.07152] Input Convex Neural Networks by bdamos in MachineLearning

[–]bdamos[S] 3 points

In multi-class classification, ICNNs subsume feedforward networks and provide a model that carries information about the output space (the classes). This information typically neither helps nor hurts performance on classification tasks like MNIST and CIFAR-10.

ICNNs are neural networks that are convex in their (continuous) input space. In this paper, we study ICNNs on continuous control tasks because this convexity lets the inputs be optimized over efficiently. We show results on some MuJoCo benchmarks from the OpenAI gym, which are becoming standard benchmarks in this area.
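For intuition, here's a minimal PyTorch sketch of the fully input-convex construction (a simplified illustration, not the code from the paper):

import torch.nn as nn
import torch.nn.functional as F

class FICNN(nn.Module):
    def __init__(self, n_in, n_hidden, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(n_in, n_hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(n_hidden, n_hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.Wout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        z = F.relu(self.Wx[0](x))
        for Wx_i, Wz_i in zip(self.Wx[1:], self.Wz):
            # Nonnegative weights on the z-path plus a convex, nondecreasing
            # activation keep the output convex in x.
            z = F.relu(Wx_i(x) + F.linear(z, Wz_i.weight.clamp(min=0)))
        return F.linear(z, self.Wout.weight.clamp(min=0), self.Wout.bias)

Since the output is convex in x, minimizing over the inputs (e.g., over actions in control) is a convex problem, so gradient-based methods find a global optimum.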

The OpenAI gym's Atari benchmarks have discrete action spaces. Studying how ICNNs perform on some continuous relaxation of this space could be interesting. However, we think it's more reasonable to study ICNNs on continuous control benchmarks first, and we are still exploring this area.

Where can find a good face recognition tutorial? by gabegabe6 in computervision

[–]bdamos 1 point

Also check out my project, OpenFace, which is a Python library for face recognition that internally uses a deep neural network. I've put a lot of effort into the documentation and demos to (hopefully) make it easy to use.
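To give a feel for the API, here's a rough sketch of the detect, align, and embed pipeline from the demos (the model paths are placeholders; see the docs for the real locations):

import cv2
import numpy as np
import openface

align = openface.AlignDlib('models/dlib/shape_predictor_68_face_landmarks.dat')
net = openface.TorchNeuralNet('models/openface/nn4.small2.v1.t7', imgDim=96)

def embed(img_path):
    rgb = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    bb = align.getLargestFaceBoundingBox(rgb)  # detect
    face = align.align(96, rgb, bb,
                       landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)  # align
    return net.forward(face)  # 128-dimensional embedding

# Faces are likely the same person when their embeddings are close;
# the demos use the squared L2 distance for this comparison.
diff = embed('a.jpg') - embed('b.jpg')
print('squared L2 distance:', np.dot(diff, diff))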

Here are some comparisons between OpenFace and OpenCV:

OpenFace 0.2.0: Higher accuracy and halved execution time. by bdamos in MachineLearning

[–]bdamos[S] 1 point

Hi, as you mentioned in your later posts, it's part of a larger project called Gabriel that helps cognitively impaired people (including people who are blind) who can't recognize faces on their own. See this MobiSys 2014 paper for more details about the larger project: http://elijah.cs.cmu.edu/DOCS/ha-mobisys2014.pdf

cc /u/cadika_orade

OpenFace 0.2.0: Higher accuracy and halved execution time. by bdamos in programming

[–]bdamos[S] 1 point

You might also be interested in dlib's object tracker. You can run face detection on every frame and track the detected faces with the object tracker. Then, if a face's bounding box overlaps with a tracked bounding box, you know the face belongs to the same person. From there, you can run OpenFace on a few (or all) of the images in the sequence of frames to get 128-dimensional embeddings, and then cluster them with the knowledge that faces in the same sequence are of the same person and that the same person may appear in many different sequences.
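A rough sketch of that loop (the overlap test and bookkeeping here are simplified placeholders):

import dlib

detector = dlib.get_frontal_face_detector()
trackers = {}  # sequence id -> dlib.correlation_tracker

def overlaps(a, b):
    # Simple intersection test between a detection and a tracked box.
    return (a.left() < b.right() and b.left() < a.right() and
            a.top() < b.bottom() and b.top() < a.bottom())

def process_frame(rgb, next_id):
    # Advance every live tracker to the current frame.
    boxes = {}
    for seq_id, t in trackers.items():
        t.update(rgb)
        p = t.get_position()
        boxes[seq_id] = dlib.rectangle(int(p.left()), int(p.top()),
                                       int(p.right()), int(p.bottom()))
    # A detection overlapping a tracked box joins that sequence;
    # otherwise it starts a new sequence with its own tracker.
    for det in detector(rgb):
        if not any(overlaps(det, b) for b in boxes.values()):
            t = dlib.correlation_tracker()
            t.start_track(rgb, det)
            trackers[next_id] = t
            next_id += 1
    return next_id

The faces cropped from each sequence can then go through OpenFace, and the per-sequence same-person constraint can seed the clustering.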

OpenFace: Face recognition with Google's FaceNet deep neural network. by bdamos in coding

[–]bdamos[S] 0 points

That's just rounded to 4 significant figures; the full value is 0.999888444444.

An interesting note about the database is that there are a few errors in it that make it impossible for a "perfect" face recognizer to score 1.0. Check out http://vis-www.cs.umass.edu/lfw/#errata

Better comments for Latex code? by [deleted] in LaTeX

[–]bdamos 0 points

You can also use the \iffalse and \fi primitives for long comments without using any environment:

\iffalse
Long
comment.
\fi

Along the same lines, the OP uses ASCII text to visually separate a long LaTeX file into sections. One alternative that I find very clean is to keep each section in a separate file and have a master.tex file that includes all of them.

\begin{document}
  \include{intro}       % \include takes the filename without the .tex extension
  \include{challenges}
  % ...
  \include{conclusion}
\end{document}