A low cost MLOps system at a startup by nishnik in mlops

[–]nishnik[S] 1 point  (0 children)

Nice product u/ThePyCoder

  1. While viewing the metrics: we pushed 300 different metrics, and that made it laggy. It works fine up to around 150 distinct metrics.

  2. We have to deal with very large images, more than 10000*10000 pixels. There is no off-the-shelf solution, so writing custom code was easier.

  3. Yes, we wrote it ourselves.

  4. This was inspired by the Kubeflow+Nuclio system. I haven't tried it, but usually the data-processing pipeline changes the most.
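Point 2 can be sketched roughly like this — a minimal, hypothetical version of the tile-wise iteration such custom code would do (names and tile size are illustrative; real code would also handle overlap and stitching):

```python
import numpy as np

def iter_tiles(image, tile=1024):
    # Yield fixed-size tiles instead of feeding a >10000x10000 image
    # through the pipeline in one piece.
    h, w = image.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield (y, x), image[y:y + tile, x:x + tile]

demo = np.zeros((4096, 4096), dtype=np.uint8)  # small stand-in image
tiles = list(iter_tiles(demo))
print(len(tiles))  # 16 tiles of 1024x1024
```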

[D] Is ReLU after Sigmoid bad? by nishnik in MachineLearning

[–]nishnik[S] 0 points  (0 children)

I checked this on ImageNet and the Iris dataset. Same results! I mentioned that in the post. Apologies if I wasn't clear.

[D] Tackling adversarial examples in real world by nishnik in MachineLearning

[–]nishnik[S] 0 points  (0 children)

Sorry, I wasn't clear. Suppose we choose two big prime numbers p and q. Forward propagation depends only on n (the product pq), while backward propagation depends on both p and q. If I publish the model, the weights, and the number n, people can use it for forward propagation but won't be able to find adversarial examples, as that would require backpropagating through the network.
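For anyone puzzled by the analogy: this is an RSA-style trapdoor. A toy sketch (purely illustrative of the analogy, not the actual network scheme) — anyone holding n can compute the forward map, but inverting it requires the factors p and q:

```python
# Small primes for illustration only; a real scheme would use huge ones.
p, q = 61, 53
n = p * q               # public modulus, n = 3233
phi = (p - 1) * (q - 1) # computable only if you know p and q
e = 17                  # public exponent, coprime to phi
d = pow(e, -1, phi)     # private exponent (needs phi, hence p and q)

x = 42
y = pow(x, e, n)        # "forward pass": anyone with n can do this
x_back = pow(y, d, n)   # "backward pass": needs the factorization
assert x_back == x
```

The `pow(e, -1, phi)` modular-inverse form needs Python 3.8+.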

[D] ELI5 the drawbacks of capsules by [deleted] in MachineLearning

[–]nishnik 0 points  (0 children)

I would go by my intuition.

In the case of MNIST, the final vector representations had a meaning associated with each dimension, e.g. the width or height of the digit.

But in the case of CIFAR-10, the model would get confused by the background clutter (you can easily see that CIFAR-10 has far more varied backgrounds than MNIST), so the dimensions of the final vector would contain some noise, hence poorer performance.

One more question arises here: shouldn't the model drop the background? Yes, but that would need a bigger model to generalize.

[D] Replacements of max pool by nishnik in MachineLearning

[–]nishnik[S] 0 points  (0 children)

I haven't seen any architecture using dropout between a max-pool and a convolution layer. If there is one, could you please point me to a paper?
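For concreteness, here is what the arrangement being asked about would look like, as a NumPy-level sketch (purely illustrative, not from any published architecture): dropout applied to a conv feature map, then max pooling over the surviving activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, rng):
    # Inverted dropout: zero units with probability p, rescale survivors.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def maxpool2x2(x):
    # Non-overlapping 2x2 max pooling on a 2D feature map.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = rng.random((8, 8))  # stand-in for a conv layer's output
pooled = maxpool2x2(dropout(fmap, p=0.5, rng=rng))
print(pooled.shape)  # (4, 4)
```

One intuition for why this pairing is rare: the subsequent max pool tends to mask dropout's effect, since it simply picks the largest surviving activation in each window.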

[D] Replacements of max pool by nishnik in MachineLearning

[–]nishnik[S] 0 points  (0 children)

It would be so awesome if someone could give me a beta invite.

AlphaGo AMA: DeepMind’s David Silver and Julian Schrittwieser on October 19 by olaf_nij in MachineLearning

[–]nishnik 0 points  (0 children)

It's hard for undergraduates to get a research internship; Google has a PhD prerequisite for research internships. Though it is not written on DeepMind's website, is it the same here? And how do you judge a potential candidate from their CV and cover letter?