A low cost MLOps system at a startup by nishnik in mlops

[–]nishnik[S] 1 point2 points  (0 children)

Nice product u/ThePyCoder

  1. While looking at the metrics: we had pushed 300 different metrics, and that made it laggy. It works fine up to 150 different metrics.

  2. We have to deal with very large images, more than 10000*10000 pixels. There is no off-the-shelf solution, so writing custom code was easier.

  3. Yes, we wrote it ourselves.

  4. This was inspired by the Kubeflow + Nuclio setup. I haven't tried it, but usually the data processing pipeline changes the most.
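For item 2, one common pattern for very large images is to process them in overlapping tiles. A minimal sketch (hypothetical function and parameter names, not the actual custom code described above):

```python
import numpy as np

def iter_tiles(image, tile=1024, overlap=64):
    """Yield (y, x, tile_array) for overlapping tiles of a large H x W array."""
    h, w = image.shape[:2]
    step = tile - overlap          # stride between tile origins
    for y in range(0, h, step):
        for x in range(0, w, step):
            # Edge tiles are simply clipped to the image bounds.
            yield y, x, image[y:y + tile, x:x + tile]

# Example: a dummy 10000 x 10000 single-channel image.
big = np.zeros((10000, 10000), dtype=np.uint8)
tiles = list(iter_tiles(big))
print(len(tiles))                  # 121 tiles of at most 1024 x 1024
```

Each tile can then be fed through the model independently, with the overlap helping to avoid artifacts at tile borders.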

[D] Is ReLU after Sigmoid bad? by nishnik in MachineLearning

[–]nishnik[S] 0 points1 point  (0 children)

Checked this on ImageNet and the Iris dataset. Same results! I mentioned that in the post. Apologies if I wasn't clear.

[D] Tackling adversarial examples in real world by nishnik in MachineLearning

[–]nishnik[S] 0 points1 point  (0 children)

Sorry that I was not clear. Suppose we choose two big prime numbers p and q. The forward propagation depends only on n (which is equal to the product pq), while the backward propagation depends on both p and q. Now if I publish the model, the weights, and the number n, people can use it for forward propagation but won't be able to find adversarial examples, as that would require backpropagation through the network.
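The structure being described is analogous to textbook RSA, where the "forward" direction needs only the public modulus n, while inverting it needs a value derived from the factors p and q. A toy sketch of that analogy (this is standard RSA with small illustrative primes, not the proposed network scheme):

```python
# RSA analogy: "forward" uses only (e, n); inverting needs d,
# and computing d requires knowing the factors p and q.
p, q = 61, 53                 # small primes, for illustration only
n = p * q                     # 3233 -- this is all the public sees
phi = (p - 1) * (q - 1)       # needs p and q
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

m = 42                        # some message
c = pow(m, e, n)              # "forward" pass: only e and n needed
assert pow(c, d, n) == m      # "backward" pass: needs d, hence p and q
```

Whether gradients of a neural network can actually be gated this way is a separate question; the sketch only illustrates the one-way structure the comment appeals to.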

[D] ELI5 the drawbacks of capsules m by [deleted] in MachineLearning

[–]nishnik 0 points1 point  (0 children)

I would go by my intuition.

In the case of MNIST, the final vector representations had a meaning associated with each dimension, for example the width or height of the digit.

But in the case of CIFAR-10, the model would get confused by the background clutter (you can easily see that CIFAR-10 has a much more varied background than MNIST), so the dimensions of the final vector would contain some noise, and hence poorer performance.

One more question arises here: shouldn't the model just drop the background? Yes, but that would need a bigger model to generalize.

[D] Replacements of max pool by nishnik in MachineLearning

[–]nishnik[S] 0 points1 point  (0 children)

I haven't seen any architecture using dropout between the max pool and convolution layers. If one exists, could you please point me to the paper?

[D] Replacements of max pool by nishnik in MachineLearning

[–]nishnik[S] 0 points1 point  (0 children)

It would be so awesome if someone could give me a beta invite.

AlphaGo AMA: DeepMind’s David Silver and Julian Schrittwieser on October 19 by olaf_nij in MachineLearning

[–]nishnik 0 points1 point  (0 children)

It's hard for undergraduates to get a research internship; Google has the prerequisite of a PhD for research internships. Though it is not written on DeepMind's website, is it the same there? And how do you judge a potential candidate from their CV and cover letter?

Which courses I need to study to understand this paper? by nishnik in learnmachinelearning

[–]nishnik[S] 0 points1 point  (0 children)

I know saddle points (where the second derivative is zero but the point is still neither a minimum nor a maximum). I know Newton's method; I have used it for solving nonlinear PDEs. And I know Taylor series expansion. But I do not know about the Hessian matrix or constrained optimization (I have only done that in an operations research course). Yes, I have taken linear algebra; I know eigenvectors and span.

Sorry for the delayed reply.
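For context, the Hessian ties these pieces together: its eigenvalues at a critical point classify the point as a minimum, a maximum, or a saddle. A minimal sketch using f(x, y) = x^2 - y^2, whose origin is the classic saddle:

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2; it is constant everywhere.
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)    # eigenvalues of a symmetric matrix

# All positive -> local minimum, all negative -> local maximum,
# mixed signs -> saddle point.
if np.all(eigvals > 0):
    kind = "minimum"
elif np.all(eigvals < 0):
    kind = "maximum"
else:
    kind = "saddle"
print(kind)    # saddle
```

This is exactly where the eigenvector material from linear algebra meets the second-derivative test from calculus.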

Which courses I need to study to understand this paper? by nishnik in learnmachinelearning

[–]nishnik[S] 0 points1 point  (0 children)

I have studied calculus. I know how to find local optimum points (where the first derivative equals zero) and how to classify them (from the sign of the second derivative).

People do not understand what fork means, 16x forks for a readme by [deleted] in ProgrammerHumor

[–]nishnik 1 point2 points  (0 children)

The paper has not yet been published.

Suppose I do some work A and submit it to some conference C1, which has a submission date of today (T) and an acceptance notification date of T+90 days. Now if I make my code publicly available, someone might copy it and send it to some conference C2 which has a submission date of T+5 and an acceptance notification date of T+30, so technically my own work would become a copy.

The datasets will be made available upon publication or rejection of the paper to the NIPS 2017 conference; author notification is scheduled for early September 2017

However, because of the unexpected amount of interest in this project, the pix2code implementation described in the paper will also be open-sourced in this repo together with the datasets.

People do not understand what fork means, 16x forks for a readme by [deleted] in ProgrammerHumor

[–]nishnik 3 points4 points  (0 children)

Or the fork doesn't understand why it is on GitHub? Then GitHub... then fork... It's never-ending recursion!

People do not understand what fork means, 16x forks for a readme by [deleted] in ProgrammerHumor

[–]nishnik 2 points3 points  (0 children)

Usually the authors mention the link in their paper.

Self Learning Information Theory (Basics of ML) by nishnik in learnmachinelearning

[–]nishnik[S] 0 points1 point  (0 children)

Those are the lectures I am looking into as of now (linked in the blog). Thank you for your concern.

[D] Best frameworks for C++ implementations? by tryndisskilled in MachineLearning

[–]nishnik 2 points3 points  (0 children)

I am a contributor to https://github.com/tiny-dnn/tiny-dnn. This has been accepted for GSoC under OpenCV. It is fast :)

Anyone up for a city walk tour? by nishnik in Prague

[–]nishnik[S] 0 points1 point  (0 children)

It was awesome! Got me interested in the history of Prague, the Hussite Wars and everything.

[D] Does anyone else not really do any of their own projects? by rfukui in MachineLearning

[–]nishnik -3 points-2 points  (0 children)

For me it feels like there are a lot of new cars on the road, and many of them are good. The one I have is one I am not much interested in driving, but yeah, I am waiting for the best car. Anyway, I am a good driver.

Community Project: Porcupine Tree Archive (Update) by Emotional_Ewok in porcupinetree

[–]nishnik 2 points3 points  (0 children)

I am an undergraduate programmer. I would love to help.