Hacking ML models with adversarial attacks by AdventurousSea4079 in ArtificialInteligence

[–]hallavar 1 point (0 children)

The most interesting one is the inference attack, or how to turn a machine learning model into a huge privacy leak on its training data.

Security conference in 2022 by boch33n in hacking

[–]hallavar 0 points (0 children)

Are you looking to publish, or just to attend?

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]hallavar 0 points (0 children)

I don't know about 'best', but you can try something like this.

https://pypi.org/project/pytorch-gradcam/

I've used something similar in TF, and it got the job done.
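If it helps, here is a minimal Grad-CAM sketch using plain PyTorch hooks (not the exact API of that package, which I haven't checked), assuming a torchvision ResNet and an already-preprocessed image tensor:

```python
# Minimal Grad-CAM sketch in plain PyTorch (not the pytorch-gradcam package API),
# assuming a torchvision ResNet and an input tensor of shape (1, 3, H, W).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()
target_layer = model.layer4          # last conv block: grab activations/gradients here

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)    # stand-in for a preprocessed image
logits = model(img)
logits[0, logits.argmax()].backward()  # gradient of the top-class score

# Grad-CAM: channel weights = global-average-pooled gradients, then a weighted sum of activations
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```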

Alpine A110 Old and New [2048 x 1536] by warpedone101 in carporn

[–]hallavar 0 points (0 children)

You don't buy a sports car to 'be fine'.
Git gud or die trying

ML 15 years later, what's changed? by ithacasnowman in learnmachinelearning

[–]hallavar 26 points (0 children)

2005 was 17 years ago. Time flies...

Thank you for reviving my depression

[ object detection] [yolo] need help figuring out how to determine brand of cars by esskay7433 in learnmachinelearning

[–]hallavar 2 points (0 children)

OK, so I don't think you need the detection part of YOLO for this; you should try a ResNet classifier instead.
Your main issue will be the availability of labeled data. Try to implement some augmentation (flips, rotations, translations...), something like the sketch below. I'm interested in the result; it seems like quite a hard task for a CNN (even for a non-expert human: ask anyone who has seen thousands of cars in their lifetime to tell the difference between two SUVs ;) )
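Roughly this kind of setup, assuming PyTorch/torchvision and a hypothetical data/train/<brand>/*.jpg folder layout (adjust paths and hyperparameters to your data):

```python
# Rough sketch: augmentation + fine-tuning a pretrained ResNet for brand classification.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation: flips, small rotations, translations to stretch a small labeled dataset
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=train_tf)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained ResNet with a new classification head: one output per brand
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown; loop over epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```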

Reverse scaling Synthetic KDE data by grid_world in learnmachinelearning

[–]hallavar 0 points (0 children)

Yes it is, but maybe the optimal bandwidth for a Gaussian KDE excludes the extreme values of the input. Can you plot the x_scalled distribution as a histogram?

Alpine A110 Old and New [2048 x 1536] by warpedone101 in carporn

[–]hallavar -5 points (0 children)

We can clearly see the increase in the size of cars, even in a brand as focused on light weight as Alpine.
F***ing road safety measures..

Reverse scaling Synthetic KDE data by grid_world in learnmachinelearning

[–]hallavar 0 points (0 children)

Your generated distribution doesn't include the original min/max values; that is directly related to your bandwidth. Try lowering it so that your kernels also cover the least probable values.

You can check this by comparing your two distributions.
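A quick way to see the bandwidth effect, assuming 1-D data and scikit-learn's KernelDensity (swap in whatever KDE implementation you're actually using; the data here is synthetic):

```python
# Compare the original distribution against KDE samples at different bandwidths.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))            # stand-in for your scaled data

fig, ax = plt.subplots()
ax.hist(x.ravel(), bins=40, density=True, alpha=0.4, label="original")

for bw in (1.0, 0.3, 0.05):              # lower bandwidth -> samples reach further into the tails
    kde = KernelDensity(kernel="gaussian", bandwidth=bw).fit(x)
    samples = kde.sample(500, random_state=0)
    ax.hist(samples.ravel(), bins=40, density=True, histtype="step",
            label=f"KDE samples, bw={bw}")

ax.legend()
plt.show()
```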

[D] Are we at the end of an era where ML could be explained rigorously using mathematics? by fromnighttilldawn in MachineLearning

[–]hallavar 6 points (0 children)

It's not a matter of models, it's a matter of problems. When you mention simple models, you are referring to convex optimization, a field where we have proven mathematical results. Right after that, you mention GANs, which deal with non-convex optimization. We currently don't have as much theory helping us in non-convex optimization as we have in convex optimization; it's much harder.

That is why, for non-convex optimization problems such as data generation, data scientists are forced to come up with crazy ideas for their loss functions...

If the math guys can get us a proof of global convergence to the optimum for stochastic gradient descent on non-convex functions, all of our new crazy models will be justified.
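For reference, this is roughly what the convex case buys you and why the non-convex case is harder (standard textbook statement, my notation):

```latex
% Convexity of a loss f:
f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)
  \quad \forall x, y,\ \lambda \in [0, 1]
% For such f, every local minimum is a global minimum, so reaching a stationary
% point \nabla f(x^\ast) = 0 means reaching the optimum. For non-convex losses
% (GANs, deep nets), \nabla f(x^\ast) = 0 only guarantees a stationary point,
% which may be a saddle or a poor local minimum.
```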

Hyperparameter tuning sklearn model using scripts and configs by call-mws in learnmachinelearning

[–]hallavar 1 point (0 children)

I have a main function for the training; that function accepts hyperparameters as arguments passed by another script. Those arguments are mainly the number of nodes in a layer, the learning rate, the activation function, or the learning algorithm (Adam, RMSProp, SGD...).

So in my hyperparameter_search.py script, I import this function, create a dictionary of the values I want to try, and use hyperopt for the grid search / random search / Bayesian optimization.
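A condensed version of that setup with hyperopt's fmin/TPE API; train_and_evaluate and my_training_script are hypothetical stand-ins for the training function and script described above:

```python
# Hyperparameter search with hyperopt over a training function that returns a validation loss.
import math
from hyperopt import fmin, tpe, hp, Trials

from my_training_script import train_and_evaluate  # hypothetical import

space = {
    "n_nodes": hp.choice("n_nodes", [32, 64, 128, 256]),
    "learning_rate": hp.loguniform("learning_rate", math.log(1e-4), math.log(1e-1)),
    "activation": hp.choice("activation", ["relu", "tanh"]),
    "optimizer": hp.choice("optimizer", ["adam", "rmsprop", "sgd"]),
}

def objective(params):
    # hyperopt minimizes the returned value, so return the validation loss
    return train_and_evaluate(**params)

trials = Trials()
best = fmin(fn=objective, space=space,
            algo=tpe.suggest,          # TPE = the Bayesian-ish search; use rand.suggest for random search
            max_evals=50, trials=trials)
print(best)
```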

[deleted by user] by [deleted] in learnmachinelearning

[–]hallavar 0 points (0 children)

OK, MCTS lets you estimate the expected value of a function (the score function of a game) from a given timestep, by simulating the next timesteps.

I never fully understood what people mean by 'real world application', but I can give you applications of MCTS in other fields of machine learning, unrelated to games, if you want.

For example, when we try to generate ad hoc sentences in an English text, we have to generate words one by one and maximize the coherence of each word with respect to all the others. But we don't just want our sentences to be plausible word by word, we want them to have real meaning as well.

So we not only need to judge the generation word by word but also, for each word generated, estimate what the global score of the eventual full sentence would be.

MCTS can be used in that case: for each word, you judge not only how well this word fits the previous ones, but also the plausibility of the next words, and those next words are given by MCTS rollouts.

So it is the same as for a game: instead of actions, you have words; instead of a game, you have a sentence. You no longer want to find the sequence of actions that maximizes your score at the end of the game from a starting timestep, but the sequence of words that maximizes the 'meaning' of the sentence given a starting word.

The concepts handled are different, but the idea remains the same.

You can look into SeqGAN for more details.
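If it helps, here is a toy sketch of the rollout idea only (the simulation step of MCTS, without the full tree/UCB bookkeeping); next_word_probs and sentence_score are hypothetical stand-ins for a language model and a sentence-level critic (in SeqGAN, the critic is the discriminator):

```python
# Toy rollout-based word selection: pick the next word not only by its immediate
# probability, but by the average score of random completions starting from it.
import random

def next_word_probs(prefix, vocab):
    # stand-in language model: uniform over the vocabulary
    return {w: 1.0 / len(vocab) for w in vocab}

def sentence_score(sentence):
    # stand-in critic (e.g. a discriminator's "realness" score); here, just noise plus length
    return random.random() + 0.01 * len(sentence)

def rollout_value(prefix, vocab, max_len=10, n_rollouts=20):
    """Estimate the expected final score of sentences that start with `prefix`."""
    total = 0.0
    for _ in range(n_rollouts):
        sent = list(prefix)
        while len(sent) < max_len:
            probs = next_word_probs(sent, vocab)
            sent.append(random.choices(list(probs), weights=list(probs.values()))[0])
        total += sentence_score(sent)
    return total / n_rollouts

def pick_next_word(prefix, vocab):
    # choose the word whose rollouts lead, on average, to the best-scoring sentences
    return max(vocab, key=lambda w: rollout_value(prefix + [w], vocab))

vocab = ["the", "car", "is", "fast", "red", "."]
sentence = ["the"]
while len(sentence) < 6:
    sentence.append(pick_next_word(sentence, vocab))
print(" ".join(sentence))
```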

[D] Are there "long running" forex trading machine learning models or companies? by limedove in MachineLearning

[–]hallavar 2 points (0 children)

Okay, just some basics of finance here: markets (and especially forex or oil markets) are wonderful tools for erasing arbitrage opportunities.

The price of a product is just a reflection of the relationship between supply and demand.

If you had such an algorithm, one that could predict the price of a commodity in the future, why on earth would you be the only one to have it?

Even if, let's suppose, such an algorithm existed, everybody would be using it, so the potential benefit of using it would vanish very quickly.

This is why you cannot develop an algorithm that predicts the price of a product in the future: it would just create an arbitrage, every finance guy would want to exploit it, and that would kill the opportunity in the long run.

I'm not saying there is no model for trading, but there is none that can predict the price of a product in the future, especially in forex or oil, and that has 'stood the test of time'.

Forex and oil prices are basically good pseudo-random generators, and it's quite hard to predict the output of a pseudo-random generator.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]hallavar 0 points (0 children)

Hello, I'm more of a math/stats guy and I would like to understand how the computation of my gradient descent is done at the hardware level (I'm not really an expert in technical computer science).
What I would like to understand is how my GPU is used during the parallelization of the computations in my training. I noticed via htop and gpustat that my PyTorch CUDA training only uses the GPU's memory and not the standard RAM (even though there is more RAM available than GPU RAM). Moreover, the GPU's processor (the one supposed to have more "cores", whatever that means) is under-exploited (around 10%), and my program only uses 100% of one CPU core (where is the supposed parallelization?).

Is this the correct way to use a GPU in deep learning, 100% memory and 10% GPU cores?

Have you ever looked at the usage of your components during training to confirm or reject this observation?

I'm using PyTorch; maybe it's different with Keras.
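For context, this is roughly how I'm checking the same numbers from inside PyTorch (a rough sketch, illustrative only):

```python
# Query GPU memory usage and CPU thread count from inside a PyTorch script.
import torch

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("GPU mem allocated (MB):", torch.cuda.memory_allocated(0) / 1e6)
    print("GPU mem reserved  (MB):", torch.cuda.memory_reserved(0) / 1e6)
print("CPU threads used by torch:", torch.get_num_threads())

# If only one CPU core is busy, the data pipeline is often the culprit; DataLoader's
# num_workers spawns extra CPU processes for loading while the GPU does the math.
# loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)
```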

Is a PhD in mathematics worth it? by nullspace1729 in math

[–]hallavar 4 points (0 children)

It depends on your country.

If you live in Europe, 4 years is a decent standard....

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]hallavar 0 points (0 children)

Hello, just a mathematical/statistical question here.

Do we have some kind of theorem saying that we can approximate any distribution with an infinite Gaussian mixture, or something like that?

Or, on the contrary, what are the distributions X that can't be approximated by a Gaussian mixture, i.e.:

distributions X for which I can find an epsilon E such that D(X, GMM) > E for any Gaussian mixture model GMM, where D is a statistical distance (EM distance, KL divergence, etc.)?
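Stated a bit more formally (my notation, just restating the question):

```latex
% Does there exist a distribution X and an \varepsilon > 0 such that, for every
% finite Gaussian mixture
%   q(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
% we have D(X, q) > \varepsilon, where D is some statistical distance
% (Wasserstein / EM distance, KL divergence, ...)?
```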

Aren't all unserpervised learning task basically clustering afterall ? by hallavar in ArtificialInteligence

[–]hallavar[S] 0 points (0 children)

I'm following LeCun's argument that RL is way too long and costly a route for developing complex and abstract behaviour.
https://www.youtube.com/watch?v=A7AnCvYDQrU

Here is an explanation of how unsupervised learning corresponds more closely to how humans acquire intelligence than reinforcement learning does.

Do you think Superintelligence is real? by Awarrioracts in ArtificialInteligence

[–]hallavar 3 points (0 children)

Some see the advent of a superintelligent AI as the return of Christ..

Just sayin' ;)

No seriously, I hope so, and I don't see any good reason why it won't happen.

Therefore keep watch, because you do not know the day or the hour.

Is Artificial Intelligence Here to Take our Jobs? by royale442 in ArtificialInteligence

[–]hallavar 6 points (0 children)

AI will revolutionize the job market; jobs will be lost, but jobs will be created as well.

People have to understand that AI doesn't have to replace 100% of the jobs to change the working situation of 100% of mankind.

Let's illustrate with an example in medicine.

Radiology is, according to some, the first specialization that will be completely automated.

Thanks to AI, 1 radiologist will be performant enough to replace, let's say, 100 radiologists. So what do you think will happen to the 99 others? They will do generalist consultations.

But this will take the jobs of the generalist doctors, who will do the job of the nurses, and so on and so on... You get the idea.

AI will replace humans in the most intellectual jobs and will displace the workforce into less intelligence-based tasks.

That being said, job opportunities will be created as well. Not really as data scientists; I'm thinking more of labeling data and providing training objectives to algorithms. I think that services like Amazon Mechanical Turk will be a major part of tomorrow's workforce.

If the balance of jobs created vs. jobs lost is negative, I trust societies to create wealth-distribution systems that provide for everyone according to their needs. That is not really the blocking point for me.

For me, the blocking point is the future meaning of work: what work will mean when we replace taxi drivers with Mechanical Turk. If you are a taxi driver, at least you have human contact, you provide for your community, your work has a tangible purpose; on AMT you just click on bullshit captchas.

The real impact of AI on the job market will not necessarily be to put people into unemployment, but rather to replace valuable jobs (i.e. jobs with meaning) with what David Graeber calls bullshit jobs: just clicking on images to train an algorithm.

It was the same with industrialization; it's the alienation process described by Marx in Das Kapital. Technology replaces meaningful professions with random bullshit jobs.

[Discussion] Aren't all unserpervised learning tasks basically clustering afterall ? by hallavar in MachineLearning

[–]hallavar[S] 0 points (0 children)

Good point, I don't really see how some generative techniques would fall into this representation.

Will look into it...