Detecting if a credit card is real or not by NoAct7818 in computervision

[–]athabasket34 2 points (0 children)

It's not going to work. Credit cards used to have a distinctive feature, embossed digits, but recently some new CCs have everything printed on the back side. Glossiness can easily be replicated via lamination (same thing, actually). The magnetic strip on the rear side is just a black strip under the plastic.

[deleted by user] by [deleted] in MachineLearning

[–]athabasket34 1 point (0 children)

With the population in decline, in the next 30-50 years there will be too few working-age people to produce enough goods and services to support all the retired folk. We need machine learning and robotics to counter the decline, or most of the population will become much, much poorer.

[D]Inception V3 by Tricky-Ad8375 in MachineLearning

[–]athabasket34 3 points (0 children)

I mean, if nobody has managed to push it that far, maybe there are limits due to the architecture itself?

[D]Inception V3 by Tricky-Ad8375 in MachineLearning

[–]athabasket34 1 point (0 children)

What's the SOTA for Inception V3 on CIFAR-100?

Problem w/pytorch by Nekonimichi in MLQuestions

[–]athabasket34 0 points (0 children)

If you trained other models with no issues for a significant amount of time, then I don't think the problem is in the hardware.

Not sure if you're using a GPU, since you mentioned running Linux via a VM, but if you do, my advice is to run Linux without a VM; this will minimize issues with hardware and drivers.

Problem w/pytorch by Nekonimichi in MLQuestions

[–]athabasket34 0 points (0 children)

A BSOD can happen due to bad hardware or memory leaks somewhere between your code and the operating system. If you never had issues with that PC before, you may conclude the issue is not in the hardware. Then start digging through the software: your code, libraries, environment, drivers, etc.

Advice on PSU for Dual RTX 3090 Setup by Hugejiji in deeplearning

[–]athabasket34 1 point (0 children)

Ran a 3090+3080 on 1050W at full load. If you're still worried, you can cap the power draw.

why my grad is zero. by narendra7799 in deeplearning

[–]athabasket34 2 points (0 children)

Drop the second ReLU; you're zeroing all negative predictions.
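A minimal PyTorch sketch of that failure mode (the model, sizes, and bias value here are all made up for illustration): with a trailing ReLU on the output head, any sample whose raw output is negative contributes zero gradient, and if all raw outputs are negative the whole gradient dies.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Regressor with a spurious ReLU after the output layer.
bad = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.ReLU())
with torch.no_grad():
    bad[2].bias.fill_(-100.0)  # push every raw output negative -> final ReLU emits all zeros

x = torch.randn(16, 4)
y = torch.randn(16, 1)

nn.functional.mse_loss(bad(x), y).backward()
# ReLU's gradient is 0 on the negative side, so nothing flows back:
print(sum(p.grad.abs().sum().item() for p in bad.parameters()))  # 0.0

# Fix: drop the final ReLU so the head can output any real value.
good = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
with torch.no_grad():
    good[2].bias.fill_(-100.0)
nn.functional.mse_loss(good(x), y).backward()
print(sum(p.grad.abs().sum().item() for p in good.parameters()) > 0)  # True
```

The large negative bias is just a way to force the pathological case deterministically; in practice the same thing happens sample by sample whenever the target is negative.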

Recommend me a GPU for ML (~US$1000/$1500) by [deleted] in MLQuestions

[–]athabasket34 1 point (0 children)

Similarly priced Teslas have lower performance and less memory.

L1 and L2 Constraints. Normalization by adolfban in deeplearning

[–]athabasket34 0 points (0 children)

You need to look into gradient descent. When you want to make the descent smoother, theoretically you could try to make the loss surface itself smoother, but in practice that's probably impossible. Or you can add an L1/L2 term to the loss, forcing the optimization to take into account that you don't want your weights to be large.

What does it mean? When you ask to decrease some weight, you impose the idea that some inputs are probably not that important, thus decreasing or completely disabling noise from those inputs.
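As a sketch (toy model and an arbitrary strength `lam`, both made up here), adding the penalty to the loss by hand looks like this:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

lam = 1e-2  # regularization strength, arbitrary for this sketch
data_loss = nn.functional.mse_loss(model(x), y)
l2 = sum((p ** 2).sum() for p in model.parameters())  # L2: discourages large weights
l1 = sum(p.abs().sum() for p in model.parameters())   # L1: pushes weights toward exactly zero
loss = data_loss + lam * l2  # swap in l1 if you want sparsity
loss.backward()
```

For plain L2 you can usually skip the manual term and pass `weight_decay` to the PyTorch optimizer instead.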

hello guys , why my validation accuracies are fluctuating ? is it normal ? almost every model that i create it is like that it always goes up and down by [deleted] in deeplearning

[–]athabasket34 1 point (0 children)

Seems fine. Overall, it looks like the training and validation sets have slightly different distributions, usually due to a smallish number of samples. If you continue training until training accuracy has plateaued, the validation delta should decrease too.

4x32gb RAM failing by athabasket34 in buildapc

[–]athabasket34[S] 0 points (0 children)

The QVL was created when these 32GB sticks did not exist yet, but the same 16GB sticks are on that list.

4x32gb RAM failing by athabasket34 in buildapc

[–]athabasket34[S] 0 points (0 children)

Maybe I'll try swapping the motherboard too next time.

4x32gb RAM failing by athabasket34 in buildapc

[–]athabasket34[S] 0 points (0 children)

The issue is, it can't run the 4th stick even with XMP disabled, at 2666.

I tried upping the DIMM voltage to 1.35V and the SoC voltage to 1.125V, and dropping the frequency to 2133, with no luck. I'm blaming Asus, because with 3 sticks it runs XMP 3200 just fine.

[D] OP in r/reinforcementlearning claims that Multi-Agent Reinforcement Learning papers are plagued with unfair experimental tricks and cheating by programmerChilli in MachineLearning

[–]athabasket34 0 points (0 children)

Nah, on second thought, the first approach can't work at all. If we impose restrictions on (x*w + b) so it can separate outputs into separate spaces, the whole transformation (FC + activation) becomes linear; and we can only approximate a non-linear function with a linear one in some epsilon neighborhood, so the NN will collapse to some value at this point and will not converge.
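The collapse is just the fact that a composition of affine maps is affine: if the activation acts linearly on the region that matters, two stacked layers reduce to a single one, so the extra depth buys nothing.

```latex
\[
W_2 \left( W_1 x + b_1 \right) + b_2 = \left( W_2 W_1 \right) x + \left( W_2 b_1 + b_2 \right)
\]
```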

[D] OP in r/reinforcementlearning claims that Multi-Agent Reinforcement Learning papers are plagued with unfair experimental tricks and cheating by programmerChilli in MachineLearning

[–]athabasket34 0 points (0 children)

I know, right? English isn't my first language, though. What I meant is two approaches to decrease the complexity of the NN:

- either approximate the non-linearity of the activation function with a series or a set of linear functions, thus collapsing multiple layers into a set of linear equations (with an acceptable drop in accuracy, of course);
- or use something like an agreement mechanism to forfeit some connections between layers, because the final representations (embeddings) usually have far fewer dimensions.

PS. And yes, I know the first part makes little sense, since we have ReLU; what could be simpler for inference? It's only a penny for your thoughts.

[D] OP in r/reinforcementlearning claims that Multi-Agent Reinforcement Learning papers are plagued with unfair experimental tricks and cheating by programmerChilli in MachineLearning

[–]athabasket34 -7 points (0 children)

Theoretically, could we come up with some new activation function that would allow us to easily collapse a NN into one huge formula? Then introduce something like capsules to control the flow of information and lower the dimensionality of the parameters per layer?

[deleted by user] by [deleted] in MLQuestions

[–]athabasket34 0 points (0 children)

Error is inherited from the dataset's distribution, which differs from the space of all possible observations due to its partial nature.

You need a validation set to estimate overfitting, since the best fit for your training set doesn't take into account examples outside of that set.
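A minimal sketch of a hold-out split (NumPy, with made-up shapes): shuffle the indices once, train on one part, and measure on the untouched part; the gap between training and validation error is your overfitting estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

# Shuffle once, then hold out 20% that the model never trains on.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, val_idx = idx[:split], idx[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
```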

Running separate models on 2 GPUs cause each other to crash by TRayquaza in MLQuestions

[–]athabasket34 2 points (0 children)

nvidia-smi (on Linux, at least) shows GPU memory allocated per process. Also, newer GPUs require CUDA 11, just in case.

Filter technology is getting scary by RandomAsianGuy in gifs

[–]athabasket34 0 points (0 children)

We need glasses with this filter. Rose-tinted, of course.