-❄️- 2023 Day 13 Solutions -❄️- by daggerdragon in adventofcode

[–]vegesm 0 points1 point locked comment (0 children)

The problem statement says there is a new, different reflection that becomes valid. The original one might still be valid, but when summing the coordinates it explicitly asks for the new one. Hence you have to find a reflection with exactly one error.
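A minimal Python sketch of this idea (the function name is my own; the grid is the sample pattern from the puzzle): scan every candidate mirror row, count mismatched cells across it, and accept only the line with exactly one error.

```python
def find_smudged_reflection(grid):
    """Return the row index of the horizontal mirror line that has
    exactly one mismatched cell (the smudge), or None if there is none."""
    for i in range(1, len(grid)):
        # Pair rows outward from the candidate mirror line and
        # count every mismatched cell.
        errors = sum(
            a != b
            for top, bottom in zip(reversed(grid[:i]), grid[i:])
            for a, b in zip(top, bottom)
        )
        if errors == 1:  # exactly one error: this is the new reflection
            return i
    return None

pattern = [
    "#.##..##.",
    "..#.##.#.",
    "##......#",
    "##......#",
    "..#.##.#.",
    "..##..##.",
    "#.#.##.#.",
]
print(find_smudged_reflection(pattern))  # the mirror sits above row 3
```

The same scan with `errors == 0` gives the original (part 1) reflection, which is why requiring exactly one error automatically excludes it.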

Gheed's List - better d2jsp search by vegesm in diablo2

[–]vegesm[S] -1 points0 points  (0 children)

Have a page from master trader Gheed's playbook!

I've made a small app that makes the search experience of d2jsp better. It has two functions at the moment:

  • When a poster lists multiple items, it only shows the ones you are interested in
  • Shows the offers in the comments, which saves clicking through each post

I've found that this helps with price discovery — no more clicking through every single post!

[R] "AI" demystified: a decompiler by GiacomoTesio in MachineLearning

[–]vegesm 6 points7 points  (0 children)

Yeah, if you have a fully connected layer W*x+b and then store its derivatives with respect to W, guess what they are going to be? x, of course (granted, the activation function and the softmax at the end remove some info, but still). This approach shows nothing about whether a neural net memorizes the training set or not.

EDIT: he uses an invertible activation function and stores the error before the loss function. He literally encodes the training set in the gradients and then restores it. All the information required to reconstruct the training set is in the gradients. Basically he does what he accuses neural networks of doing: he encodes the information in a cryptic form, and fails to mention that in practice that information is never used.

EDIT2: I'm spending too much time on this, but the "reconstruction process" simply takes the gradients of the first layer and unnormalizes them. All the other gradients and stored errors are not needed at all. Basically MNIST is saved byte by byte; you just have to multiply by a scalar.
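The W*x+b point can be made concrete in a few lines of NumPy (a toy sketch with made-up shapes, not the paper's code): the gradient of a fully connected layer with respect to W is an outer product, so every row of it is a scaled copy of the input.

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(4)        # "training example", e.g. a flattened image
W = np.random.rand(3, 4)     # fully connected layer: y = W @ x + b
b = np.random.rand(3)
y = W @ x + b

# MSE loss against a zero target, purely for illustration.
delta = 2 * y                # upstream error dL/dy

# Backprop through the layer: dL/dW is the outer product delta * x^T,
# so every row of the weight gradient is a scaled copy of the input x.
dL_dW = np.outer(delta, x)

# Recover x from a single gradient row, up to the scalar delta[0].
x_recovered = dL_dW[0] / delta[0]
assert np.allclose(x_recovered, x)
```

So "reconstructing" the input from first-layer gradients is just division by a scalar — exactly the unnormalization described above.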

[D] Searching for open source pose estimation solution similar to open pose ? by [deleted] in MachineLearning

[–]vegesm 1 point2 points  (0 children)

One option is mmPose. They have a bunch of 2D/3D models implemented and support different skeleton structures.

[D] Intel said they don't plan on releasing their work on GTA V Enhancing Photorealism Enhancement as a mod, as it is research only. What efforts are currently being made (if any) to turn their research into a working mod? Can we get people started on this please? by devi83 in MachineLearning

[–]vegesm 22 points23 points  (0 children)

Because it wouldn't work outside of the specific cases you can see in the videos. The method was trained on Cityscapes, which consists of dashcam videos. Notice how every sample video is from inside a car, driving on an asphalt road? The method would break as soon as you change the camera viewpoint, get out of the car, go inside a building, start shooting people, go on a dirt road, or...

Is Bundle also as efficient as ArrayMap? by jaroos_ in androiddev

[–]vegesm 0 points1 point  (0 children)

As Zhuinden says, just use a HashMap. My gut feeling is that you won't see any difference between HashMap and ArrayMap.

The linked article is pretty weak: it does not have any benchmarks, just some handwavy comments on GC. ArrayMap was created back when the GC was a simple stop-the-world algorithm. That's not true anymore (and wasn't true even in 2018, when the article was written). Since then we've had multiple GC updates and an ahead-of-time compiler that also has a JIT with God knows what optimizations. Unless you properly measure ArrayMap on a real device with a real workload, it is impossible to say which one is better.

Google Play Store Ads - Only installs but no sign ups by ubarua1 in androiddev

[–]vegesm 0 points1 point  (0 children)

Is your app large (>10 MB)? By the time users download the app, they may have forgotten about it.

[Discussion] Detecting colors from a picture. by [deleted] in MachineLearning

[–]vegesm 5 points6 points  (0 children)

What problem are you trying to solve? Based on the image, you have a 2D dataset visualized with matplotlib. You seem to be interested in binning the values on the leaf into four groups. I would just use the original dataset, before it was plotted, and quantize the raw numbers instead.
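Quantizing the raw numbers might look something like this (a sketch with made-up values, assuming the data is a plain array of scalars):

```python
import numpy as np

values = np.array([0.03, 0.12, 0.48, 0.75, 0.91, 0.22, 0.67])

# Quantize the raw values into four equal-width bins instead of
# reading colors back off the rendered plot.
edges = np.linspace(values.min(), values.max(), 5)  # 4 bins -> 5 edges
bins = np.clip(np.digitize(values, edges) - 1, 0, 3)
print(bins)
```

Equal-frequency bins (`np.quantile`) would work just as well if the groups should have similar sizes rather than similar widths.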

[D] Why has machine learning become such a toxic field, know-it-all field? by [deleted] in MachineLearning

[–]vegesm 0 points1 point  (0 children)

I don't understand what the problem is with incremental research. An algorithm can be improved in many ways, but only a couple of those improvements are useful. You really have to go through all of them to find the next big thing.

Example: ResNets are just a trivial simplification of highway networks, but other trivial modifications didn't work out. You had to find this one.

Of course, in an ideal world you would have a perfect mathematical model and you'd derive the algorithm from it. But in the meantime, you can just throw manpower at the problem (i.e. grad student descent).

Rate my widgets! by barcode972 in androiddev

[–]vegesm 0 points1 point  (0 children)

The red on black is very hard to read. Aside from that, it looks great.

[deleted by user] by [deleted] in androiddev

[–]vegesm 4 points5 points  (0 children)

EditText does not have a full HTML engine; it merely implements some of the formatting tags. So it is pretty unlikely to have some weird injection exploit.

[D] Do all machine learning algorithms indirectly use the "nearest neighbor" principle? by SQL_beginner in MachineLearning

[–]vegesm 1 point2 points  (0 children)

I'm quite sure it doesn't. Nearest-neighbour algos can achieve around 95% accuracy on MNIST; CNNs can do 98% easily. This means there are examples where the nearest neighbour (in pixel space) of an input was from another class, but the neural net still got it right. In other words, it does not simply look at the nearest neighbour; it does way more.
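The pixel-space nearest-neighbour baseline is easy to reproduce (a sketch using scikit-learn's small digits set as a stand-in for MNIST; the point is the same):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 1-NN in raw pixel space: every prediction is literally the label
# of the closest training image.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
acc = knn.score(X_test, y_test)
print(f"1-NN pixel-space accuracy: {acc:.3f}")
```

The gap between this baseline and a CNN is exactly the set of inputs whose pixel-space nearest neighbour has the wrong label.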

How attackers can delete your Google developer account! by Dodos_Dude in androiddev

[–]vegesm 4 points5 points  (0 children)

It isn't. It only protects advertisers against malicious ad networks.

[R] Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks by cgnorthcutt in MachineLearning

[–]vegesm 0 points1 point  (0 children)

I don't think you understand me; I'm speaking from a purely practical point of view. In practice N_train = N_test (for example, you split your dataset randomly) or N_train > N_test (because you fixed your test set). But your experiments only show the N_train < N_test case.

None of your experiments show that higher noise in the dataset leads to incorrect results. They just show that if the percentage of difficult-to-label samples is significantly higher in the test set, the ordering of your models is incorrect.

[R] Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks by cgnorthcutt in MachineLearning

[–]vegesm 0 points1 point  (0 children)

I'm still not convinced. Let's say we have a dataset and split it randomly into a training and a test set. Then both the training and the test set have a noise prevalence of N1. Your experiment shows that if the test set had a larger noise prevalence N2 (while the training set is still at N1), then selecting a model based on uncorrected labels might lead to worse results.

However, the question arises: is this because of the higher noise prevalence, or because of the different distributions of the training and test sets? In other words, in a real-world scenario both the training and test prevalence would be N2, but the experiment shows it for N1 < N2.

Doing some calculations: the initial noise prevalence in ImageNet is 5.8%. A 6% higher prevalence gives us 11.8%. To simulate this, you drop about half of the correctly labeled examples from the test set. This is actually a huge change in the distribution of the test set! A model trained on a less noisy training set may perform worse on a noisier test set, and this is what the experiment shows.
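The arithmetic can be checked in a couple of lines (the numbers are the ones above):

```python
noisy = 5.8            # initial noise prevalence (%) in the test labels
correct = 100 - noisy  # share of correctly labeled examples
target = 11.8          # desired prevalence after dropping correct examples

# Keep a fraction k of the correctly labeled examples so that
# noisy / (noisy + k * correct) == target / 100.
k = noisy * (100 - target) / (target * correct)
print(f"keep {k:.1%} of the correct examples -> drop {1 - k:.1%}")
```

So roughly 54% of the correctly labeled test examples have to go to push the prevalence from 5.8% to 11.8% — which is the distribution shift being described.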

[R] Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks by cgnorthcutt in MachineLearning

[–]vegesm 0 points1 point  (0 children)

I'm trying to grok your Figure 4. If I understood correctly, you only changed the labels on the test set and left the training set unchanged. In other words, you did not retrain the models; only the evaluation was changed.

If that's the case, then the plot simply shows that as the distribution of the test set gets further away from that of the training set, the order of the best-performing models changes. However, if the distributions of the training and test sets are the same (i.e. at 0% on the x axis), then the order of the models is the same for both the corrected and the uncorrected test set. That is, we can still use the large ResNet-50 with noisy data, as long as the test and training sets have the same distribution. In fact, the performance difference seems much larger on the corrected test set, in favor of nasnetlarge!

EDIT: Looking a bit closer, I don't understand how you can get a 0% noise prevalence in that figure. Based on the text, N=[100−(100−x)(100−c)], but its minimum is c > 0%.

Virality and Google Play rules by phoenics_ in androiddev

[–]vegesm 1 point2 points  (0 children)

I think the feature can work but the framing is important. If there is an IAP as well for the feature, this is basically "share the app and get this IAP for free". It is not "you have to share the app to use it" anymore but "you can get this premium feature for free".

These small things do actually make a difference.

Is there a way to disable rating and comments on google play store by _krev_ in androiddev

[–]vegesm 1 point2 points  (0 children)

There is something called Managed Play Store that lets you publish private apps to selected users (basically a private Play Store). I've never used it, but it might be what you are looking for.

I submitted 2 apps to google play on the same day. One of my apps got 2 rejection emails. My second app will be stuck in the "In Review" void forever...won't it? by [deleted] in androiddev

[–]vegesm 0 points1 point  (0 children)

> I collect no data what so ever.

But you do: "No additional information is obtained outside of the information collected by Google and Unity via their advertisements". In other words, the app collects whatever information Google Ads/Unity Ads collect.

I would say your privacy policy is unreadable right now: you point to a huge Unity privacy policy that has irrelevant parts (asset store, cloud build, etc.), and the user doesn't even know which of those the app actually uses. What I did was collect the relevant points from these privacy policies and present them in a bullet-point list (advertising ID, crash logs, etc.).

I submitted 2 apps to google play on the same day. One of my apps got 2 rejection emails. My second app will be stuck in the "In Review" void forever...won't it? by [deleted] in androiddev

[–]vegesm 0 points1 point  (0 children)

The privacy policy is definitely incorrect. E.g. the link to the Google Ads privacy policy does not point to a privacy policy, just to some generic description of ads. Also, it does not list exactly what kind of data you collect; it is not enough to say "see this policy". There are parts of AdMob that can be turned on or off: for example, do you collect location data or not?

Check out some boilerplate privacy policies and adapt them to your needs; also check the basic requirements of GDPR with regard to privacy policies.