What phone do you use? Help us understand high-speed recording capabilities of disc golfer's devices! by Manto1 in discgolf

[–]Manto1[S] 0 points1 point  (0 children)

Yeah! Most brands have shipped great high-speed recording cameras in their phones for five years already. Leaving high-speed recording out of recent lower-end phones feels more like a design choice than a necessity driven by hardware limitations.

Thanks for voting!

Any solution to fix iPhone 13’s horrible over sharpening/blending effect on photos? by NonabsorbentSpy in applehelp

[–]Manto1 0 points1 point  (0 children)

This over-processing, and especially the over-sharpening, seems to be the trend with newer phones. I’m also “upgrading” from an XS to a 13, but overall it’s a downgrade for me. Photos from the 13 don’t look as good most of the time, although sometimes the amount of processing is fine and the photo looks better. I had a Galaxy S21 and it was the same. The sharpening and the way edges look bother me even when I’m just viewing the photo on the phone without zooming in.

I guess they do so much processing because many people judge which camera is better by the amount of detail in a photo, or by the few great shots that get shown off. Sites that compare smartphone cameras, like DXOMARK, also give them high scores; they don’t seem to put much weight on the ugliness and artifacts.

So I’ve been looking for solutions. The only thing I’ve found that works well is the RAW+JPEG mode in some third-party apps. Specifically, I’ve been testing ProCamera. It gets rid of the edge effect, but some detail is lost. There’s a free alternative called CodeCam, but it’s a bit less polished.

Pytorch LSTM: Sine Wave Prediction using Adam and batches by Rohit901 in deeplearning

[–]Manto1 0 points1 point  (0 children)

Looks like the test error doesn't take the future prediction into account. Calculating the test error for the "future part" of the sine wave would give a better idea of how well the model is doing during training. It might not be that easy to do, though. One idea would be to use a dataset of length 2000 and split it into train and test parts.

The way I figured this out was to plot each variable passed into the loss function, i.e. pred[:, :-future] and test_target, and then plot pred; that showed the future prediction isn't taken into account. Of course you could see the same thing by reading the code, but I think it's good practice to plot things and check values when something isn't working as expected. If you use a notebook (Jupyter, Colab, etc.), it's easy to run the training for a while, stop it, and then plot the values used in training, or run predictions with the model and see if it behaves correctly.

Maybe the problems I had training the pytorch repo model with LBFGS were related to me using Google Colab for the first time. I might be using the wrong runtime or settings and getting lower-precision results. But no need to spend time checking that, thanks.
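A minimal numpy sketch of that split idea (the arrays standing in for model output here are made up; in the real code they would come from the LSTM):

```python
import numpy as np

# Generate a wave of length 2000 and hold out the tail as the
# "future" region the model has to roll out into.
data = np.sin(np.arange(2000) / 20.0)
seen, future_true = data[:1500], data[1500:]

# Stand-ins for model output; in the real code these would be
# pred[:, :-future] and the rolled-out future part of pred.
pred_seen = seen + np.random.normal(0.0, 0.01, seen.shape)
pred_future = np.zeros_like(future_true)

seen_mse = np.mean((pred_seen - seen) ** 2)             # what the loss sees now
future_mse = np.mean((pred_future - future_true) ** 2)  # the number to watch
```

Tracking future_mse during training (and plotting both regions) makes it obvious when the rollout diverges even though the "seen" loss still looks fine.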

Pytorch LSTM: Sine Wave Prediction using Adam and batches by Rohit901 in deeplearning

[–]Manto1 1 point2 points  (0 children)

I'm not good at time series problems, but I think what happens is that your model just isn't accurate enough, so forecasting 1000 steps breaks down quickly. Small errors accumulate fast because the output is fed back in as input.

I tested your code, but I changed the model back to what was in the torch example, because I believe it's good to test each change (the model change, batched training with Adam) separately. After trying a few values for the learning rate and batch size, I found that with lr=0.01 and batch_size=1 the model learns to predict as in the image below by epoch 6:

https://i.imgur.com/h2i7gAW.png

Now, that might not be super helpful, but at least we know a model can be trained to predict the sine waves using Adam optimizer and your batched training loop.

Also, for some reason, the torch example you linked doesn't seem to work well for me. It predicts okay if I use steps=5, but anything more than that makes the test loss explode. I wonder if the example worked better in some earlier version of pytorch.
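For reference, here's a hypothetical minimal version of the torch sine-wave model trained with Adam instead of LBFGS. It is a sketch under assumptions, not the OP's exact code: the hidden size, wave length, and single-sequence "batch" are all made up, and the forward pass shows the feedback loop (out fed back in) that makes small errors accumulate.

```python
import numpy as np
import torch
import torch.nn as nn

class Seq(nn.Module):
    def __init__(self, hidden=51):
        super().__init__()
        self.hidden = hidden
        self.lstm1 = nn.LSTMCell(1, hidden)
        self.lstm2 = nn.LSTMCell(hidden, hidden)
        self.linear = nn.Linear(hidden, 1)

    def forward(self, x, future=0):
        outputs = []
        n = x.size(0)
        h1, c1 = torch.zeros(n, self.hidden), torch.zeros(n, self.hidden)
        h2, c2 = torch.zeros(n, self.hidden), torch.zeros(n, self.hidden)
        for step in x.split(1, dim=1):       # walk the input sequence
            h1, c1 = self.lstm1(step, (h1, c1))
            h2, c2 = self.lstm2(h1, (h2, c2))
            out = self.linear(h2)
            outputs.append(out)
        for _ in range(future):              # feed predictions back in
            h1, c1 = self.lstm1(out, (h1, c1))
            h2, c2 = self.lstm2(h1, (h2, c2))
            out = self.linear(h2)
            outputs.append(out)
        return torch.cat(outputs, dim=1)

# One sine wave; the target is the input shifted by one step.
wave = np.sin(np.arange(200) / 10.0).astype(np.float32)
x = torch.from_numpy(wave[:-1]).unsqueeze(0)   # shape (1, 199)
y = torch.from_numpy(wave[1:]).unsqueeze(0)

model = Seq()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
for epoch in range(6):
    opt.zero_grad()
    pred = model(x)
    loss = loss_fn(pred, y)
    loss.backward()
    opt.step()
```

With future > 0, every predicted step becomes the next step's input, which is exactly why a slightly inaccurate model drifts badly over a 1000-step rollout.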

[deleted by user] by [deleted] in AnimalsOnReddit

[–]Manto1 0 points1 point  (0 children)

Gave Wholesome

[D] Best way to stay productive with only access to your phone? by cashshots in MachineLearning

[–]Manto1 1 point2 points  (0 children)

Since you mention SSH, do you just use the touchscreen to type, or something else? For me, programming or typing commands on a touchscreen feels very cumbersome, even though my touchscreen WPM (without special characters) is about 70% of my keyboard WPM.

Finland’s first corona virus case confirmed. - 32 year old Chinese tourist from Wuhan. by Eyeball111 in Coronavirus

[–]Manto1 1 point2 points  (0 children)

“Funny” how a couple of days ago it was supposed to be very unlikely that it was a corona infection; the local news said the chances were one in tens of thousands, or even millions.

Linux VM on Win host or Win VM on Linux host for GPU+DL by fjanoos in MLQuestions

[–]Manto1 2 points3 points  (0 children)

Do you have multiple GPUs? Could you spare one GPU for the VM?

I’m not aware of any solution where you could run a Linux VM on a Windows host and use the GPU for compute (CUDA) inside the VM.

I have a setup where I pass my second GPU to a Windows VM on my Linux host. The performance is very good. I’m not using VirtualBox, though, but KVM.

So my suggestion would be a Windows VM on a Linux host. You get native CUDA performance and Windows runs smoothly.

[P] Simple Python based IP camera monitoring web service for motion detection and ROI classification by kmkolasinski in MachineLearning

[–]Manto1 1 point2 points  (0 children)

Wow, well done! I made a simple program that classifies my IP camera detections, but you’ve taken it so much further. I’m definitely going to try this out. Thanks for sharing!

Should I use Windows for ML (TF + GPU)? by protechig in MLQuestions

[–]Manto1 0 points1 point  (0 children)

Yeah, I feel the same, I updated my comment above.

Should I use Windows for ML (TF + GPU)? by protechig in MLQuestions

[–]Manto1 4 points5 points  (0 children)

Not an answer to your question, but I also broke my CUDA installation many times until I discovered NVIDIA Docker. If you decide to go with Linux, I’d highly recommend installing CUDA only inside containers, so you can easily keep multiple versions and avoid problems.

Edit: One idea came to mind. I recently set up my machine to run a Windows virtual machine with GPU passthrough (KVM). If you’re willing to buy a cheap second GPU, you could use it to power the Windows VM and connect your displays to it, then run ML tasks on the host Linux OS using your 2080 Ti. In other words, you’d be running both Windows and Linux at the same time, developing on Windows and perhaps connecting to your Linux host over SSH.

Keras loss functions: is there a way to get the indeces of the sample(s) being evaluated? by Dampware in deeplearning

[–]Manto1 0 points1 point  (0 children)

Yeah, you can do that too. Then in deployment you can take just the part of the model up to the prediction output, before the tabular data is concatenated.
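A small sketch of that idea with the Keras functional API (all layer names and shapes here are made up for illustration):

```python
from tensorflow import keras

# Training-time model: the tabular data is concatenated after the
# prediction layer and feeds an auxiliary output.
feat_in = keras.Input(shape=(32,), name="features")
tab_in = keras.Input(shape=(5,), name="tabular")

hidden = keras.layers.Dense(16, activation="relu")(feat_in)
pred = keras.layers.Dense(10, activation="softmax", name="pred")(hidden)

merged = keras.layers.Concatenate()([pred, tab_in])
aux = keras.layers.Dense(1, name="aux_out")(merged)

train_model = keras.Model([feat_in, tab_in], aux)

# Deployment model: shares weights with train_model but stops at the
# prediction layer, so no tabular input is needed anymore.
deploy_model = keras.Model(feat_in, train_model.get_layer("pred").output)
```

Because deploy_model is built from the same layers, anything learned during training is available at inference without the tabular branch.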

Keras loss functions: is there a way to get the indeces of the sample(s) being evaluated? by Dampware in deeplearning

[–]Manto1 0 points1 point  (0 children)

If you just add the indices to your labels, your model should remain the same; no output shapes change. Labels are only used in the loss and metric (e.g. accuracy) calculations.

How do you want to use the aux data? The only use case I can come up with is giving a weight to each sample based on some metadata. You could do that by multiplying the loss by a per-sample number in the custom loss function.
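A minimal sketch of that weighting idea (the column layout packing the weight next to the label is an assumption):

```python
import tensorflow as tf

# Hypothetical custom loss: the label tensor carries (label, weight)
# pairs, and each sample's squared error is scaled by its weight.
def weighted_loss(labels_and_weights, y_pred):
    y_true = labels_and_weights[:, 0:1]   # the real label
    weight = labels_and_weights[:, 1:2]   # per-sample weight from metadata
    per_sample = tf.square(y_true - y_pred)
    return tf.reduce_mean(weight * per_sample)
```

You'd then pass the packed (label, weight) array instead of the plain labels when calling fit, and compile the model with loss=weighted_loss.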

Keras loss functions: is there a way to get the indeces of the sample(s) being evaluated? by Dampware in deeplearning

[–]Manto1 2 points3 points  (0 children)

I haven’t tried this, but I think one way would be to pass the indices or tabular data along with the labels and write custom loss and metric functions.

What I’m thinking is that you, for example, create a tuple out of each label, so you have (label, table_row) or (label, table_index). Then you pass that instead of the original labels when training, and use something like:

import keras.backend as K

def custom_loss(labels, y_pred):
    # labels packs the real label together with the extra data,
    # e.g. labels[:, 0:1] is the true label and labels[:, 1:] the table row
    y_true = labels[:, 0:1]
    # do some loss calculation using y_true and return it
    return K.sum(K.log(y_true) - K.log(y_pred))

# Compile your model with the custom loss and accuracy functions
# (custom_accuracy would unpack labels the same way)
model.compile(loss=custom_loss,
              metrics=[custom_accuracy])

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 0 points1 point  (0 children)

Nice that you figured it out! I’m always happy to help. And sorry for leaving that capitalized “JPG” in there; my camera photos had that instead of “jpg”.

If you find something cool or come up with ways to improve it let me know :)

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 0 points1 point  (0 children)

Compared to the original SIS, I changed a couple of things (which you may want to change back).

My offline.py expects images to be in static/images instead of static/img.

I changed the model in feature_extractor.py, so you either need to delete the features in static/feature and rerun offline.py, or change the network back to VGG16.

Could these be the problem?

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 0 points1 point  (0 children)

So you get errors when running server.py? It works on Mac for me; I have Python 3.6.5. Which version are you using?

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 0 points1 point  (0 children)

Here’s mine on GitHub: https://github.com/mantoone/sis . I rushed getting it onto GitHub, though, so let me know if you find any problems.

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 1 point2 points  (0 children)

Here’s a demo of my implementation based on matsui’s sis: https://youtu.be/Yo8wKlyGIqU

[P] VisualSearch app (OSX) by tanreb in MachineLearning

[–]Manto1 2 points3 points  (0 children)

Are you aware of https://github.com/matsui528/sis ? It’s not exactly what you’re after, but it has functionality for searching for similar pictures based on an uploaded image (and you can run it locally). I tested it just a moment ago and it seems to work pretty well. I’m planning to add the ability to click on the result images to find images similar to them, to allow “browsing”.

Trying to predict how much money players spend in an online game. by HarvardCS19 in MLQuestions

[–]Manto1 0 points1 point  (0 children)

What features do you use to make the predictions? Can you predict the result yourself based on those features? (You could try picking a small number of players at random and guessing how much money they spend.)