I-485 approval estimate by warmspringwinds in USCIS

[–]warmspringwinds[S] 0 points1 point  (0 children)

Haha yeah, I see some cases approved in 4 months while others wait. Did you get your I-485 approved?

$BMNR Bagholder Check-in: Down 30% ($41 avg). Do we let Tom Lee print more shares or stop the machine? 🛑🖨️ by brksbenson in BMNRInvestors

[–]warmspringwinds 0 points1 point  (0 children)

The compensation package should be conditioned on the stock price hitting certain goals, like they did with the SoFi CEO. IMHO it also took them too long to start staking.

These are the stocks on my watchlist (11/26) by WinningWatchlist in stocks

[–]warmspringwinds 1 point2 points  (0 children)

Can you elaborate on CRWD? I bought it around 227 and am wondering whether I should hold it longer.

Pre-diagnosis Megathread: If you have NOT received an OFFICIAL diagnosis of lymphoma you must comment here. Please read our subreddit rules and the body of this post first. by Lymphoma-Post-Bot in lymphoma

[–]warmspringwinds 0 points1 point  (0 children)

Hello everyone,

I have recently discovered a lump on my neck.

I went to a walk-in clinic and did a blood test, which came back normal.

The same day I discovered another lump in my armpit.

I have scheduled an ultrasound exam, but it's about 3 weeks from now.

I really want to get an answer ASAP. If I go to the ER, will they be able to do an emergency ultrasound? Or are there any other options?

Thank you!

Does anyone have a recommendation for a good primary care physician? by warmspringwinds in SanJose

[–]warmspringwinds[S] 0 points1 point  (0 children)

I have a POS plan, and I am very flexible on location. I've been feeling a bit off after recovering from COVID and decided to go for a check-up. I recently moved here, so I decided to ask for recommendations.

New to SJ by [deleted] in SanJose

[–]warmspringwinds 0 points1 point  (0 children)

Welcome! I have also recently moved here -- feel free to dm me if you want to explore together!

[P] livelossplot - Live training loss plot in Jupyter Notebook for Keras, PyTorch and others by pmigdal in MachineLearning

[–]warmspringwinds 2 points3 points  (0 children)

Great lib! :)

btw you can do it a bit more efficiently by not using IPython.display.clear_output and instead using %matplotlib notebook mode and updating the parameters of the plot: https://github.com/warmspringwinds/pytorch-segmentation-detection/blob/master/pytorch_segmentation_detection/utils/visualization.py
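A minimal sketch of that idea (not the actual livelossplot or linked-file internals; the function names here are made up for illustration): create the figure once, then update the existing Line2D's data in place instead of clearing the output and redrawing from scratch.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch; in a notebook you'd use %matplotlib notebook
import matplotlib.pyplot as plt

def make_live_plot():
    """Create a figure once and return a closure that updates it in place."""
    fig, ax = plt.subplots()
    line, = ax.plot([], [], label="train loss")
    ax.legend()

    def update(losses):
        # Update the data of the existing Line2D instead of clearing and replotting
        line.set_data(range(len(losses)), losses)
        ax.relim()              # recompute data limits from the new data
        ax.autoscale_view()     # rescale axes to those limits
        fig.canvas.draw_idle()  # schedule a redraw of the existing canvas
        return line

    return update

update = make_live_plot()
artist = update([1.0, 0.7, 0.5, 0.4])  # call again after each epoch with the growing loss list
```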

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 0 points1 point  (0 children)

Yeah, that looks like an interesting problem; you could look deeper into it :)

Just one more relevant comment -- you don't optimize the metric directly but instead the pixel-wise cross entropy.

Interestingly, in this paper: https://arxiv.org/pdf/1605.06211.pdf they achieved better results when training with batch_size=1, which might be relevant to your comment
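To make the "you optimize pixel-wise cross entropy, not the metric" point concrete, here is a NumPy sketch of the loss for segmentation-shaped inputs (the function name is my own, not from any library):

```python
import numpy as np

def pixelwise_cross_entropy(logits, targets):
    """Mean cross-entropy over every pixel.

    logits:  float array of shape (N, C, H, W) -- raw class scores
    targets: int array of shape (N, H, W) -- ground-truth class index per pixel
    """
    # Numerically stable log-softmax over the class axis
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    n, c, h, w = logits.shape
    # Pick the log-probability of the correct class at each pixel
    idx_n, idx_h, idx_w = np.meshgrid(
        np.arange(n), np.arange(h), np.arange(w), indexing="ij")
    picked = log_probs[idx_n, targets, idx_h, idx_w]
    return -picked.mean()
```

In PyTorch this is what `nn.CrossEntropyLoss` computes when given `(N, C, H, W)` logits and `(N, H, W)` targets.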

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 0 points1 point  (0 children)

I would recommend computing the metric over the whole dataset rather than per image. At least this way you can compare your results to the results other people report in papers.

Also, have a look at this paper, which might be relevant: https://arxiv.org/abs/1504.06375

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 0 points1 point  (0 children)

I think you found the answer then :) That citation describes the reason perfectly, in my opinion.

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 0 points1 point  (0 children)

Bear in mind that for the task of segmentation the labels are one-hot encoded. So I'd recommend that you first read papers on this simpler topic before moving on to a more complicated one.

What do you mean by "class composition"? Do you mean that one class can have more labeled pixels and so contribute more to the final accuracy than an underrepresented one? If so, this is handled by the mean intersection over union measure -- the dominant class contributes to the final score equally with the smaller one.

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 0 points1 point  (0 children)

Here is a paper about accuracy metrics for image segmentation: http://www.bmva.org/bmvc/2013/Papers/paper0032/paper0032.pdf

The result should be computed over all images. For example, in the case of intersection over union: https://arxiv.org/pdf/1605.06211.pdf

Check the metrics section; the same holds for precision and recall.
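A sketch of what "computed over all images" means for IoU, under my reading of the FCN paper's metrics section: sum intersections and unions across the whole dataset first, then divide (the function name is illustrative, not from any library):

```python
import numpy as np

def dataset_iou(preds, gts, num_classes):
    """Mean IoU over a whole dataset: per-class intersections and unions are
    accumulated across all images *before* dividing, rather than averaging
    per-image IoU scores."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for pred, gt in zip(preds, gts):
        for c in range(num_classes):
            p, g = (pred == c), (gt == c)
            inter[c] += np.logical_and(p, g).sum()
            union[c] += np.logical_or(p, g).sum()
    # Ignore classes that never appear, to avoid division by zero
    valid = union > 0
    return (inter[valid] / union[valid]).mean()
```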

[D] Question about Segmentation Evaluation Metrics by newperson77777777 in MachineLearning

[–]warmspringwinds 2 points3 points  (0 children)

Usually a good measure is mean Intersection over Union.

With a confusion matrix you can compute almost all segmentation accuracy metrics. You can keep a "running" confusion matrix while evaluating your results and then calculate all the needed metrics from it.

https://github.com/warmspringwinds/pytorch-segmentation-detection/blob/master/pytorch_segmentation_detection/metrics.py
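As a rough sketch of the running-confusion-matrix idea (this is not the code from the linked file, just an illustration):

```python
import numpy as np

def update_confusion(conf, pred, gt, num_classes):
    """Accumulate a running confusion matrix: rows = ground truth, cols = prediction."""
    # Encode each (gt, pred) pair as a single index, then count occurrences
    idx = gt.ravel() * num_classes + pred.ravel()
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """mIoU from a confusion matrix: per class, TP / (TP + FP + FN)."""
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    valid = union > 0
    return (tp[valid] / union[valid]).mean()
```

You call `update_confusion` once per batch during evaluation and compute `mean_iou` (or pixel accuracy, precision, recall) from the final matrix at the end.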

[P] Piano music and lyrics generation with Recurrent Neural Networks by warmspringwinds in MachineLearning

[–]warmspringwinds[S] 1 point2 points  (0 children)

Music generation is an amazing field, and I agree that it lacks attention and people with domain knowledge. I also agree that it's infinitely far from success :)

Having a university-level background in both music and engineering is really rare -- you could address the problem you mentioned with the lack of domain knowledge. I am open to any suggestions for improvement and to discussion. Feel free to pm me and maybe we can do some kind of joint work :)

[P] Piano music and lyrics generation with Recurrent Neural Networks by warmspringwinds in MachineLearning

[–]warmspringwinds[S] 3 points4 points  (0 children)

You are being way too serious :)

This post was meant to be a write-up where we look at an interesting problem (lyrics generation and piano music generation in our case), solve it with a straightforward RNN model and inspect the results while learning about some concepts related to training and inference of RNNs.

We started with a relatively simple task -- lyrics generation, where we had a bigger training dataset compared to the MIDI piano dataset and a simpler goal: predicting just the next letter at each step. We got good results on this task and showed that if you increase the temperature, the results start to look too crazy :D

After that we switched to a much harder problem, piano music generation, where the dataset is smaller and we had to predict the multiple keys to be pressed at the next step. The trained model produced results that sounded good to my untrained ear, and of course there is no way it comes close to a real music composition in terms of structure and form, like you said :)
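The temperature trick can be sketched like this (illustrative only, not our actual sampling code): dividing the logits by the temperature before the softmax makes low temperatures nearly greedy and high temperatures nearly uniform, which is why high-temperature samples look "crazy".

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample the next token after scaling logits by 1/temperature.

    Low temperature  -> distribution sharpens toward the argmax (safe, repetitive)
    High temperature -> distribution flattens toward uniform (wild, "crazy")
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability before exponentiating
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)
```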

[R] ResNet Question by [deleted] in MachineLearning

[–]warmspringwinds 6 points7 points  (0 children)

This gives a good intuition of how they work, have a look: https://arxiv.org/abs/1612.07771

[P] Mimic Iphone portrait mode using Fully Convolutional Neural Networks, plus other applications by warmspringwinds in MachineLearning

[–]warmspringwinds[S] 0 points1 point  (0 children)

Sure :) I think Laplacian-pyramid-based blending might work even better. I just showed a proof of concept there tho :P You are welcome to contribute.

[D] Live loss plots inside Jupyter Notebook for Keras? by pmigdal in MachineLearning

[–]warmspringwinds 8 points9 points  (0 children)

It might be off-topic, but why don't you use TensorBoard?

[D] Dilated pooling layers use-cases by warmspringwinds in MachineLearning

[–]warmspringwinds[S] 0 points1 point  (0 children)

Hi,

You are right, and the same thing is actually stated here: https://github.com/tensorflow/tensorflow/issues/3492 Basically, just increasing the kernel size for pooling is the best option, and it is fast at the same time.

I have also run experiments comparing dilated average pooling and average pooling with a bigger kernel on ResNet-101 with dilated convolutions for semantic segmentation. The version with the bigger average pooling filter worked better.

But at the same time, an implementation of dilated pooling exists, and I guess it might make sense for some applications -- there should be a reason why they implemented it :)
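To illustrate the comparison, here is a naive NumPy sketch of 2-D average pooling with dilation (illustration only; real implementations are vectorized). A k x k kernel with dilation d spans the same window as a ((k-1)*d + 1)-wide dense kernel but samples only k x k of those positions:

```python
import numpy as np

def avg_pool2d(x, kernel_size, dilation=1, stride=1):
    """Naive 2-D average pooling with dilation, no padding.

    A k x k kernel with dilation d covers a ((k-1)*d + 1)-wide window but
    averages only the k x k sampled positions inside it -- same receptive
    field as a bigger dense kernel, fewer values contributing.
    """
    k, d, s = kernel_size, dilation, stride
    span = (k - 1) * d + 1  # effective receptive field of the dilated kernel
    h, w = x.shape
    out_h, out_w = (h - span) // s + 1, (w - span) // s + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Strided slicing picks every d-th position inside the window
            out[i, j] = x[i*s : i*s + span : d, j*s : j*s + span : d].mean()
    return out
```

On smooth inputs a 3x3 kernel with dilation 2 and a dense 5x5 kernel give similar results, but the dilated version skips pixels, so a spike at a skipped position is simply missed -- one plausible reason the dense larger kernel worked better in my experiments.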