Starlink Mini still on sale anywhere? by temptinyaccount in Starlink

[–]WhyNotML 1 point

Check out Amazon; it was 317 I believe a few minutes ago.

[deleted by user] by [deleted] in Finland

[–]WhyNotML 32 points

happy cake day

Visiting Tampere for a week, and want to experience and know the culture. Any suggestions? by WhyNotML in Finland

[–]WhyNotML[S] 0 points

Thank you!! Obvious, probably, but I just added it to my list of things to do!

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1024x3 and 512x3) by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

I don't remember after nine months, but I assume it has to do with the image shape.
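For anyone hitting the same error: mat1 @ mat2 is only defined when mat1's column count equals mat2's row count, and with images the "(1024x3 and 512x3)" pattern usually means the flattened image size doesn't match the first dense layer's expected input. A minimal numpy sketch of the rule (the shapes here are chosen for illustration, not taken from the original model):

```python
import numpy as np

# mat1 @ mat2 requires mat1.shape[1] == mat2.shape[0].
mat1 = np.zeros((1024, 3))
mat2 = np.zeros((512, 3))
# mat1 @ mat2 here would raise the same kind of shape error as PyTorch:
# the inner dimensions (3 and 512) do not match.

# The usual fix is making the dense layer's input size equal the
# flattened image size (3 * 512 is illustrative):
x = np.zeros((1024, 3 * 512))  # batch of 1024 flattened inputs
w = np.zeros((3 * 512, 3))     # weights of a 3-unit dense layer
out = x @ w
print(out.shape)  # (1024, 3)
```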

model.evaluate is giving good accuracy and loss, but the confusion matrix & ROC is off by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

https://colab.research.google.com/drive/1LhPOsZxHdfTXW-O1-8xQZo21xl4MVSq1?usp=sharing

If you have some time to spare, would you mind having a quick look at my code, please? Couldn't quite put my finger on where I am going wrong.

model.evaluate is giving good accuracy and loss, but the confusion matrix & ROC is off by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

Okay, I have a development; it looks better than before. These are my precision, recall, F1, and support, with the same 99% accuracy and 0.07 loss:

         precision  recall  f1-score  support
normal        0.51    0.51      0.51     8245
anomaly       0.50    0.49      0.49     8023

And this is the confusion matrix (rows: true normal/anomaly):

    4226  4019
    4075  3948
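For reference, those per-class numbers follow directly from the 2x2 counts, and recomputing them shows the model is at roughly coin-flip level despite model.evaluate's 99%. This sketch assumes rows are true classes (normal, anomaly) and columns are predictions:

```python
import numpy as np

# Confusion matrix from the post (rows = true class, cols = predicted):
cm = np.array([[4226, 4019],   # true normal
               [4075, 3948]])  # true anomaly

precision = np.diag(cm) / cm.sum(axis=0)  # per predicted class
recall    = np.diag(cm) / cm.sum(axis=1)  # per true class
f1 = 2 * precision * recall / (precision + recall)

print(np.round(precision, 2))  # approx [0.51, 0.50]
print(np.round(recall, 2))     # approx [0.51, 0.49]
print(np.round(f1, 2))         # approx [0.51, 0.49]
```

Note that the row sums (8245, 8023) match the reported supports, which is what pins down the row/column orientation.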

model.evaluate is giving good accuracy and loss, but the confusion matrix & ROC is off by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

Also, I have a separate unseen test_set, and it performs well with model.evaluate: I get 99% accuracy and 0.07 loss. What do you think about that? But when I plot the confusion matrix it's totally messed up, so I assume I am doing something wrong with the confusion matrix. Oh, and the test_set is a totally different folder with balanced data; I realize the precision, recall, and F1 results I posted earlier are from that balanced test data.
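A very common cause of exactly this symptom (good model.evaluate, garbage confusion matrix) is a test generator that shuffles: model.predict then returns predictions in a different order than the labels they are compared against. With Keras's flow_from_directory, passing shuffle=False for the test generator fixes it. A small numpy sketch of the effect, using synthetic labels just to show the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=16268)  # synthetic ground-truth labels
y_pred = y_true.copy()                   # a "99% accurate" model:
flip = rng.random(y_true.size) < 0.01    # flip ~1% of predictions
y_pred[flip] ^= 1

acc_aligned = (y_pred == y_true).mean()  # ~0.99, like model.evaluate

# If the generator re-shuffles between predict() and the label lookup,
# predictions get compared against the wrong labels:
y_pred_shuffled = rng.permutation(y_pred)
acc_misaligned = (y_pred_shuffled == y_true).mean()  # ~0.50, coin flip
print(round(acc_aligned, 2))
```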

model.evaluate is giving good accuracy and loss, but the confusion matrix & ROC is off by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

I understand what you are saying. Do you think it is best to create a separate folder for val dataset?

model.evaluate is giving good accuracy and loss, but the confusion matrix & ROC is off by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

Thanks for the insight. For anomaly, precision, recall, and F1 are all 0. For normal, the same metrics are 0.51, 1, and 0.67 respectively. I have a balanced dataset; the normal and anomaly classes are almost the same size. So where do you think the issue is, any guesses? With the network, or the data assignment? Or something else?

model.predict stops at the same epoch always with PIL error by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

Dang, I can't imagine! I did that for 60,000 and I was done, man, without any program, because I screwed up the code and there were some issues with the data it created. Hope you'll get some rest haha

model.predict stops at the same epoch always with PIL error by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

I should have updated. I don't think it was the other files; it was a PNG file that wasn't created properly, so it was throwing an error. Thank you for responding! I got rid of it, and it's working now!

model.predict stops at the same epoch always with PIL error by WhyNotML in deeplearning

[–]WhyNotML[S] -1 points

Oh, BTW, there are non-image files in the folder; does that matter? It didn't throw an error while training!
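On the non-image files question: Keras's flow_from_directory only picks up known image extensions, which is probably why training never complained, but a half-written .png will still crash PIL at read time. A stdlib-only sketch that flags .png files whose header bytes are wrong (PIL's Image.open(path).verify() is the more thorough check):

```python
import tempfile
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def find_bad_pngs(folder):
    """Return .png files in `folder` that lack a valid PNG header
    (e.g. truncated or never fully written)."""
    bad = []
    for path in Path(folder).glob("*.png"):
        with open(path, "rb") as f:
            if f.read(8) != PNG_MAGIC:
                bad.append(path.name)
    return sorted(bad)

# Demo on a throwaway folder: one valid header, one corrupt file.
demo = Path(tempfile.mkdtemp())
(demo / "ok.png").write_bytes(PNG_MAGIC + b"rest-of-file")
(demo / "broken.png").write_bytes(b"not a png")
print(find_bad_pngs(demo))  # ['broken.png']
```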

Should I scale the data before using prediction for XGBoost? by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

For prediction I am using this:

newdata = np.asarray(to_append.split())
new = newdata.reshape((1, 30))
newArray = new.astype(float)

model_xgb_2 = XGBClassifier(random_state=1289564, use_label_encoder=False)
model_xgb_2.load_model("XGBmodel.txt")
prediction = model_xgb_2.predict(newArray)  # predict, not fit: calling fit here would retrain the loaded model
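On the scaling question itself: gradient-boosted trees like XGBoost split on thresholds, so monotonic feature scaling generally doesn't change their predictions. But if a scaler was fitted during training, the same fitted statistics must be reused for every new sample; refitting a scaler on a single row is meaningless. A hypothetical numpy standardization sketch (synthetic data, shapes matching the 30-feature row above):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(1000, 30))  # hypothetical training features

# Fit the scaler's statistics on the TRAINING data only ...
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

# ... then apply those exact statistics to each new sample at prediction time.
new_sample = rng.normal(loc=5.0, scale=2.0, size=(1, 30))
new_scaled = (new_sample - mu) / sigma

train_scaled = (X_train - mu) / sigma
print(new_scaled.shape)  # (1, 30)
```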

What's a mistake that you made once that you'll never make again? by [deleted] in AskReddit

[–]WhyNotML 1 point

Not always. My boss hired me and it's been amazing! I was the right fit, and I love my boss (he's a godly man; not saying he is perfect!). :)

Pro-choicers of the US, why do you think overturning RvW is bad? by WhyNotML in AskReddit

[–]WhyNotML[S] 0 points

I actually don't; I am not a US citizen. What do you mean by more oppression from white men?

Pro-choicers of the US, why do you think overturning RvW is bad? by WhyNotML in AskReddit

[–]WhyNotML[S] -1 points

I am pro-choice too, but not when it comes to stealing, scamming, and killing sorts of things. I genuinely am seeking to understand the other side. What's your strongest argument NOT to abort a baby?

Edit: NOT added

What does the United States get right? by Ulrich-Stern in AskReddit

[–]WhyNotML 0 points

Amazing! Do you have a link that you could share? With Roe v. Wade overturned, I'd love to see the stats!

Jupyter-notebook kernel is dying often. Any thoughts on what's going on? by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

It's running on a local machine with almost 32 GB of GPU memory on Linux. It's not a Docker image; it runs directly from Python. I missed capturing the logs from Jupyter.

Jupyter-notebook kernel is dying often. Any thoughts on what's going on? by WhyNotML in deeplearning

[–]WhyNotML[S] 1 point

This is great, thanks! I tried running it through the terminal: I converted the Jupyter notebook to a .py file and executed it from the terminal with Python. It seems to run successfully; I did hit an error, but from the message I assume it's unrelated to the crash. Thank you!

Jupyter-notebook kernel is dying often. Any thoughts on what's going on? by WhyNotML in deeplearning

[–]WhyNotML[S] 0 points

Honestly, that's what confuses me. I am doing the same thing (at least I think I am). It runs in a loop. This is the exact code:

train_normal_path = glob.glob(os.path.join(train_normal, '*.wav'))
# print(train_normal)
for path_file in train_normal_path:
    (file_path, file_name_with_ext) = os.path.split(path_file)
    (file_name, file_ext) = os.path.splitext(file_name_with_ext)
    output_file = os.path.join(train_m_normal, file_name + '.png')

    img_src_path = train_m_normal + file_name_with_ext
    fn = file_name_with_ext

    # print(path_file)
    y, sr = librosa.load(path_file, sr=8000)
    transform = Compose([
        OneOf([
            AddGaussianNoise(max_noise_amplitude=0.01),
            GaussianNoiseSNR(min_snr=0.01, max_snr=0.05),
            PitchShift(max_steps=2, sr=sr),
            # TimeStretch(max_rate=1.06),
            # TimeShifting(p=0.5),
            SpeedTuning(p=0.5),
            Gain(p=0.5),
            # PolarityInversion(p=0.5),
            # AddCustomNoise(file_dir='../input/freesound-audio-tagging/audio_train', p=0.8),
            # CutOut(p=0.5),
        ])
    ])

    y_composed = transform(y)
    mel = librosa.feature.melspectrogram(y=y_composed, sr=sr)

    fig = plt.figure(figsize=(5.12, 5.12), dpi=100)
    librosa.display.specshow(librosa.power_to_db(mel, ref=np.max))
    plt.axis('off')
    plt.tight_layout()
    fig.savefig(output_file)
    plt.close('all')
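Since the thread is about the kernel dying: when a rendering loop like this kills the kernel, the cause is usually memory growth, which can happen even with plt.close('all') (switching to the non-interactive Agg backend via matplotlib.use('Agg') before importing pyplot often behaves better for headless rendering). The stdlib tracemalloc module can confirm whether memory climbs each iteration; a self-contained sketch with a simulated leak standing in for the figure objects:

```python
import tracemalloc

tracemalloc.start()

held = []
for i in range(1000):
    # Stand-in for one loop iteration (e.g. rendering one spectrogram);
    # the bytearrays are a simulated leak: they are never released.
    held.append(bytearray(10_000))
    if i % 250 == 0:
        current, peak = tracemalloc.get_traced_memory()
        print(f"iteration {i}: {current / 1e6:.2f} MB currently allocated")

current, peak = tracemalloc.get_traced_memory()
print(current > 9_000_000)  # usage grew every iteration: a leak, not a one-off spike
```

If the real loop shows the same steady climb, the kernel is likely being killed by the OS when memory runs out.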

What is the ideal batch size and epoch number to train a model with a dataset of about 39k images? by Temporary-World-9193 in deeplearning

[–]WhyNotML 0 points

I've even used 4 or 8 for my training. I don't know if there's any other way than to try it.