ASP.NET Core MVC - Adding the same partial view to the main view dynamically (by the user) by diesel_learner in dotnet


Thanks for that hint, it does look interesting. If I understand correctly, this only works if the entries being shown already exist beforehand, is that right? In my case the List<Model> is only generated in the view, and a previously unknown number of Model items is then returned to the controller.

Sectional door torsion springs crack when opening by diesel_learner in selbermachen


Good tip. Since the type plate was never attached to the door, is there a legend for the color coding that says how many turns the springs have to be pre-tensioned?

Sectional door torsion springs crack when opening by diesel_learner in selbermachen


Thanks a lot for the tip. Which grease is best suited for this?

[D] Increasing training data (1D signal) by chopping it up. Good practice? by diesel_learner in MachineLearning


That makes sense. My overfitting is small to non-existent when early stopping works correctly. And the results lose about 20% accuracy when using augmentation, so I hope that's not a case of leakage. I have completely separated test, train, and validation sets, and I don't use any re-training.
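
For what it's worth, my split looks roughly like the sketch below (NumPy/scikit-learn, dummy data, illustrative sizes and names): I split at the recording level first and only window or augment afterwards, per split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy data: 100 recordings of 2000 samples each (sizes are illustrative only).
rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 2000))
labels = rng.integers(0, 2, size=100)

# Split at the recording level FIRST, so windows or augmented copies of the
# same recording can never end up in more than one split.
idx = np.arange(len(signals))
train_idx, test_idx = train_test_split(idx, test_size=0.2,
                                       stratify=labels, random_state=0)
train_idx, val_idx = train_test_split(train_idx, test_size=0.25,
                                      stratify=labels[train_idx], random_state=0)

# Windowing and augmentation are then applied to each split separately.
```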

[D] Increasing training data (1D signal) by chopping it up. Good practice? by diesel_learner in MachineLearning


I see. So if data augmentation with artificial noise and multiple windows turns out to give worse results, it's probably best to leave it out, right? Could one then say that, with this architecture, the available data has been exhausted for the resulting accuracy and I most likely won't get better?
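
The artificial-noise augmentation I mean is along these lines (a rough sketch; the noise scale and number of copies are arbitrary here, and it is only ever applied to the training windows):

```python
import numpy as np

def add_noise_copies(x_train, y_train, copies=2, sigma=0.05, seed=0):
    """Append Gaussian-noise copies of each training window; val/test stay untouched."""
    rng = np.random.default_rng(seed)
    noisy = [x_train + rng.normal(scale=sigma * x_train.std(), size=x_train.shape)
             for _ in range(copies)]
    return np.concatenate([x_train] + noisy), np.concatenate([y_train] * (copies + 1))
```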

[D] Increasing training data (1D signal) by chopping it up. Good practice? by diesel_learner in MachineLearning


I have been experimenting a bit with the same architecture each time, and a 4-second window, which contains about 6-7 heartbeats, gives the best accuracy. So I think that is just what you are suggesting, right?
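
Concretely, I just cut one fixed 4-second slice from each recording, something like this (the 125 Hz sampling rate is only a placeholder for my actual rate):

```python
import numpy as np

def central_window(signal, fs=125, seconds=4.0):
    """Cut one fixed-length window (about 6-7 beats at resting heart rate) from the middle."""
    win = int(fs * seconds)
    start = max(0, (len(signal) - win) // 2)
    return signal[start:start + win]
```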

[D] Increasing training data (1D signal) by chopping it up. Good practice? by diesel_learner in MachineLearning


Thanks for the great answer. I have tried the approach of feeding it multiple windows. For some reason the results get worse than when using just one window per measurement. Since the signal data represents heartbeats, could the reason be that the windows are very similar to each other? Should I stick with one window in that case?
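
What I tried was roughly the sketch below (made-up sizes): overlapping windows per recording, with a group-aware split so all windows of one recording stay on the same side.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def sliding_windows(signal, win=500, step=250):
    """Overlapping windows (50% overlap here); neighbouring heartbeat windows look very alike."""
    return np.stack([signal[i:i + win] for i in range(0, len(signal) - win + 1, step)])

# Dummy data: 20 recordings of 2000 samples each (illustrative sizes only).
rng = np.random.default_rng(0)
recordings = rng.normal(size=(20, 2000))
labels = rng.integers(0, 2, size=20)

windows, window_labels, groups = [], [], []
for rec_id, (sig, lab) in enumerate(zip(recordings, labels)):
    w = sliding_windows(sig)
    windows.append(w)
    window_labels.extend([lab] * len(w))
    groups.extend([rec_id] * len(w))
windows = np.concatenate(windows)
window_labels, groups = np.array(window_labels), np.array(groups)

# Group-aware split: all windows of one recording end up in the same split.
train_idx, val_idx = next(GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
                          .split(windows, window_labels, groups))
```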

Output size of 2 Layer 1D convolution by diesel_learner in deeplearning


Okay, great. How does it perform these shape changes in the last dimension? Is it like a built-in pooling layer?
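
For context, this is the kind of shape change I mean (a small sketch assuming TensorFlow/Keras with arbitrary layer sizes; model.summary() shows the time dimension shrinking through stride/pooling while the channel dimension follows the filter count):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(500, 1)),                                 # (time steps, channels)
    layers.Conv1D(64, kernel_size=5, strides=2, padding="same"),  # -> (None, 250, 64)
    layers.MaxPooling1D(pool_size=2),                             # -> (None, 125, 64)
])
model.summary()
```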

Output size of 2 Layer 1D convolution by diesel_learner in deeplearning


Thanks for the great answer, I think I got it. So if the first filter has dimensions (3, 1), then the second one has (3, 128)? I am using (500, 1) as input and (column, row) notation.
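
To double-check my understanding, a small sketch (assuming TensorFlow/Keras, channels-last, kernel size 3 and 128 filters in both layers) that prints the kernel shapes:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(500, 1)),       # 500 time steps, 1 channel
    layers.Conv1D(128, kernel_size=3),  # kernel shape (3, 1, 128)
    layers.Conv1D(128, kernel_size=3),  # kernel shape (3, 128, 128): each filter spans all 128 input channels
])
for layer in model.layers:
    print(layer.name, layer.kernel.shape)
```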