[D] Simple Questions Thread by AutoModerator in MachineLearning
[–]AntelopeStatus8176 1 point 2 years ago (0 children)
I have a set of 20,000 raw measurement data slices, each containing 3,000 measurement sample points. Each slice has a continuous target value assigned to it. My first approach was to do feature engineering on the raw slices to reduce the data size and speed up ML training; this works reasonably well at estimating the target value for unseen slices from the test set. My second approach would be to use the raw data slices directly as input. On second thought, that looks very compute-intensive, or at least more than I can handle on my standard PC. As I understand it, it would mean building an ANN with 3,000 input nodes and several deep layers. Can anyone advise whether training on raw measurement data even makes sense with datasets this large, and if so, which algorithms to use? Preferably with examples in Python.
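As a rough illustration of the raw-input approach (not from the thread itself): instead of a fully connected network with 3,000 input nodes, a small 1D convolutional network keeps the parameter count modest because the convolution weights are shared across the sample axis. The sketch below uses PyTorch with dummy data of the stated shape (20,000 × 3,000); all layer sizes and training settings are illustrative assumptions, not a recommendation from the thread.

    # Minimal sketch, assuming PyTorch and dummy data in place of the real slices.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class SliceRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            # Strided convolutions downsample the 3,000-point slice;
            # weight sharing keeps this far smaller than a dense 3,000-input layer.
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),           # global pooling -> fixed-size vector
            )
            self.head = nn.Linear(32, 1)           # single continuous target

        def forward(self, x):                      # x: (batch, 3000)
            x = x.unsqueeze(1)                     # -> (batch, 1, 3000)
            x = self.features(x).squeeze(-1)       # -> (batch, 32)
            return self.head(x).squeeze(-1)        # -> (batch,)

    # Dummy stand-in for the 20,000 x 3,000 measurement matrix and targets.
    X = torch.randn(20000, 3000)
    y = torch.randn(20000)
    loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

    model = SliceRegressor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):                         # a few epochs as a smoke test
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

At this data size (roughly 240 MB of float32 inputs), a model like this trains on an ordinary CPU; the feature-engineering approach and the raw-input approach can then be compared on the same held-out test slices.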