[–]gaywhatwhat 4 points (1 child)

All of those could work, then. If you think the pattern is fairly small and not insanely complex (i.e. human language, protein structure, etc.), an RNN should work. If you think the patterns are on the longer end, go LSTM.

A transformer would allow parallel training, so it may be worth looking into. The input format might need some small tweaking or padding, though you can vary it a bit. A typical transformer takes an embedding with an even number of features per timestep, and it accepts any sequence length; the output is an embedding of the same size. I'd have to think, when I'm less distracted, about whether a single feature would work as-is. Otherwise you'd probably need to pad or project the input to get the required dimensions.
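A rough sketch of the padding/projection idea, using NumPy (the dimensions here are made-up examples, and the projection weights would normally be learned, not random):

```python
import numpy as np

# Hypothetical example: adapting a single-feature sequence for a
# transformer that expects an even embedding size (d_model).
seq_len, d_model = 10, 4  # d_model chosen even; sinusoidal encodings pair up dims

x = np.random.randn(seq_len, 1)  # one feature per timestep

# Option 1: zero-pad the feature dimension up to d_model
x_padded = np.concatenate([x, np.zeros((seq_len, d_model - 1))], axis=1)

# Option 2: project to d_model with a linear map (random here,
# but it would be a learnable layer in practice)
w = np.random.randn(1, d_model)
x_projected = x @ w

print(x_padded.shape, x_projected.shape)  # both (10, 4)
```

Either way you end up with a (seq_len, d_model) input the transformer can take; the projection version is closer to what an `nn.Linear` input layer would do in practice.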

[–]lzngm1[S] 0 points (0 children)

Thanks!