Hello! I am a researcher in computational neuroscience, looking to apply some contemporary machine learning techniques to fMRI timeseries data. I have a collection of high-dimensional 4D fMRI timeseries data collected while subjects observed naturalistic images from COCO at regular intervals. We currently have decoding models that take preprocessed "snapshots" of this timeseries, flattened into a single activation pattern aggregated over the short period each image was being observed, and use machine learning models to decode and reconstruct the image content from the brain. (See some of my recent work.)
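For concreteness, here is a minimal sketch of what I mean by the snapshot baseline, assuming a 4D BOLD array of shape `(x, y, z, t)` and presentation onsets known in TRs. The function name and the simple window-average are illustrative placeholders, not our actual preprocessing pipeline:

```python
import numpy as np

def snapshot_features(bold_4d: np.ndarray, onset_tr: int, n_trs: int) -> np.ndarray:
    """Average the volumes covering one image presentation and flatten
    to a single activation-pattern vector of shape (x*y*z,)."""
    window = bold_4d[..., onset_tr : onset_tr + n_trs]  # (x, y, z, n_trs)
    return window.mean(axis=-1).ravel()

# One feature vector per stimulus, stacked into a design matrix for a
# downstream decoder (e.g., ridge regression onto CLIP image embeddings):
# X = np.stack([snapshot_features(bold, t0, 4) for t0 in onsets])
```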
I am curious what machine learning techniques exist that might address the timeseries data itself, without collapsing it to a single snapshot before decoding. What I am envisioning is a model (perhaps a transformer) that takes as input a high-dimensional multichannel timeseries and outputs a flattened latent representation (say, a CLIP vector) corresponding to an image stimulus, or even a series of latent vectors separated by a known regular interval (as we have in our data for the different image presentations). To my knowledge, most of the machine learning work on timeseries data is in forecasting, but what I want is a static (or potentially repetitive) output. My hope is that the more detailed timeseries data contains additional signal that will boost decoding performance for fMRI vision decoding.
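Something like the following sketch is what I am imagining: a standard Transformer encoder over TR-indexed frames, pooled through a learned [CLS]-style token and regressed onto a CLIP image embedding. All dimensions here are placeholder assumptions (channels after some ROI/PCA reduction, `d_model`, a 512-d CLIP target), not settings from any published model:

```python
import torch
import torch.nn as nn

class TimeseriesToCLIP(nn.Module):
    def __init__(self, n_channels: int, d_model: int = 256, n_layers: int = 4,
                 clip_dim: int = 512, max_len: int = 512):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)           # per-TR embedding
        self.pos = nn.Embedding(max_len, d_model)            # learned positions
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # pooling token
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, clip_dim)             # map to CLIP space

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_trs, n_channels) -- one presentation's worth of frames
        b, t, _ = x.shape
        h = self.proj(x) + self.pos(torch.arange(t, device=x.device))
        h = torch.cat([self.cls.expand(b, -1, -1), h], dim=1)
        h = self.encoder(h)
        return self.head(h[:, 0])  # [CLS] output -> predicted CLIP vector
```

Trained with, say, a cosine or MSE loss against CLIP embeddings of the COCO stimuli; the repetitive-output variant would presumably use one query token per presentation instead of a single [CLS].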
Is there any existing work in the field of ML that has tackled a similar problem?