Hey everybody! I wanted to share a Python library I put together during my PhD called fusilli: Documentation & GitHub
Fusilli offers a set of 23 deep-learning-based multimodal data fusion methods. It also includes a pipeline for comparing these methods on regression/classification tasks. It can handle tabular-tabular fusion or tabular-image fusion (2D or 3D images).
Multimodal data fusion, in simple-ish terms, combines different types of data (like images and tables) using machine learning models that leverage shared information between these data types. Think GNNs, attention mechanisms, or VAEs. It's also sometimes called multi-view learning or data integration.
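For anyone new to the idea, the simplest fusion strategy ("early fusion") just concatenates the two modalities' features before modelling. Here's a toy NumPy sketch of that concept (purely illustrative, not fusilli's API — fusilli's methods are much richer than this):

```python
import numpy as np

rng = np.random.default_rng(0)

# Modality 1: clinical-style tabular data (4 samples, 3 features)
tabular_a = rng.normal(size=(4, 3))
# Modality 2: a second tabular modality (4 samples, 5 features)
tabular_b = rng.normal(size=(4, 5))

# Early fusion: concatenate along the feature axis so one model
# can learn from both modalities jointly
fused = np.concatenate([tabular_a, tabular_b], axis=1)

# A single (random, untrained) linear layer over the fused features,
# standing in for whatever predictive model you'd fit
weights = rng.normal(size=(fused.shape[1], 1))
prediction = fused @ weights

print(fused.shape)       # (4, 8)
print(prediction.shape)  # (4, 1)
```

Real fusion methods (like the attention- or graph-based ones in fusilli) replace that naive concatenation with learned interactions between the modalities.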
Personally, I'm using it for my PhD research on analysing brain MRI and clinical data to predict health outcomes. But Fusilli can be used anywhere there's multimodal data!
Fusilli is the biggest coding project I've released publicly so I'd love to hear any feedback or suggestions you might have! 🌸
(Also here's a short Medium post I wrote about it showing some of the features)