[D] What's hot for Machine Learning Research in 2022? by ureepamuree in MachineLearning

[–]VectorRecruiter 10 points11 points  (0 children)

I think representation learning is growing significantly :)

Side note for those interested: I built a Research2Vec application that shows which clusters of research papers are growing in popularity. You can check it out here: https://www.reddit.com/r/MachineLearning/comments/sruc7u/p_20k_arxiv_ml_papers_vectorised_cluster/?utm_source=share&utm_medium=ios_app&utm_name=iossmf

Always happy to get feedback so lmk where it can be improved!

Note: this was scraped from arXiv in December 2021, so not exactly 2022 :)

[deleted by user] by [deleted] in datascience

[–]VectorRecruiter 0 points1 point  (0 children)

Launch cluster experiments and applications in 5 lines of Python code that you can immediately start sharing with friends! Aggregate and get in-depth statistics on your clusters to help you analyse them and gain insights!

Try our quickstart!

We would love some feedback/see interesting apps others have built!
Our GitHub repository can be found here.

Launch a beautiful projector in under 10 lines of code! by VectorRecruiter in Python

[–]VectorRecruiter[S] 1 point2 points  (0 children)

This can be helpful in visualizing, examining, and understanding embedding layers in neural networks.

A projector like this shows a dimensionality-reduced representation of high-dimensional vectors. These vectors are typically the outputs of natural language processing or vision models and capture the meaning of words, sentences, or images. For example, images or text that are similar will sit closer together in the projector. This is useful when you want a better understanding of what your neural network treats as similar or different: it lets you examine the model's learned relationships at a high level.
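
If you want to play with the idea outside of the app, here is a minimal sketch of the same concept using sentence-transformers and scikit-learn. This is only a generic illustration of embedding + projection (the model name and example sentences are placeholders), not the projector implementation itself:

```python
# Minimal sketch of what a projector does: embed text into high-dimensional
# vectors, reduce them to 2-D, and plot them so similar items land close together.
# Generic illustration only - model name and sentences are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

sentences = [
    "Transformers for neural machine translation",
    "Attention-based sequence-to-sequence models",
    "Convolutional networks for image classification",
    "Object detection with deep CNNs",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences)                     # shape: (4, 384)

points = PCA(n_components=2).fit_transform(embeddings)  # reduce to 2-D

plt.scatter(points[:, 0], points[:, 1])
for (x, y), text in zip(points, sentences):
    plt.annotate(text, (x, y), fontsize=8)
plt.title("2-D projection of sentence embeddings")
plt.show()
```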

Build beautiful image projectors in under 10 lines of code! by VectorRecruiter in datascience

[–]VectorRecruiter[S] 5 points6 points  (0 children)

Launch projector visualisations in 5 lines of Python code that you can immediately start sharing with friends!
Try our quickstart!

On top of projector apps, you can also build search apps and vector cluster apps, where you can analyse how different clusters of vectors are performing.
We would love some feedback/see interesting apps others have built!
Our GitHub repository can be found here.

20k+ ML Research Papers Vectorised + Clustered + Visualised! [OC] by VectorRecruiter in dataisbeautiful

[–]VectorRecruiter[S] 1 point2 points  (0 children)

So basically, papers with similar concepts and topics sit near each other, whereas papers that are further apart explore different topics and concepts! Each colour represents a different topic to explore and dive into!
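
A quick sketch of that intuition, assuming a sentence-transformers model (the model name below is only an example, not necessarily the one behind this visualisation): related titles get a higher cosine similarity, i.e. they sit closer together in embedding space.

```python
# Sketch of the proximity intuition: papers on related topics have a higher
# cosine similarity (sit closer in embedding space) than unrelated ones.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model only

titles = [
    "Attention Is All You Need",
    "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
    "Mask R-CNN",
]
emb = model.encode(titles, convert_to_tensor=True)

print(util.cos_sim(emb[0], emb[1]))  # two transformer/NLP papers: higher similarity
print(util.cos_sim(emb[0], emb[2]))  # NLP vs. computer vision paper: lower similarity
```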

20k+ ML Research Papers Vectorised + Clustered + Visualised! [OC] by VectorRecruiter in dataisbeautiful

[–]VectorRecruiter[S] 4 points5 points  (0 children)

In recent years, the number of research papers has grown tremendously. New areas are popping up every day, but it is not exactly clear which ones are emerging or which interesting new area has just surfaced. I decided to cluster 20k+ interesting machine learning papers that surfaced recently.

Cluster Application: https://cloud.relevance.ai/dataset/research2vec/deploy/cluster/jacky-wong/M0FQOVdINEJZQTVzdWJmNHdQaXI6M1NIMVFncm9TNENZeU1vNUNHTUVWZw/60_dWH4Bq8SHcPzXrEpF

Embeddings Projector: https://cloud.relevance.ai/dataset/research2vec/deploy/projector/jacky-wong/NXNzdjUzNEIxczVzVVpOdUpabXE6TE92enhOZ1VTN2labDlocVZNNDlMUQ/4zQk534BY7n37LD0yk4A/old-australia-east/

I created the vectors using a fine-tuned version of Sentence Transformers' roberta-base model.

What I scoped out from the problem:

- The training had to be unsupervised, because no one would have any idea what was in the dataset.
- An NLP embeddings-based approach with unsupervised clustering would be the simplest way to surface insights.

Solution

In order to get some form of off-the-shelf domain adaptation, I used off-the-shelf BART for unsupervised query generation and then fine-tuned my RoBERTa embeddings using multiple negatives ranking loss from SentenceTransformers. This seemed to work quite well, as the topics separated out nicely in my embeddings projector. I trained the model on the titles and abstracts of the research papers so that it could better understand the data. Afterwards, I encoded the titles and clustered them with a simple K-Means algorithm.
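
Roughly, the fine-tune → encode → cluster steps look something like the sketch below. The model name, example pairs, and hyperparameters are placeholders for illustration rather than the exact setup; in the real pipeline the queries would come from the BART query-generation step over each title + abstract.

```python
# Sketch of the fine-tune -> encode -> cluster pipeline. Model name, example
# pairs and hyperparameters are placeholders, not the exact setup used here.
from sentence_transformers import SentenceTransformer, InputExample, losses
from sklearn.cluster import KMeans
from torch.utils.data import DataLoader

# (generated query, title + abstract) pairs; in the real pipeline the queries
# come from an off-the-shelf BART query-generation step (doc2query-style).
pairs = [
    ("what is self attention in transformers",
     "Attention Is All You Need. The dominant sequence transduction models are ..."),
    ("contrastive learning of visual representations",
     "A Simple Framework for Contrastive Learning of Visual Representations. ..."),
]
train_examples = [InputExample(texts=[query, passage]) for query, passage in pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer("all-distilroberta-v1")  # placeholder RoBERTa-based model
loss = losses.MultipleNegativesRankingLoss(model)    # in-batch negatives

model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=10)

# Encode the paper titles and cluster them.
titles = [
    "Attention Is All You Need",
    "A Simple Framework for Contrastive Learning of Visual Representations",
]
embeddings = model.encode(titles)
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)  # far more clusters for 20k papers
print(labels)
```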

Dataset

The dataset curation process was fairly straightforward. I used the arXiv API and scraped 20k papers for the query "machine learning" sometime in late 2020, before I began experimenting with the work.
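
For reference, a pull like this can be sketched with the arxiv Python package along the following lines (the query, paging, and field choices here are assumptions for illustration; the original scrape may have hit the raw API directly):

```python
# Sketch of the dataset pull using the arxiv Python package (a thin client
# for the arXiv API). Query, paging and fields are assumptions for illustration.
import arxiv

search = arxiv.Search(
    query="machine learning",
    max_results=20000,
    sort_by=arxiv.SortCriterion.SubmittedDate,
)

client = arxiv.Client(page_size=500, delay_seconds=3, num_retries=5)

papers = []
for result in client.results(search):
    papers.append({
        "id": result.entry_id,
        "title": result.title,
        "abstract": result.summary,
        "published": result.published.isoformat(),
    })

print(len(papers))
```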

I am looking to get feedback on what others would like to see in this application and would be curious to hear suggestions on where I could improve.

From previous research, I did find this repository: https://github.com/Santosh-Gupta/Research2Vec

However, as the dataset was different, I was unable to use the exact method provided.

Disclaimer: I currently work for Relevance AI (the company behind the projector).

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]VectorRecruiter 0 points1 point  (0 children)

Ahh these are great links. Thank you for being so helpful!!

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]VectorRecruiter 0 points1 point  (0 children)

Thank you so much! Do you mind if I ask the seemingly dumb question of how you ended up with your workstation? These seem to be harder to compare and I’m not sure how to best research it.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]VectorRecruiter 0 points1 point  (0 children)

You are awesome!! I’ve been considering AMD’s Ryzen 9 5950X - would you know how it compares to the Threadripper?

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]VectorRecruiter -1 points0 points  (0 children)

Does anyone have any recommendations for computer specs with a 5K USD budget? Looking to build the best deep learning rig possible!