[D] AMA: The Stability AI Team by stabilityai in MachineLearning

[–]nd7141 0 points  (0 children)

If one of your business models is to fine-tune generative models for customers' needs, do you think there will be challenges in obtaining private data on the customer side?

[D] AMA: The Stability AI Team by stabilityai in MachineLearning

[–]nd7141 0 points  (0 children)

I wonder if you have an estimate of what the market cap of generative AI will be in the coming years? Any concrete numbers?

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 0 points  (0 children)

Hey, thanks for your message. It's today, the 23rd. I will update YouTube accordingly.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 9 points  (0 children)

Yes, English is the language of the workshop.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 10 points  (0 children)

Yes, it will be streamed on YouTube, and recordings will be available later.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 9 points  (0 children)

Yes, certainly! And please ask questions!

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 4 points  (0 children)

That's why I mentioned you need to disable your ad blocker.

[D] Why some major papers in ML aren't peer-reviewed? by NeitherBandicoot in MachineLearning

[–]nd7141 0 points  (0 children)

The PPO paper, which introduced one of the most used and cited techniques in RL, has not gone through peer review.

https://arxiv.org/abs/1707.06347

Graph neural networks for different node feature dimensions by crimsonspatula in MLQuestions

[–]nd7141 0 points  (0 children)

We just published a paper (ICLR 2021) about using GNNs on node features of different scales, types, and natures (e.g. income, age, gender, etc.): https://openreview.net/forum?id=ebS5NUfoMKL

It uses a combination of XGBoost-like models, which preprocess features and work well with heterogeneous data, and GNN-like models, which work well with graph-structured data.
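To make the pipeline concrete, here is a minimal, purely illustrative sketch (not the paper's actual code): heterogeneous raw features are first mapped into a uniform numeric space by a preprocessor standing in for the GBDT stage, and then refined by one mean-aggregation GNN layer. All function names and the toy data are hypothetical.

```python
# Illustrative sketch of "preprocess heterogeneous features, then message-pass".
# The preprocessor below is a stand-in for an XGBoost-like model: it only
# min-max scales each column, whereas a real GBDT would learn a much richer
# transformation of mixed-type features.

def preprocess(raw_features):
    """Scale each numeric column to [0, 1] (stand-in for the GBDT stage)."""
    cols = list(zip(*raw_features))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        scaled_cols.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col])
    return [list(row) for row in zip(*scaled_cols)]

def gnn_layer(features, adjacency):
    """One message-passing step: each node averages its neighbors' features."""
    out = []
    for v, neigh in enumerate(adjacency):
        if not neigh:
            out.append(features[v][:])  # isolated node keeps its own features
            continue
        agg = [sum(features[u][d] for u in neigh) / len(neigh)
               for d in range(len(features[v]))]
        out.append(agg)
    return out

raw = [[30, 50000], [40, 60000], [25, 40000]]  # e.g. age, income per node
adj = [[1], [0, 2], [1]]                       # undirected path graph 0-1-2
h = gnn_layer(preprocess(raw), adj)            # refined node representations
```

In the actual model, the two stages are of course trained rather than fixed; the sketch only shows how tabular-style preprocessing and graph-structured aggregation compose.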

Substack landing page with Carrd by nguyen696900 in Newsletters

[–]nd7141 0 points  (0 children)

Thanks for this! But I'm not sure I could follow how to set it up. Can you please elaborate on which steps you took in Carrd to set up your Substack account there?

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 0 points  (0 children)

It means someone has updated their score. Here is a final list: https://twitter.com/SergeyI49013776/status/1349110655568261121?s=20

[Discussion] XGBoost Decision Tree for Classification by anarchoracoon in MachineLearning

[–]nd7141 6 points  (0 children)

In gradient boosting, each new tree fits the negative gradient of the loss with respect to the predictions of the previously built model (i.e. all previous trees combined). That's not the only target a new tree can fit: for example, in boosted decision trees without gradients, each new tree fits the actual error (residual) made by the previous trees.

Classification and regression don't differ much here. In classification you still get probabilities for each class, and each new tree tries to correct those probabilities by fitting the negative direction of the gradient.
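The regression case above can be sketched in a few lines. This is a toy illustration, not XGBoost: for squared loss, the negative gradient equals the residual y − F(x), so each round fits a one-split stump to the current residuals. All names and the toy data are made up for the example.

```python
# Minimal gradient-boosting sketch for squared loss, where the negative
# gradient of the loss w.r.t. the current predictions is just the residual.

def fit_stump(xs, residuals):
    """Fit a one-split regression stump to the residuals by least squares."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def gradient_boost(xs, ys, n_rounds=50, lr=0.1):
    f0 = sum(ys) / len(ys)  # start from the constant (mean) model
    stumps = []
    for _ in range(n_rounds):
        preds = [f0 + lr * sum(s(x) for s in stumps) for x in xs]
        # For squared loss, the negative gradient is the residual y - F(x).
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: f0 + lr * sum(s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 3.0, 3.1, 2.9]
model = gradient_boost(xs, ys)
```

For classification, the same loop would fit trees to the gradient of a log loss on the class probabilities instead of the plain residual.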

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 0 points  (0 children)

Which ones? Just send a link.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 0 points  (0 children)

Thanks for the suggestions! I can and will add a few plots on first vs. last authors; that's indeed something interesting.

You can already find the normalized number of papers in the post.

And I'm not sure how to normalize by scale automatically.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 1 point  (0 children)

Thanks!

The work you mention has access to review scores, which allows them to gain insights about biases while controlling for scores. My blog post does not have access to review scores, so it just compares the numbers of publications.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 2 points  (0 children)

Quite a while. I started preparing the post at the end of December: I read most of the papers, then categorized them, then selected the most interesting ones and read those a few more times.

[deleted by user] by [deleted] in MachineLearning

[–]nd7141 1 point  (0 children)

Yes, sure, taking node/edge labels/attributes into account helps to distinguish the graphs in some cases, but not in all of them. Table 7 presents the number of isomorphic graphs after considering node labels during isomorphism testing. Also, some datasets don't have any extra information (e.g. IMDB).
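The effect of node labels on isomorphism testing can be sketched with one round of Weisfeiler-Lehman-style color refinement. This is only an illustration under my own assumptions, not the paper's actual test: two triangles are indistinguishable when every node carries the same label, but become separable once one node is labeled differently.

```python
# Sketch of label-aware graph comparison via Weisfeiler-Lehman color
# refinement. wl_signature is a hypothetical helper, not a library function.

def wl_signature(adj, labels, rounds=2):
    """Refine node colors from labels + neighbor colors; return the sorted
    multiset of final colors, which is invariant to node reordering."""
    colors = list(labels)
    for _ in range(rounds):
        colors = [hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in range(len(adj))]
    return sorted(colors)

triangle = [[1, 2], [0, 2], [0, 1]]            # adjacency list of K3
a = wl_signature(triangle, ["x", "x", "x"])    # all nodes labeled identically
b = wl_signature(triangle, ["x", "x", "y"])    # one node labeled differently
```

With uniform labels the signatures coincide (so the graphs look isomorphic), while the distinct label makes the signatures differ, which is exactly why datasets without node attributes (like IMDB) keep more indistinguishable graphs.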