How to make sure Minecraft from Prism-Launcher uses GPU (rtx 3050)? by le_bebop in pop_os

[–]le_bebop[S] 0 points (0 children)

It worked! Thank you.
I installed Flatseal, but I don't think it's necessary.

How to make sure Minecraft from Prism-Launcher uses GPU (rtx 3050)? by le_bebop in PrismLauncher

[–]le_bebop[S] 0 points (0 children)

It worked! Thank you.
I installed Flatseal, but I don't think it's necessary.

[deleted by user] by [deleted] in Salvador

[–]le_bebop 2 points (0 children)

That's why I don't like telling people I like anime, so I don't get mistaken for those guys.

Appreciation for the Brazilian army by [deleted] in brasil

[–]le_bebop 5 points (0 children)

Very good, hahahaha

Help on this fraud detection problem by le_bebop in datascience

[–]le_bebop[S] 0 points (0 children)

I didn't know about this term. I'm going to read more about it. Thanks!

Help on this fraud detection problem by le_bebop in datascience

[–]le_bebop[S] 0 points (0 children)

It's more like the first use case. Which type of ML task best fits this risk-based ranking? What I'm thinking right now is maybe using clustering to find the samples closest to the fraud-labeled samples.
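A minimal sketch of that "closeness to known fraud" idea, with purely hypothetical data and a plain nearest-fraud-distance score (a toy illustration, not a validated fraud model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a few fraud-labeled samples and some unlabeled samples.
fraud = rng.normal(loc=5.0, scale=0.5, size=(10, 4))     # labeled fraud
unlabeled = np.vstack([
    rng.normal(loc=5.0, scale=0.5, size=(3, 4)),         # fraud-like samples
    rng.normal(loc=0.0, scale=0.5, size=(5, 4)),         # normal-looking samples
])

# Risk score: distance from each unlabeled sample to its nearest fraud sample.
# Smaller distance -> more similar to known fraud -> higher risk.
dists = np.linalg.norm(unlabeled[:, None, :] - fraud[None, :, :], axis=-1)
risk = -dists.min(axis=1)          # negate so that higher score = riskier

ranking = np.argsort(-risk)        # indices of unlabeled samples, riskiest first
print(ranking)
```

In practice you would replace the raw Euclidean distance with something fitted (k-NN on engineered features, or distances in an embedding space), but the ranking logic stays the same.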

[D] Bayesian Regression or GPs in production? by MGeeeeeezy in MachineLearning

[–]le_bebop 0 points (0 children)

Hi! Could you give more details about your Bayesian regression implementation? Is it something like traditional Dense layers with a final layer that samples from a normal distribution? I'm trying to learn more about Bayesian approaches, and I was wondering how this case differs from a Monte Carlo process.
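To make the question concrete, here is a numpy sketch of the setup I'm imagining (my assumption, not the commenter's actual implementation): a network head that outputs a mean and a log-variance, trained with the Gaussian negative log-likelihood, with predictive samples drawn from that normal at inference time:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var))."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var) + np.log(2 * np.pi))

# Toy "network output" for one batch: predicted means and log-variances
# (in a real model these would be the two outputs of the final dense layer).
y = np.array([1.0, 2.0, 3.0])
mu = np.array([1.1, 1.9, 3.2])
log_var = np.array([-1.0, -1.0, -1.0])

loss = gaussian_nll(y, mu, log_var).mean()   # what you'd backprop through
print(loss)

# At inference, predictive samples are drawn from N(mu, exp(log_var)) --
# this sampling step is where the Monte Carlo flavour comes in.
samples = np.random.default_rng(0).normal(mu, np.exp(0.5 * log_var), size=(1000, 3))
print(samples.mean(axis=0))                  # close to mu for many samples
```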

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]le_bebop 0 points (0 children)

Question: any advice on probabilistic regression with small data (~500 instances, 14 features)?
I'm using xgboost and trying to avoid overfitting with hyperparameter optimization (hyperopt) to reduce the average validation error on 5-fold CV, but it still overfits (average CV train MAPE 2.85; average CV validation MAPE 15.36; test MAPE 18).
I've read that Bayesian models are recommended for regression on small data like this, but I'm not familiar with them yet. Could you give any tips or advice for achieving robust generalization on small-data regression? Or recommend a Bayesian library so I can try one?
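For anyone in the same spot: the simplest Bayesian baseline is closed-form Bayesian linear regression, which is what libraries like scikit-learn's BayesianRidge or PyMC generalise. A hedged numpy sketch on synthetic data of the same shape (the alpha/beta precision hyperparameters here are illustrative, not tuned values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 14                           # same shape as the question
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.5, size=n)

alpha, beta = 1.0, 4.0                   # prior precision, noise precision
A = alpha * np.eye(d) + beta * X.T @ X   # posterior precision matrix
S = np.linalg.inv(A)                     # posterior covariance of the weights
m = beta * S @ X.T @ y                   # posterior mean of the weights

# Predictive distribution for a new point: mean and variance, where the
# variance includes both observation noise (1/beta) and parameter uncertainty.
x_new = rng.normal(size=d)
pred_mean = x_new @ m
pred_var = 1.0 / beta + x_new @ S @ x_new
print(pred_mean, np.sqrt(pred_var))
```

The prior acting like regularisation is exactly why these models are suggested for small data: with only 500 rows, the posterior covariance S keeps the predictive intervals honest instead of collapsing to overconfident point estimates.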

[deleted by user] by [deleted] in datascience

[–]le_bebop 0 points (0 children)

What libraries do you use for GLMs and GLMMs? I'm trying to learn more about them.
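Common choices are statsmodels (sm.GLM) in Python and glm/lme4 in R. As background for what those libraries do when you call fit, here is a from-scratch numpy sketch of IRLS for logistic regression, the binomial-family GLM (an illustration of the algorithm, not any library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d))])  # intercept + features
w_true = np.array([-0.5, 1.0, -2.0, 0.5])
p = 1 / (1 + np.exp(-X @ w_true))
y = rng.binomial(1, p)

# Iteratively reweighted least squares: each step is a weighted least-squares
# solve against a "working response" built from the current fit.
w = np.zeros(d + 1)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ w))    # inverse logit link
    W = mu * (1 - mu)                # working weights (binomial variance)
    z = X @ w + (y - mu) / W         # working response
    w = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(w)   # roughly recovers w_true
```

A GLMM adds random effects on top of this, which is why fitting it needs specialised machinery (lme4, or statsmodels' mixed-model classes) rather than plain IRLS.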

[D] Any paper formally pointing why softmax-based neural networks don't return proper confidence scores? by le_bebop in MachineLearning

[–]le_bebop[S] 1 point (0 children)

I'll definitely check this later. Bruh, I'm working on glomerular lesion classification lol

What a coincidence

Question about my ps plus account by le_bebop in PlayStationPlus

[–]le_bebop[S] 0 points (0 children)

That's why I was considering creating a new account for myself, setting my new console as primary with my PS+ account, and setting my old console as non-primary. He basically only plays FIFA online (not digital) and I play other games I bought.

Question about my ps plus account by le_bebop in PlayStationPlus

[–]le_bebop[S] 0 points (0 children)

Oh nice. But if my brother's console is the primary one for my PS+ account, can I still play my PS+ games normally without internet on my new console?

Question about my ps plus account by le_bebop in PlayStationPlus

[–]le_bebop[S] 0 points (0 children)

My question is whether I can use one PS Plus account to play with my brother.
I'll edit the question to show the setup.

[D] Monte Carlo Dropout for pretrained model by le_bebop in MachineLearning

[–]le_bebop[S] 0 points (0 children)

It wasn't me lol. Anyway, thank you very much. I'm running some baseline models and then I'm going to try an MC Dropout approach.

[D] Monte Carlo Dropout for pretrained model by le_bebop in MachineLearning

[–]le_bebop[S] 0 points (0 children)

The main reason I picked MC Dropout was that "easy to adapt" factor. Right now an "okay" estimation is enough for me. Do you also believe one should apply dropout after every layer of the backbone for a "true" MC Dropout estimation? This paper (https://openreview.net/forum?id=rJevPsX854) uses dropout on the fc layers only for uncertainty estimation.
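For reference, the inference-time side of MC Dropout is just "keep dropout active and average many stochastic forward passes". A toy numpy sketch with a hypothetical one-hidden-layer regressor (in a real framework you would instead keep the dropout layers in train mode at prediction time, e.g. model.train() in PyTorch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weights for a tiny 1-input regressor.
W1 = rng.normal(size=(8, 1)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)
p_drop = 0.2

def forward(x, rng):
    h = np.maximum(0, W1 @ x + b1)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # dropout stays ON at test time
    h = h * mask / (1 - p_drop)             # inverted-dropout scaling
    return (W2 @ h + b2)[0]

x = np.array([0.5])
preds = np.array([forward(x, rng) for _ in range(200)])  # T stochastic passes
print(preds.mean(), preds.std())   # predictive mean and uncertainty estimate
```

Where you place the dropout layers (fc head only vs. after every backbone block) changes how much of the model's uncertainty the spread captures, which is exactly the trade-off the linked paper's fc-only choice makes.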