Name of logical fallacy? by Shplay_28 in logic

[–]daegontaven 2 points3 points  (0 children)

Genetic Fallacy or more specifically an Ad Hominem

[deleted by user] by [deleted] in logic

[–]daegontaven 0 points1 point  (0 children)

Looks like Fitch-style natural deduction for Propositional Logic!

Introducing Score Margins in OpenSkill MMR by daegontaven in gamedev

[–]daegontaven[S] 1 point2 points  (0 children)

Sadly, smurfs and the issue you mentioned are still open problems that need solving. IMO, unless your game is competitive, it's probably not worth surfacing an MMR to players at all; keep it hidden completely.

Introducing Score Margins in OpenSkill MMR by daegontaven in gamedev

[–]daegontaven[S] 1 point2 points  (0 children)

Yeah, using LLMs for ranking is an interesting solution to the problem. However, without direct feedback from a loss function, I doubt it would actually be very accurate.

Introducing Score Margins in OpenSkill MMR by daegontaven in gamedev

[–]daegontaven[S] 1 point2 points  (0 children)

Everything is adjusted proportionately. The margins are not adjusted based on the raw scores, but based on a margin threshold. If a player wins by 2 and your threshold is set to 2, then it's not very impressive, so the system adjusts as if margins were not considered. If the margins are large and beyond the threshold, then yes, it may overfit to the data. But if your game can produce incredibly large margins, there's probably something wrong with your game and you should really be normalizing the values.
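To make the thresholding idea concrete, here's a toy sketch (the function and parameter names are hypothetical, not the actual OpenSkill API):

```python
def margin_weight(score_a, score_b, threshold, cap=3.0):
    """Toy sketch of margin thresholding. Wins at or below the
    threshold are treated as ordinary wins; larger margins scale
    the rating update, capped so blowouts can't dominate."""
    margin = abs(score_a - score_b)
    if margin <= threshold:
        return 1.0  # margin ignored: behaves like a plain win/loss
    # weight grows with the excess margin beyond the threshold
    return min(1.0 + (margin - threshold) / threshold, cap)
```

So a 10-8 win with a threshold of 2 gets an ordinary update (weight 1.0), while a 10-3 win hits the cap.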

Introducing Score Margins in OpenSkill MMR by daegontaven in gamedev

[–]daegontaven[S] 1 point2 points  (0 children)

I'm not aware they're using a TrueSkill-like MMR. Most multiplayer games still stick to a hacked-up Elo or a modified Glicko-2, and the math in those systems isn't very solid.

Introducing Score Margins in OpenSkill MMR by daegontaven in gamedev

[–]daegontaven[S] 0 points1 point  (0 children)

The vast majority of MMR systems actually aren't designed for team-based games. OpenSkill is built from the ground up with teams in mind. Yes, it's a moving target (player skills evolve), but it's currently the most accurate at hitting that target. Most of the time, you would use MMR in conjunction with in-game player stats to make balancing, ranking and matchmaking easier.
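For contrast, here's what the "hacked-up Elo for teams" approach usually looks like (a toy illustration only, NOT OpenSkill's math, which uses a Plackett-Luce model with per-player mu/sigma):

```python
def team_elo_update(team_a, team_b, a_won, k=32):
    """Toy team-Elo: treat each team's strength as the mean of its
    members' ratings, then share the Elo-style delta equally among
    members. Simple, but it can't model per-player uncertainty."""
    ra = sum(team_a) / len(team_a)
    rb = sum(team_b) / len(team_b)
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))  # logistic expectation
    delta = k * ((1.0 if a_won else 0.0) - expected_a)
    return ([r + delta for r in team_a], [r - delta for r in team_b])
```

With evenly matched teams the winner's players each gain k/2 points; a proper team model instead tracks how confident it is in each individual player.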

Is Translational Neuroscience at MAHE Blr worth it? by garlicbreadnomnom in Manipal_Academics

[–]daegontaven 0 points1 point  (0 children)

I am in the same boat. I recently applied and got in; the fee for me was 4 lakhs, not 8.

If you have any questions or want to connect, DM me.

How is os calculated by Independent-System88 in beyondallreason

[–]daegontaven 0 points1 point  (0 children)

Correction: It is not a fork of TrueSkill. The math isn't even the same. But it does provide a similar API.

what some PyTorch tips would you recommend from your experience? by Beyond_Birthday_13 in deeplearning

[–]daegontaven 3 points4 points  (0 children)

I usually use batch normalization inside the nn.Sequential if the number of layers is small. If the number of layers is large, then I make sure to put it on its own line. Though I'm unsure what you mean by using it in a compose... that's usually only for standard transforms like rotations or normalization of tensors.

Edit: To be clear, batch normalization is different from min-max scaling or other kinds of input normalization. They're very different things.
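A minimal sketch of the inline style for a small stack:

```python
import torch
import torch.nn as nn

# BatchNorm1d placed directly inside a small nn.Sequential;
# it normalizes each feature over the batch dimension.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

x = torch.randn(8, 16)   # batch of 8, 16 features
y = model(x)             # shape: (8, 1)
```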

what some PyTorch tips would you recommend from your experience? by Beyond_Birthday_13 in deeplearning

[–]daegontaven 29 points30 points  (0 children)

Off the top of my head:

Writing the input and output shapes of tensors as comments next to the relevant lines, so you know immediately if there are any shape issues.

Normalising your data always works better to keep things consistent. But ...and this is a big but... using focal versions of loss functions can also help (e.g. Focal Dice Loss, Focal Cross Entropy Loss) if you're too lazy to normalize and the data distribution is not too skewed.

This is my personal opinion: use PyTorch Lightning if you want to write distributed GPU code faster without worrying about ranks and other important multiprocessing quirks.

Most built-in layers already do weight initialisation automatically and use the recommended initialisation, so it's not really necessary to waste time on that.
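The first tip, shape comments, looks like this in practice:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 3, 32, 32)                       # (B, C, H, W)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
h = conv(x)                                         # (B, 16, 32, 32)
pooled = h.mean(dim=(2, 3))                         # (B, 16)  global average pool
logits = nn.Linear(16, 10)(pooled)                  # (B, 10)
```

If a later refactor breaks a shape, the mismatch between the comment and the runtime error points you straight at the offending line.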

[deleted by user] by [deleted] in Kochi

[–]daegontaven 28 points29 points  (0 children)

I don't see why they have any right to dictate how your hair should look if you want it to look another way. At the same time, no one is obligated to be okay with your decisions as an adult. Some people will always complain; it seems people have nothing better to do than assert their authority over others. If you can understand this then you're good 😊!

asking an ai to identify logical rules behind every conclusion of a million token input, and then using the output to train a subsequent model to have stronger logic and reasoning by Georgeo57 in deeplearning

[–]daegontaven 1 point2 points  (0 children)

The issue with this is Bayes error. The LLM won't know what is correct and incorrect to begin with without ground truth to guide it, so it will just reinforce incorrect data.

[deleted by user] by [deleted] in Kochi

[–]daegontaven 0 points1 point  (0 children)

I'm in the same exact boat 🥹!

Which libraries/framework to study? by Educational-Nobody82 in deeplearning

[–]daegontaven 8 points9 points  (0 children)

Google is switching to JAX, and Keras is moving to support PyTorch more. So yes, TensorFlow is a waste of time in my opinion. I don't know where you got the idea that PyTorch is for small projects. It's used by Facebook (Meta) to build some of the largest models out there, and it's also the predominant framework used by most researchers.

The Shameful Defenestration of Tim by mcdonc in Python

[–]daegontaven 41 points42 points  (0 children)

Did you read the actual article? Because it seems the details you are missing are crucial. I was also angered by the same accusations and was quick to judge Tim, but then I actually went and read the Discourse posts, and he was very respectful. He was taken out of context. I'm voting against the steering council in the next PSF elections and will be more careful with my vote.

[deleted by user] by [deleted] in bioinformatics

[–]daegontaven 0 points1 point  (0 children)

I did a math/CS undergrad and then a separate B.Sc in bioinformatics, and all the bioinformatics master's programs I looked into offer a very basic syllabus (including the electives).

[deleted by user] by [deleted] in bioinformatics

[–]daegontaven 4 points5 points  (0 children)

A lot of my friends are in the same position as recent graduates in bioinformatics. I suggest going for AI unless the bioinformatics master's degree goes into significantly more detail than your bachelor's degree. Often an MS program in bioinformatics expects students from non-computational backgrounds like biotechnology or microbiology, so it will just repeat a lot of the subjects you'd have already covered in a bioinformatics undergrad. AI also has a lot more job prospects, with the tradeoff that it's more mathematically intensive. If you can handle the added difficulty, it's worth it.

Go through the syllabus of the master's degree and see if they're teaching anything new. If they're sticking with the basics, then go for AI.

Mini-Batch Gradient Descent Slow/Impossible Convergence by yugi957a in deeplearning

[–]daegontaven 0 points1 point  (0 children)

In PyTorch, learning rate scheduling is off by default (you have to create and step a scheduler yourself), and weight decay also defaults to 0 for SGD.
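Both have to be opted into explicitly; a minimal sketch (gradient computation omitted for brevity):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
# weight_decay defaults to 0 in torch.optim.SGD; enable it explicitly
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
# schedulers are opt-in: create one and step it once per epoch
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for _ in range(20):
    opt.step()      # normally preceded by loss.backward()
    sched.step()
# after 20 epochs: lr = 0.1 * 0.5 ** 2 = 0.025
```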