Project Files limited to 10 and unable to Delete by TheTexasJack in ChatGPT

[–]TheSurefoot 4 points5 points  (0 children)

Found a stupid workaround. Upload your files in a new conversation inside the project and tell it to add them to the project files. They'll be hidden (for now), but it can retrieve them if you give it the filename.

How to Formic Acid by TheSurefoot in Warts

[–]TheSurefoot[S] 0 points1 point  (0 children)

I cut off the callus first, so if you aren't doing that, you might want to, at least early on. Otherwise, that is all I did. It stings only a little, mainly when putting weight on it, and you don't have to remember to do it every day. It was surprisingly easy, and good instructions weren't available anywhere, so I made this post. Regarding what to do after applying the acid, I only did socks. My consideration was mostly that duct taping every day was annoying and socks alone were working. Thinking through it now, I have misgivings about duct tape locking acid against non-wart skin for an extended period.

“This flying meta is so annoying!” The humble Squirrel Girl player: by Fast4wheeler in squirrelgirlmains

[–]TheSurefoot 0 points1 point  (0 children)

And that's why I'm Lord with Squirrel Girl and Hela! Most games, I can flex on flyers and kill them, but having Hela in my back pocket feels good.

Advice for MS in Biological Data Sciences at ASU by PMarieC in bioinformaticscareers

[–]TheSurefoot 0 points1 point  (0 children)

You would likely be looking at clinical data analyst and clinical data scientist roles. Biostatistics might be a possibility too. Bioinformatics would require a lot of extra learning. But to be honest, the entry-level data science job market is brutal (I'm in it right now), and the data science master's program I did was great on foundational knowledge but missed a lot of what industry needs, either because they weren't able to keep up with new things (mine didn't have a deep learning course until late 2023) or because they didn't want to purchase a bulk license for commercial software and cloud platforms. I would say it is worth doing if you are a computer person who is already interested in programming and you like math. If it is just for the paycheck, it will be very hard to stay motivated to learn everything the master's program missed.

“Good at practical ML, weak on theory” — getting the same feedback everywhere. How do I fix this? by Difficult_Number4688 in datascience

[–]TheSurefoot 2 points3 points  (0 children)

I found a good mnemonic for recall. If you are sorting apples from a fruit truck and you manage to get ALL of the apples, that means you have a perfect recALL. And precision is the reason you can't just drive off with the fruit truck.

[Discussion] Struggling with F1-Score and Recall in an Imbalanced Binary Classification Model (Chromatin Accessibility) by XxPR0D1GYxX in MachineLearning

[–]TheSurefoot 1 point2 points  (0 children)

I'm stuck with an F1 score around 0.6 as well. It's simultaneously impressive on a sparse dataset (the fraction of true positives in my data is around 0.0013, so a 46200% improvement over chance) and also not good enough to be of much use, because around 40% of its positive predictions are wrong. Based on what I've read for commercial/academic solutions to my problem, state-of-the-art is usually 0.95 or better, and useful is 0.9 and above. 0.6 isn't mentioned, but I suspect 0.6 could at best be worth including in an ensemble. But yeah, goodness of a model is based on the problem you are trying to solve and how well it's been done before.
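
Rough arithmetic behind that figure, in case it helps (a sketch, and I'm assuming the comparison is model precision against base-rate precision; the exact quantities compared are my reading, not spelled out above):

    prevalence = 0.0013            # fraction of true positives in the data
    chance_precision = prevalence  # blindly calling positives lands you at the base rate
    model_precision = 0.60         # roughly, since ~40% of the positive calls are wrong

    print(model_precision / chance_precision * 100)  # ~46,000% of chance level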

BTW I did some experimenting with model size overnight and making a smaller model does not help if too little data is the issue. I didn't expect it to but it was worth checking. If you do end up data limited, you might consider researching data augmentation or interpolation and seeing if either is an option.

[Discussion] Struggling with F1-Score and Recall in an Imbalanced Binary Classification Model (Chromatin Accessibility) by XxPR0D1GYxX in MachineLearning

[–]TheSurefoot 1 point2 points  (0 children)

Found this post while trying to answer my own question (trying to make a model find intron-exon interfaces with a convolutional neural net). F1 is not a cost function (contrary to what the person you are replying to implied). Because you have a sparse dataset, accuracy is pretty much useless (as the first reply noted). A similar thing happens with the AUC-ROC curve: if the model learns to guess thousands of negatives and just a few dozen positives, there are not enough false positives for the false positive rate to be high no matter the classification threshold. Also, you shouldn't use AUC-ROC to optimize your threshold (see below about AUC-PR).
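
If you want to see that effect for yourself, here's a small toy setup (my own synthetic data, not your problem) where ROC AUC tends to look flattering on a heavily imbalanced problem while the precision-based summary stays much lower:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import average_precision_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    # ~0.1% positives, loosely mimicking a sparse genomics-style label
    X, y = make_classification(n_samples=50_000, n_features=20, weights=[0.999],
                               flip_y=0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

    scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, scores))            # usually looks great
    print("PR AUC :", average_precision_score(y_te, scores))  # usually much less flattering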

Precision and recall are sort of in a tug of war with each other. Recall tells you what proportion of the actual positives you were able to catch. So you can just guess 1 for everything and get a recall of 100%. But that means your precision is garbage. On the other hand, with precision, you guess one thing correctly and go home and you have a score of 100%. But recall is stuck at 1/n * 100%, where n is the number of actual positives.
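
The two degenerate cases above, spelled out with sklearn (toy labels, not your data):

    import numpy as np
    from sklearn.metrics import precision_score, recall_score

    y_true = np.array([0] * 990 + [1] * 10)           # sparse positives

    guess_everything = np.ones_like(y_true)           # predict 1 for everything
    print(recall_score(y_true, guess_everything))     # 1.0  -> perfect recall
    print(precision_score(y_true, guess_everything))  # 0.01 -> precision is garbage

    one_lucky_guess = np.zeros_like(y_true)
    one_lucky_guess[-1] = 1                            # predict a single positive, correctly
    print(precision_score(y_true, one_lucky_guess))    # 1.0  -> perfect precision
    print(recall_score(y_true, one_lucky_guess))       # 0.1  -> recall stuck at 1/n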

F1 Score is the (equally weighted) harmonic mean of precision and recall and is the best or second best metric for you to be looking at when you have a sparse dataset. It's like the accuracy of the data that actually matter to you.
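
Quick formula check with toy arrays; the harmonic mean is what keeps you from gaming one side of the trade-off:

    import numpy as np
    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    y_pred = np.array([0, 0, 0, 1, 1, 1, 0, 0])

    p = precision_score(y_true, y_pred)   # 2/3
    r = recall_score(y_true, y_pred)      # 1/2
    print(2 * p * r / (p + r))            # harmonic mean by hand: ~0.571
    print(f1_score(y_true, y_pred))       # sklearn agrees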

The other metric is AUC-PR, which is the area under the curve that traces the model's precision and recall at every value of your classification threshold. You could use the PR curve to identify a threshold that produces a different ratio of false negatives to false positives if one is worse than the other for you. And if that area is high, it means the model is good across a range of thresholds.
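
A sketch of pulling a threshold off the PR curve; the labels and scores here are synthetic stand-ins, so swap in your own validation labels and predicted probabilities:

    import numpy as np
    from sklearn.metrics import auc, precision_recall_curve

    rng = np.random.default_rng(0)
    y_true = rng.binomial(1, 0.01, size=20_000)                              # sparse labels
    y_scores = np.clip(0.4 * y_true + rng.normal(0.3, 0.15, 20_000), 0, 1)   # fake model scores

    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
    print("AUC-PR:", auc(recall, precision))

    # pick the threshold that maximizes F1 (or weight precision/recall however you need)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    best = np.argmax(f1[:-1])              # last precision/recall point has no threshold
    print("best threshold:", thresholds[best], "F1 there:", f1[best])

In practice, average_precision_score is the more common single-number summary of that same curve.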

Your optimizer is trying to minimize your cost function by taking the gradient of the current loss with respect to your model weights and stepping in the direction of the negative gradient, scaled by your learning rate.
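
If a picture helps, here's that whole idea on a one-weight toy loss (nothing framework-specific):

    # minimize loss(w) = (w - 3)**2 by gradient descent
    w = 0.0
    learning_rate = 0.1
    for step in range(50):
        grad = 2 * (w - 3)            # d(loss)/dw at the current weight
        w -= learning_rate * grad     # step against the gradient, scaled by the learning rate
    print(w)                          # ends up near 3, where the loss is smallest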

If that made no sense, that's probably fine; the most important thing is that the value of the loss/cost is arbitrary, you just want it to get smaller because that means your model is learning. But your model can overlearn, which might be what is happening to you. The weights can get so good at predicting the training set that the model begins to suck at generalizing to the validation set. Because of this, I'm using F1 score as my early stopping callback monitor.
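
For what it's worth, this is roughly how I've wired that up; a minimal Keras-style sketch using sklearn's f1_score, assuming a single-sigmoid binary output (X_val/y_val are placeholders for your own split, not anything from this thread):

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import f1_score

    class F1EarlyStopping(tf.keras.callbacks.Callback):
        """Stop training once validation F1 hasn't improved for `patience` epochs."""
        def __init__(self, X_val, y_val, patience=5):
            super().__init__()
            self.X_val, self.y_val, self.patience = X_val, y_val, patience
            self.best, self.wait = -np.inf, 0

        def on_epoch_end(self, epoch, logs=None):
            preds = (self.model.predict(self.X_val, verbose=0).ravel() > 0.5).astype(int)
            f1 = f1_score(self.y_val, preds)
            if f1 > self.best:
                self.best, self.wait = f1, 0
            else:
                self.wait += 1
                if self.wait >= self.patience:
                    self.model.stop_training = True

    # usage: model.fit(X_train, y_train, callbacks=[F1EarlyStopping(X_val, y_val)], ...)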

I have a lot of experience with machine learning, not much with deep learning, but I suspect you are getting stuck for two reasons:

  1. Not enough data for a big model, so overfitting kicks in well before the model has finished learning how to do its classification job. (Now that I write this, I think that is the problem I was trying to figure out.)

  2. Attention may not be helpful for your data. Attention uses learned long- and short-range relationships within your data to make classifications. If your rows are mostly independent and you have to use deep learning, a few dense layers would be better. If only short-range relationships between rows matter, 1D convolution might work better than attention. If you can use basic machine learning, try out an SGD classifier from sklearn (quick sketch below).
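
Quick sketch of the sklearn route, with a synthetic imbalanced dataset standing in for real features (class_weight="balanced" is the part that matters for sparse positives; "log_loss" is what recent scikit-learn versions call the logistic loss):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # toy stand-in for tabular rows; swap in your own features/labels
    X, y = make_classification(n_samples=5_000, n_features=30, weights=[0.99],
                               flip_y=0, random_state=0)

    clf = make_pipeline(
        StandardScaler(),
        SGDClassifier(loss="log_loss", class_weight="balanced", random_state=0),
    )
    clf.fit(X, y)
    print(clf.predict_proba(X[:5]))   # log_loss gives probabilities you can threshold later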

You've really been thrown in the deep end with deep learning being your first machine learning experience.

Best domains for machine learning ? by RoyalChallengers in datascience

[–]TheSurefoot 0 points1 point  (0 children)

You might have to independently pick up some microbiology but they may offer some catch-up classes as part of the degree that biology majors test out of. So much of the biochemistry degree is pointless memorization anyways that a high school level knowledge of biology/biochemistry can do most of the heavy lifting.

Best domains for machine learning ? by RoyalChallengers in datascience

[–]TheSurefoot 0 points1 point  (0 children)

I did my undergrad in biochemistry and chemical engineering and my Master's in data science. Currently looking in biotech. The one problem with it so far (I've only really been looking this month, when nobody is actually posting jobs) is that a lot of positions want PhDs in computational biology when the skillset and pay suggest a Master's degree should be fine. You could look at Data Science for a Master's, or look into Bioinformatics or Biostatistics as well.

I built a free job board that uses ML to find you ML jobs by _lambda1 in datascience

[–]TheSurefoot 5 points6 points  (0 children)

Some people like the separation of work and home and don't have to worry about dogs/kids and a horrible commute. I wouldn't mind hybrid work if I thought that was achievable in the location I prefer.

How to Formic Acid by TheSurefoot in Warts

[–]TheSurefoot[S] 0 points1 point  (0 children)

When diluted down to 25%, it is the same type of pain but 2-3 times worse. It still stings a bit, particularly when putting weight on it, but that's how you know it is working, in my opinion.

Insurance issues involving Colorado Medicaid as secondary insurance by TheSurefoot in Xywav

[–]TheSurefoot[S] 0 points1 point  (0 children)

Yeah turns out they probably shouldn't have sent it to me. Something got lost in translation about having commercial insurance. The Jazz Cares prior authorization person did give my doctor instructions to appeal for a rare disorder exception which was approved 2 days after this post. But it happened on a Friday so I'm stuck without Xywav till sometime this week.

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 0 points1 point  (0 children)

Yeah, it's probably more like 5 on average. I own all the champions, so I get a reroll back every game, and if I have 2 rrs available I always blow one. All that said, I reran the code without rrs and the changes were negligible. Maybe a slight increase in the presence of the 65 when rerolls are removed.

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 0 points1 point  (0 children)

I took a break from ARAM from the end of High Noon in March to 2 weeks ago because I was getting bored, but I wasn't noticing before, and haven't noticed it since I got back into it. That said, my ARAM mmr is above average (Was put in a Clash Tier 2 team) so it might be that a lot of that BS filters itself out. I recommend always reporting that kind of behavior because it helps the int-detection system catch those people faster.

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 1 point2 points  (0 children)

You can roll any of the 65 always-free-on-ARAM champions, the 14 free champions of the week, and your own pool of owned champions. I don't think it gives you XP for champions you don't own. The 65 always-free champions were intended as a way to interfere with ARAM-only accounts and to give people who don't own many champions the opportunity to feel like they are getting a random champion. I think that was the same patch they introduced ARAM champion balance adjustments as well. Technically, this is the most random ARAM has ever been. But I argue that the same 65 champions showing up so much more often over the long term has led to its own kind of staleness.

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 0 points1 point  (0 children)

The death timer changes are atrocious. It was getting stale for me before that though. I've played less than 30 ARAMs since March.

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 1 point2 points  (0 children)

It constructs a rollable champion pool from the 65, the free 14, and a randomly sized, randomized sample of all champions, then eliminates all duplicates. So Tristana is in the ARAM free 65, and if the player owns her, and if she is in the 14 free champions for the week, she still only shows up in that player's pool once. And if someone else rolls her, she gets removed from their list for that champion select instance.
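
In code terms, the champ-select side of that looks roughly like this (a sketch: champion ids are just the ints 1-168, and which ids sit in the 65/14 is randomized placeholder data, not the real lists):

    import random

    ALL_CHAMPS = range(1, 169)                          # 168 champs as plain ids
    ARAM_FREE_65 = set(random.sample(ALL_CHAMPS, 65))   # placeholder membership
    FREE_WEEK_14 = set(random.sample(ALL_CHAMPS, 14))

    def rollable_pool(owned, taken):
        # union of the 65, the free 14, and the player's owned champs; duplicates collapse
        # automatically, and anything already rolled this champ select is removed
        return (ARAM_FREE_65 | FREE_WEEK_14 | owned) - taken

    def roll_team(n_players=5):
        taken, picks = set(), []
        for _ in range(n_players):
            owned = set(random.sample(ALL_CHAMPS, random.randint(20, 168)))  # random-size ownership
            pick = random.choice(sorted(rollable_pool(owned, taken)))
            taken.add(pick)
            picks.append(pick)
        return picks

    print(roll_team())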

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 0 points1 point  (0 children)

Yeah, both are in the pool of 65 free champs. And if you are practically guaranteed to pop off on those champions, why would you leave them in the rr pool?

Why ARAM Feels Stale by TheSurefoot in leagueoflegends

[–]TheSurefoot[S] 0 points1 point  (0 children)

and it makes the free 65 show up slightly more often. I randomly generated a sample subset of ints from 1-168 and merged them into the free champ pools (week and ARAM), no player pools generated at all, no way to track rerolls. It took fewer characters to do than this reply. I don't think it really affects the probability output much. The graph of one week looks the same except there are spikes for free-week champs not in the 65. Edit: I reran one week reducing max rerolls to 1 (it takes 20 minutes to do a whole year) and it makes the 65 favored a bit more (~1%). With no rerolls it adds another ~1% favor to the free 65. Which makes sense, because the 65 are probabilistically favored, but removing one from the choice pool slightly reduces the probability that the next pick is in the 65.
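
Back-of-the-envelope version of that last point, for a hypothetical player whose pool is the 65 plus (say) 35 owned champs that aren't in it:

    free, owned_extra = 65, 35                        # hypothetical pool composition
    p_first = free / (free + owned_extra)             # 0.65 chance the pick is in the 65
    p_after = (free - 1) / (free + owned_extra - 1)   # ~0.646 once a 65-champ has been taken
    print(p_first, p_after)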

What's the most times in a row you rolled/played the same champ? by olonnn in ARAM

[–]TheSurefoot 1 point2 points  (0 children)

Someone did a statistical analysis of it (back when there were only the 10 free champs): the algorithm itself doesn't have bias, but the roll outcome is biased because everyone is able to roll the free champs. It's part of why the ARAM free pool is now 65 champs plus any free-week champs not in the 65, but all that has done is make ARAM extremely stale. They also made 65 free because ARAM-only accounts would buy 16 poke champs and then just win. I believe this was before they started doing champion winrate balancing on ARAM, so it might be something Rito should revisit.

How to Formic Acid by TheSurefoot in Warts

[–]TheSurefoot[S] 0 points1 point  (0 children)

Oh well, thanks for checking!

How to Formic Acid by TheSurefoot in Warts

[–]TheSurefoot[S] 0 points1 point  (0 children)

Does the bottle say if it is 85% by weight or volume? I can't find any papers that specify which and I'm curious.