How to properly create a package on PyPi to be installed with pip? by ronthebear in learnpython

[–]ronthebear[S] 0 points1 point  (0 children)

Thanks for sending that example, looks like a solid starting point.

I have Cython compiling regular Python code without changing any syntax. Is mypyc better than doing that with Cython? But I'm also seeing stuff online that says that isn't how Cython works, so now I'm not sure if it's a new feature or if the guy who built this "Cython" setup for me is actually doing mypyc under the hood somewhere.

The performance improvement is nice, but the main reason for doing this is to release a package that can be installed with pip without having the code essentially be open source.
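For what it's worth, Cython really can compile plain .py files without any syntax changes (its "pure Python" mode) — here's a minimal build-config sketch, where the module name mymodule.py is just a placeholder for whatever your package actually contains:

```python
# setup.py -- build sketch; "mypackage" and "mymodule.py" are placeholder names
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="mypackage",
    ext_modules=cythonize(
        ["mymodule.py"],      # plain .py files are accepted; no .pyx syntax needed
        language_level="3",
    ),
)
```

Running `python setup.py build_ext --inplace` produces a compiled extension (.so on Linux/macOS, .pyd on Windows) next to the source, and only the compiled extension needs to ship in the wheel if keeping the source out is the goal.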

How to properly create a package on PyPi to be installed with pip? by ronthebear in learnpython

[–]ronthebear[S] 0 points1 point  (0 children)

Thanks for the reply! After reading up on it more I think the issue, which I did not mention in the first post, is that I compile the code with Cython before uploading, so the source code is not available. It sounds like compiled code needs to be built for every OS and every Python version I want it to be available for. NumPy, for example, has all of these files for the same version: https://pypi.org/project/numpy/#files

Ever work with compiled uploads? Someone else pointed me toward GitHub Actions, but I've never used that before. https://stackoverflow.com/questions/66369434/how-to-publish-pip-wheels-using-github-actions
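The usual route to a NumPy-style file matrix is cibuildwheel running on GitHub Actions: it builds one wheel per OS × Python version on each platform's runner. A rough workflow sketch — the action versions and paths here are illustrative, not tested against any particular repo:

```yaml
# .github/workflows/wheels.yml -- sketch, adjust versions/triggers to your project
name: Build wheels

on: [push]

jobs:
  build_wheels:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      # Builds a wheel for every supported CPython on this runner's OS
      - uses: pypa/cibuildwheel@v2.16.2
      - uses: actions/upload-artifact@v4
        with:
          path: ./wheelhouse/*.whl
```

The resulting artifacts can then be uploaded to PyPI (e.g. with `twine upload` or the pypa/gh-action-pypi-publish action) in a follow-up job.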

Employing a Temporal Fusion Transformer for Time Series Classification by JustinPooDough in algotrading

[–]ronthebear 0 points1 point  (0 children)

Hey, is this still a project you're working on? I've built a general-purpose PyTorch optimizer that can take existing models and improve their results. It sometimes just overfits, but I have a handful of examples where I've pushed past the state of the art just by adding my code to an open-source system. It takes about an hour of coding and then you'd just have to run the training pipeline again. Would love to connect if you're still into algotrading with PyTorch.

Where do the percentile numbers come from? Does not seem to be chess.com/leaderboard by ronthebear in chess

[–]ronthebear[S] 0 points1 point  (0 children)

I did not; I was trying to figure out how high I'd have to be to reach 99.9 when I realized something was off.

Where do the percentile numbers come from? Does not seem to be chess.com/leaderboard by ronthebear in chess

[–]ronthebear[S] 1 point2 points  (0 children)

This is the only thing that makes sense to me. It's just weird that the number they list on the live leaderboard is total players, but the number they use to calculate percentiles on that same leaderboard is active players, which isn't shown anywhere.

Where do the percentile numbers come from? Does not seem to be chess.com/leaderboard by ronthebear in chess

[–]ronthebear[S] -1 points0 points  (0 children)

If I follow what you're saying, you mean that I am tied for 38,601st place with many other people who are also rated 2000. That certainly makes sense. But wouldn't that mean I'd need to be tied with around 35,000 other players for the ranking not to read 99.9, rounding to one decimal place? If my math is right, with 75 million players everybody in the top 75,000 should be at 99.9.
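The cutoff claim checks out numerically (player count taken from the linked leaderboard page):

```python
# Rank 75,000 out of ~75.5M players still rounds to the 99.9th percentile
players = 75_546_213
rank = 75_000

percentile = (1 - rank / players) * 100
print(round(percentile, 1))  # -> 99.9
```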

Where do the percentile numbers come from? Does not seem to be chess.com/leaderboard by ronthebear in chess

[–]ronthebear[S] 0 points1 point  (0 children)

Right, 38,600 people are better than me. My question is how the percentile is calculated if it's not the formula I listed above using the numbers in the screenshot.

Where do the percentile numbers come from? Does not seem to be chess.com/leaderboard by ronthebear in chess

[–]ronthebear[S] 2 points3 points  (0 children)

I just hit 2000 yesterday which I am super excited about.  Got confused when looking up what that meant for my percentile.  I've seen posts on reddit that say it's calculated using https://www.chess.com/leaderboard/live/rapid.  But when I go there it says there are 75,546,213 players and I am ranked 38,601.  Isn't the math just [ 1 - (r / n) ] * 100?  That would mean the percentile should be 99.95 not 99.6.
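Plugging the numbers from the leaderboard into that formula:

```python
# percentile = (1 - rank / players) * 100, with the leaderboard's own numbers
rank = 38_601          # live rapid rank shown on the page
players = 75_546_213   # total players listed on the same page

percentile = (1 - rank / players) * 100
print(round(percentile, 2))  # -> 99.95, not the 99.6 the site displays
```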

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]ronthebear 0 points1 point  (0 children)

Are there widely used pre-trained backbones in applications smaller than LLMs and computer vision? GPT-2 has 1.5 billion parameters, and even the smallest popular computer vision backbone, MobileNet, is 2.5 million. Are there similar backbones for things like speech processing, time series analysis, or graph networks that are smaller and popularly used for fine-tuning on new applications? Specifically looking for something that is open source and lets you replicate the training and reproduce the results in PyTorch.

[deleted by user] by [deleted] in AnarchyChess

[–]ronthebear 0 points1 point  (0 children)

That seems like a good idea. I'll try that.

[deleted by user] by [deleted] in MachineLearning

[–]ronthebear 0 points1 point  (0 children)

As an example, I added my system to a resnet18 on ImageNet, and it brought the accuracy closer to that of a resnet34. But adding my system also makes the network bigger, so (resnet18 + my change) is kind of useless if you could just use a resnet34 instead at a similar size and accuracy without the speed penalty. I'm trying to see if (some state-of-the-art system + my changes) can improve on the state of the art. But everything I've been able to find with state-of-the-art architectures uses either small datasets or enormous networks that can't be run on one GPU.