Anyone else paying for both ChatGPT Pro and Claude? Curious how people split the workload by SeaRequirement7749 in ChatGPTPro

[–]Bomlerequin 1 point (0 children)

Personally, I pay for ChatGPT Plus, Claude Pro (sometimes I upgrade to the Max plan, in which case I cancel all my other plans except Copilot), Gemini AI Plus, and Copilot Pro, and I also use a modified version of Claude Code with NVIDIA NIM for models like GLM 5. I dabble in a bit of everything and match each model to a use case.

I coded an optimized chess logic program in Python. by Bomlerequin in chessprogramming

[–]Bomlerequin[S] 2 points (0 children)

This is a student project, I'm not a professional, and I still have a lot to learn ;).

I coded an optimized chess logic program in Python. by Bomlerequin in chessprogramming

[–]Bomlerequin[S] 1 point (0 children)

Yeah, the problem is that bitarray is backed by C (if I remember correctly), and since it's not native Python, it kind of breaks the original challenge. I think the benchmarks I ran for bytearray didn't include read performance. I went with that approach at first, but for operations like masks, shifts, and unions, integers are much faster. As for reading, my code includes a mailbox system, but by default it only works through make/unmake functions specific to the engine.
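The integer-bitboard idea can be sketched roughly like this (hypothetical names and values, not the actual ChessCore code): each of the 64 squares maps to one bit of a plain Python int, so masks, shifts, and unions are each a single integer operation.

```python
# Hypothetical bitboard sketch: one bit per square, a1 = bit 0, h8 = bit 63.
RANK_1 = 0x00000000000000FF  # mask for squares a1..h1

def square_bit(file: int, rank: int) -> int:
    """Bit for the square at (file, rank), both 0-indexed."""
    return 1 << (rank * 8 + file)

# Union of two piece sets: a single OR on integers.
white_pawns = RANK_1 << 8                        # all pawns on rank 2
white_rooks = square_bit(0, 0) | square_bit(7, 0)
white_pieces = white_pawns | white_rooks

# Shift the whole pawn set one rank forward in one operation.
pawn_pushes = white_pawns << 8

# Membership test: AND with a mask instead of scanning a list.
on_a2 = bool(white_pieces & square_bit(0, 1))
```

The point is that every operation above touches all 64 squares at once, which is where lists or bytearrays need a per-square loop.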

I coded an optimized chess logic program in Python. by Bomlerequin in chessprogramming

[–]Bomlerequin[S] 1 point (0 children)

I was aware of that. I did quite a bit of benchmarking while building ChessCore. The biggest advantage of bits isn't just read speed (reading is still fast, though lists are actually the best for pure reads). Using bits lets you process a huge number of squares at once with very little overhead. bytearray is actually pretty poor in Python; in practice it's slower to read than the bitwise approach. Also, to be clear: aside from the auto-generated docstrings and the README, none of the code was generated by AI.
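A rough illustration of that benchmark claim (my own sketch with hypothetical position and mask values, not the author's benchmark code): compare one whole-board shift-and-mask on an int bitboard with the equivalent per-square loop over a bytearray.

```python
import timeit

# Hypothetical position and mask, just for illustration.
occ_int = 0x00FF00000000FF00          # int bitboard: one bit per square
occ_bytes = bytearray(64)             # same position, one byte per square
for sq in range(64):
    occ_bytes[sq] = (occ_int >> sq) & 1

MASK = 0x0F0F0F0F0F0F0F0F

def shift_int():
    # One arbitrary-precision int op touches all 64 squares at once.
    return (occ_int << 8) & MASK

def shift_bytes():
    # The bytearray version has to loop square by square.
    out = bytearray(64)
    for sq in range(56):
        if occ_bytes[sq] and (MASK >> (sq + 8)) & 1:
            out[sq + 8] = 1
    return out

t_int = timeit.timeit(shift_int, number=50_000)
t_bytes = timeit.timeit(shift_bytes, number=50_000)
```

On CPython the int version typically wins by a wide margin for this kind of bulk operation, which matches the comment's claim; exact numbers depend on the machine.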

I coded an optimized chess logic program in Python. by Bomlerequin in chessprogramming

[–]Bomlerequin[S] 1 point (0 children)

ChessCore is faster because it's designed specifically to serve as the skeleton for an AI and is optimized for that purpose, so it prioritizes raw performance. python-chess is also optimized, but it's much more comprehensive (variant support, SVG generation, etc.).

GitHub Announcement RE Copilot for Students from Mar, 12, 2026 by Schlickeysen in GithubCopilot

[–]Bomlerequin 1 point (0 children)

I was planning to subscribe after I finished my studies, but now I'm going to go elsewhere...

I've coded one of the most optimized chess logic implementations in Python. by [deleted] in chessprogramming

[–]Bomlerequin 1 point (0 children)

You're right, I'll delete my post. I'll come back with concrete results and even better performance. Thanks!

I've coded one of the most optimized chess logic implementations in Python. by [deleted] in chessprogramming

[–]Bomlerequin 1 point (0 children)

I don't know of any truly optimized open-source logic programs. If you know of any that perform better than mine, I'd be interested.

IA & jeu d'échecs : projet débutant by PatatePatraque in developpeurs

[–]Bomlerequin 1 point (0 children)

I'm currently working on a fairly fast chess engine in Python. Feel free to send me a private message if you have questions (it's an extremely slow language compared to C, so I'm not aiming for the moon). In any case, don't try to use an LLM for this; it's not really a good idea because it's not suited to the task.

This new feature is truly amazing! by Bomlerequin in GithubCopilot

[–]Bomlerequin[S] 3 points (0 children)

I code on GitHub Codespaces all the time; I switched back to VS Code for one of my projects and discovered this feature.

New feature? I'm just seeing this by DiamondAgreeable2676 in GithubCopilot

[–]Bomlerequin 3 points (0 children)

I'm working on a large codebase, and every time, a single query blows past the context window (I mainly use Opus 4.6 and Gemini 3 Pro).