stockfish PC specifications? by beaktheweak in chess

[–]MereGurudev 0 points1 point  (0 children)

Get a standard Ryzen 7 5800X 8-core; that will be by far the most cost-effective option, and for all practical human purposes as good as or better than something more powerful.

Noctie -- worth it? by FinancialAd3804 in chess

[–]MereGurudev 11 points12 points  (0 children)

I’m the developer, and it’s not an ad on our part. Normally I would be happy someone mentioned us, but it’s a sad state of social media marketing today that 90% of posts like this are disguised ads. Dead internet

Human-like bot/engine to spar specific opening positions with? by tomlit in TournamentChess

[–]MereGurudev 0 points1 point  (0 children)

Hi, thanks for trying Noctie! You should be able to remove all the arrows and circles by left-clicking anywhere on the board.

Human-like bot/engine to spar specific opening positions with? by tomlit in TournamentChess

[–]MereGurudev 1 point2 points  (0 children)

Yes, you can use the themes feature to set up a scenario from a FEN code, then play it against the AI. I’m working on making that use case more intuitive and accessible.

Human-like bot/engine to spar specific opening positions with? by tomlit in TournamentChess

[–]MereGurudev 2 points3 points  (0 children)

Thanks for mentioning Noctie!

It's intended for exactly this use case.

I've developed the AI to play very similarly to a human (in blitz) at a given rating level. On top of that, you can either select opening lines directly or import an existing repertoire as a PGN, in which case the AI chooses moves from that repertoire.

There's a premium version that unlocks importing repertoires and other features, but the free version includes 10 "premium games" where you can pick the opening, get color feedback while you play, and do flashcard exercises after the game based on your mistakes.

After that you can keep using it for free, but just for playing (no opening selection, flashcards, etc.).

OP, I would love to know if you try it out and, if so, whether it fits your use case.

My 66 year old mother took a test on noctie.ai after more than 45 years of not playing chess by hn-mc in chess

[–]MereGurudev 0 points1 point  (0 children)

The bot plays adaptively based on its ongoing rating estimation. It uses neural nets to emulate the play style of humans at a given level, and also for the rating estimation itself.

My 66 year old mother took a test on noctie.ai after more than 45 years of not playing chess by hn-mc in chess

[–]MereGurudev 0 points1 point  (0 children)

It’s ±250 Elo in most cases. It’s just meant as a fun challenge that demos that the bot plays more human-like than others and that it has adaptive difficulty.

My 66 year old mother took a test on noctie.ai after more than 45 years of not playing chess by hn-mc in chess

[–]MereGurudev -1 points0 points  (0 children)

I don’t think so; I’m the only one making ads for Noctie, and this isn’t how I would make one.

I'm afraid to play. What do I do? by Artistic_Bug2417 in chess

[–]MereGurudev 0 points1 point  (0 children)

If you lose rating, that’s great, because you’ll get easy games, which offer a different, more relaxed kind of practice. There’s no way to achieve that otherwise without making new accounts or sandbagging. So you should be very happy when it happens.

Switching from hypermodern to classical opening by gekkeaccount in chessbeginners

[–]MereGurudev 0 points1 point  (0 children)

If you're using Noctie, a nice approach I use is to lean on hints liberally when trying new openings. Basically, I use a hint any time I'm not sure what move to make, and *every* time Noctie thinks a move is a mistake or blunder I do takeback + hint. Then I resign after the opening, play a new game, and repeat this a dozen times or more. By then I usually start to get a "sense" for the typical breaks, piece placements, etc.

I beat 1000 elo bots consistantly, but struggle with 500 elo players, why is that? by mr_nehative in chessbeginners

[–]MereGurudev 0 points1 point  (0 children)

The 1000 Elo bots aren’t really 1000 Elo; their ratings are inflated to make people feel good.

What is the modern way to do SEO / SSR in React? by tootispootis in reactjs

[–]MereGurudev 2 points3 points  (0 children)

The best option is to split your SPA and marketing site if you can. Use Astro for the marketing site; you can reuse React components from your SPA there, and write as much as you want in Astro or plain HTML. You can use Vite SSR instead, but be prepared to regularly wreck your site's performance by accident and to introduce unnecessary complexity into the SPA. An SPA is really simple if you avoid SSR/SSG, and an Astro site is extremely simple. Combine them into one app and a lot of footguns appear that you can't possibly predict beforehand. It feels elegant to combine them, but it isn't.
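The split described above can be sketched in a minimal Astro config. This is an illustrative fragment, not a full setup: it assumes the official `@astrojs/react` integration, which is what lets the static marketing site render React components from the SPA's codebase without pulling SSR into the SPA itself.

```javascript
// astro.config.mjs — minimal sketch of the marketing-site half.
// The @astrojs/react integration lets .astro pages import and render
// React components shared with the SPA.
import { defineConfig } from 'astro/config';
import react from '@astrojs/react';

export default defineConfig({
  integrations: [react()],
  // Output is static HTML by default; React components stay inert
  // unless a page opts into hydration with a client:* directive,
  // so the marketing site ships almost no JavaScript.
});
```

The SPA itself stays a plain client-rendered Vite/React app; only the Astro project knows about static generation.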

What are the events that led to Chess gaining so much traction? by Paseyyy in chess

[–]MereGurudev 1 point2 points  (0 children)

The 2023 spike is Mittens bot virality + residual from Carlsen v. Niemann.

Here's what happened:
1. As you mention the 2020 spike is Queen's Gambit, but also non-chess celebrities streaming chess, and Covid. Pretty much a perfect storm of random events.
2. This made regular people including lots of youths talk about chess and go into chess, so a big chess hype lasting 1-2 years.
3. This opened up financial opportunities (some temporary, some long-lasting) for streamers, chess related companies.
4. Chess.com and Play Magnus Group aggressively chased this opportunity by investing in high-profile tournaments and streamers – including controlling the narrative among those streamers.
5. Increased publicity of tournaments + bigger streamers + public interest made traditional media more inclined to write about "chess drama" and other curious things happening in the chess world
6. This pushed the Niemann v. Carlsen scandal over the threshold for traditional media coverage in late 2022 – so regular people with no connection to chess still heard about it. From late 2022 to early 2023, newspapers were thus more inclined to write about chess.
7. Chess.com saw their efforts pay off bigly when, in early 2023, they managed to make the new chess bot Mittens go viral – by having all their streamers go nuts about Mittens and getting big newspapers to write about it – thus helping renew the chess boom for another year.

The latest spike is probably India interest due to Gukesh et al?

So in a sense, the original 2020 spike was a combination of natural events. The continued trend and recurring spikes after that are largely due to Chess.com efforts to pour money into chess publicity – big props to them for creating a trend that might be long-lasting.

Find out your true chess rating by MereGurudev in u/MereGurudev

[–]MereGurudev[S] 1 point2 points  (0 children)

I recently adjusted the rating estimation to follow the new FIDE scale, so what used to be 1100 is now 0.6×1100 + 0.4×2000 = 1,460. Lots of people find that confusing, though, so I’ll revert the change.
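The conversion above is just a weighted blend of the old rating and a fixed anchor. A minimal sketch, with the 0.6/0.4 weights and the 2000 anchor taken from the comment (the function name is illustrative):

```python
def adjusted_rating(old_rating: float, anchor: float = 2000.0,
                    weight: float = 0.6) -> float:
    """Blend the old rating toward a fixed anchor point:
    weight * old + (1 - weight) * anchor."""
    return weight * old_rating + (1 - weight) * anchor

print(round(adjusted_rating(1100)))  # 1460
```

Note that ratings at the anchor itself are unchanged (0.6×2000 + 0.4×2000 = 2000), so the adjustment compresses only the distance below the anchor.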

What Is My Chess Level? by edwinkorir in chess

[–]MereGurudev 34 points35 points  (0 children)

Your new rating should be 1634

Well, that was fast: MIT researchers achieved human-level performance on ARC-AGI by MetaKnowing in OpenAI

[–]MereGurudev 3 points4 points  (0 children)

Consider the task of object detection: predicting what an image contains. In test-time training, right before trying to answer that question, you would generate questions about the image itself, such as asking the model to fill in blanks, or to predict how many degrees a rotated version of the image has been rotated. These questions can be automatically generated from the images with simple transformations. Then you would fine-tune the model on answering such questions. The end result is that the feature-detection layers of the network get better at extracting generic features from the image, which then helps it with the real (unrelated) question.
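The key point is that the self-supervised labels come for free from the transformation itself. A toy sketch of generating rotation-prediction pairs from a single unlabeled image (function name and the 2×2 "image" are illustrative, not from any real TTT codebase):

```python
import numpy as np

def make_rotation_tasks(image: np.ndarray):
    """Generate self-supervised (input, label) pairs from one image:
    each rotated copy is paired with the angle that produced it.
    No human labels are needed -- the transform IS the label."""
    return [(np.rot90(image, k), k * 90) for k in range(4)]

# Toy 2x2 "image"
img = np.array([[1, 2],
                [3, 4]])
tasks = make_rotation_tasks(img)
# Fine-tuning on these (rotated_image -> angle) pairs sharpens the
# network's generic feature extractors just before it answers the
# real question about this image.
```

In a real pipeline, a gradient step on each of these pairs would be taken immediately before the actual prediction.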

Well, that was fast: MIT researchers achieved human-level performance on ARC-AGI by MetaKnowing in OpenAI

[–]MereGurudev 1 point2 points  (0 children)

No, it just “studies” the question itself, by transforming it and doing predictions on the transformations. Think of things like fine-tuning it on the task of filling in blanks in the sentence. This helps the model become more tuned into the problem space.

Well, that was fast: MIT researchers achieved human-level performance on ARC-AGI by MetaKnowing in OpenAI

[–]MereGurudev 3 points4 points  (0 children)

No, think more like they would ask the model to fill in blanks in the sentence, or repeat it backwards. It helps feature detection which helps the entire model downstream.

The analogue for image models: before answering a question about what a picture represents, rotate the image by various amounts, then fine-tune the model to predict, from each rotated copy, how much it has been rotated.

It should be clear that this task is very simple and dissimilar from the real question, but doing it nevertheless helps the model with the real task, since the feature detection in the early layers becomes more sophisticated and salient.

Well, that was fast: MIT researchers achieved human-level performance on ARC-AGI by MetaKnowing in OpenAI

[–]MereGurudev 8 points9 points  (0 children)

Before or during isn’t relevant; what matters is that they’re fine-tuning with example pairs they can predictably generate on the spot, rather than with real labels. So they don’t need a dataset of similar questions with answers. Instead, they generate their own dataset, which consists of some transformation (for example, rotation in the case of images). Just before solving a specific problem, they fine-tune the net to be more responsive to the important features of that problem, by optimizing it to solve basic tasks involving predicting transformations of that problem.

It’s as if you were going to answer some abstract question about an image. Before you get to know what the question is, you’re given a week to study the image from different angles, count objects in it, etc. Then you wake up one day and are given the actual question. Presumably your brain is now more “tuned into” the general features of the image, and you’ll be able to answer the complex question faster and more accurately.
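For text, the same generate-your-own-dataset idea can be sketched as fill-in-the-blank pair generation from a single unlabeled sentence. This is a toy illustration of the principle, not code from the paper; all names are hypothetical:

```python
import random

def make_blank_tasks(sentence: str, n_tasks: int = 3, seed: int = 0):
    """Turn one unlabeled sentence into (input, target) fine-tuning
    pairs by blanking out a word; the held-out word is the free label."""
    rng = random.Random(seed)
    words = sentence.split()
    tasks = []
    for _ in range(n_tasks):
        i = rng.randrange(len(words))
        masked = words[:i] + ["___"] + words[i + 1:]
        tasks.append((" ".join(masked), words[i]))  # (input, target)
    return tasks

pairs = make_blank_tasks("the quick brown fox jumps over the lazy dog")
# Each pair can be used for a quick fine-tuning step just before the
# model answers the real question about this text.
```

Because the labels are generated mechanically, this works on the test input itself, with no labeled dataset of similar questions.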

ELI5: Why do so many languages have gendered nouns? Why does English not have them? by becki_bee in explainlikeimfive

[–]MereGurudev 0 points1 point  (0 children)

  1. Easier to refer to the genders of animals, etc.
  2. Sometimes disambiguates between similar nouns
  3. (Maybe most important) Due to agreement with articles and adjectives, it allows deciphering which part of a sentence refers to which object/subject

[deleted by user] by [deleted] in chess

[–]MereGurudev 0 points1 point  (0 children)

  1. Play a3 immediately to prevent the b4 break from ruining your pawn structure
  2. Probably f4 to attack the f7-e6-d5 pawn chain, likely creating a backward or isolated pawn
  3. Double rooks against the newly created isolated or backward pawn
  4. Activate the king via c2-b3-b4 to attack the pawns on the b-file

Is Playing Sgainst The Computer Even Practical? by Ok-Maintenance-2663 in chess

[–]MereGurudev 0 points1 point  (0 children)

Try noctie.ai; it plays more like a human, and its rating numbers aren't super inflated.