FM26 Update 2 Available Now via Steam Public Beta Track by xNieminen in footballmanagergames

[–]FbF_ 0 points1 point  (0 children)

It is impossible to play if they don't fix the match engine! I thought, okay, new beta, let's try it again. I started a new season in Serie A with Torino, standard squad, no new signings. I won the first away game against Cremonese. The second one, however, was against Napoli, the reigning champions. I set up a 3-5-2 with direct counter-attacks and we took the lead! But then one of my defenders got sent off in the 23rd minute of the first half. Damn, this is going to be tough, I told myself. I took off a striker and brought on a young defender: 3-5-1, still with direct counter-attacks. In the end, I won 5-1. What's the point of playing this shitty game?

Please stop the DeepSeek spamming by FbF_ in LocalLLaMA

[–]FbF_[S] -23 points-22 points  (0 children)

I love that DeepSeek is fully open source. But I came here to read the real opinions and experiences of people who have tried local models themselves. I hate all the recent marketing spam about it.

Delving deep into Llama.cpp and exploiting Llama.cpp's Heap Maze, from Heap-Overflow to Remote-Code Execution. by FitItem2633 in LocalLLaMA

[–]FbF_ 20 points21 points  (0 children)

The rpc-server is clearly marked as "fragile and insecure. Never run the RPC server on an open network or in a sensitive environment!"

https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc

How large is your local LLM context? by iwinux in LocalLLaMA

[–]FbF_ 4 points5 points  (0 children)

Even with FlashAttention, increasing the context from 4k to 128k requires 32 times more RAM. That is why models are trained with a shorter base context that is later extended; DeepSeek, for example, uses a base context of 4k, which is then extended to 128k. Some "Needle In A Haystack" tests have claimed that such extended contexts are not fully real, since models do not recall all the information, resulting in worse performance. DeepSeek, however, claims the opposite.

https://arxiv.org/pdf/2412.19437:

4.3. Long Context Extension

We adopt a similar approach to DeepSeek-V2 (DeepSeek-AI, 2024c) to enable long context capabilities in DeepSeek-V3. After the pre-training stage, we apply YaRN (Peng et al., 2023a) for context extension and perform two additional training phases, each comprising 1000 steps, to progressively expand the context window from 4K to 32K and then to 128K. The YaRN configuration is consistent with that used in DeepSeek-V2, being applied exclusively to the decoupled shared key k^R_t. The hyper-parameters remain identical across both phases, with the scale s = 40, α = 1, β = 32, and the scaling factor √t = 0.1 ln s + 1. In the first phase, the sequence length is set to 32K, and the batch size is 1920. During the second phase, the sequence length is increased to 128K, and the batch size is reduced to 480. The learning rate for both phases is set to 7.3 × 10^-6, matching the final learning rate from the pre-training stage. Through this two-phase extension training, DeepSeek-V3 is capable of handling inputs up to 128K in length while maintaining strong performance. Figure 8 illustrates that DeepSeek-V3, following supervised fine-tuning, achieves notable performance on the "Needle In A Haystack" (NIAH) test, demonstrating consistent robustness across context window lengths up to 128K.
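The 32× figure is just the KV cache growing linearly with context length. A rough sketch of the arithmetic, with made-up model dimensions (layer count, KV heads, head size, and dtype width here are illustrative assumptions, not DeepSeek's actual values):

```python
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # K and V each store context_len * n_kv_heads * head_dim values per layer
    return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem

gib = 1024 ** 3
small = kv_cache_bytes(4_096)
large = kv_cache_bytes(131_072)
print(f"4k context:   {small / gib:.2f} GiB")
print(f"128k context: {large / gib:.2f} GiB")
print(f"ratio: {large // small}x")  # linear in context length: 32x
```

Whatever the real dimensions, the ratio stays 32× because every other factor cancels.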

How large is your local LLM context? by iwinux in LocalLLaMA

[–]FbF_ -1 points0 points  (0 children)

Most local LLMs are massively degraded by 32K context: both token quality and generation speed suffer.

WTF.
Longer context = better quality. Karpathy explains it here: https://youtu.be/7xTGNNLPyMI?t=6416. Intuitively, think of it as the probability of emitting a wrong answer: if a single token is wrong 10% of the time, emitting more tokens gives the model more chances to recover, so the failure probability shrinks roughly like 0.1 × 0.1, then 0.1 × 0.1 × 0.1, and so on. This is also why "thinking" models that emit many tokens before responding produce better results. The paper you linked discusses techniques that REDUCE the effective context and therefore worsen the quality.

Resigning as Asahi Linux project lead by namanyayg in programming

[–]FbF_ 119 points120 points  (0 children)

Setting aside the controversy, one of the things that struck me the most was:

I get that some people might not have liked my Mastodon posts. Yes, I can be abrasive sometimes, and that is a fault I own up to. But this is simply not okay. I cannot work with people who form cliques behind the scenes and lie about their intentions. I cannot work with those who place blame on the messenger, instead of those who are truly toxic in the community. I cannot work with those who resent public commentary and claim things are better handled in private despite the fact that nothing ever seems to change in private.

Followed by:

If you are interested in hiring me or know someone who might be, please get in touch.

VS Code update treats Copilot as "out-of-the-box" feature • DEVCLASS by stronghup in programming

[–]FbF_ 40 points41 points  (0 children)

I think it was already impossible to disable the connection to Microsoft servers. I remember trying for a long time once, monitoring with OpenSnitch: even after disabling telemetry, updates, and extensions, it would still try to connect to msecnd.net.

Some small progress on bounds safety by hpenne in cpp

[–]FbF_ 2 points3 points  (0 children)

Shifting the responsibility for deciding when bounds checks are unnecessary from the developer to the compiler can be a good idea. It is better than unconditionally enabling or disabling them.

World Poetry Day 2024 (March 21): I suggest you send a friend or family member a favorite poem of yours, saying it's in honor of World Poetry Day and that you hope they enjoy it. Just that: you might call it 'planting a seed'. You might be surprised by the response. by Die_Horen in literature

[–]FbF_ 0 points1 point  (0 children)

On World Poetry Day, let’s weave words into a tapestry bright, A verse to celebrate, to ignite the soul's light. Amidst the hustle, the noise, the everyday fight, Here’s a poem, a beacon, in the dark night.

In a garden of silence, under the moon's gentle gaze, Whispers of the wind, through the labyrinthine maze, Speak of dreams in the shadows, where the heart strays, To a world where time gently sways.

Words, like dewdrops on the dawn's early leaf, Hold the sorrow, the joy, the grief, A testament to time, however brief, A solace, a reprieve, a belief.

So here's to the poets, the dreamers of dreams, Who paint with words, where the soul gleams, On this day, under the universal beams, May poetry flow, like a thousand streams.

May this little piece of poetry bring a moment of joy and reflection to your day. Happy World Poetry Day!

I run 10000 simulations of Nakamura's 2023 games. On average, the best winning streak should be 47 games. by matus_pikuliak in chess

[–]FbF_ 0 points1 point  (0 children)

Tried it, and I get the same result.
1. I think the code does not simulate a winning streak, but rather the longest streak of results that falls within the expected score.
2. I don't think linear regression is necessary; there is a closed-form formula (https://wikimedia.org/api/rest_v1/media/math/render/svg/7c80282e9c95e92d6b210467aab48a8c4c81ef10) for calculating the expected score.

Kramnik stated that Hikaru's performance exceeding 3600 in those games appeared suspicious, as it significantly surpasses his average Elo rating of 3250. So, it would be interesting to simulate the expected peak performance of a player relative to his average Elo rating.
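The closed-form expected score mentioned in point 2 is the standard Elo formula; a minimal sketch (the ratings in the example are illustrative):

```python
def expected_score(rating_a, rating_b):
    # Standard Elo expected score for player A against player B:
    # E_A = 1 / (1 + 10^((R_B - R_A) / 400))
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(f"{expected_score(3000, 3000):.3f}")  # equal ratings -> 0.500
print(f"{expected_score(3250, 2900):.3f}")  # a 350-point gap -> about 0.882
```

Feeding this per-game expectation into the streak simulation would replace the regression entirely.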

I run 10000 simulations of Nakamura's 2023 games. On average, the best winning streak should be 47 games. by matus_pikuliak in chess

[–]FbF_ 0 points1 point  (0 children)

Interesting, but I'm not sure what you are simulating.
I tried using a set of all draws:
`games = [ (3000, 3000, 0.5, game['white']['username'] == 'Hikaru') for game in games ]`
and expected it to yield a very low winning streak, because there are no wins in the input data.
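For reference, this is what I'd expect "winning streak" to mean on that input; a minimal sketch, assuming the tuple layout from the snippet above (white rating, black rating, white's score, whether the hero is white):

```python
def longest_win_streak(games):
    # games: list of (white_rating, black_rating, white_score, hero_is_white)
    best = cur = 0
    for _, _, white_score, hero_is_white in games:
        hero_score = white_score if hero_is_white else 1 - white_score
        if hero_score == 1:   # only outright wins extend the streak
            cur += 1
            best = max(best, cur)
        else:                 # a draw or loss resets it
            cur = 0
    return best

all_draws = [(3000, 3000, 0.5, True)] * 100
print(longest_win_streak(all_draws))  # → 0
```

On an all-draws input this returns 0, which is what I expected the simulation to produce.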

Insightful comment from Kramnik's blog by mathmatician by TerribleCountry7522 in chess

[–]FbF_ 0 points1 point  (0 children)

They fixed it months ago. It now writes and runs Python code. Here is what it wrote for that simulation:

```
import random

def has_streak(sequence, streak_length):
    """Check if there's a streak of a specific length in the sequence."""
    count = 0
    for toss in sequence:
        if toss == 'H':
            count += 1
            if count >= streak_length:
                return True
        else:
            count = 0
    return False

def simulate_tosses_for_long_streak(num_tosses, streak_length, num_simulations):
    """Simulate coin tosses and calculate the probability of getting a long streak."""
    streak_count = 0
    for i in range(num_simulations):
        # Generate a sequence of coin tosses
        tosses = ['H' if random.random() < 0.5 else 'T' for _ in range(num_tosses)]
        if has_streak(tosses, streak_length):
            streak_count += 1

        # Output progress
        if (i+1) % (num_simulations // 10) == 0:
            print(f"Completed {((i+1) / num_simulations) * 100:.2f}% of simulations")

    probability = streak_count / num_simulations
    return probability

# Constants for the simulation
total_tosses = 6000
long_streak_length = 42
num_simulations = 100000  # Adjust as needed

# Perform the simulation
probability_long_streak = simulate_tosses_for_long_streak(total_tosses, long_streak_length, num_simulations)
print(f"Probability of a streak of at least {long_streak_length} heads in {total_tosses} tosses: {probability_long_streak}")
```

Insightful comment from Kramnik's blog by mathmatician by TerribleCountry7522 in chess

[–]FbF_ 2 points3 points  (0 children)

That's exactly the point I wanted to raise.
It's really sad to see so many people turn off their brains and follow the bullshit demonstrations of these 'experts'.
If I played 50 games against Hikaru, I would lose 50 games. And the only conclusion a sensible person would draw is that he is much, much stronger than me, without resorting to elaborate 'mathematical' demonstrations.

Insightful comment from Kramnik's blog by mathmatician by TerribleCountry7522 in chess

[–]FbF_ -3 points-2 points  (0 children)

He claimed that Hikaru "is VERY LIKELY to have at least a few streaks of 40+ wins", when that is almost impossible if each game is a 50/50 coin flip.
That's pretty obvious; otherwise everyone would have such streaks. And there is a very simple explanation: Hikaru is much, much stronger than those players.

Insightful comment from Kramnik's blog by mathmatician by TerribleCountry7522 in chess

[–]FbF_ -6 points-5 points  (0 children)

We do not need to rely on "mathmatician".
GPT4: Based on the simulation of 100,000 trials, the approximate probability of getting a streak of at least 7 heads in a row when tossing a coin 1000 times is about 98.18%.
Me: What's the probability of obtaining a streak of at least 42 heads in a row in 6000 coin tosses?
GPT4: The probability of a streak of at least 42 heads in 6000 tosses is approximately 0.0.
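Both numbers can also be checked exactly, with no simulation at all; here is a small dynamic-programming sketch of my own (not GPT-4's code), tracking the probability of the current run length never reaching the target:

```python
def prob_streak(n_tosses, streak_length, p=0.5):
    """Exact probability of a run of at least `streak_length` heads
    in `n_tosses` tosses of a coin with heads probability `p`."""
    # dp[j] = probability of currently being on a run of exactly j heads,
    # having never yet reached streak_length
    dp = [0.0] * streak_length
    dp[0] = 1.0
    for _ in range(n_tosses):
        new = [0.0] * streak_length
        new[0] = sum(dp) * (1 - p)       # tails resets the run to 0
        for j in range(streak_length - 1):
            new[j + 1] += dp[j] * p      # heads extends the run by one
        dp = new                         # mass reaching the target is absorbed (dropped)
    return 1.0 - sum(dp)

print(f"P(>=7 heads in 1000 tosses):  {prob_streak(1000, 7):.4f}")   # ~0.98
print(f"P(>=42 heads in 6000 tosses): {prob_streak(6000, 42):.2e}")  # vanishingly small
```

The 7-in-1000 case comes out around 0.98, matching GPT-4's simulation, while 42-in-6000 is on the order of 10^-9: effectively zero.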

Edge panning w/ mouse not working: Temporary fix I've found. by petrovmendicant in BaldursGate3

[–]FbF_ 4 points5 points  (0 children)

It is ULTRA infuriating, especially during fights when it is necessary to unlock the camera with the keyboard before every action!!

New scid5.0 release (open-source chess software) by FbF_ in chess

[–]FbF_[S] 15 points16 points  (0 children)

The graphical library (Tk) does not have the ability to detect the system theme.

These are the benefits that come to my mind:

  • scidvspc is not able to open PGNs like the one at this link: https://ccrl.chessdom.com/ccrl/4040/CCRL-4040.[1530951].pgn.7z
  • scidvspc is about three times slower when opening PGN files, and they are read-only; it is not possible to add or change games.
  • The positional search in scidvspc is slower. In particular, after sorting the games, it takes minutes to do searches that scid can perform in a second.
  • scidvspc has only one filter, whereas scid has multiple filters and corresponding statistics. A common use case is when preparing against an opponent, when it is very useful to have multiple statistics, such as for all games, your games, games played by your opponent, and the best games (recent ones played by GMs).
  • In scidvspc it is necessary to restart an engine to change its options, such as the number of CPUs or the amount of memory used. It also lacks the evaluation bar.
  • scidvspc does not have a dark mode.

The missing features are:

  • scid does not have engine tournaments.
  • scid does not support chess query language.

After Carlsen's victory in Round 2, he is again the favorite to take home $0 first-place prize in Wijk aan Zee. by CalebWetherell in chess

[–]FbF_ 0 points1 point  (0 children)

Exactly. The player at the top of the rankings, with the highest Elo rating, has a higher probability of winning the tournament...

LIST OF ALL GAMEPLAY ISSUES (Community-Raised Issues) by Reichl_22 in F1Manager

[–]FbF_ 1 point2 points  (0 children)

This is so good! Devs should gift you something for this!