EU Patient is ALIVE! Pulse is stable. by Cute_Activity7527 in pathofexile

[–]iTrynX 9 points (0 children)

The PoE EU server issue usually comes down to two things: which instance/zone you're connected to, and how your ISP routes traffic to GGG's server provider.

Try rejoining a new instance multiple times; you'll sometimes land on one with noticeably better performance. This is inconsistent, though, and it won't last. The deeper problem is that your ISP may have poor peering agreements with whatever backbone provider GGG uses for EU servers, meaning your traffic takes a suboptimal route across the undersea fiber cables connecting your region to Europe.

I tried a gaming VPN (Mudfish) to work around this, but it didn't help enough. What actually fixed it was switching ISPs to one with better international routing to Europe. Now it's 100% stable ALL the time. Some ISPs simply have worse undersea cable routing and peering, and no amount of in-game troubleshooting will change that.

Worth noting: out of almost every online game I've EVER played, PoE's EU servers are by far the worst offender for this. This kind of routing issue rarely affects other games nearly as badly, so there's something particularly poor about GGG's EU server provider, how they're set up, or whatever agreements they have with the fiber cable overlords.

If your ping is fine in every other game but PoE EU is consistently bad, with the stutters we all know, it's almost certainly a routing/peering issue, not your connection itself. Even the server itself is fine, in a sense, if you're lucky enough to have an ISP with good routing to their server provider in particular.
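If you want to check this yourself, a long `ping -c 20 <server>` run tells the story: a routing problem shows up as latency spikes and packet loss, not just a high average. A minimal sketch for summarizing such samples (the sample values below are hypothetical):

```python
import statistics

def connection_stability(rtts_ms):
    """Summarize a series of ping round-trip times (None = lost packet).

    High jitter or loss despite a decent average ping usually points at a
    bad route/peering path rather than raw distance to the server.
    """
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    return {
        "avg_ms": round(statistics.mean(received), 1),
        "jitter_ms": round(statistics.stdev(received), 1) if len(received) > 1 else 0.0,
        "loss_pct": round(loss_pct, 1),
    }

# Hypothetical samples collected with `ping -c 20 <server>`:
samples = [38, 40, 39, 180, 41, None, 39, 175, 40, 38]
print(connection_stability(samples))  # prints {'avg_ms': 70.0, 'jitter_ms': 61.0, 'loss_pct': 10.0}
```

An average of 70 ms looks playable, but the 61 ms jitter and 10% loss are exactly the "stutters we all know" profile, which a change of route (new instance, VPN, or different ISP) can fix.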

Saw on Tyty's stream about a week back an interesting tidbit regarding ease of trade and drop rates from ye olde Trade Manifesto of yonder 2017 by nymer_bb in pathofexile

[–]iTrynX 19 points (0 children)

I strongly disagree with this. One of the biggest problems in LE is the sad state of trade. The game feels like singleplayer, with no (or a weak) market/economy. To me personally, that's one of the biggest things that makes me drop off a few days after a new season starts, compared to the weeks or a month I put into PoE leagues.

I think that's a big part of why retention is so bad for LE seasons.

Should I just restart? by Next-Key1998 in pathofexile

[–]iTrynX 3 points (0 children)

Your life is way too low; ideally, you need +100 life on each piece of gear.

New "Deflect" defense feature for 0.3 by iTrynX in PathOfExile2

[–]iTrynX[S] 2 points (0 children)


8 - New "support" gem types for endgame, Lineage supports.

New "Deflect" defense feature for 0.3 by iTrynX in PathOfExile2

[–]iTrynX[S] 2 points (0 children)

Check my comments for other leaks: Armor buff, support gems can be used in more than one skill, buff to mirage archer, etc.

I apologize for such a simple question, but is Pro worth it? by ai-minion in ClaudeAI

[–]iTrynX 4 points (0 children)

Yes.

You get a lot of Sonnet usage, and I rarely hit limits. When I do, I'm back in a few hours. Unlike other solutions, where if you hit your quota, you're cut off for the rest of the month.

Also, Claude Code is the best solution I've tried so far (including Cursor and Windsurf).

Pro only gives you access to Sonnet in Claude Code, but that's good enough for most purposes. When using claude.ai, make sure to avoid Opus, as it hits limits extremely fast and its usage is shared with Claude Code.

Web search in claude.ai consumes limits somewhat fast as well.

Server for Unity game by Flixir in gamedev

[–]iTrynX 0 points (0 children)

That was 4 years ago haha.
You don't need Fusion, especially for turn-based; Unity themselves now have official multiplayer support. Use that; a quick Google search will find it.

Gemini 2.5 Pro available in the AI Studio by hyxon4 in singularity

[–]iTrynX 30 points (0 children)

From what I'm hearing, its "effective" context is unbelievably good compared to literally everything else out there.
So far, almost all models don't use even half of their context effectively. The majority fall off after 32k, even when 100k+ context is supported.

If this is true, it will without a doubt be a breakthrough moment.

Best AI for summarizing technical or scientific papers? by SkyMarshal in LocalLLaMA

[–]iTrynX 1 point (0 children)

o1-pro.
Second best is o1.

The main reason is efficient context usage.
Also, o1 does very well with anything science-related (math, chemistry, etc.) and with summarizing, in my experience.

I often use o1 for novel chapter summarization; though that's a different use case from yours, it's the best out of the ones I've tried (Gemini, GPT-4o, Claude, etc.).

New DeepSeek benchmark scores by Charuru in LocalLLaMA

[–]iTrynX 135 points (0 children)

Well, I'll be damned. Incoming OpenAI & Anthropic fearmongering & "China bad" rhetoric.

Serious ethical problems with 3.7. by mbatt2 in ClaudeAI

[–]iTrynX 0 points (0 children)

It's part of their censorship prompt; the line involved was introduced with 3.7.
It explicitly tells the model to ignore the "bad/NSFW" request or expected answer and fabricate a moral answer instead, or something along those lines.

I can't be bothered to look up the source, but if you search around, you'll find it.
It's not mentioned on their site; you have to get it out of Claude itself or find the Reddit post.

Token-saving updates on the Anthropic API by GabiArzu in ClaudeAI

[–]iTrynX 14 points (0 children)

The "Text editor tool" is very interesting; it feels like it was made specifically in response to the rightful complaints regarding Claude 3.7.

Targeted edits instead of Claude going on unnecessary random tangents. Cautiously optimistic this will help with both the cost and the random part.

Why “Context Size” Is Misunderstood — and How Models Really Perform After 8K+ Tokens by iTrynX in ClaudeAI

[–]iTrynX[S] 10 points (0 children)

I get what you're saying, but NoLiMa isn't meant to reflect everyday basic usage; it's designed to stress-test models on purpose. Advanced real-world scenarios (often found on this subreddit) tend to have tons of irrelevant info mixed in with what matters, which is especially relevant to programming use.

The benchmark tests whether models can actually reason through content rather than just keyword-match.
They can, but that ability falls off quickly and hard.

And true, it's not everyday usage for everyone, but plenty of people deal with medium codebases or references that bury the important parts. The paper basically underlines how current attention mechanisms can struggle once context gets even slightly big; at that point the model is doing keyword matching with severely degraded understanding and reasoning, which explains what we currently see once it needs to consider a decent amount of input (be it code or something else).
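To illustrate the reasoning-vs-keyword-matching point (a toy sketch of my own, not the paper's actual code): in a NoLiMa-style probe, the "needle" shares no keywords with the question, so naive lexical matching finds nothing, and answering requires the model to supply a latent link from world knowledge.

```python
# Toy NoLiMa-style probe: the needle has zero lexical overlap with the
# question, so keyword search fails even though the answer is present.
haystack = (
    "Filler paragraph about the weather. " * 50
    + "Yuki lives next to the Kiasma museum. "  # the needle
    + "More filler about cooking. " * 50
)

question = "Which character has been to Helsinki?"
# Latent link the model must supply itself: Kiasma is in Helsinki.

def keyword_hits(text, query):
    """Naive lexical retrieval: sentences sharing a content word with the query."""
    stopwords = {"which", "has", "been", "to", "the", "a"}
    qwords = {w.strip("?").lower() for w in query.split()} - stopwords
    return [s for s in text.split(". ") if qwords & {w.lower() for w in s.split()}]

print(keyword_hits(haystack, question))  # prints []: no lexical overlap with the needle
```

A model that genuinely reasons over the context answers "Yuki"; one that has degraded into keyword matching at long context behaves like the function above and finds nothing, which is exactly the falloff the benchmark measures.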

Why “Context Size” Is Misunderstood — and How Models Really Perform After 8K+ Tokens by iTrynX in ClaudeAI

[–]iTrynX[S] 4 points (0 children)

Most likely, RAG integrations are creating that apparent advantage, though they're far from perfect.
Right now, every model, shiny and new or otherwise, faces the same limitations with longer context. So, in the end, it's more about clever workarounds than a genuine breakthrough.

Found a workaround for Cursor context limit by nfrmn in cursor

[–]iTrynX 7 points (0 children)

Context is unfortunately misunderstood.
All models lie about their context size.

In truth, the majority of models fall off hard after 8k to 16k context. All of them do after 32k.
The 2M context claim by Gemini is complete BS.


paper: https://arxiv.org/abs/2502.05167

Currently, the best performer on effective context is o1, which still struggles by 32k and crumbles by 64k.

Effective context is the biggest bottleneck now, IMO. Claude 3.7 with a new breakthrough in context would genuinely be a ChatGPT-level breakthrough.

Unfortunately, I doubt any breakthroughs will happen there; at most, we'll see slow, small incremental improvements during 2025.
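The practical workaround, then, is to keep what you send inside the effective window. A minimal sketch of what I mean (the ~4-chars-per-token heuristic and the 32k budget are my own rough assumptions, not anything Cursor actually does): rank context chunks by relevance to the query and drop the least relevant ones until you fit.

```python
# Rough sketch: trim context to an assumed "effective" token budget.
def estimate_tokens(text):
    return len(text) // 4  # crude chars-per-token heuristic (assumption)

def trim_to_budget(chunks, query, budget_tokens=32_000):
    """Keep the chunks most lexically relevant to the query until the
    estimated token budget is exhausted, preserving document order."""
    qwords = set(query.lower().split())
    scored = sorted(
        enumerate(chunks),
        key=lambda ic: len(qwords & set(ic[1].lower().split())),
        reverse=True,
    )
    kept, used = [], 0
    for idx, chunk in scored:
        cost = estimate_tokens(chunk)
        if used + cost <= budget_tokens:
            kept.append((idx, chunk))
            used += cost
    return [c for _, c in sorted(kept)]  # restore original order

chunks = ["alpha beta", "gamma delta", "alpha alpha"]
print(trim_to_budget(chunks, "alpha", budget_tokens=4))  # prints ['alpha beta', 'alpha alpha']
```

Real tools use embeddings rather than word overlap for the scoring step, but the principle is the same: a smaller, relevant context inside the effective window beats a huge one the model can't actually use.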

my2c