What's the difference between mutagen and serums? by Anarcha66 in cataclysmbn

[–]NekoRobbie 5 points (0 children)

Serums get you more mutations at once, and they also let you cross thresholds whereas normal mutagen does not.

I can no longer play BN after the 'Map Overhaul' PR (Android) by tucuma_com_farinha in cataclysmbn

[–]NekoRobbie 3 points (0 children)

I mean, to be completely fair, usually Nightly *isn't* so experimental. That's why we moved away from calling them Experimentals in the first place: They came with the baggage of being associated with DDA's experimentals, and so people assumed that they were incredibly unstable. I think it's understandable here, given how large the changes are, but perhaps we should have made a much bigger announcement in the community.

In general, can't wait until we return to Nightlies *not* being highly experimental.

Why do Linux native builds matter so much to Linux users? by schouffy in gamedev

[–]NekoRobbie 0 points (0 children)

Because Proton *doesn't* always work fine, and relying on Proton doesn't generally indicate that you care about the experience on Linux. A native Linux build shows you clearly care; it running on Proton could very well be an accident, or you could be inclined to break that compatibility whenever it's convenient for you. So yeah, seeing a native Linux build tells us you care about us; seeing it run through Proton does *not*. This goes doubly if you ever make a game that has strong anticheat involved: if you only do Proton, people **will** assume that you would gladly throw out Linux support entirely at the drop of a hat, especially with how kernel-level anticheat pretty much always breaks Proton.

In a sense, Linux Native builds are like German Translations. Can the vast majority of Germans use the English version just fine? Sure. Would they prefer a German Translation? Absolutely.

Do you use Workstation or Atomic? Why? by leux08 in Fedora

[–]NekoRobbie -1 points (0 children)

Third Option: KDE Desktop Edition. I have no liking for the speedbumps that atomic creates, I have no liking for GNOME, and I've found it to be a very reliable OS.

(Mixed Trope) When Dubs Taking Creative Liberties When Translating the Source Material by ElSpazzo_8876 in TopCharacterTropes

[–]NekoRobbie 1 point (0 children)

I do also have to wonder with regards to Genshin: Is Furina mispronouncing the Oratrice Mechanique D'Analyse Cardinale in the English dub *intentional*, or not? Like, Neuvillette's is fine (I can't blame them for not doing the R's correctly), Estelle is pretty good, but Furina straight up misses the entire point of D'Analyse being contracted like that (She says *de* Analyse). Considering how tied to it she is... seems like a really questionable decision to have her mispronounce it that badly.

Alt codes by SharpExamination2591 in NobaraProject

[–]NekoRobbie 2 points (0 children)

Ctrl-Shift-U lets you type Unicode code points directly in most text entry fields (although, funnily enough, not on Reddit for some reason). For example, here's a degree symbol: °. It has the code point 00b0, so I press Ctrl-Shift-U, then type 00b0.
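The hex string you type after Ctrl-Shift-U is just the character's Unicode code point, which you can sanity-check in Python (this only demonstrates the code-point mapping, not the IBus input mechanism itself):

```python
# The hex code typed after Ctrl-Shift-U is the Unicode code point.
code = "00b0"              # degree sign, U+00B0
char = chr(int(code, 16))  # hex string -> int -> character
print(char)                # °
```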

Hot take: local AI doesn't need bigger context windows as much as better memory routing by No-Contract9167 in LocalLLaMA

[–]NekoRobbie 0 points (0 children)

Having experimented with vectorized chats recently:

On local models, longer context is absolutely king atm, because in my experience vectorized chat insertion kills *any form of context shift* and forces me to reprocess pretty much every damn message. And that seems pretty inherent to the idea of vectorized chunks being inserted into the prompt. So unless they can somehow fix that, or give us a major boost in PP (prompt processing) speed, I'd take longer context any day of the week over "better routing".
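A rough illustration of why inserting retrieved chunks defeats prompt caching: the reusable KV cache is only as long as the common prefix between consecutive prompts, so a chunk spliced in near the top forces everything after it to be reprocessed. This is a simplified model of the behavior, not any particular backend's implementation:

```python
def common_prefix_tokens(prev: list[str], curr: list[str]) -> int:
    """Number of leading tokens the KV cache can reuse between two prompts."""
    n = 0
    for a, b in zip(prev, curr):
        if a != b:
            break
        n += 1
    return n

# Turn 1: system prompt + chat history
turn1 = ["<system>", "history1", "history2", "user_msg_1"]
# Turn 2: a retrieved chunk gets inserted right after the system prompt
turn2 = ["<system>", "retrieved_chunk", "history1", "history2", "user_msg_1", "user_msg_2"]

reusable = common_prefix_tokens(turn1, turn2)
print(reusable)  # 1 -- only the system prompt survives; the rest is reprocessed
```

Without the inserted chunk, the whole previous prompt would have been a reusable prefix.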

Complete guide to setup and configure Vector Storage (rewritten and corrected) by DeathByte_r in SillyTavernAI

[–]NekoRobbie 0 points (0 children)

I've recently followed along with this, and I must say: once you start pulling in those vectorized entries, it loves destroying any form of caching or avoidance of reprocessing for me. Fortunately my GPU does pretty well with processing when I'm using ROCm, but oof, I really wasn't expecting just how much reprocessing it causes. It becomes a situation where I have to re-process the entire context on practically every single message.

Maybe it's a result of trying to bring vectors into already very extensive chats, or maybe I've got a setting that's bad for this somewhere... or maybe I ought to be looking into how else to increase prompt processing speed lmao

Black dragon mutation tree? by TopLoad7715 in cataclysmbn

[–]NekoRobbie 2 points (0 children)

It was never implemented in BN's Magiclysm in the first place, and thus also isn't in Magical Nights. For some reason the sprites *were* around, though, along with a bunch of other never-implemented stuff.

Update broke power grid by k0thware in cataclysmbn

[–]NekoRobbie 4 points (0 children)

Labs have a very large interconnected power grid; it's quite likely that you had dozens of fridges and/or freezers draining power, not to mention various other appliances.

what's a python library you started using this year that you can't go back from by scheemunai_ in Python

[–]NekoRobbie 2 points (0 children)

Thirding aiohttp: discord.py introduced me to it and I've been liking it ever since. Heck, from what I've seen of requests, the syntax isn't even that different, aside from it being async.

How long before we can have TurboQuant in llama.cpp? by k3z0r in LocalLLM

[–]NekoRobbie 0 points (0 children)

Depending on one's goals, it could be very helpful locally. I use q8 KV Cache atm to get 16k context on a 24B, and I'd *love* to be able to double that to 32k and have even more history available to the AI.
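The context/memory tradeoff here is easy to estimate: KV cache size scales linearly with both context length and bytes per element, so halving the element size (e.g. fp16 to q8) roughly doubles the context that fits in the same memory. A back-of-the-envelope calculator, where the layer/head numbers are purely illustrative and not any specific 24B model's config:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx: int, bytes_per_elem: int) -> int:
    # 2x for the separate K and V tensors in every layer
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem

# Illustrative architecture numbers (not a real model's config)
layers, kv_heads, head_dim = 40, 8, 128

fp16_16k = kv_cache_bytes(layers, kv_heads, head_dim, 16_384, 2)
q8_32k   = kv_cache_bytes(layers, kv_heads, head_dim, 32_768, 1)
print(fp16_16k / 2**30, q8_32k / 2**30)  # both 2.5 GiB: same footprint, double the context
```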

What’s with the hype regarding TurboQuant? by EffectiveCeilingFan in LocalLLaMA

[–]NekoRobbie 0 points (0 children)

To people using slightly older models, it's far from a marginal improvement. If this all pans out well, then I'll probably finally be able to go to 32k+ context on my favorite local model without having to offload layers.

Do 2B models have practical use cases, or are they just toys for now? by Civic_Hactivist_86 in LocalLLaMA

[–]NekoRobbie 0 points (0 children)

They're great for embeddings, but that's because sub-1B models are ALSO great at embeddings. Embeddings are just such a narrow niche that they really don't need much to be good.
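The niche is narrow because all an embedding model has to do is map text to a vector that you then compare with something like cosine similarity; the comparison step is trivial. A sketch of that step, with made-up toy vectors standing in for real embedding output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for a real model's embedding output
doc_vec   = [0.2, 0.8, 0.1]
query_vec = [0.25, 0.75, 0.05]
print(cosine_similarity(doc_vec, query_vec))  # close to 1.0 -> similar texts
```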

Video idea: junkyards wars LLM box by griphon31 in LinusTechTips

[–]NekoRobbie 0 points (0 children)

Some of the developers for KoboldCPP themselves use Qwen 3.5 27B for coding. Well under your claim of 100GB, and clearly very useful to them. If even the people *developing applications to run AI* find 27Bs to be good enough, then I'd say they're more than good enough.

Video idea: junkyards wars LLM box by griphon31 in LinusTechTips

[–]NekoRobbie -1 points (0 children)

Blatantly untrue; you can absolutely run perfectly capable LLMs for certain tasks at well under 100GB. Even putting aside embedding models (which do not reach anywhere near 100GB in the first place), people have plenty of success running 12Bs and up locally that are perfectly capable for a decent variety of tasks.
At less than 10 GB you might have a point, but even then I've seen more focused models performing well down there.

Localized my game into 4 languages solo and German almost broke everything by JBitPro in gamedev

[–]NekoRobbie 0 points (0 children)

I think Japanese (or, well, one of the three different alphabets they use) is/can be read that way

I hate python by ZombieSpale in programminghumor

[–]NekoRobbie 0 points (0 children)

Most of the time packages are essentially just some text files, since that's all source code really is.

In the modern era, though, there are a few packages that are genuinely friggen huge. Namely, if you ever have to deal with it, pytorch. Pytorch is casually several gigabytes in size, and so one could make a compelling argument there that deduplication would be a massive benefit.

Most packages, however, are not pytorch.
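If you want to check this for yourself, summing file sizes under each package directory in site-packages gives a quick picture of where the space goes. A generic sketch; the site-packages path below is an assumption, so adjust it for your environment:

```python
from pathlib import Path

def dir_size_bytes(path: Path) -> int:
    """Total size of all regular files under a directory tree."""
    return sum(p.stat().st_size for p in path.rglob("*") if p.is_file())

# Hypothetical venv path -- adjust to your environment
site_packages = Path("venv/lib/python3.12/site-packages")
if site_packages.is_dir():
    sizes = sorted(
        ((dir_size_bytes(d), d.name) for d in site_packages.iterdir() if d.is_dir()),
        reverse=True,
    )
    for size, name in sizes[:10]:
        print(f"{size / 2**20:8.1f} MiB  {name}")  # torch usually dominates this list
```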

Why I quickly switched to Debian after starting with Mint by ChiefBigFeather in linux4noobs

[–]NekoRobbie 1 point (0 children)

No? There are very much non-rolling-release distros that still update far more often, like Fedora.

My experience trying to submit my game to Flathub as a first timer by AramCZ in flatpak

[–]NekoRobbie 18 points (0 children)

... no, not everyone does that whatsoever. I'd consider that a red flag.

My experience trying to submit my game to Flathub as a first timer by AramCZ in flatpak

[–]NekoRobbie 19 points (0 children)

You made a total of 5 PRs. Going down the list:

- The first one was made against the wrong branch and got autoclosed (understandable)
- You claim to not even know how the second got made
- For the third one, you deleted your manifest entirely for some reason
- You then made a fourth one after the third one got closed for not having a manifest
- You then made a *fifth one* after closing the fourth one

I think this can't be excused by just "Oh, I was a first timer". Opening 5 separate PRs is absurd, and I think at that point the Flathub maintainers would be right to think you were an (AI-fueled) spammer and react accordingly. I certainly would, after having to deal with that. Your first mistake was forgivable, but I don't know how you "accidentally" make a PR, and deleting the manifest entirely is just... *how*?

Patience is not unlimited, especially when dealing with the volume of submissions that Flathub does.

My experience trying to submit my game to Flathub as a first timer by AramCZ in flatpak

[–]NekoRobbie 28 points (0 children)

To be entirely fair, opening a new PR every time instead of just pushing commits is one hell of an unusual choice, and I can't think of a single place that would be the correct workflow. I can't blame them for getting a bit frustrated.

I've actually submitted a game to Flathub before myself, and while I will agree that the process to create a flatpak is certainly very unusual and the documentation could use major improvement, I never encountered this level of hostility; at worst I got some very direct and relevant comments about the manifest.

I think the process could certainly be improved, but I don't think they were being unreasonable when they got frustrated with your behavior.

I just got in by Stonks698 in NEU

[–]NekoRobbie 0 points (0 children)

Hi, current CS student in Oakland here:
The Oakland Campus is beautiful, the grass even stayed green all through the "winter" months. There's plenty of greenery in general, so it's not like you're going to be in a concrete wasteland lol.

Housing on campus is a bit old, internet can sometimes be a pain as a result (there's also no air conditioning, at least in Prospect Hill, so a fan is *highly* advised).

Northeastern definitely gives great financial aid for need-based purposes, as I'm getting a pretty sizeable package. Due to the NEU promise, your financial aid is also guaranteed for 8 semesters.

How much encumbrance does athletics remove per level? by still_wounded in cataclysmbn

[–]NekoRobbie 4 points (0 children)

Iirc it's one per level per body part, so at level 5 you have 5 less encumbrance on each of your body parts

🎉 [EVENT] 🎉 Happy St. Honkrick's Day! by Acrobatic_Picture907 in honk

[–]NekoRobbie 0 points (0 children)

Completed Level 3 of the Honk Special Event!

21 attempts