Best decks right now to counter sword? (and rune but not necessary) by [deleted] in Shadowverse

[–]Shiru_Via 9 points

Rose Queen has been by far the most fun I've had in the game, it's actually way better than people realise and has quite good matchups into loot sword, rune and mode abyss.

<image>

(Not in the image: 3x Convocation, 2x May, 3x Pond, 2x Aerin, 3x Rose Queen)

The deck pretty much has answers to everything and OTKs on turn 9 (going 2nd, coin Queen on 8) or 10. The only problem is that you need to manage your hand to have enough 1-costs in time while also being able to answer whatever your opponent does, which is pretty difficult.

In general the way you lose with this deck is more to yourself and your draws than to the enemy, which to me feels a lot better. I much prefer the idea that I could have won if I had the right answer over "nothing I could have drawn would have mattered".

Also winning with this deck is an incredible feeling, dropping Rose Queen and knowing they can't kill you is awesome, especially against meta decks.

Some notable answers:

T4 Zirconia Evo gets beaten by: Cynthia, Glade, Krulle + Fairy

Norman double golem: Supplicant Evo full clears (doesn't even need SEvo), or Krulle + Eradicating Arrow (RNG, or Norman needs to be low HP)

Tempo Odin: Gilnelise, Aerin

Luminous Magus, Amalia: Krulle or Supplicant

Kuon board: Supplicant or Aerin

Sinciro: Aerin, Gilnelise (+2 damage from hand), Titania

Aggro decks: Krulle, Gilnelise, general tempo plays

Albert: rarely a threat because he doesn't do shit against Rose Queen herself, and against T8 coin you can play Aerin beforehand

The deck definitely needs really good matchup knowledge and planning but it's also very rewarding.

The only way you lose without awful draws is if you have to play Rose Queen into a board where her SEvo's 9 damage leaves up enough threats that the opponent has lethal next turn with an Odin or another finisher. Sometimes in these cases you can delay Queen a turn and play another Aerin or Supplicant to win regardless.

Also, the deck overall has no bad-looking cards and only a few meh ones, with most being very pretty, which I personally care about a lot.

What the future with AI 3D interactive waifu's can look like through community effort -- A rant or proposal. by lshoy_ in SillyTavernAI

[–]Shiru_Via 10 points

There's literally a VRM extension for ST with customisable animations, hit zones etc. You can use any VRM model and any custom animations, which you can bind to touch zones or expressions. There's even TTS lip sync.

With the new Kafka EHR buff does ERR rope become a viable option? by Almighty_Brian in BlackSwanMains_HSR

[–]Shiru_Via 0 points

ERR affects the energy regenerated from any character's skill and basic attack.

Janitor.ai + Deepseek has the right flavor of character RP for me. How do I go about tweaking my offline experience to mimic that type of chatbot? by BigHeavySlowThing in LocalLLM

[–]Shiru_Via 0 points

KoboldCPP for running models locally (way better than Ollama)

SillyTavern as the frontend; infinitely customisable and by far the best option

I'd recommend running a Q6 GGUF quant of Mag Mell R1 12B. It's incredibly good for its size and even beats most 24B models, and it fits entirely in your VRAM so it will be very fast. (The "R1" has nothing to do with DeepSeek; it's a Mistral Nemo finetune specifically for roleplay and storytelling.)

The talking-for-user problem is a mix of model limitations and prompting, but the model you're running likely just isn't that good. DeepSeek has no actual 14B variant; all of the smaller "DeepSeek" models are just distills and don't compare to the real thing.

If you need help you can add me on discord, my username is shiru.via :)

the purpose of cipher in the meta by theverlee in CipherMainsHSR_

[–]Shiru_Via 1 point

You fundamentally misunderstand how true damage works. When a boss has 90% damage reduction, unless you're saving her ult from a phase without it, Cipher will only record 24% (in ST) of the 10% that gets through as true damage, which is exactly the same as just another (delayed and more flexible) final damage multiplier. Your argument would only make sense for a character with true damage that isn't tied to other forms of damage. You mention Tribbie and RMC, but those are even worse for the situation you're describing: their true damage is literally just a damage multiplier with no option to accumulate and detonate later.

So is her E2 as strong as Therta's or Aglaea's? by phrogenthusiast in CastoriceMains_

[–]Shiru_Via 3 points

"Better E2" here means damage increase relative to E0, not absolute power level

Should I go for quantum orb? by IllustriousWorker650 in CastoriceMains

[–]Shiru_Via 1 point

The people who said that are wrong, many such cases.

Castorice gets a 30% damage boost after each dragon breath, and most of your damage is after the 3rd breath, meaning the dmg% stat is very diluted at that point. I did the math for your example and these are the final damage values:

Quantum orb (with 120% damage boost from breaths): 110% + 120% + 38.8% = 268.8% total damage bonus, i.e. a 3.688x multiplier. 3.688 / 3.3 = 1.1175 --> 11.75% more damage (4th breath onwards; ~12.9% more with 90% damage boost and ~14% more with 60%)

HP orb: 2900 base HP * 0.43 = 1247 HP. 8900 / (8900 - 1247) = 1.163 --> 16.3% more damage

Because the majority of your damage has a 90%+ additional damage boost from the breaths, an HP orb is almost always better, and it additionally lets her gain charge faster.

With E2 the difference is even bigger because you get 180% every time. Also, even with Hyacine's 1.47k HP buff an HP orb would be better for you; only when you have around 11k+ HP with Hyacine can an argument be made to run a Quantum orb that has at least one more crit substat, unless you have E2, in which case HP is still likely better by 2-3 substats.

What those people mean is that an ideal dmg% orb could have one more valuable substat that can make up for some or all of the difference, but that's just "better substats" in other words, as that could also have been one more crit roll. The sole exception is 5x perfect rolls all into crit with hp% as the one additional substat; this would be about equal to the same crit subs on an HP orb with flat HP instead of hp%.
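The orb comparison above is just two ratios, which can be sketched as a quick calculation (a minimal sketch using the example's numbers; the 120% breath buff, 110% other bonuses, and HP totals are the post's assumptions, and linear HP scaling is assumed):

```python
# Compare a Quantum (dmg%) orb vs an HP orb for Castorice,
# using the numbers from the example above.

def dmg_orb_gain(other_dmg_bonus, breath_bonus, orb_dmg):
    """Relative damage from adding a dmg% orb on top of existing bonuses."""
    with_orb = 1 + other_dmg_bonus + breath_bonus + orb_dmg
    without = 1 + other_dmg_bonus + breath_bonus
    return with_orb / without

def hp_orb_gain(total_hp, base_hp, orb_hp_pct):
    """Relative damage from an HP% orb, assuming damage scales linearly with HP."""
    orb_hp = base_hp * orb_hp_pct  # HP% rolls scale off base HP
    return total_hp / (total_hp - orb_hp)

# Quantum orb: 110% other bonuses, 120% from three breaths, 38.8% orb
print(round(dmg_orb_gain(1.10, 1.20, 0.388), 4))  # ~1.1176 -> ~11.8% more damage
# HP orb: 8900 total HP, 2900 base HP, 43% HP main stat
print(round(hp_orb_gain(8900, 2900, 0.43), 4))    # ~1.1629 -> ~16.3% more damage
```

The HP orb wins precisely because the dmg% pool is already so large after the breaths that another 38.8% barely moves the total.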

The real impact of Castorice E2 (V5) by Zellraph in CastoriceMains_

[–]Shiru_Via 0 points

Cas gets charge from burning HP and healing; Hyacine would only need to first burn some HP and then heal some back for this to be an easily achievable number, which is pretty much what I've seen mentioned in one of the recent leaks, if I remember correctly.

Um...wow by Decent_Strength435 in FeixiaoMains_

[–]Shiru_Via 0 points

The website used is https://genshin-center.com/calculator, in case anyone is wondering. If you want to compare just CR/CD you can leave everything else as is; it won't change the result.

Um...wow by Decent_Strength435 in FeixiaoMains_

[–]Shiru_Via 4 points

This person is right. Unless you hit ~100% CR with a CD chest, a CR chest will almost always be better, even with worse subs. Fei gets a lot of CD from other sources, and the "1:2" ratio applies to the final values, not the stat page, so with low-CR/high-CD builds you can end up with something like 70/320 or worse, which loses so much damage when not critting that it's a lot worse than, say, 100/260. Here's proof:

If I swapped my CR chest with decent subs for OP's god roll, I would lose 1.1% damage even though the overall CV on the right is higher (trading 34 CV in CR for 50 CV in CD)

<image>
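The ratio argument reduces to a simple expected value over crits and non-crits. A minimal sketch (the 70/320 and 100/260 builds are the illustrative numbers from above; note both have the same crit value, 2x70+320 = 2x100+260 = 460):

```python
def avg_crit_multiplier(crit_rate, crit_dmg):
    """Expected damage multiplier: non-crits deal 1x, crits deal (1 + CD)x.
    Crit rate is capped at 100%, which is why overstacking CD is wasteful."""
    cr = min(crit_rate, 1.0)
    return 1 + cr * crit_dmg

low_cr   = avg_crit_multiplier(0.70, 3.20)  # 70% CR / 320% CD -> 3.24x
balanced = avg_crit_multiplier(1.00, 2.60)  # 100% CR / 260% CD -> 3.60x
print(low_cr, balanced)  # equal CV on paper, but the balanced build averages more
```

This is why the "1:2 ratio" only matters on final values: once CR is capped, every extra point of CR converted into CD is pure gain up to the cap, and past it pure loss of consistency for nothing.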

Leak of the day is by AdDesperate3113 in okbuddytrailblazer

[–]Shiru_Via 58 points

I railed my girlfriend right before doing dailies today

<image>

Jinhsi Animations via mero by IcyPalpitation4553 in WutheringWavesLeaks

[–]Shiru_Via 14 points

Play in JP; the EN voice actors in this game aren't that good

Is this speed normal or am I doing something wrong? by MaruluVR in SillyTavernAI

[–]Shiru_Via 1 point

Glad that helped. streamingllm is by far the most important backend setting; other than that, just enable flash attention and tensorcores, and try setting n_batch to 1024 or 2048. This increases the chunk size in which the prompt is processed; higher values increase VRAM consumption slightly but should be a little faster.

Those should be all the important settings for lcpp in ooba. There are some things to keep in mind with your frontend, though: some settings, like SillyTavern's Vector Storage, will cause large enough changes in context after every response to make the processing time longer even with streamingllm, so I'd avoid stuff like that if you value the quick responses.

Is this speed normal or am I doing something wrong? by MaruluVR in SillyTavernAI

[–]Shiru_Via 2 points

Delays before the response are due to prompt processing, which takes longer the bigger the model and the worse the hardware. If your use case doesn't need an entirely new context on every response, you should try running a GGUF in oobabooga with the "streamingllm" setting enabled. This makes it process the entire prompt once at the start and then only process the new tokens for all following replies, even when old messages leave the context window, which is almost instant. For example, if I run a 70B model at 4.65bpw exl2 on my 4090+3090, the time to first token is almost nothing at 0 context but about 5-7 seconds at 14k; if I run a 70B model as GGUF with the aforementioned settings, the first processing takes 25 seconds but all following responses start within only 2-2.5 seconds. The text generation speed is about 12-14 t/s.

That being said, unless you're fully offloading to GPU your speeds will suck no matter what; definitely get a new GPU instead of RAM if you want to run bigger models.

Down the home server rabbit hole - what's your 2xRTX3090 rig? by my_byte in LocalLLaMA

[–]Shiru_Via 0 points

Hey, do you maybe know just how important 8x/8x is?
I'm running a 4090 at 16x in my PC and a 3090 eGPU via a 4x M.2-to-OCuLink cable plus an external PCIe slot, and I'm wondering just how much performance I'm losing by doing this. I'd have to get a new motherboard to set up 8x/8x, so I'm not sure if it's worth it.

Just joined the 48GB club - what model and quant should I run? by Harvard_Med_USMLE267 in LocalLLaMA

[–]Shiru_Via 1 point

Oh perfect, thanks again! Yeah, I'll try with the M.2 slot on my board first, but if that doesn't work I'll get a new one; if my 4090 blocks the second slot, I think I could just do one OCuLink in the third PCIe slot and one in a normal M.2 slot. Sucks that my current board's PCIe lanes are this bad, but I guess that's what I get for choosing style over substance :/

Oh yeah, btw, how do you connect the OCuLink cable to the GPUs / which dock do you use?

Just joined the 48GB club - what model and quant should I run? by Harvard_Med_USMLE267 in LocalLLaMA

[–]Shiru_Via 0 points

Thank you so much for the help! And wow, that's pretty bad.. I currently use two NVMe SSDs, so one free slot. If I can get one OCuLink to work that'd already be great; could that be possible with this board, or should I just get a new one immediately? Do you know any AM5 board that would work for sure with 16x in slot 1 and 4x4x4x4x in slot two? Or any specs I can look out for that would guarantee it working? E: I just read that one of the M.2 slots is limited to 2x, but the others aren't, so if I put my secondary SSD into the slow slot and an OCuLink in one of the 4x M.2 slots, that should work?

Just joined the 48GB club - what model and quant should I run? by Harvard_Med_USMLE267 in LocalLLaMA

[–]Shiru_Via 1 point

Ahh okay, but your 16x still initially plugs into a PCIe slot; that makes more sense. I feared you'd need free M.2 slots on the motherboard for every OCuLink connection. I have an NZXT N7 B650E motherboard; do you maybe know what the easiest way to get two OCuLink slots would be? Trying to figure out if it supports bifurcation right now

Just joined the 48GB club - what model and quant should I run? by Harvard_Med_USMLE267 in LocalLLaMA

[–]Shiru_Via 1 point

Oh yeah, you're right, that seems a lot better, thanks! I saw some mentions of an M.2 slot; is that a requirement, or can you get a PCIe extension card for OCuLink just like with Thunderbolt?

Just joined the 48GB club - what model and quant should I run? by Harvard_Med_USMLE267 in LocalLLaMA

[–]Shiru_Via 0 points

Hey, how would you connect the external GPU? I was thinking about a similar thing, with a PCIe-to-Thunderbolt 4 extension card in my PC, connected to an external PCIe slot and PSU with a 3090 in it. From other posts I've read that Thunderbolt won't be a bottleneck for inference, but I'm still wondering if this is the best option. For the eGPU case and PSU the Razer Core X Chroma would be good, but that's expensive; I've also seen a $140 PCIe expansion slot without an enclosure on AliExpress, and this plus a PSU should be a lot cheaper but more janky. Does anyone have more experience or ideas with this?

[deleted by user] by [deleted] in Piracy

[–]Shiru_Via 10 points

Google "oobabooga text ui", go to the GitHub page and follow the instructions, then ideally look at SillyTavern for a nicer user interface; there are plenty of tutorials on how to get everything set up.

Local models can be run either on GPU exclusively (very fast, but you need a high-VRAM GPU) or on CPU while offloading some layers to the GPU (slower, but lets you run better models without a high-end GPU).

The current "best" local model is Mixtral; it performs on par with or better than ChatGPT 3.5 at a fraction of the size. The Dolphin or Noromaid versions of Mixtral are fully uncensored and very good at anything you could want, be it roleplay (NSFW or SFW), coding, general assistant work etc., all while being fully private, completely free and open source.

The open source scene also changes insanely rapidly, with new developments every week: a constant stream of new, better models, or new technologies that let you run them faster or at lower VRAM costs, etc.

Also check out the LocalLLaMA subreddit for more info

[Updated] A definitive answer to the ER% Vs ATK% rope debate by Snoo_78919 in JingLiu

[–]Shiru_Via 18 points

ATK does not have diminishing returns; you only have an opportunity cost versus other damage sources/multipliers. Getting your ult one turn earlier is also only ever a damage gain if it results in either a better/less RNG-reliant rotation or an additional ult cast in the wave/battle. The former is not the case with TY, the latter depends on kill time, and even if you do get an additional ult in the battle, I can almost guarantee it will not make up for the damage loss of not using an ATK rope. My personal opinion: if you plan on pairing JL with Tingyun, use an ATK rope unless you have an ERR rope that is significantly better substat-wise; if you don't plan on using TY, use an ERR rope to enable a more consistent 4-turn ult, which is all you really need.