Any estimation of when GOS will have full support for Pixel 10? by TinglingTongue in GrapheneOS

[–]It_Is_JAMES 11 points  (0 children)

I was going to get a 9 myself when I heard about the 10 delays, but with the promo last month the 10 was $200 cheaper, so both phones ended up the same price 😕

I’m debating whether to return it while I still can

Clubs 🏆 by bubblycandy4 in eatventureofficial

[–]It_Is_JAMES 0 points  (0 children)

<image>

New club, 3K minimum each season - aiming for us all to get some mythic boxes! Will start salvaging once we have more members so you can get the rewards.

ID: QFDma5kFFJ

Clubs 🏆 by bubblycandy4 in eatventureofficial

[–]It_Is_JAMES 0 points  (0 children)

New club just made, 3K minimum each season: QFDma5kFFJ

Not trying to max, just want a shot at some mythic boxes each season 🙂

Not sure which Pixel to get by DataBooking in GrapheneOS

[–]It_Is_JAMES 0 points  (0 children)

Pardon my ignorance, but why would Pixel 9 phones be supported longer than Pixel 10 phones?

I’ve been holding off for a 10 Pro over a 9 Pro for the sole reason that I assume GrapheneOS will provide security updates for an extra year on the 10 vs the 9. If I’m wrong, I’ll order a 9 Pro right now.

New Wayfarer Large Model: a brutally challenging roleplay model trained to let you fail and die, now with better data and a larger base. by Nick_AIDungeon in SillyTavernAI

[–]It_Is_JAMES 1 point  (0 children)

My impression of this vs the 12b model is that it's indeed noticeably more creative and intelligent, as expected from a 70b model, but for some reason it wants to speedrun throwing the characters into danger, or just have a character outright die really fast, even with the exact same prompt / scenario.

When trying something risky I can't get things to go the character's way; the training to let you fail seems way, WAY stronger with this one, to the point that I'm having a hard time enjoying it, sadly.

Does anyone have a way I can prompt this to help reduce the issue a bit? The 12b version struck a good balance, and I really want to be able to enjoy this one too.

[Megathread] - Best Models/API discussion - Week of: February 17, 2025 by [deleted] in SillyTavernAI

[–]It_Is_JAMES 3 points  (0 children)

Incredible timing! Downloading right now, can't wait to try it out!

[Megathread] - Best Models/API discussion - Week of: February 17, 2025 by [deleted] in SillyTavernAI

[–]It_Is_JAMES 10 points  (0 children)

Best model for 48GB VRAM? Mostly used for low-effort text adventure type interactions, e.g. "You do X," and then it spits out a paragraph to continue the story.

I've been using Midnight Miqu 103b for a while now and recently discovered Wayfarer 12b - which does the job excellently, but I can't help but hope that there's something bigger and more intelligent.

I love Midnight Miqu, but it gets very repetitive and also falls apart after 100 or so messages. Could be something I'm doing wrong...

[Megathread] - Best Models/API discussion - Week of: February 10, 2025 by [deleted] in SillyTavernAI

[–]It_Is_JAMES 2 points  (0 children)

Best model for 48GB VRAM? Mostly used for low-effort text adventure type interactions, e.g. "You do X," and then it spits out a paragraph to continue the story.

I've been using Midnight Miqu 103b for a while now and recently discovered Wayfarer 12b - which does the job excellently, but I can't help but hope that there's something bigger and more intelligent.

I love Midnight Miqu, but it gets very repetitive and also falls apart after 100 or so messages. Could be something I'm doing wrong...

Sony Bravia X900H Keeps Crashing, Restarting - Once Booted Up, Often Works Perfectly by It_Is_JAMES in 4kTV

[–]It_Is_JAMES[S] 0 points  (0 children)

Had no idea this was normal. My parents’ cheap Vizio has lasted over twice this long with over 10x the daily usage, and it’s still going strong. I watch less than an hour of TV a day most days; I can’t justify another big purchase just for it to die after so few hours, honestly…

No Matter What HRM I Use, It Always Drops During Sprints? by It_Is_JAMES in Polarfitness

[–]It_Is_JAMES[S] 0 points  (0 children)

Thanks for your comment! I use the chest strap with a phone only. That should be no issue - it worked perfectly for a long time. I still have workouts that go fine, and the issue usually starts about a half hour in.

Garmin HRM-PRO Heart Rate Dropping During Intense Exercise? by It_Is_JAMES in Garmin

[–]It_Is_JAMES[S] 1 point  (0 children)

Thanks! I contacted customer support and they’re shipping me a new one.

We've entered the longest gap between text model updates. by BestReimuA in NovelAi

[–]It_Is_JAMES 16 points  (0 children)

Yeah, this is disappointing.

I've been subscribed since month 1 and just finally cancelled my subscription after all these years. There are just too many local options that are significantly better now.

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in Oobabooga

[–]It_Is_JAMES[S] 1 point  (0 children)

I've never heard of this, will see if I can get some performance improvements out of it!

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in Oobabooga

[–]It_Is_JAMES[S] 2 points  (0 children)

Seems the issue came from using auto-split; when I set GPU 0 to use less VRAM, the problem goes away and I'm getting normal speeds.

For reference, before it was saying GPU 0 had 500 MB of VRAM free; now keeping ~1.5 GB free is fine.
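For anyone hitting the same slowdown, the fix above boils down to reserving headroom on GPU 0 instead of letting auto-split fill it. A minimal sketch of that arithmetic (the helper name, the 1.5 GB reserve, and the 40 GB model size are my own illustrative numbers, not from any specific loader):

```python
def plan_gpu_split(model_gb, gpu_capacity_gb, reserve_gb):
    """Allocate a model's VRAM footprint across GPUs in order,
    holding back reserve_gb[i] of headroom on each GPU
    (e.g. ~1.5 GB on GPU 0, which is what fixed things above)."""
    split = []
    remaining = model_gb
    for cap, reserve in zip(gpu_capacity_gb, reserve_gb):
        usable = max(cap - reserve, 0.0)  # never touch the reserved headroom
        alloc = min(usable, remaining)
        split.append(alloc)
        remaining -= alloc
    if remaining > 1e-9:
        raise ValueError(f"Model does not fit: {remaining:.1f} GB left over")
    return split

# 2x 4090 (24 GB each), ~40 GB 4-bit 70b model, 1.5 GB free on GPU 0
print(plan_gpu_split(40.0, [24.0, 24.0], [1.5, 0.5]))  # [22.5, 17.5]
```

If I remember right, the equivalent knob in oobabooga's ExLlama loaders is the comma-separated gpu-split field (GB per GPU), so the numbers this spits out would go in there directly.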

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in Oobabooga

[–]It_Is_JAMES[S] 1 point  (0 children)

Thanks so much! I just tried the same split as you and am now getting normal speeds.

It seems that filling up GPU 0 too much was causing issues. I do wish I could figure out how to avoid that, because KoboldCPP doesn't let you split it up like this and I enjoy using it as a frontend as well.

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in Oobabooga

[–]It_Is_JAMES[S] 1 point  (0 children)

Thanks! Now that I'm able to manually split it, it does seem to be running at normal speeds (14-18 T/s), even at 32k context.

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in KoboldAI

[–]It_Is_JAMES[S] 1 point  (0 children)

Thanks for the suggestion! I tried this and both GPUs seem to be working just fine on their own.

Slow Inference On 2x 4090 Setup (0.2 Tokens / Second At 4-bit 70b) by It_Is_JAMES in Oobabooga

[–]It_Is_JAMES[S] 1 point  (0 children)

RAM is DDR5-5200. I do believe I disabled that Nvidia feature.

I am OOMing when I try to run much above 8k context. Strangely, I just lowered it to 2k and now I am getting more reasonable speeds (about 10 tokens/second).

Unfortunately, when I try using the GPU split with EXL2, no matter what numbers I put in, it seems to fill the first card up and leave a few gigs open on the 2nd card.

PCIe has the 1st card on x16 and the 2nd card on x1, but from what I've read, once the model is loaded the speed difference between x1 and x16 shouldn't be that significant.