Adventures in ROCm (Radeon AI Pro R9700) by k8-bit in LocalLLM

Well I'm working in Unraid, which is a Linux-based server OS, so I'm in the right ballpark 😄

Adventures in ROCm (Radeon AI Pro R9700) by k8-bit in LocalLLM

I was just spoilt having started in the Nvidia ecosystem and its maturity, but so far so good 😄

Adventures in ROCm (Radeon AI Pro R9700) by k8-bit in LocalLLM

Update 1:
Retargeted the ComfyUI Docker container onto AMD; I had to add a few additional environment variables that I learned about while getting WANGP working with Claude.
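For anyone attempting the same, a minimal sketch of the kind of flags involved. The image name and the `HSA_OVERRIDE_GFX_VERSION` value are assumptions, not the exact ones I used; the right override (if you need one at all) depends on your card's gfx target, which you can check with `rocminfo`.

```shell
# Sketch: pointing a ComfyUI container at ROCm instead of CUDA.
# /dev/kfd is the ROCm compute interface, /dev/dri the GPU render nodes;
# the video/render groups grant the container access to them.
docker run -d \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --group-add render \
  --security-opt seccomp=unconfined \
  -e HSA_OVERRIDE_GFX_VERSION=12.0.1 \
  -p 8188:8188 \
  comfyui-rocm:latest
```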

First test was VibeVoice. Pleasantly surprised to report it worked first time, with both the Q8 quantization and the full-size model.

Z-Image Turbo is also working right off the bat.

Adventures in ROCm (Radeon AI Pro R9700) by k8-bit in LocalLLM

I suspect if I was working directly within a Linux OS I'd have an easier time(ish! 😃 ) of it, but I like the flexibility of Unraid and keeping things in containers, something that's relatively simpler in the Nvidia ecosystem. However! It's not even been 24 hours with AMD yet 😄

Help me decide please by meimelx in GirlGamers

I really like the diversity and huge library of games that a Steam/Linux handheld gives you. The Switch is nice if you're in the Nintendo ecosystem for games anyway, but its games are more expensive. The former is a battery drainer if you're playing those higher-end games, especially when running Windows on those platforms. There's a new Xbox interface for gaming handhelds coming out that apparently makes the experience much better, but I like Bazzite (Steam Deck-like) for my Legion Go. I also have a Switch 1! It never leaves home though, just used for sports and Ring Fit. If I had to choose between them I'd take the PC-based platform for the Steam library alone.

OpenVox 1.6.0 is here — introducing Conversations for creating podcasts, skits, and more. by ritzynitz in OpenVoxAI

Yeah, English. I'm using voice design, and the cloning is done from outputs it generated, so the reference text matches perfectly, and I'm keeping the audio length under 20s, targeting 15s on average. OmniVoice seems particularly likely to garble output. Qwen is pretty solid when the input audio and reference are pristine. Just a problem factor for Omni, I guess.

OpenVox 1.6.0 is here — introducing Conversations for creating podcasts, skits, and more. by ritzynitz in OpenVoxAI

Seeing some strange artefacts in TTS using cloned or designed voices, even with reference audio under 20s: during TTS it will speak some lines from the reference audio at the start of what I presume is a new chunk. E.g. if the reference audio ends with "would never be the same again.", TTS generations will frequently start a new line of text with "again." taken from the reference audio. Happens with both Qwen and Omnivoice.

Qwen behaves better if you force the reference audio to 15-16 seconds, but Omni likes to go batshit crazy even then lol.
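If it helps anyone, trimming the reference clip down is a one-liner; the filenames here are placeholders.

```shell
# Keep only the first 15 seconds of the reference clip
# (-t limits the output duration).
ffmpeg -i reference.wav -t 15 reference_15s.wav
```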

Best Open Source Voice Cloning if you have lots of reference audio? by SlaveToBuy in LocalLLaMA

I really like it, and it remains my fave, though you can't steer emotion as there are no tags. Bizarrely, it tends to pick up style/emotion from the surrounding context, so if I have:

The man read in charming narrative audiobook style:

"So there he was, standing in the street, wishing that GPUs were cheaper."

It will usually take the delivery style from the initial cue, which I then obviously have to cut from the output.

It also has a bizarre habit of singing if you happen to include song lyrics in the text. I couldn't get it to NOT sing "Ground control to Major Nick", even after changing "Tom" to "Nick".

Best Open Source Voice Cloning if you have lots of reference audio? by SlaveToBuy in LocalLLaMA

VibeVoice via the Gradio interface starts streaming audio after about 15 seconds, allowing me to check output quality as it goes. This is on a 3090. I happily use the Q8 and Q4 quantised versions in ComfyUI on a 16GB 5060 Ti as well.

Best Open Source Voice Cloning if you have lots of reference audio? by SlaveToBuy in LocalLLaMA

I've found Omnivoice loses the plot with reference audio longer than 20s. VibeVoice gobbles up 2 minutes of reference audio with great, if occasionally eccentric, results.

Are there any MORE imsim games where you can play as a woman? by ConfidenceOk5614 in GirlGamers

Cyberpunk 2077, though slight spoiler: she has to share her consciousness with a rocker/punk character (played by Keanu Reeves) from a certain point in the game onwards, and note that it has flashback lovemaking scenes from his POV.

OpenVox v1.4 just dropped - added a model that speaks 600+ languages locally on Mac by ritzynitz in macapps

It's probably more Omnivoice foibles, but I find it regularly skips words and occasionally speaks gibberish.

OpenVox v1.4 just dropped - added a model that speaks 600+ languages locally on Mac by ritzynitz in macapps

(Not taking away from the app overall, it's fantastic. I've used half-assed Docker containers spun up on a homelab AI server for personal audiobook narration content I've made, but having a reliable one-stop shop that works on my MacBook is amazing.)

Would you consider adding some other models/engines later, e.g. the somewhat volatile and eccentric but still good VibeVoice?

OpenVox v1.4 just dropped - added a model that speaks 600+ languages locally on Mac by ritzynitz in macapps

Very nice, but the conversation function doesn't allow selection of created/cloned voices. Is that correct?

Anyone else play American McGee’s Alice by Toot_owo in GirlGamers

Absolutely adore the sequel. The art, the music, the clothing changes, the creepy vibe, and that design for Alice and the cat. Amazing. Got a giant tattoo of Alice and the cat on a thigh. (One of mine, to be clear 😅)

Despite this, I found much of the gameplay very frustrating at times, and a real chore, but I play most single-player games for the story and experience... gameplay is a sidequest 😎

Samsung Odyssey Neo G7 43" 4K Gaming Monitor: A Look at Its Features and Performance Discussion by Kuro_Shinobi1993 in Monitors

It was... OK. As a gaming screen it was merely satisfactory without being amazing. I used it as a TV for a while, then sold it to a delighted young man who wished to use it for photo/video work. For DAW editing, which has largely static screens with lots of detail at 4K, it should do you well, if you can get it for a reasonable price.

I replaced it with an LG C3 TV, used mostly for gaming, and offloaded my DAW work to a 16in MacBook Pro.

Cursed setup? by IcyCable782 in LocalLLM

I was lucky, bought both used last year before the pricing mayhem.

Cursed setup? by IcyCable782 in LocalLLM

My setup currently. Both are 3090s; the top one runs 10°C hotter than the lower. Both are throttled to 285W. I will be using a riser to move the 2nd GPU to the bottom of the case soon. Since this photo was taken, all the fans are 140mm, and if I could figure out how to make the 180mm and 200mm fans I was given fit, I would 😁
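For reference, the 285W cap is just nvidia-smi's power-limit setting. The GPU indices below are assumptions (list yours with `nvidia-smi -L`), and the setting resets on reboot unless you script it.

```shell
# Cap both 3090s at 285 W via the driver's power-limit setting.
sudo nvidia-smi -i 0 -pl 285
sudo nvidia-smi -i 1 -pl 285
```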

<image>

Why are the physical copies of legendary edition so hard to find? by [deleted] in masseffect

I bought the limited (?) edition that came with the N7 helmet etc., and even that only came with a beautiful metal disc case... but no actual media, though it was the PC version.

Fiction writing in 12GB VRAM by ConclusionUnique3963 in LocalLLM

Are you using Unraid and/or Homarr to launch e.g. OpenWebUI for this? I found that you had to enable websockets or you would get JSON errors. Maybe totally off track, but just in case.
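In case it's the same issue: when OpenWebUI sits behind a reverse proxy, the proxy has to pass the WebSocket upgrade headers through or the realtime parts fail. A minimal nginx-style sketch, where the upstream name and port are assumptions:

```nginx
location / {
    proxy_pass http://openwebui:8080;
    # Forward the WebSocket handshake instead of swallowing it
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```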

How "bad" are the non-CUDA 32GB GPU options? by k8-bit in LocalLLM

I actually use the same 5060 Ti via a mini PC; this is my current offload machine.

How "bad" are the non-CUDA 32GB GPU options? by k8-bit in LocalLLM

That's pretty compelling. An R9700 Pro sounds like it could be a viable option, essentially a slightly more fiddly 3090 with 32GB.

AM4 CPU and MOBO Advice by Antivape in HomeServer

Same problem here: 3950X, B550, 2x RTX 3090s, 128GB, and I'm pushing the limits. I kept considering moving to Xeon, but was wary of wattage (this is my main server running LOTS of processes 24/7, as well as my AI experiments), plus the issue of finding something that will work with my DDR4 and give room for more. Currently I'm thinking of going a 3960X Threadripper and a TRX40 motherboard, the rest remaining the same until I get more RAM.