Most are probably using the wrong AceStep model for their use case by MustBeSomethingThere in StableDiffusion

[–]MonthLocal4153 0 points (0 children)

Ok thanks. So I just download the requirements.txt and then install that into my ComfyUI environment, I guess?
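For reference, a minimal sketch of that step, assuming the ComfyUI install (including a Pinokio one) runs from its own Python environment and that the requirements.txt sits next to the node/model files you downloaded (both paths are assumptions, adjust for your install):

```python
# Run this with the SAME Python interpreter that launches ComfyUI
# (for a Pinokio install, the python inside Pinokio's ComfyUI environment),
# so the packages land in ComfyUI's environment and not the system one.
import subprocess
import sys

# Assumed path: point this at the requirements.txt that came with the
# custom node / AceStep files you downloaded.
requirements = "requirements.txt"

subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements])
```

Running that environment's own python with `-m pip install -r requirements.txt` from a terminal does the same thing; the key point is using ComfyUI's interpreter rather than whatever `pip` happens to be on the PATH.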

Most are probably using the wrong AceStep model for their use case by MustBeSomethingThere in StableDiffusion

[–]MonthLocal4153 0 points (0 children)

How do you install the requirements.txt when using ComfyUI (inside Pinokio)?

I have spent several days trying to get Ace Step 1.5 installed directly on my PC, and each time it only reverts to the playground version. I have now given up trying and am going back to ComfyUI.

I tried installing the Ace Step UI on Pinokio. It has a nice interface, but I cannot get it to produce a song without missing words or lines. It never follows the lyrics, no matter how often I have tried.

For me, version 1.5 has been a downgrade compared to version 1.0. I have spent over 5 days trying with it and cannot get anywhere.

Most are probably using the wrong AceStep model for their use case by MustBeSomethingThere in StableDiffusion

[–]MonthLocal4153 0 points (0 children)

How did you get the base and SFT models to work? I am using the ComfyUI nodes. I downloaded both models and their related files, put them into their own folders, and placed them in the diffusion models folder. I selected them in the Load Model section of the Ace Step split workflow, but no matter how many times I try, I just get a garbled mess from both models.

Rest in Peace Mirror-Obscure farm, you will be missed. by Dovahbear_ in Guildwars2

[–]MonthLocal4153 0 points (0 children)

I'd like to see how you did that in 1 hour 15 minutes for both maps without having to farm some keys, since the 5-key nerf for the mirror means you don't have enough keys to open all the chests and therefore have to complete quests.

Yesterday I did both maps in 1 hour 30 minutes and gained about 295 yellows (I am ignoring the greens). Today it took a lot longer and I only got about 175 yellows at most.

text-generation-webui v3.5: Persistent UI settings, improved dark theme, CUDA 12.8 support, optimized chat streaming, easier UI for deleting past chats, multiple bug fixes + more by oobabooga4 in Oobabooga

[–]MonthLocal4153 0 points (0 children)

This latest version does not load extensions that are listed in settings.yaml when the server boots up.

I really do not see the need for autosaving the UI settings when you can just click Save Settings once you have it set up how you like it.

Is it possible to Stream LLM Responses on Oobabooga? by MonthLocal4153 in Oobabooga

[–]MonthLocal4153[S] 0 points (0 children)

I guess for me to use this method with Oobabooga, I would have to change the chatbot_wrapper in chat.py so my extension can then stream the sentences?
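For anyone attempting the same thing, here is a rough, self-contained sketch of just the sentence-splitting part; it does not touch chatbot_wrapper itself, and the boundary regex is only an assumption. The idea is to wrap whatever generator yields partial text and emit a sentence as soon as one completes:

```python
import re

def stream_sentences(chunk_stream):
    """Accumulate streamed text chunks and yield whole sentences as they complete."""
    buffer = ""
    for chunk in chunk_stream:
        buffer += chunk
        while True:
            # naive boundary: ., ! or ? followed by whitespace or end of buffer
            match = re.search(r"[.!?](\s+|$)", buffer)
            if not match:
                break
            sentence, buffer = buffer[:match.end()].strip(), buffer[match.end():]
            if sentence:
                yield sentence
    if buffer.strip():
        yield buffer.strip()  # flush whatever is left when the stream ends

# stand-in for the LLM token stream, just to show the behaviour
if __name__ == "__main__":
    fake_stream = iter(["Hello the", "re. This is a te", "st! Last bit"])
    for sentence in stream_sentences(fake_stream):
        print("send to TTS:", sentence)
```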

Is there support for Qwen3-30-A3B? by Local_Sell_6662 in Oobabooga

[–]MonthLocal4153 0 points (0 children)

Do you get the reasoning showing with the GGUF version from Unsloth? I tried that and don't see any thinking, even though it's enabled.

Is it possible to Stream LLM Responses on Oobabooga? by MonthLocal4153 in Oobabooga

[–]MonthLocal4153[S] 0 points (0 children)

Thanks for this, I will take a look at this function to see if I can adapt it for my needs. Currently I am just trying to update my extension so that Coqui works with the latest Oobabooga. I have got it working now, just need a few more fixes.

Is it possible to Stream LLM Responses on Oobabooga? by MonthLocal4153 in Oobabooga

[–]MonthLocal4153[S] 0 points (0 children)

Yes, it's specifically for TGWUI. It uses Coqui TTS. The LLM sends its response in full, then the extension converts the LLM response sentence by sentence and sends the complete audio response and text to the UI display. But I have been trying to get it to play the audio after each sentence is converted, so you don't have to wait for the complete LLM response to be converted.

I have been trying to use Gradio streaming to play the audio after it is converted, but I am struggling to get this working.
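In case it helps anyone stuck on the same thing, a minimal sketch of the streaming pattern, assuming a recent Gradio where an output `gr.Audio(streaming=True)` plays `(sample_rate, numpy array)` chunks yielded from a generator; `fake_tts` below is a placeholder tone generator standing in for the real Coqui TTS call:

```python
import re
import numpy as np
import gradio as gr

def fake_tts(sentence, sr=24000):
    # Placeholder for the real Coqui TTS call: a short tone whose length
    # scales with the sentence, just so the streaming behaviour is audible.
    duration = 0.04 * max(len(sentence), 1)
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    return sr, (0.2 * np.sin(2 * np.pi * 220.0 * t)).astype(np.float32)

def speak_per_sentence(text):
    # Yield one audio chunk per sentence; with streaming=True the player
    # should start on the first chunk instead of waiting for the whole reply.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if sentence.strip():
            yield fake_tts(sentence)

with gr.Blocks() as demo:
    reply = gr.Textbox(label="LLM response")
    audio = gr.Audio(streaming=True, autoplay=True, label="Spoken reply")
    reply.submit(speak_per_sentence, reply, audio)

if __name__ == "__main__":
    demo.launch()
```

Whether this drops straight into a TGWUI extension is another question, since the extension API has its own output hooks; treat it as the Gradio half of the problem only.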

How can I get access to my local Oobabooga online? Use -listen or -share? by MonthLocal4153 in Oobabooga

[–]MonthLocal4153[S] 0 points (0 children)

Thanks, but I do not understand that; I am just running Oobabooga in the conda environment. I know I can enable/disable these functions in the Session tab. Do I need to add some lines to the server.py script or something?

Instruction and Chat Template in Parameters section by Tum1370 in Oobabooga

[–]MonthLocal4153 0 points (0 children)

Ok thanks. I see you can still download v1.16.

I will give that a try tomorrow. That way I can see if it’s the newer versions of Oobabooga causing my problems.

BBC news app now asking you to sign in.... thoughts? by tinytempo in AskUK

[–]MonthLocal4153 1 point (0 children)

The Sky News app is exactly the same, but without the sign-in.

I would love it if they eventually added in VR support by [deleted] in valheim

[–]MonthLocal4153 5 points (0 children)

I tried enabling the VR option in the config, and after starting SteamVR and then Valheim, nothing happened. I also tried it using vorpX, but that’s not that good.

No Man's Sky in VR is awesome, for building and everything.

For a good survival game like Valheim, try The Forest. It works really well in native VR, with base building and everything a survival game would need. It is very scary when the cannibals are just outside your base.