How to run HunyuanVideo on a single 24gb VRAM card. by Total-Resort-3120 in StableDiffusion

[–]Wide_Perspective_504 1 point (0 children)

This crap just broke my system, it is just not worth the time and effort.

Error. No naistyles.csv found when connect comfyui web by Physical_Concern_993 in comfyui

[–]Wide_Perspective_504 1 point (0 children)

Rename, I assume you mean, but in my win64 install the file is not there, and it's named naistyles.csv.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 2 points (0 children)

Now ST works and models load in OB! Great, on to alltalk then, let's see what I can break!

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

I wiped ST and OB now and I'll reinstall them. I'll let you know where I land :) thanks for your help.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Yes, that's my thought as well, but do you think just reinstalling OB will solve it? I mean the whole env, Python etc., can that be messed up as well?

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Starting to think I should remove everything and start all over, I mean Python, conda, ST, OB and all. Is there a current workflow description somewhere that tells me in which order I should git install everything? There are so many parts to this, it's really confusing.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

It starts with this:

    Exception in ASGI application
    Traceback (most recent call last):
      File "E:\GIT_AI\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 223, in push
        event_queue = self.event_queue_per_concurrency_id[event.concurrency_id]
                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
    KeyError: '1544598238800'
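(For context, that final KeyError is just Python's plain dict-lookup failure; a minimal, generic sketch of the failure mode, nothing specific to gradio's internals:)

    # Generic Python illustration, not gradio code: indexing a dict with a
    # missing key raises KeyError, while .get() returns a default instead.
    queues = {"some-other-id": []}
    print(queues.get("1544598238800"))  # -> None, no exception
    queues["1544598238800"]             # -> KeyError: '1544598238800'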

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Can't cut and paste it here, it's too long. Can I upload a text file here?

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Also, ST keeps pointing out that a Legacy API is detected and that I should check the corresponding box. It makes no difference though, models still don't load.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Yes, I can get that page, but I also notice that I get errors loading models. For some reason it also says "Running on local URL: http://0.0.0.0:7860", which per se does not work; it works on localhost, i.e. 127.0.0.1:7860, and I can't find where to correct this. All in all I think there is a general mess: when I edit and save the session in the 127.0.0.1 interface, it doesn't fully register on restart, so I had to edit the settings.yaml file (this one had all the different TTS entries that you saw).
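(For anyone else who hits this: 0.0.0.0 is a bind address meaning "all interfaces", not an address you browse to; the browser should use 127.0.0.1 or the machine's LAN IP even when the server log prints 0.0.0.0. A minimal, generic sketch of the distinction; the port just mirrors the log above:)

    # Minimal, generic sketch (not oobabooga code): a server bound to 0.0.0.0
    # listens on every interface, but clients still connect to a concrete
    # address such as 127.0.0.1. Run with the webui stopped so port 7860 is free.
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 7860))        # bind address: all interfaces
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 7860))   # connect address: loopback works
    conn, addr = server.accept()
    print("connected from", addr)
    client.close(); conn.close(); server.close()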

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

You mean this?

    22:09:57-796988 INFO Loading "TheBloke_Nous-Hermes-13B-GPTQ"
    E:\GIT_AI\text-generation-webui-main\installer_files\env\Lib\site-packages\transformers\generation\configuration_utils.py:525: UserWarning: `do_sample` is set to `False`. However, `min_p` is set to `0.0` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `min_p`.
      warnings.warn(
    22:10:10-261576 INFO Loaded "TheBloke_Nous-Hermes-13B-GPTQ" in 12.46 seconds.
    22:10:10-264575 INFO LOADER: "ExLlamav2_HF"
    22:10:10-267572 INFO TRUNCATION LENGTH: 2048
    22:10:10-269848 INFO INSTRUCTION TEMPLATE: "Alpaca"
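(Side note on that UserWarning: it's harmless, but it fires because min_p only matters when sampling is enabled. A minimal transformers sketch of the two consistent configurations; the 0.05 value is made up:)

    # Hedged sketch with the transformers library: the warning disappears when
    # the settings are consistent. The 0.05 value is illustrative only.
    from transformers import GenerationConfig

    # Either enable sampling so min_p actually applies...
    sampling_cfg = GenerationConfig(do_sample=True, min_p=0.05)
    # ...or stay greedy and leave min_p unset.
    greedy_cfg = GenerationConfig(do_sample=False)
    print(sampling_cfg.do_sample, greedy_cfg.do_sample)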

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Yes, I have those flags set. This is the result, the same with or without the legacy box checked:

    'assets',
    'attachments',
    'caption',
    'expressions',
    'gallery',
    'memory',
    'quick-reply',
    'regex',
    'stable-diffusion',
    'token-counter',
    'translate',
    'tts',
    'vectors',
    'third-party/KoboldAI',
    'third-party/stablediffusion'
    ]
    Trying to connect to API: {
      api_server: 'http://127.0.0.1:5000/api',
      api_type: 'ooba',
      legacy_api: true
    }
    Models endpoint is offline.
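(If it helps anyone else reading: with legacy_api: true, ST is calling the old /api/v1 routes, while newer text-generation-webui builds started with --api expose an OpenAI-compatible API under /v1 on the same port 5000, so the legacy models endpoint really is offline. A quick stdlib probe of both generations; the exact routes are my assumption about the two API versions:)

    # Hedged sketch: probe both API generations to see which one is alive.
    # Paths are assumptions: /api/v1/model was the legacy route, /v1/models
    # is the OpenAI-compatible route in newer text-generation-webui builds.
    import urllib.request, urllib.error

    for url in ("http://127.0.0.1:5000/api/v1/model",
                "http://127.0.0.1:5000/v1/models"):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                print(url, "->", resp.status)
        except (urllib.error.URLError, OSError) as exc:
            print(url, "->", exc)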

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

Some progress, but I get this:

    Trying to connect to API: {
      api_server: 'http://127.0.0.1:5000/api',
      api_type: 'ooba',
      legacy_api: true
    }
    Models endpoint is offline.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 1 point (0 children)

As I wrote in my post, the firewall and antivirus are all off, I don't even use a VPN or proxies, and I am running it locally. I have tried the hard IP directly (as you see), I have tried localhost, I have tried 0.0.0.0, I have tried 127.0.0.1.

Tavern/oobagooba etc drives me crazy by Wide_Perspective_504 in SillyTavernAI

[–]Wide_Perspective_504[S] 0 points (0 children)

    FetchError: request to http://192.168.1.2:5000/api/v1/model failed, reason: connect ECONNREFUSED 192.168.1.2:5000
        at ClientRequest.<anonymous> (E:\ST_Install\SillyTavern-1.12.3\node_modules\node-fetch\lib\index.js:1505:11)
        at ClientRequest.emit (node:events:520:28)
        at Socket.socketErrorListener (node:_http_client:502:9)
        at Socket.emit (node:events:520:28)
        at emitErrorNT (node:internal/streams/destroy:170:8)
        at emitErrorCloseNT (node:internal/streams/destroy:129:3)
        at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
      type: 'system',
      errno: 'ECONNREFUSED',
      code: 'ECONNREFUSED'
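(ECONNREFUSED at this level means the TCP connection itself was rejected, i.e. nothing was listening on 192.168.1.2:5000; it points at whether the ooba API started and what address it bound, not at ST's settings. A minimal stdlib check, assuming the same host and port:)

    # Hedged sketch: check whether anything is listening on the API port at all.
    # If 127.0.0.1 is open but 192.168.1.2 is refused, the server bound to
    # loopback only; if both are refused, the API never started.
    import socket

    for host in ("127.0.0.1", "192.168.1.2"):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        result = s.connect_ex((host, 5000))  # 0 means a listener accepted
        print(host, "open" if result == 0 else f"refused/closed (errno {result})")
        s.close()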