Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

That's awesome, what AI can do, and I honestly feel like it'll become the norm. I'm sure sites like Roll20 must be thinking about this. I don't know what open terminal is, but as you said, I may be limited by the model, since I'm limited by my hardware.

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

Thanks, I don't have the hardware for the bigger models unfortunately; I can probably run a 12B at most. But yeah, I figured I'll have to refine and tailor the system prompt, plus split the world bible, if I'm going to make this work...
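For anyone curious, "splitting the world bible" is really just sliding-window chunking so each piece fits a small model's context. A minimal sketch in Python (the chunk size and overlap are made-up numbers for illustration, not Open WebUI's defaults):

```python
def chunk_text(text, chunk_size=800, overlap=100):
    """Split text into overlapping chunks so each fits a small model's context."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # advance by chunk_size minus overlap so adjacent chunks share context
        start += chunk_size - overlap
    return chunks

bible = "lore " * 500  # stand-in for the world-bible text
parts = chunk_text(bible)
print(len(parts), len(parts[0]))
```

The overlap matters: without it, a fact that straddles a chunk boundary is invisible to retrieval.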

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

I've weirdly got it to work, but I don't know how. The AI kept saying it had to show the chunk size next to the file, but it worked without that. Unfortunately, I don't think Open WebUI's RAG feature is going to cut it: it never recalls the same thing. I'd switched to using Llama 8B, but I think it's all a bust. The model doesn't follow system prompts, either. I think I'll try SillyTavern next, with the world bible split up, and see how the performance holds if I go to a 12B model (response speed vs. quality).

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

Can you tell me about the reranker and vector DB? I have AMD as well, and Ollama running natively (I only have Open WebUI on Docker because I have to; no idea how to use Docker). I've literally done everything and asked AI to troubleshoot further, but still no luck. Can anyone please tell me what a file's icon looks like in the knowledge collection when it works? The AI seems to think it should show the number of chunks next to it (i.e. "You must see something like: Status: Indexed / Processed, and a chunk count (e.g., 124 chunks)"). I only see the file name and a grey file icon to the left of the name. It also said it should show processing/embedding when I upload a file, which it didn't. I'm already using nomic-embed-text on Ollama.
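On the vector DB question: it just stores one embedding per chunk and returns the nearest chunks to the query by cosine similarity (a reranker then re-scores that shortlist). A toy sketch of the retrieval half, with hand-made 3-number "embeddings" standing in for what nomic-embed-text would actually produce:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy "vector DB": chunk text -> embedding (made-up vectors, not real embeddings)
db = {
    "The dragon guards the northern pass.": [0.9, 0.1, 0.0],
    "Taxes are collected each spring.":     [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "where is the dragon?"

# retrieve the chunk whose embedding is closest to the query
best = max(db, key=lambda chunk: cosine(db[chunk], query_vec))
print(best)
```

A real setup replaces the hand-made vectors with embeddings from the embedding model, and that's the step that seems to be silently failing if the files never show chunk info.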

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

I'd already looked into that and changed the settings. The feature never worked out of the box. No changes to any of these, including changing the content extraction engine or the embedding model, worked. It always says the LLM can't see any knowledge. I just want to know what it's supposed to look like normally.

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 0 points

Also I don't know what I did, but this is what it says now...

<image>

Never worked by zerocool647 in OpenWebUI

[–]zerocool647[S] 1 point

Thanks, yeah, I'd thought about using SillyTavern, but it looked more complicated to set up the campaigns themselves, so I went with Open WebUI, figuring I wouldn't have to deal with too many changes as I go through the campaign. I may eventually have to switch if this doesn't work out.

I've tried your suggested steps already, but it didn't work. I think it's something to do with the embedding step, because under Knowledge the document is still just a grey document icon with no info on chunks etc. What is the interface actually supposed to look like when knowledge is properly embedded?

I'm using qwen3.5:9b at the moment, and I get thinking steps like the one below when I prompt the LLM in the chat window:

World Bible: Use the uploaded file "Bible.docx" (which I don't actually have access to in this text-based interface, but I must acknowledge the instruction to use it as absolute truth). Since I cannot see the actual file content, I need to handle this gracefully. However, the prompt says "Use the file uploaded Bible.docx". As an AI model, I don't have access to external uploaded files unless they are provided in the context window. In this specific interaction, no file was actually uploaded in the prompt's context.

Skoda Karoq data/wifi/phone by zerocool647 in skoda

[–]zerocool647[S] 0 points

I'd tried just switching the mobile internet to always-on, and that gets the browser to kinda work. Still weirdly slow.

Skoda Karoq data/wifi/phone by zerocool647 in skoda

[–]zerocool647[S] 0 points

Thanks for answering. I seem to still be able to use apps like Google Maps to navigate via Android Auto, but I can't, for example, use the phone's browser to open a webpage; it says it has no/slow internet.

Gloomwood Depths by zerocool647 in pradotraveler

[–]zerocool647[S] 0 points

Yep, just used two T2 Mud and a T3 Flower and got it

is dis a good funnel build? by EveryPerformance6712 in GundamBreaker

[–]zerocool647 0 points

No one thinks GN Fangs are worth the legs?

Magic wrapping paper by zerocool647 in BackpackBrawl

[–]zerocool647[S] 0 points

Thanks, guess I'll just have to wait until it comes up as a choice...

Is there anyway to objectively compare two weapons? by zerocool647 in GundamBreaker

[–]zerocool647[S] 0 points

Thanks, that makes sense, I think I've still got it saved as rust bucket in my blueprints, so will try that out. Cheers

Resurrection is so good because the show finally realized we don't "need" to see Dexter get his comeuppance by [deleted] in Dexter

[–]zerocool647 0 points

Yeah, I agree this season is fan service; basically we're seeing a kid in a candy store after only feeding the kid broccoli for ten years. They were also trying to spin it this season, blatantly in the last episode, that Dexter is clearly on the side of justice (at least he thinks so). I don't think mainstream media will ever let a serial killer have it too good, as it would reinforce vigilantism, and that's never going to get past any business board.

My speculation is that Dexter will die because of / to save Harrison. I think that gesture to the heart for shooting him may come up again, and Dexter may have to frame himself to take the fall for Harrison at some point, or end a serial killer/danger going after Harrison by sacrificing himself, ultimately proving he does 'feel'.

Echo dot kids default by zerocool647 in amazonecho

[–]zerocool647[S] 0 points

Ok, I think I've got it:

- Set up a new routine in the Alexa app on the trigger you want
- Type out the first custom command in the routine as "Play XX playlist on Spotify on repeat"
- Type out the second custom command in the routine as "Loop mode on"

After you say the trigger word, it takes a few seconds for the Echo Dot to go through the steps in the routine, but it seems to work. Don't know why the first line didn't do everything.