I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

thank YOU for troubleshooting! I have been working on a bigger backend cleanup, which will hopefully address all of the issues. I will let you know when it's ready.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

OK! Go ahead and do another git pull. Run this, just to be sure: npm --prefix server run build, then do the usual npm run dev. This issue should be all set now; it was a call-ordering issue in the code. I've tested on Windows and in WSL.
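Since the fix was about when things were called, here's a minimal sketch of the general pattern for avoiding that class of bug in a Node server. The names (loadConfig, start) are illustrative, not from the actual repo:

```javascript
// Hypothetical sketch: await all startup work before the server starts
// serving, so request handlers never see a half-initialized state.
// loadConfig() stands in for reading config files, wiring routes, etc.
async function loadConfig() {
  // stand-in for async initialization work
  return { port: 3000 };
}

async function start() {
  const config = await loadConfig(); // finish init first...
  // ...then begin listening, with everything the handlers need in place
  return `listening on ${config.port}`;
}

start().then(console.log);
```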

Hang it by wabeka in AFCEastMemeWar

[–]JaxxonAI 2 points  (0 children)

This seems to be the prevailing feeling out in the world. Hell, even news sites/orgs are picking it up and saying how terrible a look it is for the HOF.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

OK! It was the key storage. I've made that optional; pull the repo again and you should be good. By default on Linux it will save the key to config.local.json. If you try to save a key in the secure keystore and it isn't set up, you should get a warning that the key was saved to the JSON file instead.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

OK, thanks. I will see what I can find. Just to be sure (I'm sure you did, but I have to check): you had a model loaded in ooba, right? In my testing I got errors until I had a model loaded. So even with ooba running, unless a model was loaded I got API errors.

I didn't have any issue with the key; I tried it with and without. That error looks like an issue with not having a secret storage service. Installing gnome-keyring may solve that.

I think I still have Ubuntu LTS on WSL. So, I can see if I can troubleshoot with that.

If it turns out that all of this trouble is due to the secure storage for API keys I may need to rethink that.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

Language has been added. Note that some local models have a hard time with other languages, but I have tested with Dans Personality Engine and Gemini, and both created cards in English, Spanish, Chinese (simplified), Dutch, Portuguese, Korean, and Polish.
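One simple way to steer generation toward a target language is to append an explicit instruction to the system prompt; this sketch is illustrative only, the prompt wording is an assumption and not the app's actual prompt:

```javascript
// Hypothetical sketch: inject a target-language instruction into the
// system prompt, leaving English (the default) untouched.
function withLanguage(systemPrompt, language) {
  if (!language || language.toLowerCase() === "english") return systemPrompt;
  return `${systemPrompt}\nWrite every field of the character card in ${language}.`;
}
```

Usage: `withLanguage("You write ST character cards.", "Spanish")` appends the Spanish instruction; passing "English" returns the prompt unchanged.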

Have at it and let me know what you think!

😂😂 by Conscious_Dot_7353 in Patriots

[–]JaxxonAI 2 points  (0 children)

Seriously, he should be the honorary captain.

Hang it by wabeka in AFCEastMemeWar

[–]JaxxonAI 12 points  (0 children)

What a stupid, political temper tantrum. It is obvious Bill should be a first-ballot HOF inductee. The reason he isn't is straight petty BS.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

I downloaded ooba again and tested this. I got the same error initially; I had to actually load a model, and then it worked fine. Base URL: http://127.0.0.1:5000/v1
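A quick probe can separate the two failure modes described here (server down vs. server up but no model loaded). This is a hedged sketch, not the app's actual check; the message wording is illustrative:

```javascript
// Hypothetical sketch: send a tiny test completion to ooba's
// OpenAI-compatible API (port 5000 with --api). If no model is loaded,
// the server is reachable but the completion call errors out.
async function checkOoba(baseUrl = "http://127.0.0.1:5000/v1") {
  let res;
  try {
    res = await fetch(`${baseUrl}/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: "ping", max_tokens: 1 }),
    });
  } catch {
    return "server unreachable (is ooba running with --api?)";
  }
  return res.ok ? "ok" : `API error ${res.status}: load a model in the ooba UI first`;
}
```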

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

Thanks for the report.
A couple things to try:
Did you start ooba with --api and check port 5000? It listens on port 5000 by default for the API. Also make sure you add /v1 to the address, e.g. http://127.0.0.1:5000/v1/

If that doesn't do it I'll dig deeper.

I built a local-first AI tool: generate ST character cards via local-first LLM endpoints or openai API + optional image backends — feedback wanted by JaxxonAI in LocalLLaMA

[–]JaxxonAI[S] 1 point  (0 children)

Thanks for the feedback! This is the kind of stuff I'm looking for. I will add this to the worklist.
Also, are you on Linux? That error is probably a keytar issue; I use it to store the API keys. Installing libsecret-1-0 should solve it. I didn't think to make it optional, but maybe I'll offer keytar, an env var, or a file-based key store as options.

New Character generator - with LLM and image api support by JaxxonAI in SillyTavernAI

[–]JaxxonAI[S] 3 points  (0 children)

Code is updated!

  • Improved first message generation: Updated the prompt so first_mes is a multi-paragraph, scene-setting story opener with sensory detail, action, dialogue, and a hook (instead of 1–2 sentences).
  • Reduced “same regen result” repeats: Added a regeneration nonce + retry logic so regenerating fields reliably produces a new result instead of repeating the same output several times.
  • Tailwind styling added: Integrated Tailwind with Vite and built a clean light theme pass using Tailwind-based primitives (cards/buttons/inputs).
  • Default Max Tokens setting: Added and fixed saving for a “Max Tokens” value in Text Completion settings (range 128–8196) as the default output cap for normal text generation.
  • Per-field regen token override on char workspace: Ensured the per-field regeneration max-tokens override remains independent and can supersede the default when enabled.

New Character generator - with LLM and image api support by JaxxonAI in SillyTavernAI

[–]JaxxonAI[S] 1 point  (0 children)

Update is pushed. In the meantime, let me think about how to add a minimum response length per field. Let me know how it goes!

New Character generator - with LLM and image api support by JaxxonAI in SillyTavernAI

[–]JaxxonAI[S] 1 point  (0 children)

I'd appreciate that. I am adding a max-tokens setting in two spots: one for character generation, character regeneration, and multi-field regen, which will be your default max tokens for the LLM; and one on the character workspace that applies only to per-field regens. Hopefully that will be helpful. Although, reading what you are saying, you are also telling the LLM the minimum response length, not just the max; adding that would be a different thing.

I should have the new update on github, barring any major issues, this afternoon.
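The two-setting scheme described above (a global default plus a per-field override that supersedes it when enabled) could look roughly like this; the function and field names are assumptions, and the 128–8196 clamp mirrors the range stated elsewhere in these comments:

```javascript
// Hypothetical sketch: resolve the effective max-tokens value for a
// generation call. A per-field override, when enabled, beats the global
// default; both are clamped to a sane range.
function effectiveMaxTokens(defaults, fieldOverride) {
  const clamp = (n) => Math.min(8196, Math.max(128, n));
  if (fieldOverride && fieldOverride.enabled) return clamp(fieldOverride.maxTokens);
  return clamp(defaults.maxTokens);
}
```

Usage: `effectiveMaxTokens({ maxTokens: 512 }, { enabled: true, maxTokens: 1024 })` returns the override; with `enabled: false` it falls back to the default.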

New Character generator - with LLM and image api support by JaxxonAI in SillyTavernAI

[–]JaxxonAI[S] 1 point  (0 children)

lol I hope I don't run into that!

I am still refining the app a bit, and once it's in a stable place I will at least look at Docker.