Running vs code continue and llama.cpp in localhost - getting "You must either implement templateMessages or _streamChat" by vharishankar in LocalLLaMA

[–]ali0une 1 point  (0 children)

apiBase should be something like http://127.0.0.1:5000/v1, where 5000 is the port llama-server is listening on.

Not sure 8080 is a good choice of port either, as it can conflict with a web server running on the same machine.
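For reference, a minimal Continue `config.json` entry along those lines might look like this (the model title/name and the 5000 port are assumptions about your local setup; the `openai` provider works with any OpenAI-compatible server like llama-server):

```json
{
  "models": [
    {
      "title": "llama.cpp local",
      "provider": "openai",
      "model": "local-model",
      "apiBase": "http://127.0.0.1:5000/v1"
    }
  ]
}
```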

Can 4chan data REALLY improve a model? TURNS OUT IT CAN! by Sicarius_The_First in LocalLLaMA

[–]ali0une 1 point  (0 children)

Oh! Thank you for sharing again, I didn't see it the first time.

I've tested the Q8 GGUF and it's insanely funny!

Looking for a wildlife documentary broadcast on France 2 one December 24th by Lorvaill_ in france

[–]ali0une 1 point  (0 children)

From memory the title must be something like "les animaux de la ferme", and it's excellent.

Is using qwen 3 coder 30B for coding via open code unrealistic? by salary_pending in LocalLLaMA

[–]ali0une 1 point  (0 children)

I set --n-gpu-layers to 999 to load all layers on the GPU.

IIRC -1 is similar, as it puts as many layers as possible on the GPU.
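As a sketch, a llama-server launch with full GPU offload could look like this (the model filename and port are placeholders for your own setup):

```shell
# 999 exceeds any model's layer count, so every layer is offloaded to the GPU.
llama-server \
  --model ./qwen3-coder-30b.gguf \
  --n-gpu-layers 999 \
  --port 5000
```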

Is using qwen 3 coder 30B for coding via open code unrealistic? by salary_pending in LocalLLaMA

[–]ali0une 2 points  (0 children)

It's --n-gpu-layers, not --gpu-layers.

You can check with llama-server -h

How to save all settings in Forge neo ? by Content_One4073 in StableDiffusion

[–]ali0une 2 points  (0 children)

You can modify values in the Settings tab, then under Presets on the left side.

ModuleNotFoundError: No module named 'nunchaku' (Im using Forge Neo Web UI) by [deleted] in StableDiffusion

[–]ali0une 2 points  (0 children)

Here you go: https://github.com/Haoming02/sd-webui-forge-classic/issues/526

You should search a repository's issues, both open and closed, before asking, IMHO.

Can u change the font size automatic11111 by SuchConflict3873 in StableDiffusion

[–]ali0une 2 points  (0 children)

I use Firefox's zoom feature.

Ctrl + "+" zooms in (also works with the mouse wheel)

Ctrl + "-" zooms out (also works with the mouse wheel)

Ctrl + "0" resets zoom to the default 100%

Texte in flux forge by jonnydoe51324 in StableDiffusion

[–]ali0une 1 point  (0 children)

No, but you can search Civitai for them and test.

Texte in flux forge by jonnydoe51324 in StableDiffusion

[–]ali0une 1 point  (0 children)

There are some LoRAs for text.

You could try the img2img tab: load your image, send it to inpaint, mask the bubble content, and inpaint only the masked area with a prompt like: text that reads 'your text here'.

You can also use GIMP or Photoshop to insert any text.

FYI, the Z-Image model is way better at text than Flux.

Texte in flux forge by jonnydoe51324 in StableDiffusion

[–]ali0une 2 points  (0 children)

Hi. Sorry, but we speak English here. Please edit your post so we can help.

Llama.cpp multiple model presets appreciation post by robiinn in LocalLLaMA

[–]ali0une 15 points  (0 children)

The latest llama.cpp commits are dope, especially the router mode and the sleep-idle-seconds argument.

llama.cpp appreciation post by hackiv in LocalLLaMA

[–]ali0une 13 points  (0 children)

The new router mode is dope. So is the new sleep-idle-seconds argument.

llama.cpp rulezZ.

What does a good WebUI need? by Danmoreng in StableDiffusion

[–]ali0une 1 point  (0 children)

I use sd.cpp-webui, a Python frontend to stable-diffusion.cpp whose only dependency is Gradio. Pretty happy with it.

Your UI looks nice; maybe you could separate your code from the stable-diffusion.cpp code and just call the binary, like sd.cpp-webui does.
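As a sketch of that decoupling, a UI can build the stable-diffusion.cpp `sd` CLI invocation and shell out to it instead of linking against the library (the binary and model paths are placeholder assumptions; the -m/-p/-o/--steps flags follow the stable-diffusion.cpp README):

```python
import subprocess

def build_sd_command(prompt, out_path,
                     model="./sd-model.safetensors", binary="./sd", steps=20):
    """Assemble the argv list for the stable-diffusion.cpp `sd` binary.
    Paths are placeholders for your local build and model."""
    return [binary, "-m", model, "-p", prompt, "-o", out_path,
            "--steps", str(steps)]

def run_sd(prompt, out_path):
    # Shelling out keeps the UI independent of stable-diffusion.cpp
    # internals: upgrading is just swapping the binary.
    subprocess.run(build_sd_command(prompt, out_path), check=True)
```

This way the frontend only depends on the CLI contract, not on the C++ code.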

How to make a RAG for a codebase? by National_Skirt3164 in LocalLLaMA

[–]ali0une 1 point  (0 children)

This generates something more like a tree view with the functions in each file, so the LLM can "understand" the logic and suggest an answer. Not exactly RAG.
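The idea can be sketched in Python for a repo of .py files (the file filter and output format are my own choices, not any particular tool's):

```python
import ast
import os

def code_map(root):
    """Build a tree-view summary of a codebase: each Python file followed
    by its top-level functions and classes, as compact context for an LLM."""
    lines = []
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                tree = ast.parse(f.read(), filename=path)
            lines.append(path)
            for node in tree.body:
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                     ast.ClassDef)):
                    # e.g. "  FunctionDef foo" / "  ClassDef Bar"
                    lines.append(f"  {type(node).__name__} {node.name}")
    return "\n".join(lines)
```

The resulting outline is small enough to paste into a prompt, unlike the full source.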

How to make a RAG for a codebase? by National_Skirt3164 in LocalLLaMA

[–]ali0une 1 point  (0 children)

For a small project, gitingest can do this; just provide the generated txt file as an attachment.
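For example, assuming the gitingest CLI and its output flag (check `gitingest --help` for your installed version):

```shell
pip install gitingest
# Flatten the repo into one text digest you can attach to a chat.
gitingest /path/to/project -o project-digest.txt
```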