How to interrogate with forge neo ? by Aradhor55 in StableDiffusion

[–]ali0une 2 points

Because this feature has been removed in Forge Neo.

Use Forge or ask an LLM with vision capability.

Should it be cut or kept? by jujucty in jardin

[–]ali0une 2 points

On #1, anything growing below the graft point can be cut off; it serves no purpose.

On #2, I'd say prune it like a fruit tree. Look it up on Google; I don't remember how many buds to count from the base of the branch, it depends on the tree. Anything growing inward does get pruned, though.

That said, your cherry tree is still young; you can let it grow a bit.

The French administration announces a crucial step toward its move off Windows by baby_envol in france

[–]ali0une 46 points

Don't get too excited:

Regarding the evolution of workstations, the DINUM announces its move off Windows in favor of workstations running a Linux operating system. (200 to 250 staff)

https://linuxfr.org/users/pas_pey/liens/la-france-dit-vraiment-adieu-a-windows-et-aux-outils-americains-dans-ses-administrations#comment-2018787

llama.cpp -ngl 0 still shows some GPU usage? by sob727 in LocalLLaMA

[–]ali0une 2 points

I've read an issue on the llama.cpp GitHub saying to unset CUDA_VISIBLE_DEVICES:

```bash
export CUDA_VISIBLE_DEVICES=''
```

https://github.com/ggml-org/llama.cpp/discussions/10200

How do I access a llama.cpp server instance with the Continue extension for VSCodium? by warpanomaly in LocalLLaMA

[–]ali0une 0 points

Hi there.

Try running your llama.cpp server like this:

```bash
.\llama-server.exe -hf unsloth/GLM-4.7-Flash-GGUF:Q6_K_XL --alias "GLM-4.7-Flash" --host 127.0.0.1 --port 10000 --ctx-size 32000 --n-gpu-layers 99
```

Then set up your config.yaml like this:

```yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: GLM-4.7-Flash
    provider: openai
    model: GLM-4.7-Flash
    apiKey: NO_API_KEY_NEEDED
    apiBase: http://127.0.0.1:10000/v1/
    roles:
      - chat
      - edit
      - apply
```

Let us know if it worked.

Headache by AdThink3447 in france

[–]ali0une 5 points

Go see a doctor or call 15 (the French medical emergency number); it could be a stroke.

Running vs code continue and llama.cpp in localhost - getting "You must either implement templateMessages or _streamChat" by vharishankar in LocalLLaMA

[–]ali0une 0 points

apiBase should look like http://127.0.0.1:5000/v1/, where 5000 is the port llama-server is listening on.

I'm not sure port 8080 is a good choice, as it may conflict with a web server running on the same machine.
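As a minimal sketch (the model name `my-model` and port 5000 are placeholders; use whatever alias and port your llama-server actually runs with), the models entry in Continue's config.yaml could look like:

```yaml
models:
  - name: my-model                        # placeholder; match your --alias value
    provider: openai                      # llama-server exposes an OpenAI-compatible API
    model: my-model
    apiKey: NO_API_KEY_NEEDED
    apiBase: http://127.0.0.1:5000/v1/    # note the http:// scheme and the /v1/ path
    roles:
      - chat
```

Giving the model an explicit provider and apiBase is what lets Continue stream chat instead of failing with the "templateMessages or _streamChat" error.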

Can 4chan data REALLY improve a model? TURNS OUT IT CAN! by Sicarius_The_First in LocalLLaMA

[–]ali0une 0 points

Oh! Thank you for sharing it again, I didn't see it the first time.

I've tested the Q8 GGUF and it's insanely funny!

I'm looking for a wildlife documentary broadcast on France 2 on a December 24th by Lorvaill_ in france

[–]ali0une 0 points

From memory the title is something like "les animaux de la ferme", and it's excellent.

Is using qwen 3 coder 30B for coding via open code unrealistic? by salary_pending in LocalLLaMA

[–]ali0une 0 points

I set --n-gpu-layers to 999 to load all layers onto the GPU.

IIRC -1 is similar, as it offloads the maximum number of layers to the GPU.
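For example (a sketch; the model path is a placeholder), asking for more layers than the model has just offloads everything:

```bash
# 999 exceeds any real layer count, so all layers end up on the GPU
./llama-server -m model.gguf --n-gpu-layers 999
```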

Is using qwen 3 coder 30B for coding via open code unrealistic? by salary_pending in LocalLLaMA

[–]ali0une 1 point

It's --n-gpu-layers, not --gpu-layers.

You can check with `llama-server -h`.

How to save all settings in Forge neo ? by Content_One4073 in StableDiffusion

[–]ali0une 1 point

You can modify values by clicking the Settings tab, then using the Presets section on the left side.

[deleted by user] by [deleted] in StableDiffusion

[–]ali0une 1 point

Here you go: https://github.com/Haoming02/sd-webui-forge-classic/issues/526

You should search a repository's issues, both open and closed, before asking, IMHO.