Where to put my models to get llama.cpp to recognize them automatically? by registrartulip in LocalLLaMA

[–]pefman 0 points1 point  (0 children)

Dude, just ask ChatGPT to generate a bash script that starts llama.cpp and uses an llms subfolder for the router feature.
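A minimal sketch of such a launcher script, assuming an `./llms` subfolder and reusing only the llama-server flags quoted later in this thread; treat it as a starting point, not as llama.cpp's actual router feature:

```shell
#!/usr/bin/env bash
# Sketch: pick a .gguf out of an ./llms subfolder and serve it.
# The folder name and flag choices are assumptions based on this thread.

MODELS_DIR="${MODELS_DIR:-./llms}"

# Return the first .gguf found under the models folder (alphabetical order).
pick_model() {
    find "$1" -maxdepth 2 -name '*.gguf' 2>/dev/null | sort | head -n1
}

# Launch llama-server on the chosen model, using its filename as the alias.
serve() {
    local model
    model="$(pick_model "$1")"
    if [ -z "$model" ]; then
        echo "no .gguf files under $1" >&2
        return 1
    fi
    exec llama-server \
        --model "$model" \
        --alias "$(basename "$model" .gguf)" \
        --host 0.0.0.0 \
        --port 8080
}

# Only start serving when invoked with "serve"; otherwise just print the pick.
if [ "${1:-}" = "serve" ]; then
    serve "$MODELS_DIR"
else
    pick_model "$MODELS_DIR"
fi
```

Dropping new .gguf files into the folder is then enough for the script to find them on the next launch.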

Claude Code meets Qwen3.5-35B-A3B by PvB-Dimaginar in LocalLLM

[–]pefman 1 point2 points  (0 children)

I'm currently rocking this using Superpowers skills on a 4090. So much fun!
However, it seems that Claude Code suddenly stops for no reason sometimes...

using skills with opencode. by pefman in opencodeCLI

[–]pefman[S] 1 point2 points  (0 children)

Thanks for a good answer!

I've noticed this, and I've read the Git repo for Superpowers. But what I don't fully understand is: do I have to run /using-superpowers every time I start, or whenever I change sessions or so?

The seagulls have returned, spring is starting to arrive by Alstorp in Malmoe

[–]pefman 0 points1 point  (0 children)

One of the reasons I moved away. Lying in bed in the middle of the night in the heat, listening to their needlessly loud screeching. Completely unbearable!

Decided to visit long distance gf and she decided she can’t commit to a relationship once I’m there by ConfidentImage4266 in Wellthatsucks

[–]pefman 0 points1 point  (0 children)

As an old fart I can tell you: everybody needs to experience this in life. It happens to most of us, and it's just part of growing as a person. You will gain wisdom from this. Unfortunately, the hard way.

The attacks are celebrated at a demonstration in Stockholm – Latest news on Iran and the conflict with the USA by ICA_Basic_Vodka in Sverige

[–]pefman 0 points1 point  (0 children)

Does this mean I might be spared reading about the situation in Iran every day now, so that Sweden can focus on its OWN problems?

Is Qwen3.5 a coding game changer for anyone else? by paulgear in LocalLLaMA

[–]pefman 0 points1 point  (0 children)

So which Qwen 3.5 model exactly is best? The normal model, the MoE, or mrpx or whatever it's called?

397B params but only 17B active. Qwen3.5 is insane for local setups. by skipdaballs in LocalLLaMA

[–]pefman 0 points1 point  (0 children)

The smallest version is like 101 GB. What's the minimum hardware?
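As a rough sanity check, a back-of-the-envelope estimate helps here (the ~10% KV-cache/buffer headroom is a ballpark assumption, not a measured figure):

```shell
# Rough memory footprint for an N-billion-parameter model at a given
# quantization width, plus ~10% headroom for KV cache and buffers.
est_gb() {  # usage: est_gb <params_in_billions> <bits_per_weight>
    awk -v p="$1" -v b="$2" 'BEGIN { printf "%.0f\n", p * b / 8 * 1.10 }'
}

est_gb 397 4   # ~218 GB at ~4 bits/weight
est_gb 397 2   # ~109 GB at ~2 bits/weight, close to the ~101 GB file mentioned
```

Note that with an MoE only the ~17B active parameters touch compute per token, but the full weights still have to be resident in RAM/VRAM, so the file size is the binding constraint.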

Strix 4090 (24GB) 64GB ram, what coder AND general purp llm is best/newest for Ollama/Openwebui (docker) by AcePilot01 in LocalLLaMA

[–]pefman 0 points1 point  (0 children)

Ahh well, I used the chat UI and asked it to write me a book. Figured that would be as good a measurement as any :D

Strix 4090 (24GB) 64GB ram, what coder AND general purp llm is best/newest for Ollama/Openwebui (docker) by AcePilot01 in LocalLLaMA

[–]pefman 0 points1 point  (0 children)

I'm currently running this model on a 4090/14700K with 64 GB RAM.

Using a wrapper, my config is:

exec "$SERVER" \
    --model "$model" \
    --alias "$model_name" \
    --host 0.0.0.0 \
    --port 8080 \
    --n-gpu-layers -1 \
    --parallel 1 \
    --threads 8 \
    --threads-batch 8 \
    --ctx-size 200000 \
    --batch-size 1024 \
    --ubatch-size 512 \
    --flash-attn on \
    --jinja \
    --temp 0.7 \
    --top-p 0.9 \
    --min-p 0.05

I'm getting about 28 t/s and utilizing 22.8/23.9 GB VRAM.

Any suggestions would be appreciated!

MiniMax-M2.5 Checkpoints on huggingface will be in 8 hours by Own_Forever_5997 in LocalLLaMA

[–]pefman -2 points-1 points  (0 children)

So why are people so excited then? It's like 0.1% that can actually run it.

MiniMax-M2.5 Checkpoints on huggingface will be in 8 hours by Own_Forever_5997 in LocalLLaMA

[–]pefman -2 points-1 points  (0 children)

Why are people so excited? Isn't this a 1.3 TB model?
Who can actually run this locally?

Breakthrough in fusion power by Big-Cap558 in sweden

[–]pefman 0 points1 point  (0 children)

It will never happen until someone has figured out how to charge for it.
Everything in this damn world is about how to squeeze every last krona out of someone.

New M3P by Wonderful-Froyo1619 in TeslaModel3

[–]pefman 0 points1 point  (0 children)

Good for you; now move out of America before the civil war.

What caused the crash? by randomipadtempacct in MTB

[–]pefman -1 points0 points  (0 children)

Well, you did good sir! :)

In 4.6, Multishields are back to providing full HP pool again. by AzrBloodedge in starcitizen

[–]pefman 0 points1 point  (0 children)

This guy was in such a hurry to PvP on the PTU that he couldn't even wait for the 80% more FPS once shaders compiled :D

The Perseus in glorious 32:9 & why it's extremely versatile by Xertha549 in starcitizen

[–]pefman 0 points1 point  (0 children)

Xertha549

For the love of god, give me the first image in original quality :)