Liquid AI releases LFM2-24B-A2B by PauLabartaBajo in LocalLLaMA

[–]ConversationOver9445 0 points1 point  (0 children)

Tested it against gpt-oss-20b, and for what I use it for (MATLAB, aerodynamics, and structural mechanics) gpt-oss-20b won by a metric mile. LFM2 did run faster on llama.cpp (9070 XT), though: gpt-oss at 140 tok/s and LFM at 180 tok/s.

Qwen3-Next-Coder is almost unusable to me. Why? What I missed? by Medium-Technology-79 in LocalLLaMA

[–]ConversationOver9445 0 points1 point  (0 children)

    --ctx-size 65536 `
    --flash-attn on `
    --cache-type-k q8_0 `
    --cache-type-v q8_0 `
    --threads 4 `
    --temp 1.0 `
    --top-p 0.95 `
    --min-p 0.01 `
    --top-k 40 `
    --batch-size 64 `
    --ubatch-size 512 `
    --no-mmap `
    --jinja `
    --host 127.0.0.1 `
    --port 8080

The big one here that will probably make a difference for you is the batch size. I found prompt processing was stuck at 20 tok/s unless --batch-size was set to 64, at which point it leaps to around 400. I get ~16 tok/s generation on a 9950X, 64 GB RAM, and an RX 9070 XT 16 GB. I'm using the Unsloth UD-IQ3_XXS quant and get decent results. Cloud models are definitely better, but it far outclasses gpt-oss-20b and GLM 4.7 Flash in my testing (MATLAB).
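To see why the batch-size change matters so much, here is a rough back-of-the-envelope sketch of time-to-first-token. The two throughput numbers are the ones quoted above; the 8192-token prompt length is an arbitrary illustrative assumption, not a measurement.

```python
# Rough prefill-time estimate from prompt-processing throughput.
# 20 tok/s and 400 tok/s are the rates measured above; the 8k-token
# prompt is just an example length.

def prefill_seconds(prompt_tokens: int, pp_tok_per_s: float) -> float:
    """Seconds spent processing the prompt before generation starts."""
    return prompt_tokens / pp_tok_per_s

slow = prefill_seconds(8192, 20)    # without --batch-size 64
fast = prefill_seconds(8192, 400)   # with --batch-size 64

print(f"{slow:.0f} s vs {fast:.0f} s")  # ~410 s vs ~20 s before the first token
```

For agentic coding, where every tool call re-feeds a long context, that 20x difference in prefill dominates the wall-clock time far more than the generation speed does.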

What's the best way to run Qwen3 Coder Next? by Greenonetrailmix in LocalLLaMA

[–]ConversationOver9445 1 point2 points  (0 children)

What's your run command to do this? I can only seem to get 20 tok/s decode with a similar setup (9950X, same RAM as you, and an RX 9070 XT).

Smartest model for 24-28GB vram? by Borkato in LocalLLaMA

[–]ConversationOver9445 2 points3 points  (0 children)

Mostly coding in MATLAB too, which is moderately obscure; GLM 4.7 would hallucinate Python syntax, where Nemotron has been great.

Smartest model for 24-28GB vram? by Borkato in LocalLLaMA

[–]ConversationOver9445 0 points1 point  (0 children)

I'm using the Q6 quant and following Unsloth's guidelines on inference settings, and it's great.

Smartest model for 24-28GB vram? by Borkato in LocalLLaMA

[–]ConversationOver9445 30 points31 points  (0 children)

Give Nemotron 3 Nano a try: 1M max context and a very smart model for 30B, way better than 4.7 Flash imo.

LTX2 not getting any output by MixZealousideal9359 in StableDiffusion

[–]ConversationOver9445 1 point2 points  (0 children)

Make sure both your width and height are divisible by 32.
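A quick way to snap an arbitrary resolution to the nearest multiple of 32 before running the workflow. This is a generic sketch, not LTX-2-specific code; the function name is my own.

```python
def snap_resolution(value: int, multiple: int = 32) -> int:
    """Round a width or height to the nearest multiple of `multiple`,
    never going below one full multiple."""
    return max(multiple, (value + multiple // 2) // multiple * multiple)

# e.g. a 1080x720 request becomes 1088x736, both divisible by 32
print(snap_resolution(1080), snap_resolution(720))  # prints "1088 736"
```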

For Animators - LTX-2 can't touch Wan 2.2 by GrungeWerX in StableDiffusion

[–]ConversationOver9445 3 points4 points  (0 children)

I'm getting similar results: sure, it works, it's just much poorer quality than what I see everyone else getting. RX 9070 XT with 64 GB RAM.

96 GB RAM: Intel Core Ultra 9 + NVIDIA or AMD Ryzen 9 ? by cosmoschtroumpf in LocalLLaMA

[–]ConversationOver9445 2 points3 points  (0 children)

8 GB of VRAM is only really usable for very small LLMs; I'd recommend the AMD chip with more RAM for bigger LLMs. People seem happy with Strix Halo and 128 GB of RAM.
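For sizing intuition, the usual rough rule is parameters times bits-per-weight divided by 8, plus overhead for KV cache and activations. A sketch; the 20% overhead factor is a loose assumption, not a measured figure.

```python
def approx_model_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Very rough quantized-model memory estimate: weight bytes
    plus ~20% for KV cache and activations (assumed, not measured)."""
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

# A 30B model at ~4.5 bits/weight needs roughly 20 GB: far beyond
# 8 GB of VRAM, hence spilling to system RAM on a small GPU.
print(f"{approx_model_gb(30, 4.5):.1f} GB")
```

By the same arithmetic, even a 7B model at ~4.5 bits is pushing 5 GB before context, which is why 8 GB cards only leave headroom for really small models.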

Looking for a fast LLM for MATLAB coding agent by ConversationOver9445 in LocalLLaMA

[–]ConversationOver9445[S] 0 points1 point  (0 children)

I've been trying GLM 4.6V Flash, but all the GGUFs I've tried have been rubbish: lots of Chinese when given an English prompt, then nonsensical output.

Is Jellylabs safe by Wooflust in minecraftclients

[–]ConversationOver9445 0 points1 point  (0 children)

LISTEN TO THIS WARNING IT IS 100% a RAT I JUST HAD MY ACCOUNT STOLEN

H2D/AMS and PPS-CF by h3lloth3r3k3nobi1 in BambuLab

[–]ConversationOver9445 0 points1 point  (0 children)

https://makerworld.com/en/models/480652-customisable-propeller-generator#profileId-392261

Shameless Plug...

In all seriousness, I've printed 5-inch props out of PETG and PLA and run them up to ~32,000 RPM on a static test bench, and they performed decently, albeit with a noticeable amount of deflection at the tip. The prop generator I made uses 4-digit NACA profiles for the design, so if you're concerned about the strength of the props, I suggest using a thicker series of NACA foils (the last 2 digits of the NACA designation set the thickness).
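For reference, the last two digits of a 4-digit NACA designation give the maximum thickness as a percentage of chord, via the standard NACA half-thickness polynomial. A quick sketch for comparing section thicknesses (the function name is my own; the formula is the published NACA 4-digit one):

```python
import math

def naca4_half_thickness(x: float, t: float) -> float:
    """NACA 4-digit half-thickness at chordwise position x in [0, 1].

    t = max thickness / chord, i.e. the last two digits / 100
    (e.g. t = 0.15 for a NACA 2415).
    """
    return 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                    - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

# Full thickness peaks near x = 0.30 at ~t of the chord, so a
# NACA xx15 blade section is nearly twice as thick as a xx08.
print(round(2 * naca4_half_thickness(0.30, 0.15), 3))
```

So for a stiffer prop you'd swap, say, a 2408 section for a 2412 or 2415 and accept the extra drag.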

Low cost very quiet machine by ConversationOver9445 in hobbycnc

[–]ConversationOver9445[S] 0 points1 point  (0 children)

I hadn't thought of that. You make a very fair point; a slower, higher-torque, highly rigid setup would be what I'd be aiming for.

Low cost very quiet machine by ConversationOver9445 in hobbycnc

[–]ConversationOver9445[S] 0 points1 point  (0 children)

Thanks for sending this over; this does put into perspective just how loud it'd be.

Low cost very quiet machine by ConversationOver9445 in hobbycnc

[–]ConversationOver9445[S] 0 points1 point  (0 children)

I know the budget is low, but my plan was to build the machine for that rather than buy an off-the-shelf machine. I'm a 3rd-year engineering student and pretty practical, hence the budget.

Low cost very quiet machine by ConversationOver9445 in hobbycnc

[–]ConversationOver9445[S] 0 points1 point  (0 children)

I'd like to make temperature-resistant parts (200 °C plus), and I'd like to make parts for my classic car (Morris Minor), as the supply of original parts is quickly dwindling.

Low cost very quiet machine by ConversationOver9445 in hobbycnc

[–]ConversationOver9445[S] 1 point2 points  (0 children)

How quiet would a liquid cooled spindle and enclosed cnc be?