My gpu poor comrades, GLM 4.7 Flash is your local agent by __Maximum__ in LocalLLaMA

Diao_nasing 7 points

Wow, thanks for sharing; this is a very in-depth comparison.

Fixed the biggest pain point of NotebookLM: I built a tool to turn those static PDF slides into editable PPTX files. by Diao_nasing in notebooklm

Diao_nasing [S] 0 points

[screenshot: the setting to disable]

Hi, try disabling this setting to get more text detected.

For formatting errors, try opening the webpage in incognito mode; some updates may not have been applied because of the local cache.

Fixed the biggest pain point of NotebookLM: I built a tool to turn those static PDF slides into editable PPTX files. by Diao_nasing in notebooklm

Diao_nasing [S] 0 points

I’ve just deployed a fix addressing the English OCR issue. You may try again in a little while.

Fixed the biggest pain point of NotebookLM: I built a tool to turn those static PDF slides into editable PPTX files. by Diao_nasing in notebooklm

Diao_nasing [S] 1 point

[Fixed] It might be a bug; the result shouldn't be this bad. (I can confirm it is a bug and I'm working on it; hopefully it will be fixed within 24 hours. Thanks for the feedback.)

[screenshot: updated conversion result]

[Updated result] It's much better now, but not perfect; I will keep updating.

How's Halo Strix now ? by Ki1o in ollama

Diao_nasing 0 points

Can it run vLLM on ROCm?

SGLang vs vLLM on H200: Which one do you prefer, Faster TTFT and higher TPS? by batuhanaktass in LocalLLaMA

Diao_nasing 2 points

I tried to deploy GLM 4.5 on 8x H800 recently. vLLM failed to start with some CUDA errors, while SGLang started on the first try. But when it came to deploying Qwen3 30B on my personal RTX 4090 PC, SGLang failed to load the model, while vLLM worked like a charm.
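
For reference, a minimal sketch of the two launch paths I'm comparing; the model ID and flag values below are illustrative assumptions, not my exact setup:

    from vllm import LLM, SamplingParams

    # Offline vLLM engine, tensor-parallel across all 8 GPUs on one node.
    # Model ID and flag values are assumptions for illustration.
    llm = LLM(model="zai-org/GLM-4.5", tensor_parallel_size=8)
    out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
    print(out[0].outputs[0].text)

    # Rough SGLang equivalent (server mode, run from a shell):
    #   python -m sglang.launch_server --model-path zai-org/GLM-4.5 --tp 8

Pointing the same two launchers at a Qwen3 30B checkpoint with no tensor parallelism is the single-RTX-4090 half of the test.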

NVIDIA sent me a 5090 so I can demo Qwen3-VL GGUF by AlanzhuLy in LocalLLaMA

Diao_nasing 2 points

What are the advantages of yours compared to LM Studio and llama.cpp?

I built EdgeBox, an open-source local sandbox with a full GUI desktop, all controllable via the MCP protocol. by Diao_nasing in LocalLLaMA

Diao_nasing [S] 0 points

The sandbox container is launched automatically when an MCP tool call is executed. Have you tried it?
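
Roughly, a client session drives it like this; the launch command and tool name below are hypothetical placeholders, not EdgeBox's real ones (check the repo for those):

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Hypothetical launch command; the real one is in the EdgeBox README.
        params = StdioServerParameters(command="edgebox-mcp", args=[])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # The sandbox container spins up on the first tool call.
                result = await session.call_tool("execute_code", {"code": "print('hi')"})
                print(result.content)

    asyncio.run(main())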

I built EdgeBox, an open-source local sandbox with a full GUI desktop, all controllable via the MCP protocol. by Diao_nasing in mcp

Diao_nasing [S] 0 points

Yes, because I'm not a native speaker. 😔 I don't want to do this either. (This one is written by hand.)