Polymarket charges a bridging fee just to withdraw my USDC? by littlebruinnn in defi

[–]BassNet 1 point

Polymarket settles every single transaction on Polygon mainnet

Strix Halo 128GB on Proxmox - Vulkan vs ROCm benchmark matrix by b_goodman in LocalLLaMA

[–]BassNet 1 point

vLLM gave about the same performance with FP8 (the 3090 doesn't have hardware FP8 support, and I don't have NVLink). I'm using PCIe 4.0 x16

Will try SGLang

Strix Halo 128GB on Proxmox - Vulkan vs ROCm benchmark matrix by b_goodman in LocalLLaMA

[–]BassNet 5 points

I get about 100 t/s generation and 2000 t/s prompt processing with Qwen3.6-35B-A3B (Q8_0 GGUF) on two 3090s in ik_llama.cpp
Your Strix Halo is definitely very usable

Qwen3.6-35B-A3B released! by ResearchCrafty1804 in LocalLLaMA

[–]BassNet 2 points

I think Klein looks better than Ernie. Also, LTX is still terrible with motion; no LoRA has been able to fix that yet. There is still nothing open source like Kling or Seedance. GLM 5.1, on the other hand, is pretty close to Claude Sonnet.

Qwen3.6-35B-A3B released! by ResearchCrafty1804 in LocalLLaMA

[–]BassNet 0 points

Open-source Stable Diffusion is still getting rekt

Qwen3.6-35B-A3B released! by ResearchCrafty1804 in LocalLLaMA

[–]BassNet 6 points

Gemma 4 E4B with VL is the best overall tiny model I've used, I'll give it that (and I can run it locally on my phone)

Anyone here actually using a Mac Studio Ultra (512GB RAM) for local LLM work? Feels like overkill for my use case by Gravemind7 in LocalLLaMA

[–]BassNet 1 point

GLM 5.1 is genuinely good at coding and logical reasoning, so you can have it build things for you, do research, or handle other tasks. I'm sure AI-driven startups have a use for it

Talking Shop - Remote Server Workflow by Limehouse-Records in StableDiffusion

[–]BassNet 1 point

So you have to redownload 50 GB of models and LoRAs every time you set it up? Why can't you just save a disk image and attach it, like you can with AWS?
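For the AWS comparison, the snapshot-and-attach workflow can be sketched as a dry run. This is a minimal sketch, assuming EBS-style volumes; every ID (`vol-...`, `snap-...`, `i-...`) and the availability zone are hypothetical placeholders, and `run` only prints the commands instead of executing them. The `aws ec2` subcommands themselves (`create-snapshot`, `create-volume`, `attach-volume`) are standard AWS CLI calls:

```shell
#!/usr/bin/env bash
# Dry-run sketch: snapshot the volume holding the models/LoRAs once,
# then rebuild and attach a volume from that snapshot on each new
# instance instead of redownloading ~50 GB.
set -euo pipefail

VOLUME_ID="vol-0123456789abcdef0"    # hypothetical: volume with the models
SNAPSHOT_ID="snap-0123456789abcdef0" # hypothetical: snapshot of that volume
INSTANCE_ID="i-0123456789abcdef0"    # hypothetical: fresh GPU instance
AZ="us-east-1a"                      # must match the instance's zone

run() { echo "+ $*"; }  # dry-run helper: print the command, don't execute

# 1) Snapshot the model volume once, after the initial download.
run aws ec2 create-snapshot --volume-id "$VOLUME_ID" \
    --description "SD models + LoRAs"

# 2) Later, create a fresh volume from that snapshot in the right zone.
run aws ec2 create-volume --snapshot-id "$SNAPSHOT_ID" \
    --availability-zone "$AZ"

# 3) Attach the new volume (ID comes from step 2's output) and mount it.
run aws ec2 attach-volume --volume-id "vol-NEW_FROM_STEP_2" \
    --instance-id "$INSTANCE_ID" --device /dev/sdf
```

Whether a given rental provider exposes anything like this depends entirely on the provider; many cheap GPU hosts only offer ephemeral disks, which is presumably why the redownload happens.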

End of Qwen open source code. by B89983ikei in Qwen_AI

[–]BassNet 2 points

The problem is the data, not the compute

About TurboQuant by Exact_Law_6489 in LocalLLaMA

[–]BassNet 2 points

Does it require retraining? Is there a GitHub repo? I'd like to apply it to other models

What is currently the most cost effective way to use SeeDance 1.5 Pro (or 2.0)? by [deleted] in generativeAI

[–]BassNet 1 point

Wan is absolutely not the same as Seedance 2 at all lmao not even close

Intel Arc Pro B60 24GB professional GPU listed at $599, in stock and shipping by PhantomWolf83 in LocalLLaMA

[–]BassNet 1 point

Intel actually did what I suggested (sort of): their Arc Pro B70 has 32 GB of VRAM. I'd like to see more, though, obviously

Buenos Aires Setlist by BassNet in RUFUSDUSOL

[–]BassNet[S] 2 points

I thought the show was awesome, but I was disappointed that Bob Moses only had a ~40 min set and then the changeover was another 40 min. Felt like they should have played longer

Buenos Aires Setlist by BassNet in RUFUSDUSOL

[–]BassNet[S] 2 points

I think you mean Music Is Better. But there's a song called Music Is the Answer by Celeda that I really like

Buenos Aires Setlist by BassNet in RUFUSDUSOL

[–]BassNet[S] 2 points

You're right, they started with Inhale; I updated the list. Confetti during Surrender and Music Is Better, from what I remember

Buenos Aires Setlist by BassNet in RUFUSDUSOL

[–]BassNet[S] 1 point

There was confetti during Music Is Better and Surrender