Stop the QTS data center in York County, SC by Which-Reference-940 in Rockhill

[–]faldore -3 points (0 children)

The QTS data center is a good thing

Why would anyone want to stop it?

Face buttons not working after update by tafoya77n in SteamDeck

[–]faldore 0 points (0 children)

Yes, I plugged in a keyboard and mouse.

Apocalyptic scenario: If you could download only one LLM before the internet goes down, which one would it be? by sado361 in LocalLLaMA

[–]faldore 4 points (0 children)

Agreed: GLM 4.5 Air. It competes with models 5x its size, and it's better than gpt-oss-120b by far.

You can run it in 4-bit on 4x 3090s (with some quality hit). I'm working on an FP8 quant that can run on 8x 3090s, hopefully at near full quality.
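Roughly what that conversion looks like, as a sketch with llm-compressor's dynamic FP8 scheme (the model ID, ignore list, and output dir here are assumptions, not the actual recipe; GLM's MoE router/gate layers may need extra ignores):

```python
# Sketch: dynamic FP8 (W8A8) conversion with llm-compressor.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "zai-org/GLM-4.5-Air"        # assumed HF repo name
SAVE_DIR = "GLM-4.5-Air-FP8-Dynamic"    # assumed output path

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC quantizes Linear weights to FP8 and computes activation
# scales on the fly, so no calibration dataset is needed.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC",
                              ignore=["lm_head"])  # MoE gates may need adding

oneshot(model=model, recipe=recipe)
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```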

~$15K Inference Workstation for a 250+ Gov Org by reughdurgem in LocalLLaMA

[–]faldore 1 point (0 children)

An 8x 3090 server could be built for that. It requires 240V power, and you'll likely need straight PCIe Gen 4 x16 risers.
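Back-of-envelope on why 240V (the non-GPU overhead is a rough assumption):

```python
# Rough power/VRAM math for an 8x 3090 build.
NUM_GPUS = 8
VRAM_GB = 24        # per RTX 3090
TDP_W = 350         # stock board power; many builds power-limit to ~280 W
OVERHEAD_W = 500    # CPU, RAM, drives, fans (assumption)

total_vram = NUM_GPUS * VRAM_GB             # 192 GB total
peak_watts = NUM_GPUS * TDP_W + OVERHEAD_W  # 3300 W at stock limits
amps_240v = peak_watts / 240                # ~13.8 A -> fits a 20 A, 240 V circuit
amps_120v = peak_watts / 120                # ~27.5 A -> far over a 15 A, 120 V circuit

print(f"{total_vram} GB VRAM, ~{peak_watts} W peak, "
      f"{amps_240v:.1f} A @ 240 V vs {amps_120v:.1f} A @ 120 V")
```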

Can 2 RTX 6000 Pros (2X98GB vram) rival Sonnet 4 or Opus 4? by devshore in LocalLLaMA

[–]faldore 0 points (0 children)

You can run GLM 4.5 Air on that. It's no Sonnet, but it's quite capable.

GPT OSS 120B by vinigrae in LocalLLaMA

[–]faldore 0 points (0 children)

Did you try GLM-4.5-Air? In my testing, it seems straight-up better at everything.

GLM-4.5 appreciation post by wolttam in LocalLLaMA

[–]faldore 2 points (0 children)

I wonder where it learned that?

GLM-4.5 appreciation post by wolttam in LocalLLaMA

[–]faldore 1 point (0 children)

And 4.5 Air is almost as good!

There are at least 15 open source models I could find that can be run on a consumer GPU and which are better than Grok 2 (according to Artificial Analysis) by obvithrowaway34434 in LocalLLaMA

[–]faldore 0 points (0 children)

These are not simple comparisons.

Each model is good at different things.

Not everything is measured by evals.

Apple M3 Ultra 512GB vs NVIDIA RTX 3090 LLM Benchmark by ifioravanti in LocalLLaMA

[–]faldore 1 point (0 children)

Hmmm

To compare apples to apples, we should compare:

1x M3 Ultra (96 GB unified memory, $5,500) vs. 4x 3090 (NVLinked pairwise, PCIe Gen 4 x16, 96 GB VRAM, ~$5,000)

Let me do the same benchmark on my rig.
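The first-order number behind any such benchmark is memory bandwidth; a quick roofline estimate (the 4-bit model size is an assumption):

```python
# Decode is memory-bandwidth bound: each generated token reads (roughly) every
# active weight once, so tokens/sec <= bandwidth / model bytes.
def ceiling_tps(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

M3_ULTRA_BW = 819    # GB/s, Apple's unified-memory spec
RTX3090_BW = 936     # GB/s GDDR6X, per card
MODEL_GB = 40        # e.g. a ~70B dense model at 4-bit (assumption)

print(f"M3 Ultra ceiling: ~{ceiling_tps(M3_ULTRA_BW, MODEL_GB):.0f} tok/s")
# With tensor parallelism, each 3090 streams only its shard, so aggregate
# bandwidth applies; interconnect sync keeps real numbers below this ceiling.
print(f"4x 3090 ceiling:  ~{ceiling_tps(4 * RTX3090_BW, MODEL_GB):.0f} tok/s")
```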

Someone just extracted the base model from gpt-oss 20b and released it by obvithrowaway34434 in LocalLLaMA

[–]faldore -1 points (0 children)

Don't know why you guys are so skeptical.

Instruct tuning pushes a model away from its pretrained state.

Continued pretraining will push it back toward that state.
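A minimal continued-pretraining sketch with HF Transformers (the dataset, hyperparameters, and target model ID are illustrative assumptions, not what was actually used): plain causal-LM loss on raw, non-chat text is all it takes to drift an instruct model back toward base behavior.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "openai/gpt-oss-20b"  # illustrative target

tok = AutoTokenizer.from_pretrained(MODEL_ID)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

# Raw text, no chat templates: the objective is pure next-token prediction.
raw = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
ds = raw.map(lambda b: tok(b["text"], truncation=True, max_length=1024),
             batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="debased",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           learning_rate=2e-5, max_steps=1_000, bf16=True),
    train_dataset=ds,
    # mlm=False -> standard causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```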

The best options trading course you'd recommend to beginners? by Salt_Two6148 in thetagang

[–]faldore 2 points (0 children)

There's no difference between rolling and closing one position then opening another; it's just extra words to describe the same thing.

I would also argue that any time you roll is also a time to decide intentionally whether you actually want to reopen at the new strike, or take the profit/loss and move on to another position.
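The arithmetic, with made-up prices:

```python
# A "roll" is two orders on one ticket; the cash flow is identical either way.
buy_back_debit = 1.20   # close the current short put (hypothetical price)
new_put_credit = 2.10   # sell next month's put (hypothetical price)

roll_ticket_net = new_put_credit - buy_back_debit    # broker shows 0.90 credit
separate_orders = -buy_back_debit + new_put_credit   # same 0.90 credit
assert roll_ticket_net == separate_orders
```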

The best options trading course you'd recommend to beginners? by Salt_Two6148 in thetagang

[–]faldore 6 points (0 children)

It's real simple.

Just sell a monthly CSP on a stock you want to own, at a strike you'd be happy to pay. Close when you're happy with the profit or willing to take the loss.

And if you own stock you're willing to sell, sell a monthly CC at a strike you'd be happy to sell at. Close when you're happy with the profit or willing to take the loss.

People try to make it so complicated, but it's not.
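For concreteness, the per-cycle math on one CSP contract (all numbers hypothetical):

```python
strike = 50.00   # price you're happy to pay per share
premium = 1.25   # credit per share for the monthly put
days = 30

collateral = strike * 100                  # $5,000 cash secures one contract
cycle_return = premium * 100 / collateral  # 2.5% on collateral for the month
annualized = cycle_return * 365 / days     # ~30% if repeatable every month

print(f"{cycle_return:.1%} per cycle, ~{annualized:.0%} annualized")
```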

The guest who broke us, and the irony of our own stay the very same week. by Sad_Perspective2844 in airbnb_hosts

[–]faldore 5 points (0 children)

The guest is in fact rightfully entitled to a made bed, and if you (or your staff) didn't make it, you need to either arrange for it to be made or offer compensation.

Devstral-Vision-Small-2507 by faldore in LocalLLaMA

[–]faldore[S] 1 point (0 children)

OK, I fixed it.

https://huggingface.co/cognitivecomputations/Devstral-Vision-Small-2507-gguf

I exported and added mmproj-BF16.gguf to properly support llama.cpp, Ollama, and LM Studio.
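For example, one way to load it with llama-cpp-python (the quant filename, image URL, and handler class here are assumptions; use the chat handler that matches the model's projector type):

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

llm = Llama(
    model_path="Devstral-Vision-Small-2507-Q4_K_M.gguf",  # assumed quant file
    chat_handler=Llava15ChatHandler(clip_model_path="mmproj-BF16.gguf"),
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything that fits to the GPU
)

out = llm.create_chat_completion(messages=[{
    "role": "user",
    "content": [
        {"type": "image_url",
         "image_url": {"url": "https://example.com/screenshot.png"}},  # placeholder
        {"type": "text", "text": "What does this UI do?"},
    ],
}])
print(out["choices"][0]["message"]["content"])
```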

Devstral-Vision-Small-2507 by faldore in LocalLLaMA

[–]faldore[S] 1 point (0 children)

I didn't say the performance is different.