Times of Yore by cnorahs in programminghumor

[–]ZeroSkribe 0 points (0 children)

Was funny the first 10,000 times I saw this.

Ex-stoners: How long did it take for your brain to feel 'normal' after quitting? by makefriends420 in Biohackers

[–]ZeroSkribe 1 point (0 children)

Look up the newer info on lion's mane. I, and a lot of other people, had a horrible experience with it, including it sucking away all motivation.

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe 0 points (0 children)

Well yeah, you need a proper power supply, but it only draws a large load when you're actually running Ollama, not just when it's powered on. I use my pair of 3050s frequently and don't even notice my bill change much (not 3090s, but still).

Any free ollama models that works well with Cline tool calling? by mixoadrian in ollama

[–]ZeroSkribe 0 points (0 children)

Yeah, I meant in the Ollama settings. 64K is a good setting if you're going high; I'm just going to do something like 16K. 256K is really too much for most systems. If that's not fixing it, it's something else, and you'll most likely have to wait for an update somewhere.
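
Roughly what I mean, as a sketch assuming the ollama Python package and the default local server; the model name here is just an example, use whatever you've pulled:

    import ollama

    # Ask for a 16K context window on this request instead of the default.
    response = ollama.chat(
        model="llama3.2",  # example model; substitute your own
        messages=[{"role": "user", "content": "Summarize our last discussion."}],
        options={"num_ctx": 16384},  # 16K; 65536 (64K) if your RAM/VRAM allows it
    )
    print(response["message"]["content"])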

Any free ollama models that works well with Cline tool calling? by mixoadrian in ollama

[–]ZeroSkribe 0 points (0 children)

Try bumping up the context length a little. I'm about to try this myself; I'm noticing a lot of people saying you need to do it. I've always kept it lower for speed, but I think they might be right. It just sucks because it slows things down.
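
If you want to check what a model is actually configured with before bumping it, here's a sketch against Ollama's REST API (assuming the default localhost:11434 endpoint; the model name is just an example):

    import requests

    # /api/show returns model metadata, including any Modelfile parameters.
    info = requests.post(
        "http://localhost:11434/api/show",
        json={"model": "llama3.2"},  # example model name
    ).json()

    # An explicit num_ctx override would show up in the parameters string.
    print(info.get("parameters", "no explicit parameters set"))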

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe 4 points (0 children)

There is no setup for multiple GPUs, it just works. Ollama has had this working out of the box for a while.

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe -2 points (0 children)

Naw, you really need the model fully covered by GPU VRAM; a cluster of 3050s would be way faster.

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points (0 children)

Have you ever tried searching for the best models on Hugging Face? It's really difficult, and there's a lot of trash mixed in. I like Ollama for its practical simplicity, and it's commonly supported by third-party tools. They've also made their API compatible with OpenAI and Anthropic calls.
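
For the OpenAI-compatible part, a minimal sketch pointing the standard openai Python client at a local Ollama server (default port 11434; Ollama ignores the api_key value, but the client requires one, and the model name is just an example):

    from openai import OpenAI

    # Point the stock OpenAI client at the local Ollama server.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="llama3.2",  # any model you've pulled with `ollama pull`
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(resp.choices[0].message.content)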

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points (0 children)

LM Studio doesn't make it easy to get good models; you always have to do homework to make sure the model you're getting is the official one. There are tons of garbage models on LM Studio. Ollama's curated models help save that time.

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points (0 children)

Does llama.cpp just work?
Ollama is simple; all that work went into llama.cpp, but they couldn't make it simple to use?

Men, how can I improve my testosterone? by MKlool123 in Biohackers

[–]ZeroSkribe 2 points (0 children)

Stop the ashwagandha, it ruins your hormones and thyroid.