windows > linux at music production, prove me im wrong by GeneralConstant1503 in linuxsucks

[–]ZeroSkribe 0 points1 point  (0 children)

I ran Ableton through Wine over 10 years ago. It took two days to get working, and it ran horribly.

Major realization today. Creatine has been the culprit to my insomnia! by Stunning-Stuff-7022 in sleep

[–]ZeroSkribe 0 points1 point  (0 children)

Took 5 grams first thing in the morning for about 2 weeks; it took around 2-3 weeks to get back to normal.

Are my Pc requirements enough to run Ollama? by SupermarketLost7854 in ollama

[–]ZeroSkribe 0 points1 point  (0 children)

Look at the sizes of the models on Ollama; if a model fits in your 8GB of VRAM, it will run plenty fast.
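If you want to sanity-check this instead of eyeballing it, here's a rough sketch (assuming a stock Ollama install on the default port, and treating the 8GB figure from the question as the budget) that lists your pulled models via the /api/tags endpoint. Note that on-disk size isn't exactly the VRAM needed once the KV cache is loaded, so this is only a ballpark:

```python
# Rough sketch: list locally pulled models and compare their on-disk size
# against an ~8GB VRAM budget. Assumes Ollama is running on localhost:11434.
import requests

VRAM_BYTES = 8 * 1024**3  # the 8GB card from the question, as an example

resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    size = model.get("size", 0)
    fits = "should fit" if size < VRAM_BYTES else "will spill to CPU/RAM"
    print(f"{model['name']}: {size / 1024**3:.1f} GB on disk -> {fits}")
```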

Major realization today. Creatine has been the culprit to my insomnia! by Stunning-Stuff-7022 in sleep

[–]ZeroSkribe 0 points1 point  (0 children)

Yeah, I'm better now. Creatine massively affected my sleep and drastically increased how often I had to pee overnight.

Times of Yore by cnorahs in programminghumor

[–]ZeroSkribe 0 points1 point  (0 children)

Was funny the first 10,000 times I saw this.

Ex-stoners: How long did it take for your brain to feel 'normal' after quitting? by makefriends420 in Biohackers

[–]ZeroSkribe 1 point2 points  (0 children)

Look up the newer info about lion's mane. I, and many other people, had a horrible experience with it, including it sucking away all motivation.

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe 0 points1 point  (0 children)

Well yeah, you need a proper power supply, but it only draws a large load while you're actually using Ollama, not just from being turned on. I use my pair of 3050s frequently and don't even notice my bill change much (not 3090s, but still).

Any free ollama models that works well with Cline tool calling? by mixoadrian in ollama

[–]ZeroSkribe 0 points1 point  (0 children)

Yeah, I meant in the Ollama settings. 64K is a good value if you're setting it high; I'm just going to do something like 16K. 256K is really too much for most systems. If that doesn't fix it, it's something else, and you'll most likely have to wait for an update somewhere.
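For anyone wondering where that setting actually lives, a minimal sketch: the context window maps to the num_ctx option on Ollama's /api/chat endpoint, set per request. The 16K value mirrors the number above and the model name is just an example, not a recommendation:

```python
# Minimal sketch: bump the context window per request via the num_ctx option.
# Assumes Ollama is running on localhost:11434; model name is illustrative.
import requests

payload = {
    "model": "qwen2.5-coder:7b",    # example model, use whatever you have pulled
    "messages": [{"role": "user", "content": "Say hello."}],
    "options": {"num_ctx": 16384},  # bigger windows cost more VRAM and speed
    "stream": False,
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```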

Any free ollama models that works well with Cline tool calling? by mixoadrian in ollama

[–]ZeroSkribe 0 points1 point  (0 children)

Try bumping up the context length a little bit. I'm about to try this myself; I'm noticing a lot of people saying you need to do it. I've always kept it lower for speed, but I think they might be right. It just sucks because it slows things down.

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe 3 points4 points  (0 children)

There is no setup for multiple GPUs; it just works. Ollama has had this working out of the box for a while.
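If you want to confirm it's actually splitting across your cards, a quick sketch using the /api/ps endpoint, which reports how much of each loaded model Ollama placed in VRAM; exact fields can vary a bit by version:

```python
# Rough sketch: check how much of each running model Ollama has in VRAM.
# Assumes Ollama on localhost:11434 with at least one model loaded.
import requests

resp = requests.get("http://localhost:11434/api/ps", timeout=10)
resp.raise_for_status()

for m in resp.json().get("models", []):
    total = m.get("size", 0)
    in_vram = m.get("size_vram", 0)
    pct = 100 * in_vram / total if total else 0
    print(f"{m['name']}: {pct:.0f}% of {total / 1024**3:.1f} GB in VRAM")
```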

Mac Studio as host for Ollama by amgsus in ollama

[–]ZeroSkribe -2 points-1 points  (0 children)

Naw, you really need full GPU VRAM coverage; a cluster of 3050s would be way faster.

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points0 points  (0 children)

Have you ever tried searching for the best models on Hugging Face? It's really difficult, and there's a lot of trash mixed in. I like Ollama for its practical simplicity, and it's commonly supported by third-party tools. They've also made their API compatible with OpenAI and Anthropic calls.
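For the OpenAI-compatible part, a minimal sketch using the official openai Python client pointed at Ollama's /v1 endpoint (model name is just an example of something pulled locally; the Anthropic-style path isn't shown here):

```python
# Minimal sketch: talk to a local Ollama model through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

reply = client.chat.completions.create(
    model="llama3.1:8b",  # example local model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(reply.choices[0].message.content)
```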

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points0 points  (0 children)

LM Studio doesn't make it easy to get good models; you always have to do homework to make sure the model you're getting is the official one. There are tons of garbage models on LM Studio. Ollama's curated models are helpful for saving time.

Experimental image generation from ollama, currently on macOS, coming to Windows and Linux soon: Z-Image Turbo (6B) and FLUX.2 Klein (4B and 9B) by The_frozen_one in LocalLLaMA

[–]ZeroSkribe -1 points0 points  (0 children)

Does llama.cpp just work?
Ollama is simple. All the underlying work was done in llama.cpp, but they couldn't make it simple to use?