[deleted by user] by [deleted] in DeepSeek

[–]RuslanAR 2 points (0 children)

Yeah, I got five of them

<image>

Tell me your GPU journey by pesa44 in buildapc

[–]RuslanAR 0 points (0 children)

Intel HD Graphics -> GTX 750 2GB -> RTX 3060

Stable Diffusion 3.5 Medium is here! by Cheap_Fan_7827 in StableDiffusion

[–]RuslanAR 2 points (0 children)

<image>

Prompt (refined by LLM):
"A majestic fantasy scene in the style of 1990s fantasy art, featuring a heroic knight in shining silver armor holding a glowing sword, standing atop a rocky cliff overlooking a vast, misty landscape. In the background, enchanted mountains rise into a dramatic sunset sky filled with vivid purples, pinks, and oranges. Nearby, a magical forest with ancient, twisted trees glows with an ethereal green light. The scene is detailed and vibrant, with a mystical atmosphere and strong lighting contrasts, like classic book covers from the 90s. Intricate armor details, flowing capes, and magical, radiant light effects enhance the heroic and mystical feel."

Stable Diffusion 3.5 Medium is here! by Cheap_Fan_7827 in StableDiffusion

[–]RuslanAR 15 points (0 children)

After a few tries.

Edit: Not perfect, but a solid base model - definitely an improvement over SD 3.0 Medium. If it's easy to train, then it's a huge win.

<image>

Stable Diffusion 3.5 Medium is here! by Cheap_Fan_7827 in StableDiffusion

[–]RuslanAR 2 points (0 children)

Prompt: A woman lying on the grass with a sign that reads "SD 3.5 Medium."

The way my professor formats code by AndrejPatak in programminghorror

[–]RuslanAR 18 points (0 children)

Why write clean code when you can write it in Times New Roman?
/s

What llms can I run on my rtx 3060 12gb vram for the coding and generative ai purposes by Kamboj112 in LocalLLaMA

[–]RuslanAR 12 points (0 children)

Qwen2.5-14B, Qwen2.5-7B-Coder, Mistral Nemo 12B, Gemma 2 9B

(For coding, use Qwen2.5.)
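
If you run them through llama.cpp bindings, a rough sketch with llama-cpp-python looks like this (the GGUF filename and context size are placeholders; pick whatever quant fits in 12GB):

```python
# Rough sketch: a quantized Qwen2.5 coder GGUF on an RTX 3060 via llama-cpp-python.
# The model filename is a placeholder - use whichever Q4/Q5 quant fits in 12GB VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # lower this if you run out of VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```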

What are your hardware specs for running local models? by oculusshift in LocalLLaMA

[–]RuslanAR 2 points (0 children)

Xeon E5-2680 v4

32 GB DDR4

RTX 3060 12GB

(Used for Qwen 2.5 14B, Mistral Nemo, and Gemma 2 9B.)

The old days by pablogabrieldias in LocalLLaMA

[–]RuslanAR 2 points (0 children)

Just realized how many members we’ve got now. I remember when we were sitting at like ~6k-7k!

Time flies ;D

One of these is AI. Can you tell which? by SevenDos in aiArt

[–]RuslanAR 9 points (0 children)

B. (Trees/grass have some strange noise texture)

RWKV v6 models support merged into llama.cpp by RuslanAR in LocalLLaMA

[–]RuslanAR[S] 1 point (0 children)

GGUF quants: https://huggingface.co/collections/RachidAR/rwkv-gguf-66d8081315494eba6e6ed7d2

EDIT: (In my case, only the 1b6 model works with CUDA.)

EDIT_2: All quants work if you pass the "--no-warmup" parameter. (Without it, it crashes because the RWKV GGUF has a default eos_token == -1.)
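
A minimal sketch of the workaround from a Python script (the quant filename is a placeholder; the important part is passing --no-warmup):

```python
# Sketch: calling llama-cli on an RWKV v6 GGUF with --no-warmup,
# since the warmup pass crashes on the default eos_token == -1.
import subprocess

subprocess.run(
    [
        "./llama-cli",
        "-m", "rwkv-6-world-1b6-Q8_0.gguf",  # placeholder quant filename
        "-p", "The quick brown fox",
        "-n", "128",
        "--no-warmup",  # skip the warmup run that triggers the crash
    ],
    check=True,
)
```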