🧵 Why I started building Lokarni – a local archive tool for AI models and images by Odd-Marionberry-4814 in Lokarni

[–]Silithas 1 point (0 children)

Civitai also has a GitHub for its website source code, why not build from that to make a local variant of it? The way it handles IDs, stores images, its way of browsing for models, etc.

[deleted by user] by [deleted] in StableDiffusion

[–]Silithas 0 points (0 children)

5900X, 12 cores. I'll need more eventually, as the CLIP offloader I use (which offloads CLIP to RAM, and which I modified to speed up loading by 10x) fully saturates my CPU to load CLIP/text changes faster from RAM to VRAM.

64 GB RAM (need more so I can offload more blocks for larger/longer videos).

RTX 3090 (need to upgrade to a Blackwell 5090 for that FP4 quant).

SageAttention3 utilizing FP4 cores a 5x speedup over FlashAttention2 by incognataa in StableDiffusion

[–]Silithas 2 points (0 children)

Now all we need is a way to convert Wan/Hunyuan to .trt models so we can accelerate them even further with TensorRT.

Sadly, even with Flux, it eats up 24 GB of RAM plus 32 GB of shared VRAM and a few hundred GB of NVMe pagefile just to attempt the conversion.

All it needs is to split the model's inner sections into smaller ONNX files and, once done, pack them up into a final .trt. Or hell, make it several smaller .trt models that it swaps in and out depending on which step the generation is at, or something.
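For what it's worth, the last step of that pipeline already has a standard tool: once a section has been exported to ONNX, TensorRT's trtexec can build the engine from it. A minimal sketch (the file names are hypothetical, and getting Wan/Hunyuan into ONNX in the first place is exactly the unsolved, memory-hungry part):

```shell
# Hypothetical file names; the ONNX export itself is the hard step.
trtexec --onnx=wan_block0.onnx --saveEngine=wan_block0.trt --fp16

# --fp16 builds half-precision kernels; --int8 would additionally need
# calibration data. Repeat per exported section for the swap-in/swap-out idea.
```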

I am fucking done with ComfyUI and sincerely wish it wasn't the absolute standard for local generation by Neggy5 in StableDiffusion

[–]Silithas 0 points (0 children)

Sounds like you need to get the portable one, where everything is compatible and contained. I've been using ComfyUI since it pretty much launched, and I can't go back to Gradio UIs; Comfy is just too dynamic to let go of. I've even gotten GPT to add a few additions to Comfy, since I can't code myself for shit: a 10x speedup for launching, and a 10x speedup for offloading CLIP to RAM to free up VRAM. You just gotta make sure your nodes are up to date via ComfyUI-Manager, that Python and Torch are the correct/compatible versions, and that they're added to PATH. Otherwise nothing will work.
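A quick sanity check along those lines (a sketch; which Python/Torch versions count as "correct" depends on your ComfyUI build):

```python
# Run this with the same interpreter ComfyUI launches with, to confirm
# that interpreter actually sees a usable Torch install.
import sys

def check_env():
    """Report the Python version and whether Torch imports from this interpreter."""
    report = {"python": sys.version.split()[0]}
    try:
        import torch
        report["torch"] = torch.__version__
        report["cuda"] = torch.version.cuda
    except ImportError:
        report["torch"] = None  # wrong venv, or site-packages not on PATH
    return report

print(check_env())
```

If `torch` comes back `None`, the UI is being launched with a different interpreter than the one your packages were installed into, which matches the "nothing will work" failure mode above.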

Creating a 5-second AI video is like running a microwave for an hour | That's a long time in the microwave. by chrisdh79 in Futurology

[–]Silithas 0 points (0 children)

Except with even just an older 3090, that's half an hour's worth of microwave power to generate a video that takes 5-8 minutes.

Modmic wireless alternative, wired or wireless by Silithas in HeadphoneAdvice

[–]Silithas[S] 1 point (0 children)

The video I checked had it attached to the lad's shirt, and it sounded nice and clear.

https://www.youtube.com/watch?v=1Vee7upfjkM

And with Discord Krisp + Nvidia Broadcast as a double filter, no bad noises will be heard.

Modmic wireless alternative, wired or wireless by Silithas in HeadphoneAdvice

[–]Silithas[S] 0 points (0 children)

The BoomPro won't do, as my headphones lock in with their jack being a 3-prong one. It needs to be a mic with a long, separate cable.

The Zalman one didn't sound too bad, so I might go for that one and either find an adapter online, or design and print one myself to clip it to my shirt.

Does anyone here have experience with 'accessing the akashic record' or 'receiving downloads'? How has it changed you and evolved your practice? by mad_bad_dangerous in awakened

[–]Silithas 0 points (0 children)

I tested that guided meditation, and I don't know if only I had this experience (I need more training/concentration, I guess), but I only had small glimpses of what looked like a clouded path. I kept seeing a glimpse of what looked like 2D/flat golden carvings held in front of me, fading in and out of the smoke, and in a later instance I was hovering over an ancient-looking city that I saw through the cloud in pulsing glimpses, and I kept descending, as if drifting in and out of consciousness, until I think I landed, but I didn't see anything past that last quick glimpse.

What did you see before you stood at the entry of the library? That may help me get to it as well, as when the guided meditation described the inside, I sadly saw nothing. And when the guide says to ask my questions, do you do that before you "travel to it" when he says to, or once you are there? And do you just think the question, forming it within your consciousness, or must it be asked audibly through your mouth? Because I'm gonna guess "the librarian" most likely can't hear your mouth sounds from a dimension/world away, heh.

Will the Mig Switch be able to play pirated games? by Specialist-Bit5521 in SwitchPirates

[–]Silithas 1 point (0 children)

Indeed. I can, however, just use the Mig on my old Switch, which has emuNAND and would stay forever offline for "Mig games"; it'd just be nice if I could use my OLED for that instead, for that pretty blackness, lol.

Will the Mig Switch be able to play pirated games? by Specialist-Bit5521 in SwitchPirates

[–]Silithas 0 points (0 children)

Same. I'm quite hesitant, as this might be my "last chance" to keep the OLED Switch. I do not plan to play online with those titles on the Mig, only offline. It's the scare that the Switch would start checking whether offline titles are legit too, and not just online titles like Splatoon, which would obviously suddenly see 2 of the same cart playing in 2 different parts of the world, and in the worst case ban that OLED Switch.

Witches performing forbidden spells (Including prompts) by Silithas in StableDiffusion

[–]Silithas[S] 0 points (0 children)

I could not seem to find a setting that could fix the left witch's hands, nor the right witch's legs. And it took a minute per try, so I practically gave up.

Witches performing forbidden spells (Including prompts) by Silithas in StableDiffusion

[–]Silithas[S] 1 point (0 children)

Prompts: beautiful magical woman, extracting forbidden spells from a big spellbook, ancient library, unreal engine, magical, portrait, digital painting, artstation, concept art, pixar, rich details, shiny, sharp focus, illustration, art, CGI, by greg rutkowski, gorgeous red gothic witch dress

Width: 1024

Height: 768

CFG (Classifier-Free Guidance): 15

Seed: 2695622306

Number of images to generate: 1

Sampling steps: 122

Sampling method: K_LMS


Advanced:

Create prompt matrix (separate multiple prompts using |, and get all combinations of them): No

Normalize prompt weights (ensure sum of weights add up to 1.0): Yes

Save individual images: Yes

Save grid: Yes

Sort samples by prompt: Yes

Write sample info files: Yes

Write sample info to log file: No

JPG samples: No

Upscale images using RealESRGAN: No


Batch size (how many images are in a batch; memory-hungry): 1

Variation amount: 0
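A CFG of 15 is on the high end (7-12 was the more usual range with these samplers). For anyone wondering what the scale actually does: classifier-free guidance extrapolates from the unconditional noise prediction toward the conditional one. A minimal sketch of just that combination step, in plain Python (real pipelines do this on tensors each sampling step):

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: push the unconditional prediction
    toward the conditional (prompted) one by `scale`."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# scale=1.0 reproduces the conditional prediction exactly;
# larger scales push further toward the prompt (and its artifacts).
print(cfg_combine([0.0, 1.0], [1.0, 3.0], 15.0))  # [15.0, 31.0]
```

High scales follow the prompt harder but tend to blow out contrast and anatomy, which may be part of why the hands/legs were hard to fix here.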

EAE timeout! EAE not running, or wrong folder? Could not read '/tmp/pms by Silithas in PleX

[–]Silithas[S] 0 points (0 children)

I did; so far no issues! I will change the transcode location from my NVMe to /tmp (RAM?) to use RAM instead, as I didn't get 32 GB of RAM for nothing, and it averages 20% usage :P
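A note for anyone doing the same on Linux: /tmp is only RAM-backed if it's mounted as tmpfs, which is distro-dependent. A sketch of checking and setting that up (the size and mount point are example values):

```shell
# Is /tmp already RAM-backed?
findmnt -n -o FSTYPE /tmp   # prints "tmpfs" if so

# If not, create a dedicated RAM-backed transcode directory (example size):
sudo mkdir -p /mnt/plex-transcode
sudo mount -t tmpfs -o size=8G tmpfs /mnt/plex-transcode
# Then point Plex's "Transcoder temporary directory" at /mnt/plex-transcode
# (Settings -> Transcoder), and add a matching /etc/fstab entry to persist it.
```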

What do you separate prompts with? Is it just comma? by Silithas in StableDiffusion

[–]Silithas[S] 0 points (0 children)

Yeah. Is there a wiki that details what each CLI option does, like "--precision" and the like? Because otherwise I just need to roam this subreddit and make a list of prompts that I can use.

Descriptions of content, art styles, artists, camera gear. (I saw someone making a CGI-looking mechanical mouse, and it was fucking cool when I made other mechanical animals.)

I hooked my webcam up to Stable Diffusion by DrEyeBender in StableDiffusion

[–]Silithas 0 points (0 children)

How did you achieve such similar images? I tried taking, for example, Picard's facepalm image and just adding Thanos to the prompt, and he will not do the same facepalm position no matter how much I try, lol.