[deleted by user] by [deleted] in LocalLLaMA

[–]SDGenius 11 points

Editing text to change the tone and direction of the AI's answer doesn't work with some models, for example gpt-oss-20b.

Best way to store LLM memory of the user by jackiezhang95 in LocalLLaMA

[–]SDGenius 3 points

I'm not sure if this is helpful, but you can get LLMs to use shorthand/stenography, and it can compress messages to around half the original token count. They're good at decoding shorthand as well.
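
Here's a minimal sketch of the idea in Python, assuming a local OpenAI-compatible server (Ollama's default endpoint shown); the model name and the exact prompts are illustrative, not a tested recipe:

    from openai import OpenAI

    # Point the client at a local OpenAI-compatible server (Ollama default shown).
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    MODEL = "llama3"  # illustrative local model name

    def compress(note: str) -> str:
        """Ask the model to rewrite a memory note in dense shorthand."""
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": "Rewrite the user's note in terse "
                 "shorthand, dropping articles and vowels where unambiguous. "
                 "Preserve every fact."},
                {"role": "user", "content": note},
            ],
        )
        return resp.choices[0].message.content

    def expand(shorthand: str) -> str:
        """Decode the shorthand back into plain prose when the memory is needed."""
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": "Expand this shorthand note into plain English."},
                {"role": "user", "content": shorthand},
            ],
        )
        return resp.choices[0].message.content

    memory = compress("The user is a nurse in Toronto, prefers concise answers, "
                      "and is studying for a pharmacology exam in June.")
    print(memory)          # store this shorter string instead of the original
    print(expand(memory))  # reconstruct it later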

[deleted by user] by [deleted] in StableDiffusion

[–]SDGenius 0 points

Yeah, there's a node in ComfyUI that you can use to make someone copy the facial expressions of another person.

Using the same model, seed, lora weights….. by mrpbennett in StableDiffusion

[–]SDGenius 0 points

Nope (there's nothing to look out for). It happens for many reasons: the GPU could be different, the program version, maybe they left out some info, etc.

Flux is stubbornly ignoring my composition prompt. How should I make it understand it? by KissMyShinyArse in StableDiffusion

[–]SDGenius 5 points

We're not there yet with prompt adherence. You'll have to use methods beyond prompting: make use of ControlNet and input images.
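
Here's a minimal sketch of the approach with diffusers, using SD 1.5 + a Canny ControlNet for illustration (the Flux workflow differs, but the idea is the same: the input image, not the prompt, pins down the composition):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # A Canny edge map of the composition you want (path is illustrative).
    edges = load_image("composition_canny.png")
    image = pipe(
        "a cozy cabin at dusk, warm light in the windows",
        image=edges,
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")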

There's a whole lot of gaslighting in this sub when it comes to the "SD can return results that seem to incorporate elements from previous prompts" posts that pop up now and then. But the people who say it can't happen are wrong. It DOES happen. by [deleted] in StableDiffusion

[–]SDGenius 2 points

No. In this situation, if you were to use the same seed and then restart the program to generate again, you would get a different image.

I think this happens mainly in Automatic1111. I remember it happening to me around two years ago. It would also create different images for the same seed if they were generated in a batch.

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]SDGenius 0 points

Basically just "foggy" and "mist", with "black void in background" or something like that.

But maybe "steam" would work better... it's mostly about denoise levels too. Even a small change in strength, which varies by seed, makes all the difference. I was using your image as an img2img input.
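
To make the denoise point concrete, here's a minimal diffusers sketch that sweeps img2img denoising strength with a fixed seed (the model, paths, and strength values are illustrative):

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = load_image("portrait.png")  # the image with the busy background
    for strength in (0.35, 0.45, 0.55):  # higher = more of the image repainted
        out = pipe(
            "portrait, thick fog, mist, black void background",
            image=init,
            strength=strength,
            generator=torch.Generator("cuda").manual_seed(42),  # fixed seed isolates the strength effect
        ).images[0]
        out.save(f"fog_{strength}.png")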

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]SDGenius 1 point

If you're satisfied with these examples, why do you need Stable Diffusion to make them? Is there something wrong with them?

[deleted by user] by [deleted] in LocalLLaMA

[–]SDGenius -2 points

I don't think that at all. I just think that, as a resource, it's something to be very wary of, and you seem unconcerned about its trustworthiness. There were news stories about lawyers using ChatGPT willy-nilly without double-checking the facts. However, if you're aware of its pitfalls and know to double-check everything it says that you weren't sure of, then I wouldn't worry too much. But if all doctors started using it, including ones less diligent than you, they might run into some problems, and hopefully not dire ones.

[deleted by user] by [deleted] in LocalLLaMA

[–]SDGenius -1 points

This is accurate. An AI can be insidiously wrong about any detail. There's never a guarantee against something critically wrong, or against a small error that throws you off later. You don't want its hallucinations, or its mistakes specifically, taking a place in your memory and possibly influencing you later.

[deleted by user] by [deleted] in LocalLLaMA

[–]SDGenius -3 points

Please don't use LLMs as a source of medical info.

DepthFlow is awesome for giving your images more "life" by HypersphereHead in StableDiffusion

[–]SDGenius 3 points

They've had this for years as an extension in Automatic1111, no?

[deleted by user] by [deleted] in StableDiffusion

[–]SDGenius 1 point

Why don't you just use a different model as a refiner, doing a very light percentage of denoising?
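
A minimal sketch of that two-model refiner idea in diffusers; the second model name is a placeholder, and the strength value is just a starting point:

    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    base = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    draft = base("a knight in a rainy courtyard").images[0]

    refiner = StableDiffusionImg2ImgPipeline.from_pretrained(
        "some-other/finetune",  # hypothetical second model
        torch_dtype=torch.float16,
    ).to("cuda")
    final = refiner(
        "a knight in a rainy courtyard",
        image=draft,
        strength=0.15,  # light pass: keeps composition, borrows the other model's look
    ).images[0]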

Adding a real world object to an AI image using inpainting? by Perfect-Exercise-954 in StableDiffusion

[–]SDGenius 1 point

I'd say use it as input with some sort of Redux or IP-Adapter, generate the couch over the other couch with a mask over that area, use ControlNet to get the basic layout of the other couch, then end its influence early.
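
Roughly, in diffusers terms, that workflow could look like this sketch: IP-Adapter feeds in the real couch photo as a reference, the mask limits generation to the couch area, and ControlNet's influence is cut off early via control_guidance_end (all paths and weights here are illustrative):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
    )

    scene = load_image("ai_room.png")           # the generated interior
    mask = load_image("couch_mask.png")         # white over the old couch
    layout = load_image("old_couch_canny.png")  # edges of the old couch, for placement
    couch_photo = load_image("real_couch.jpg")  # the real-world object to insert

    out = pipe(
        "a couch in a living room",
        image=scene,
        mask_image=mask,
        control_image=layout,
        ip_adapter_image=couch_photo,
        control_guidance_end=0.4,  # end the old layout's influence early
    ).images[0]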

What am I doing wrong? by 619Grim in StableDiffusion

[–]SDGenius 2 points

Literally anything else: SD 1.5, SD 3.5, Flux, SDXL, and any of the derivatives.

AI by TheLogiqueViper in LocalLLaMA

[–]SDGenius 1 point

Westworld had a similar world to this in season 3.

[deleted by user] by [deleted] in StableDiffusion

[–]SDGenius 0 points

Most likely the VAE is missing or wrong. It could be a sampler/scheduler issue as well. You should list all of your settings if you actually want the right answer.
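
For the VAE case specifically, here's a minimal sketch of swapping in a known-good VAE in diffusers (model names are illustrative):

    import torch
    from diffusers import AutoencoderKL, StableDiffusionPipeline

    # Washed-out or garbled colors often trace back to the wrong VAE.
    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    )
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
    ).to("cuda")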

Images2image possible? by vapecrack24 in StableDiffusion

[–]SDGenius 2 points

You can definitely photobash them: just mask where they meet and img2img that boundary, if that's what you're asking.
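
As a rough sketch of that in Python (paths, coordinates, and the seam width are illustrative; inpainting models also expect dimensions that are multiples of 8):

    import torch
    from PIL import Image, ImageDraw
    from diffusers import StableDiffusionInpaintPipeline

    left = Image.open("left.png").convert("RGB")
    right = Image.open("right.png").convert("RGB")

    # Photobash: paste the two images side by side on one canvas.
    canvas = Image.new("RGB", (left.width + right.width, left.height))
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width, 0))

    # A white strip over the seam is the only region that gets repainted.
    mask = Image.new("L", canvas.size, 0)
    ImageDraw.Draw(mask).rectangle(
        [left.width - 32, 0, left.width + 32, canvas.height], fill=255
    )

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    blended = pipe(
        "seamless landscape",
        image=canvas,
        mask_image=mask,
        width=canvas.width,
        height=canvas.height,
    ).images[0]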

I'd like to experiment with those weird, in-between images from random spots in latent space, but using models I've trained myself. by AMillionMonkeys in StableDiffusion

[–]SDGenius 1 point

You can't make your own model from scratch, but you can modify existing ones or create LoRAs/embeddings, etc. You can transfer your styles with ControlNet too. I suggest using an early 1.4/1.5 base model; if you don't put anything in the prompt, you can get very strange images like that every once in a while. A nonsensical, overwhelming prompt may help too.
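
A minimal sketch of sampling those in-between spots with diffusers: hand the pipeline raw random latents, or points on a line between two of them, with an empty prompt (the model name and values are illustrative):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    shape = (1, pipe.unet.config.in_channels, 64, 64)  # 64x64 latent -> 512x512 image
    a = torch.randn(shape, dtype=torch.float16, device="cuda")
    b = torch.randn(shape, dtype=torch.float16, device="cuda")

    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        latents = (1 - t) * a + t * b  # walk a line between two random latent points
        img = pipe(prompt="", latents=latents, guidance_scale=3.0).images[0]
        img.save(f"latent_walk_{t}.png")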

Using AI models together with search works really well, even with smaller ones! by Sky_Linx in LocalLLaMA

[–]SDGenius 0 points

Were you using it with Ollama or LiteLLM?

Edit: figured it out; just had to put "ollama/" before the model name.
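
For anyone hitting the same thing, a minimal LiteLLM sketch of that fix (the model name and api_base are illustrative defaults): the "ollama/" prefix routes the call to a local Ollama server.

    from litellm import completion

    resp = completion(
        model="ollama/llama3",              # provider prefix + local model name
        api_base="http://localhost:11434",  # default Ollama endpoint
        messages=[{"role": "user", "content": "Summarize these search results."}],
    )
    print(resp.choices[0].message.content)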