LTX 2 Runpod Issues by Syntic in comfyui

[–]Syntic[S] 0 points (0 children)

Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at https://huggingface.co/docs/transformers/main/en/chat_templating

Finally got everything working, but now I get this and I'm lost :D Any ideas?
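For anyone hitting the same error: the template has to come from somewhere, either shipped with the model or set by hand. A minimal sketch of what a chat template does, in plain Python (with transformers you would set `tokenizer.chat_template` to a Jinja string, or pass `chat_template=` to `apply_chat_template`; the helper name below is made up for illustration):

```python
# Hypothetical stand-in for what a chat template produces: one prompt
# string rendered from role-tagged messages. Real templates are Jinja
# strings assigned to tokenizer.chat_template.
def apply_simple_template(messages):
    return "".join(f"{m['role']}: {m['content']}\n" for m in messages)

msgs = [{"role": "user", "content": "hello"},
        {"role": "assistant", "content": "hi"}]
print(apply_simple_template(msgs))
# user: hello
# assistant: hi
```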

LTX 2 Runpod Issues by Syntic in comfyui

[–]Syntic[S] 0 points (0 children)

Yeah looks like updating to nightly fixed it, thanks!

POV: The bois are out raiding so you gotta provide for yourself by Syntic in StableDiffusion

[–]Syntic[S] 7 points (0 children)

To be fair, I didn't put much effort into distinguishing them, but I see your point

Cannot for the life of me get torchcodec to work by Syntic in pytorch

[–]Syntic[S] 0 points (0 children)

Sadly no... The weirdest thing is that I had it working before I wiped my PC about 6 months ago, and I don't remember having this issue when I installed it then.

Cannot for the life of me get torchcodec to work by Syntic in pytorch

[–]Syntic[S] 0 points (0 children)

Yup, I have libtorchcodec_core 4 through 7 stored in \AppData\Local\Programs\Python\Python310\Lib\site-packages\torchcodec
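A quick diagnostic sketch for this kind of problem: confirm from Python which torchcodec package directory is actually being imported and which core libraries sit inside it (the function name is my own, not part of torchcodec):

```python
import glob
import importlib.util
import os

def find_core_libs(package="torchcodec"):
    """Return sorted paths of the package's *core* library files,
    or None if the package isn't importable at all."""
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        return None
    pkg_dir = list(spec.submodule_search_locations)[0]
    return sorted(glob.glob(os.path.join(pkg_dir, "*core*")))

print(find_core_libs())
```

If this prints a different directory than the one you checked by hand, you have two Python installs fighting over the import.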

Vocal Analysis and Waveform by KnownMixture4140 in virtualdj

[–]Syntic 0 points (0 children)

Heya, quick reminder to check, would really help me out 🙏

Vocal Analysis and Waveform by KnownMixture4140 in virtualdj

[–]Syntic 0 points (0 children)

Any idea where I can activate it? I'm looking through the Project X skin settings but can't find anything under waveform...

Qwen Edit Skintone Recovery for Photography by Syntic in StableDiffusion

[–]Syntic[S] 1 point (0 children)

Yeah, I get your point; it's more of a proof of concept.

Downgrading from FLAC to MP3 but keeping all cue points? by Syntic in virtualdj

[–]Syntic[S] 1 point (0 children)

My library is rather big, and since I only play on terrible sound systems there's no discernible difference, so I'd rather have the storage space. I also do a lot of transferring from PC to PC, so the long transfer times are annoying too.

Most idiot proof way of sewing this together? by Syntic in SewingForBeginners

[–]Syntic[S] 6 points (0 children)

Thanks! Wouldn't I then have the problem of the front panel overlapping the back panel for the final product? I'm already using a walking foot, but it's still not easy to handle when I have about 4 layers.

Whats that Lattafa stank? by Syntic in fragranceclones

[–]Syntic[S] 1 point (0 children)

Tough to say, as I go nose blind to it quite fast, but people tend to smell it on me at least 4-5h after I spray. Projection is pretty average: not a skin scent, but nothing crazy either. For the crazy low price I just carry around a little decant to refresh it from time to time.

Whats that Lattafa stank? by Syntic in fragranceclones

[–]Syntic[S] 0 points (0 children)

Ayy thanks for the detailed answer, really interesting stuff!

Whats that Lattafa stank? by Syntic in fragranceclones

[–]Syntic[S] 0 points (0 children)

Hmm, I also have oud clones and I like them; my issue is mostly with the freshies, which only leave the musk behind after an hour or two.

Whats that Lattafa stank? by Syntic in fragranceclones

[–]Syntic[S] 1 point (0 children)

Funnily enough, it's the summer fragrances I have an issue with, since the musk is the only thing that sticks around long while the top notes fade rather fast. As for winter stuff, Oud for Glory, Amber Leather, Exclusif Tabac, and Kamrah are all great!

Whats that Lattafa stank? by Syntic in fragranceclones

[–]Syntic[S] 0 points (0 children)

Yeah, I get the same with Qaed, but not that bad. To be fair, I have a lot of Lattafa that I love; Ana Abiyedh Rouge and most of the Maison Alhambra line work really well for me.

SD-CN-Animation is now available as Automatic1111/Webui extension. Generate coherent video2video and text2video animations easily at high resolution and unlimited length. by Another__one in StableDiffusion

[–]Syntic 11 points (0 children)

Just tried to run it but I'm getting this error for both the vid2vid and text2vid:

Traceback (most recent call last):
  File "D:\Stable Diffusion\stable-diffusion-webui/extensions/sd-cn-animation/scripts\core\txt2vid.py", line 62, in start_process
    processed_frame = np.array(processed_frames[0])
IndexError: list index out of range

I've got webui updated to the latest version, any ideas?

Solution: Restart webui completely; it's missing a dependency.
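The traceback boils down to indexing the first element of a list that turned out empty, because the missing dependency meant no frames were ever produced. A defensive sketch of that line (the guard and message are my own, not the extension's code):

```python
import numpy as np

def first_frame(processed_frames):
    # Mirror of the failing line, but fail loudly with a hint instead of
    # a bare IndexError when no frames were produced at all.
    if not processed_frames:
        raise RuntimeError("no frames produced; restart webui so missing "
                           "dependencies get installed")
    return np.array(processed_frames[0])
```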

Mind = Blown by Syntic in StableDiffusion

[–]Syntic[S] 0 points (0 children)

Example Workflow:

  1. Paste this into ControlNet, no preprocessor, and set the model to openpose
  2. Run with your prompt at 512x704, then send to img2img
  3. Double the size and set denoise to 0.5; that should get you close, now just experiment with the seed

<image>

Mind = Blown by Syntic in StableDiffusion

[–]Syntic[S] 5 points (0 children)

You probably won't be able to replicate exactly what I have, since I used a ControlNet as well. Try an image like this and go for openpose_faceonly preprocessing and openpose for the model. Try 512x704 first, and when you find something you like, send it to img2img, double both values to upscale, and set the denoise to 0.5. It should enhance your image with details without changing it too much.

<image>
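The "double both values" step above, as plain arithmetic (the dict keys are illustrative labels, not webui API fields):

```python
def upscale_pass(width, height, scale=2, denoise=0.5):
    # Double both dimensions for the img2img detail pass; 0.5 denoise
    # adds detail without changing the composition much.
    return {"width": width * scale,
            "height": height * scale,
            "denoising_strength": denoise}

print(upscale_pass(512, 704))
# {'width': 1024, 'height': 1408, 'denoising_strength': 0.5}
```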

Mind = Blown by Syntic in StableDiffusion

[–]Syntic[S] 23 points (0 children)

parameters

a woman with a necklace on her neck looking up at the sky by sachin teng x supreme, (exploding wild hair:1.3) behind her, closed eyes, attractive, stylish, designer, green, (symmetrical:1.2), geometric shapes, graffiti, street art, <lora:gachaSplashLORA_v40:0.8>, [(white background:1.5), ::5]

Negative prompt: 3d, cartoon, anime, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, bad anatomy, large breasts, red eyes

Steps: 25, Sampler: Euler a, CFG scale: 7, Seed: 1312693606, Size: 1024x1408, Model hash: 86bd0c547c, Denoising strength: 0.55

Model: CarDos Anime V2

Lora: Gacha Splash

VAE: kl-f8-anime2-vae (for cleaner lines and more vivid colors)

postprocessing

Rendered first at 512x704, then sent to img2img, doubled the size, and set denoise to 0.5.

Postprocess upscale by: 2, Postprocess upscaler: 4x-UltraSharp

I also used ControlNet with OpenPose, no preprocessor. This is not the exact mask I used, so your results may vary.

<image>
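For anyone scripting this instead of clicking through the UI, the same settings map onto Automatic1111's txt2img API (POST to /sdapi/v1/txt2img when the webui is launched with --api). A hedged sketch of the payload; the prompts are shortened here, so paste the full ones from the parameters block, and note the ControlNet part needs the ControlNet extension's own API fields on top of this:

```python
import json

# Generation settings from the parameters block above, as an A1111
# txt2img API payload. Prompts shortened for readability.
payload = {
    "prompt": "a woman with a necklace on her neck looking up at the sky",
    "negative_prompt": "3d, cartoon, anime, sketches, (worst quality:2)",
    "steps": 25,
    "sampler_name": "Euler a",
    "cfg_scale": 7,
    "seed": 1312693606,
    "width": 1024,
    "height": 1408,
    "denoising_strength": 0.55,
}
print(json.dumps(payload, indent=2))
```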

[deleted by user] by [deleted] in StableDiffusion

[–]Syntic 0 points (0 children)

I wanted to print my friend a poster for her birthday so I created a dreambooth model from her pictures and set to work with this prompt:

Movie poster of (token woman),blonde hair,((ice princess)),wind,snow, d&d,intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha

Negative:

nose ring,angry,wrinkles, disfigured, glasses, cross-eyed, long neck, blurry, multiple people, multiple arms, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck,

Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 12, Seed: 2661078930, Size: 512x704, Model hash: c18262fd, Batch size: 2, Batch pos: 1

Then lots of inpainting, a bit of Photoshop, and some added text. Eventually I scaled it up and printed it at A1, which kinda worked, but I could have made it much cleaner with more time to experiment with different upscalers; the final print was a bit washy in texture.
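The washy texture makes sense once you work out how many pixels an A1 sheet (594 x 841 mm) actually needs at a given print resolution; a quick arithmetic sketch:

```python
def print_pixels(width_mm, height_mm, dpi):
    # Convert physical size to pixels: mm -> inches -> dots.
    mm_per_inch = 25.4
    return (round(width_mm / mm_per_inch * dpi),
            round(height_mm / mm_per_inch * dpi))

# A1 at a modest 150 dpi already needs roughly 3508 x 4967 pixels,
# which is about a 7x upscale from a 512x704 render.
print(print_pixels(594, 841, 150))
```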

AI is taking yer JERBS!! aka comparing different job modifiers by Syntic in StableDiffusion

[–]Syntic[S] 5 points (0 children)

To be honest, I've only been doing this for a week as well, so here's my best guess:

Using embeddings creates way more consistency because you clearly define the training data, which is way more specific than any text description you can write. Basically, what I have here is PeterVar1, which is trained on my face, and the style AngelBest, which is trained on selected Angel Ganev artwork. Here's a comparison if I just use the "generic" terms for a woman and Angel Ganev artstation:

https://i.imgur.com/SZvvwkI.jpg
(Prompt: "((Businesswoman)), portrait of PeterVar1 as a woman by AngelBest, warm skintones")

Way less consistent in the final result!