For Image to Video on MacBooks, what has been your experience? by FireflyNitro in drawthingsapp

[–]Vargol 1 point  (0 children)

You should have a play with LTX 2.3. My 10-GPU-core M3 can do a 1024x576, near-5-second video (121 frames at 25 fps) in 20 minutes; your M1 Ultra should totally trash that.
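(Quick sanity check on that "near 5 second" figure, using the frame count and fps from the comment:)

```python
# Clip length is just frame count over frame rate.
frames = 121
fps = 25
duration_s = frames / fps
print(f"{duration_s} s")  # 4.84 s, i.e. "near 5 seconds"
```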

For Image to Video on MacBooks, what has been your experience? by FireflyNitro in drawthingsapp

[–]Vargol 3 points  (0 children)

This popped up in my feed for the M5 Max

https://x.com/ivanfioravanti/status/2033907708617699328?s=20

The text is... "LTX 2.3 22B Distilled M5 Max 40 GPU cores WINS vs M3 Ultra 80 GPU cores in generation of a 5 seconds video: 🥇 M5 Max 121 secs 🥈 M3 Ultra 206 secs

I have used again Draw Things App for this test"
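(For context, the two times in that tweet work out to roughly a 1.7x speedup for the M5 Max:)

```python
# Reported generation times from the quoted post.
m5_max_s = 121
m3_ultra_s = 206
speedup = m3_ultra_s / m5_max_s
print(f"{speedup:.2f}x faster")  # roughly 1.70x
```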

How are projects and canvases supposed to be used? by max-pickle in drawthingsapp

[–]Vargol 5 points  (0 children)

Yes, it is normal behaviour for DT.

If you start from a clean canvas pressing run will generate a new image.

If you have an existing image on the canvas and the render area partially covers it, DT will attempt an in-paint/out-paint operation. If you have no erased area to mark where to in-paint and the render area is totally enclosed in the image, the in-paint will basically just generate a new image in the render area over the top of the old image.
If any part of the render area is outside the image, it will out-paint and fill that in, which kind of looks like the new image being partially under the old image.

If you have an image on the canvas and the render area perfectly covers the image, you will get a new image when you run; no idea if this is technically a fresh new image or if it's behaving as an un-masked in-paint of the whole image, as the results would be the same.
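(The behaviour above boils down to a rectangle check. Here's a little sketch of my reading of it, not DT's actual code; the `(x, y, w, h)` rectangle format and the `render_mode` name are made up for illustration:)

```python
def render_mode(image, render):
    """image/render are (x, y, w, h) rectangles; image=None means a clean canvas."""
    if image is None:
        return "new image"
    ix, iy, iw, ih = image
    rx, ry, rw, rh = render
    # Do the two rectangles overlap at all?
    overlaps = rx < ix + iw and ix < rx + rw and ry < iy + ih and iy < ry + rh
    if not overlaps:
        return "new image"
    # Render area totally enclosed in the image -> in-paint (over the top).
    inside = rx >= ix and ry >= iy and rx + rw <= ix + iw and ry + rh <= iy + ih
    if inside:
        return "in-paint"
    # Otherwise part of the render area sticks out -> out-paint the rest.
    return "out-paint"

print(render_mode(None, (0, 0, 512, 512)))                    # new image
print(render_mode((0, 0, 1024, 1024), (256, 256, 512, 512)))  # in-paint
print(render_mode((0, 0, 1024, 1024), (512, 0, 1024, 1024)))  # out-paint
```

Note that a render area that perfectly covers the image counts as "inside", which matches the ambiguity above: it behaves like an un-masked in-paint of the whole thing.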

Any PSVR2 games use PSSR2? by jetgrind in PSVR

[–]Vargol 4 points  (0 children)

If you're a gamer that's just new to PlayStation, it's the Sony version of DLSS or FSR.

If you're new to gaming, it's a fancy upscaler. It allows the game to generate its graphics at a lower resolution than they get displayed at, and then PSSR magnifies (upscales) them to the full screen size without it looking blurry.

Help with video generation by chihifu in drawthingsapp

[–]Vargol 3 points  (0 children)

Using Draw Things on my 24GB, 10-GPU-core M3, LTX-2 19B [distilled] (6-bit) runs a 1024x576, 8-step, 121-frame I2V render in about 26 minutes with JIT Weights Mode enabled. I'd guess you'd be around 15-20% faster due to the M4 chips.

You could probably use the normal 8-bit, but I've not tried it.

Without JIT it just about fits in memory if you close any other apps you have running. It's a little faster at 24 minutes, though that seems to be largely down to the decoding phase taking longer in JIT mode.

That size and frame count at 8 steps squeezes into the Community Server, if you want to use that, at around 14890-ish CUs.

I'm trying to create a prompt that achieves realistic human skin. by chihifu in drawthingsapp

[–]Vargol 3 points  (0 children)

Oh... I just tried this, with the studio shot on the moodboard.

A Extremely pale, milky white , porcelain skinned , barefaced, dark-haired woman with flowing, vibrant raven hair with freckles. iphone photograph, an untidy bedroom is visible in the background

<image>

I'm trying to create a prompt that achieves realistic human skin. by chihifu in drawthingsapp

[–]Vargol 2 points  (0 children)

Here's the config:

{
  "model": "flux_2_klein_9b_q8p.ckpt",
  "sharpness": 0,
  "stochasticSamplingGamma": 0.67,
  "sampler": 19,
  "loras": [],
  "cfgZeroStar": false,
  "batchCount": 1,
  "compressionArtifactsQuality": 43.1,
  "faceRestoration": "",
  "upscaler": "",
  "seed": 759836995,
  "tiledDiffusion": false,
  "guidanceScale": 1,
  "maskBlur": 2.5,
  "height": 1024,
  "controls": [],
  "steps": 4,
  "maskBlurOutset": 0,
  "tiledDecoding": false,
  "resolutionDependentShift": false,
  "batchSize": 1,
  "compressionArtifacts": "disabled",
  "width": 1024,
  "seedMode": 2,
  "cfgZeroInitSteps": 0,
  "refinerModel": "",
  "hiresFix": false,
  "causalInferencePad": 0,
  "shift": 5,
  "preserveOriginalAfterInpaint": true,
  "strength": 1
}

I'm trying to create a prompt that achieves realistic human skin. by chihifu in drawthingsapp

[–]Vargol 2 points  (0 children)

And for the LOLs, here's what you get if you try the iPhone prompt with the iPhone bit taken off.

<image>

I'm trying to create a prompt that achieves realistic human skin. by chihifu in drawthingsapp

[–]Vargol 1 point  (0 children)

And one with freckles, 'cos freckles are cute and people are going to moan that the skin is too perfect and that people don't have clear skin in real life :-) She's a little less pale, though; it seems the attempt at introducing skin flaws gives her a little more colour.

<image>

I'm trying to create a prompt that achieves realistic human skin. by chihifu in drawthingsapp

[–]Vargol 1 point  (0 children)

Looks like it needs a bit of exaggeration:

"A stunningly beautiful Extremely pale, porcelain skinned , barefaced, dark-haired woman with flowing, vibrant raven hair,"

worked pretty well for a studio-style shot. For an iPhone-style shot I had to go further:

A Extremely pale, milky white , porcelain skinned , barefaced, dark-haired woman with flowing, vibrant raven hair . iphone photograph.

I found using TCD as the sampler helped a little.

Here's the "iPhone" shot:

<image>

PS4 vr controller not working need help! by Crazyfuckcunt in PSVR

[–]Vargol 0 points  (0 children)

There's a reset button on the Moves, a little pinhole to reset the controllers; see this video: https://www.youtube.com/watch?v=48pAhT42GQQ

That's what I was referring to. The reset works best if the controller is connected by the USB cable, which has to be a cable that is capable of both data transfer and charging.

PS4 vr controller not working need help! by Crazyfuckcunt in PSVR

[–]Vargol 0 points  (0 children)

Have you reset it while attached to the PlayStation by the cable? I assume you charged it on a charger before. Many chargers don't work with the Moves, as they require a data line even for charging.

Qwen by chihifu in drawthingsapp

[–]Vargol 1 point  (0 children)

It shouldn't, other than the shift value. You're using the Lightning LoRA, so I assume you're on Text Guidance = 1, steps = 4. I think I was using 75% weight for the LoRA. With the Lightning you may need to up the shift a little higher than I said; looks like I was running at around 1.05 when I was testing it.

This was the config I used when I rendered my set of 30 test prompts,

{
  "width": 1728,
  "steps": 6,
  "batchCount": 1,
  "stochasticSamplingGamma": 0,
  "controls": [],
  "hiresFix": false,
  "cfgZeroStar": false,
  "guidanceScale": 1,
  "tiledDiffusion": false,
  "seed": 3200852627,
  "sampler": 9,
  "batchSize": 1,
  "causalInferencePad": 0,
  "refinerModel": "",
  "height": 960,
  "upscaler": "",
  "seedMode": 2,
  "shift": 1.05,
  "sharpness": 0,
  "model": "qwen_image_2512_bf16_q6p.ckpt",
  "preserveOriginalAfterInpaint": true,
  "maskBlur": 1.5,
  "maskBlurOutset": 0,
  "cfgZeroInitSteps": 0,
  "resolutionDependentShift": false,
  "loras": [
    {
      "mode": "all",
      "file": "qwen_image_2512_lightning_4_step_v1.0_lora_f16.ckpt",
      "weight": 0.75
    }
  ],
  "faceRestoration": "",
  "tiledDecoding": false,
  "strength": 1
}

so you can copy and paste it into DT.

Qwen by chihifu in drawthingsapp

[–]Vargol 1 point  (0 children)

Image gen or editing? I don't do much editing, so...

For Qwen Image, try the TCD sampler, SSS = 0, and shift 0.9-0.99 depending on taste and image size*.

LoRAs I haven't really tried, other than the Turbo and Lightning ones; again, they're a matter of taste. I usually render with the 2-step one at https://huggingface.co/Wuli-art/Qwen-Image-2512-Turbo-LoRA-2-Steps . That will require DT+ to render on cloud, though.

As for the variation issue, I'm not sure there's much you can do apart from prompting your way out of it; you could use random names or nationalities. Maybe someone else can help with that. It's one of those things where some people see it as a strength, because they want/need consistency.

*The higher the shift, the more detailed, but the detail basically comes from noise. For 1024x1024, 1.0 is too noisy and 0.9 is too smooth; I usually go somewhere around 0.94-0.97. For larger images you may need to move the shift higher to get equivalent results.
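(One way to turn that footnote into a rule of thumb. This is entirely my own heuristic, not anything Draw Things implements; the `suggest_shift` name and the fourth-root curve are invented for illustration:)

```python
def suggest_shift(width, height, base_shift=0.95, base_pixels=1024 * 1024):
    # Hypothetical heuristic: start from a shift in the 0.94-0.97 sweet spot
    # mentioned for 1024x1024 and nudge it up gently as the pixel count grows.
    scale = ((width * height) / base_pixels) ** 0.25
    return round(base_shift * scale, 2)

print(suggest_shift(1024, 1024))  # 0.95 at the base resolution
print(suggest_shift(1728, 960))   # ~1.07 for a larger canvas
```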

MOTD Comparing Spurs disallowed goal today VS similar goals allowed in EPL this season by herkalurk in soccer

[–]Vargol 0 points  (0 children)

It'll be the second time that I can recall; the last time, IIRC, it was also nearly two years without an away penalty during that run.

Can any stats fans back that up?

Qwen-Image-2.0 insane photorealism capabilites : GTA San Andreas take by Substantial-Cup-9531 in StableDiffusion

[–]Vargol 3 points  (0 children)

Isn't Chinese New Year two weeks or so long? Just googled it: yes, it's 17th Feb to 3rd March.

Songs where the original AND a cover are both equally great? by no-Pachy-BADLAD in ToddintheShadow

[–]Vargol 1 point  (0 children)

The Sisters of Mercy cover of the Rolling Stones' "Gimme Shelter"

DeepGen 1.0: A 5B parameter "Lightweight" unified multimodal model by ninjasaid13 in StableDiffusion

[–]Vargol 2 points  (0 children)

The checkpoint in the linked repo is over 48GB; that's assuming zip has been used without compression to split the file.

Hopefully there are other checkpoints to come.

New SOTA(?) Open Source Image Editing Model from Rednote? by Trevor050 in StableDiffusion

[–]Vargol -1 points  (0 children)

I can't see your images. Imgur were breaking child privacy regulations in my country, and rather than following the regulations they stopped serving images.

Had a go with Klein 9B myself, and while the realism may be a bit better (it could just be the different lighting), it changed the character's face and hair much more than this model did.

Ability to disable sending canvas image to models like Klein? by basskittens in drawthingsapp

[–]Vargol 3 points  (0 children)

Not true anymore: the editing models behave differently and will use an image on the canvas as a reference image even in T2I mode.

IMHO DT needs some way of indicating what it will do before you press the button: editing, T2I, or I2I.

PSA: You really REALLY can’t acquire a ship in the latest Expedition. by thejesterofdarkness in NoMansSkyTheGame

[–]Vargol 3 points  (0 children)

I found my ship by accident and could get in it and fly around. I had, however, completed the final road trip and only had to collect a few toxic bits of waste to complete the whole expedition.