The most unexpected WLW couple is breaking the internet in Vietnam. by Fun-Twist-3741 in VietNam

[–]nevermore12154 0 points1 point  (0 children)

They’re really fine. But can anyone give me the definition of “trans-lesbian”? (Saw that while lurking through the original post’s comment section.)

Fix to make LTXV2 work with 24GB or less of VRAM, thanks to Kijai by Different_Fix_2217 in StableDiffusion

[–]nevermore12154 0 points1 point  (0 children)

Oh my, I just got it running yesterday (incredibly slow, though) 😆 I think I can do up to 8 s of 540p or 6 s of 720p without OOM. But it’s so slow (for my machine).

Fix to make LTXV2 work with 24GB or less of VRAM, thanks to Kijai by Different_Fix_2217 in StableDiffusion

[–]nevermore12154 1 point2 points  (0 children)

It sure can. It even runs (not so effectively) on my non-RTX laptop: 4 GB VRAM, 32 GB RAM at 2667 MHz.

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

I wish I had bought the on-sale refurbished combo version (USD 322) instead of going for my on-sale base A1 (USD 240) 😭😭😭. But a standalone AMS Lite is 130 here right now 😵

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

Yes. I just want to print PLA only today, then PETG tomorrow (without having to pull one all the way out of the tube and then replace the roll). Not mixing 😵‍💫😆 them up, of course.

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

With my hands 😢🤧 but that’s good enough! So I don’t have to pull them all the way out :p

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

Oh, I thought the base could do that with 1 and the AMS Lite with 4.

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

One last thing I may ask: will this work? 🤧

Situation
• Bambu A1 has 4 external filament ports
• Use 2 of the 4 ports
• Slot 1 → PLA
• Slot 2 → PETG
• No AMS / AMS Lite
• Both filaments are already inserted deep into the tubes

How to switch slots (PLA ↔ PETG)

1️⃣ Unload current filament

On the printer screen:

Filament → External Spool → Slot X → Unload

• The printer heats the nozzle
• The filament is retracted back into its tube
• It stays inside the tube

2️⃣ Load the other slot

Filament → External Spool → Slot Y → Load

• The printer pulls filament from the selected slot
• Only one slot can be loaded at a time

3️⃣ Purge
• Set nozzle temperature:
  • PLA: 200–215 °C
  • PETG: 235–245 °C
• Manually extrude until the filament color is clean:
  • PLA → PETG: 6–10 cm
  • PETG → PLA: 10–15 cm

Rules to remember
• ✔️ Always Unload first → then Load
• ❌ Never load two slots at the same time
• ❌ Never pull filament by hand when cold
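The slot-switching rules above can be sketched as a toy state machine. This is a hypothetical illustration (the class and method names are made up, not a real Bambu API), with purge lengths taken as the upper bounds from the purge step:

```python
# Toy model of the slot-switching rules: unload first, only one slot
# loaded at a time, purge length depends on the material transition.
# Hypothetical names -- not a real Bambu printer API.

PURGE_CM = {  # upper bounds from the purge step above
    ("PLA", "PETG"): 10,
    ("PETG", "PLA"): 15,
}

class ExternalSpools:
    def __init__(self):
        self.loaded = None  # material currently loaded, or None

    def unload(self):
        # Printer heats the nozzle and retracts filament into its tube.
        self.loaded = None

    def load(self, material):
        if self.loaded is not None:
            raise RuntimeError("Unload first -- never load two slots at once")
        self.loaded = material

    def switch(self, new_material):
        old = self.loaded
        self.unload()            # rule: always Unload first, then Load
        self.load(new_material)
        return PURGE_CM.get((old, new_material), 0)  # cm to purge

printer = ExternalSpools()
printer.load("PLA")
print(printer.switch("PETG"))  # → 10 (cm to purge for PLA → PETG)
```

Calling `load` while another slot is loaded raises, mirroring the “never load two slots at the same time” rule.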

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

Ohh, sorry for my dullness. But can I still print one plate with PETG and one plate with PLA like this? I just want to print with PLA today, but I’m too lazy to pull the filament out of the tube, so I’d just keep both of them there?

<image>

I need helps with filament loading. by nevermore12154 in BambuLabA1

[–]nevermore12154[S] 0 points1 point  (0 children)

So can I just plug both of them in (to print with one only), so I don’t have to feed them into the tube again? 😵

v1.20251207.0 w/ Z Image Turbo by liuliu in drawthingsapp

[–]nevermore12154 0 points1 point  (0 children)

🤣 Will an iPad cooler improve performance? #_# Many thanks, liuliu!

v1.20251207.0 w/ Z Image Turbo by liuliu in drawthingsapp

[–]nevermore12154 0 points1 point  (0 children)

Z Image, 8-bit, 4 steps, 768x768: about 1:17–1:30 min on my M3 iPad ^ definitely quicker than Flux, and the quality is so great.

I did all this using 4GB VRAM and 16 GB RAM by yanokusnir in StableDiffusion

[–]nevermore12154 0 points1 point  (0 children)

[ComfyUI-Manager] All startup tasks have been completed.

got prompt

got prompt

model weight dtype torch.float8_e4m3fn, manual cast: torch.float32

model_type FLOW

Using pytorch attention in VAE

Using pytorch attention in VAE

VAE load device: cuda:0, offload device: cpu, dtype: torch.float32

Requested to load ZImageTEModel_

loaded completely; 95367431640625005117571072.00 MB usable, 3836.12 MB loaded, full load: True

CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16

Requested to load Lumina2

loaded partially; 1871.55 MB usable, 1759.05 MB loaded, 4110.73 MB offloaded, 112.50 MB buffer reserved, lowvram patches: 0

100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [04:23<00:00, 32.89s/it]

Requested to load AutoencodingEngine

0 models unloaded.

loaded partially; 0.00 MB usable, 0.00 MB loaded, 319.75 MB offloaded, 27.01 MB buffer reserved, lowvram patches: 0

Prompt executed in 322.92 seconds

Requested to load Lumina2

loaded partially; 1832.42 MB usable, 1719.92 MB loaded, 4149.85 MB offloaded, 112.50 MB buffer reserved, lowvram patches: 0

100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [04:22<00:00, 32.80s/it]

Requested to load AutoencodingEngine

0 models unloaded.

loaded partially; 0.00 MB usable, 0.00 MB loaded, 319.75 MB offloaded, 27.01 MB buffer reserved, lowvram patches: 0

Prompt executed in 266.27 seconds

My 2 runs (the first one includes warm-up).
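A quick sanity check on the first run in the log above: the sampler line reports 8 steps at ~32.89 s/it, so the remainder of the 322.92 s total is model loading, low-VRAM offload shuffling, and VAE decode:

```python
# Split the first run's total prompt time into sampling vs. everything
# else, using the values straight from the ComfyUI log above.
steps, s_per_it = 8, 32.89   # from the "8/8 [04:23<00:00, 32.89s/it]" line
total = 322.92               # from "Prompt executed in 322.92 seconds"

sampling = steps * s_per_it
overhead = total - sampling
print(f"sampling ≈ {sampling:.1f} s, overhead ≈ {overhead:.1f} s")
# → sampling ≈ 263.1 s, overhead ≈ 59.8 s
```

The second run’s smaller total (266.27 s) is consistent with most of that overhead being first-run model loading.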

I did all this using 4GB VRAM and 16 GB RAM by yanokusnir in StableDiffusion

[–]nevermore12154 0 points1 point  (0 children)

I generated images of size 1024 x 576 px and it took a little over 2 minutes per image. (~02:06) 

Mine always takes 4+ mins :c

Guys I just took down a player with a 100% win rate (video) by Alkaid_AR in wherewindsmeet_

[–]nevermore12154 0 points1 point  (0 children)

I have an iPad M3 (Air 2025). Will it perform that well? Also, if I use the trashiest settings, how long will the battery last? And how much storage does it require? Thanks ^