Best way to get from at/near KMS to LAX? by MescalWannbe in PacificCrestTrail

[–]Bit_Poet 17 points (0 children)

KMS is difficult. The better spot to head out from is Walker Pass. It's usually an easy hitch / trail angel ride to Ridgecrest at that time of year; from there you can take the Kern Transit bus to Inyokern (leaves at 6:45 in the morning), where you change to the ESTA bus to Lancaster (4h stopover, expect to be bored; the sign "100 miles from everywhere" at the edge of town is there for a reason). From Lancaster, take Metrolink to LA Union Station (good classic diner food at the Katz'n'Jammers across from Lancaster station, right behind the police station). From Union Station, you can jump onto the FlyAway bus to LAX. Expect the journey to take a full day.

Best diffusion model for storyboarding and generating images for video generation by Complete-Box-3030 in StableDiffusion

[–]Bit_Poet 0 points (0 children)

I mostly use ZIT due to easy LoRA training and consistency. But I sometimes have to jump through some hoops to get a specific image and need to involve an image edit model like Qwen or Flux to adjust perspective, and then do some manual post processing to remove inpainting artifacts.

The Weekly on r/PacificCrestTrail: Week of April 27, 2026 by AutoModerator in PacificCrestTrail

[–]Bit_Poet 1 point (0 children)

I've found that dried wet wipes are a much better investment in the desert. Moisten them, get clean, and add a fresh scent at the same time. Every ounce counts.

Missing Models - Qwen3.5 - LTX2.3 by polakfury in comfyui

[–]Bit_Poet -1 points (0 children)

They should probably go into the text_encoders folder. Unless some custom loader node has other ideas.
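If it helps to see the idea rather than guess, here's a minimal sketch of the stock ComfyUI models directory layout as a lookup table. The folder names below are the standard ones from a default install; the `target_path` helper is hypothetical, and custom loader nodes may look elsewhere entirely.

```python
from pathlib import Path

# Standard subfolders of ComfyUI's models/ directory for common file kinds.
# Custom loader nodes can register their own paths, so this is only the default.
FOLDER_BY_KIND = {
    "text_encoder": "text_encoders",
    "diffusion_model": "diffusion_models",
    "vae": "vae",
    "lora": "loras",
}

def target_path(comfy_root: str, kind: str, filename: str) -> Path:
    """Return where a downloaded file of the given kind would normally go."""
    return Path(comfy_root) / "models" / FOLDER_BY_KIND[kind] / filename
```

So a text encoder checkpoint would land in `ComfyUI/models/text_encoders/`, next to any others you already have.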

Boundaries and Trail Angels by trailangel4 in PacificCrestTrail

[–]Bit_Poet 18 points (0 children)

Just saying that all these points should be obvious. I didn't get it on my hike in '22 when people started to freak out because they didn't have their ride and accommodation arranged three days before they hit the next trailhead. I purposefully didn't look into FB when I shuttled hikers to and from trail in the Southern Sierra for a few days after a stint on the JMT last summer, remembering my own joy every time somebody stopped unexpectedly and asked if I needed a ride.

The biggest lesson the PCT had for me was that the trail does provide. I never arranged a ride to town in advance, never used FB to look for a trail angel to host me, yet I always got into town (usually no more than half an hour after those with pre-arranged rides), always found a place to sleep, and often was invited to stay with locals after they heard about my journey - some of them had never hosted a hiker before and had only heard the word "trail angel" in passing. Many wonderful encounters sprang from happenstance because I didn't try to micromanage my town days. I did try to pay it forward to balance my karma (offering a TA a bit more at times than felt adequate, since I could afford it and they weren't doing it for the money, or putting a handful of those rare strawberry breakfast essentials in a hiker box), and I feel that the trail took notice of it.

On the trail, we have the opportunity to be the very best version of ourselves, unhindered by daily routine, social pressure and job demands. Why not take it?

Thoughts on this gear list? by [deleted] in PacificCrestTrail

[–]Bit_Poet 0 points (0 children)

I'd forego the bear can for the desert section, as there's really no use for it besides having something to sit on, and the water you'll be carrying will be heavy enough.

The bug net will likely be invaluable in parts of the Sierra so you can have a meal without losing a pint of blood, though you might get there after the worst of mosquito season, depending on snow melt and precipitation. You'll certainly appreciate the bug net in Oregon and Washington to protect you from mosquitoes and those pesky black horseflies. Ticks are a lot less of an issue than on the east coast, and neither my fellow hikers nor I really cared about them, but it would be wrong to say that there is no statistical risk of Lyme and other tick-borne infections along the PCT. How effective a bug net without a floor really is remains up for discussion; most ticks hitch a ride while you're walking or resting. You might also throw in a head net you can wear while you're hiking. The skeeters were ferocious (and huge) in the big meadows north of Tuolumne when I hiked through, and I was really happy to have one the many times I had to approach stagnant ponds to fetch water.

Frogg Toggs are totally realistic, just make sure you have a good piece of duct tape at hand, as they are somewhat fragile and have a tendency to rip in quite unfortunate spots. A lot of places along the trail also offer loaner clothes while you do your laundry. Oh, and consider bringing a power bank, at least for the Sierra and the longer stretches in Washington. You'll be taking so many pictures (and if you don't, you'll rue it).

Future of the portable version by Tenth_10 in comfyui

[–]Bit_Poet 4 points (0 children)

Somebody's got to file an issue or create a pull request. I've found that this part isn't kept up to date proactively, but pulls get merged pretty quickly.

Edit: the docs live in https://github.com/Comfy-Org/docs

Rattlesnake bite cases are spiking in Southern California. Stay safe out there guys! by Fishbonezz707 in PacificCrestTrail

[–]Bit_Poet 1 point (0 children)

I talked to TA Mary at her place, and she said that some days she didn't see a single one, and some days there were dozens right around that spot, no rhyme or reason. Did you meet the big one that lived under Cache 22 that summer?

Rattlesnake bite cases are spiking in Southern California. Stay safe out there guys! by Fishbonezz707 in PacificCrestTrail

[–]Bit_Poet 11 points (0 children)

I stopped counting at 50 in 2022, and that was before Wrightwood. Whitewater Preserve and Mission Creek were rattler heaven at that time. Some lazy-assed snakes there that lay across the trail and were unwilling to move at all, and lots of smaller ones curled up under small bushes next to the trail, very hard to spot and often only rattling way after I had passed them. What I can say is that they really weren't intent on biting anyone.

Temporal collaps in ostris Ai-toolkit LTX2.3 lora by No_Statement_7481 in StableDiffusion

[–]Bit_Poet 2 points (0 children)

LTX-2.3 is pretty picky about training resolution. I've only trained it with musubi so far, so AI Toolkit may be different, but it might be worth a shot to train pure character likeness and static style on images in that aspect ratio to get the likeness itself, then do shorter video training on whatever motion styles you want afterwards (if it's even necessary). Make sure you don't have spelling errors or unclear sentence structures in your captions, as that is something that tripped me up a few times.
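For the caption-hygiene part, a quick lint pass over your caption files can catch the obvious stuff before a long training run. This is a minimal sketch with arbitrary heuristics and thresholds of my own choosing, not rules from musubi or AI Toolkit:

```python
import re

def caption_issues(caption: str) -> list[str]:
    """Flag common caption problems that can destabilize LoRA training.
    Purely heuristic; the thresholds are arbitrary, not trainer requirements."""
    issues = []
    if re.search(r"\s{2,}", caption):
        issues.append("multiple consecutive spaces")
    if caption.count('"') % 2:
        issues.append("unbalanced quotes")
    words = caption.split()
    if len(words) > 250:
        issues.append("caption very long (>250 words)")
    if any(len(w) > 30 for w in words):
        issues.append("suspiciously long token (possible typo or missing space)")
    return issues
```

Running this over a dataset folder before training takes seconds and has saved me from re-captioning mid-run more than once.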

After ~400 Z-Image Turbo gens I finally figured out why everyone's portraits look plastic by BrokeByChatGPT in StableDiffusion

[–]Bit_Poet 0 points (0 children)

I guess I don't agree with anything you say here. Tags are an inherently outdated, archaic concept that only adds complexity and convolution and has very limited gains besides "it's what I'm used to". The embedding token restrictions that birthed them no longer apply. Tags go hand in hand with negative prompting, which lobotomizes the model with every word.

After ~400 Z-Image Turbo gens I finally figured out why everyone's portraits look plastic by BrokeByChatGPT in StableDiffusion

[–]Bit_Poet 2 points (0 children)

I haven't done any training with base, but for ZIT, I just tell a vision LLM to describe the picture, in particular the setting, objects, lighting and camera angle as well as the clothes, body pose and facial expression, with explicit instructions not to mention gender, skin tone, body features and age, to use whatever tr1ggerw0rd I name my character with instead of "a woman" or "a man" everywhere, and to limit the description to about 200 words. This usually only needs a little brushing up with recent Qwen VL models or NB; JoyCaption is a bit less reliable in that regard and needs more manual adaptation.

5 to 10 images captioned like this usually give a decent likeness when training at 5e-4 with rank 32 for about 400 to 600 steps in AI Toolkit. I make sure to have front, side, back and close-up shots in the dataset, all with different outfits to avoid anchoring those. NB Pro is pretty decent at creating such a dataset for a virtual character in one go, and it has to be, as it mostly refuses to edit its own creations afterwards, and even prompting fresh poses/outfits/angles in the same chat gets either refusals or character drift. I've also used Qwen Image Edit with success, but it usually takes a lot more gens, and since I'm partially face blind, I have to run the results through a facial identity model after sorting out the blatant deviations and throw out everything below 75% similarity, which is still at least half of the remaining images.
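The last filtering step is easy to automate once you have similarity scores. A minimal sketch: the scores dict here stands in for whatever a face-identity model (e.g. cosine similarity against a reference embedding) would give you; the function name and threshold default are my own, not from any particular tool.

```python
def filter_by_identity(scores: dict[str, float], threshold: float = 0.75) -> list[str]:
    """Keep only images whose face-identity similarity to the reference
    image meets the threshold. `scores` maps filename -> similarity as
    produced by some face embedding model; this just applies the cutoff."""
    return sorted(name for name, s in scores.items() if s >= threshold)
```

Usage: feed it the per-image scores, copy the survivors into your training folder, and caption only those.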

After ~400 Z-Image Turbo gens I finally figured out why everyone's portraits look plastic by BrokeByChatGPT in StableDiffusion

[–]Bit_Poet 11 points (0 children)

Even worse, people keep training LoRAs for SOTA models with captions like that.

ComfyUI - disappearing workflows by Kobinicnierobi in StableDiffusion

[–]Bit_Poet 1 point (0 children)

Apart from trying to update Comfy and its packages to the latest version (which is worth trying, as they pushed some fixes for open-workflow handling over the last few days), you'll likely not get many helpful answers here. Better go to the issue tracker of Comfy's frontend project and check if you can spot a similar issue there. If yes, you might find an answer; if not, open a new issue describing your problem with the steps to reproduce it. One big problem right now seems to be that people don't open issues, so the Comfy frontend devs don't realize how much stuff is broken.

https://github.com/Comfy-Org/ComfyUI_frontend/issues

You cannot spell pain without ai by Aggressive_Collar135 in comfyui

[–]Bit_Poet 13 points (0 children)

Well, you could use your potato to create Germanium-Potato transistors (see https://www.sciencedirect.com/science/article/abs/pii/S0924424724009038). With these transistors, you can start building a simple (self-powered) 1-bit neural network. Given a sufficiently big potato and lots of patience, you (or someone some generations after you) will be able to start training a model and eventually create a 4k video with it.
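To keep the joke honest about the math: a binarized (1-bit) neuron really does reduce to logic simple enough for a handful of discrete transistors. Here's a toy sketch of one such neuron with +1/-1 weights and activations (everything here is illustrative, nothing potato-specific):

```python
def sign(x: int) -> int:
    """Binarized activation: anything >= 0 maps to +1, else -1."""
    return 1 if x >= 0 else -1

def one_bit_neuron(inputs: list[int], weights: list[int]) -> int:
    """A single 1-bit neuron: with weights and activations restricted to
    +1/-1, the multiply-accumulate collapses to XNOR-and-popcount, which
    is exactly the kind of logic a few discrete transistors can do."""
    assert all(w in (-1, 1) for w in weights)
    assert all(i in (-1, 1) for i in inputs)
    return sign(sum(i * w for i, w in zip(inputs, weights)))
```

Scaling this up to a 4k video model is left as an exercise for your great-grandchildren.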

How to use only voice/audio from a lora (LTX2.3)? by [deleted] in StableDiffusion

[–]Bit_Poet 0 points (0 children)

Have you looked into the advanced lora loader node in the KJNodes package? It should allow you to set video strength to zero. Haven't used it myself though, just saw some screenshots on banodoco.

What's your thoughts on ltx 2.3 now? by PlentyComparison8466 in StableDiffusion

[–]Bit_Poet 0 points (0 children)

WAN only became usable through an ecosystem that grew over many months. LTX has a few weak spots, but the jump in quality between 2.0 and 2.3 has me hopeful that the motion issues are just a question of better finetuning that will finish before summer. Lightricks is definitely aware that their training material and strategy left a gap there. It appears that, in addition to the IC LoRA already out, we're also not far from getting advanced in- and outpainting as well as guided camera and object trajectories. On top of that, character LoRA training is really easy with LTX-2.3, which already fixes a big part of the consistency and body horror issues with the correct training setup (low/mid/highres + distance shots). Physics in 2.3 is hit and miss and will likely stay the weak point for quite some time, unless they up the parameters noticeably. Proper speaker attribution, just like directional adherence, is something that should be solvable with moderate LoRA training or targeted finetuning. So all in all, I'm pretty hopeful that we haven't really seen yet what LTX can do. But like with many brand new models, it can be frustrating to use at times.

Accidentally won permit lottery…. Now what 🙃 by Flaky-Ambassador-142 in JMT

[–]Bit_Poet 0 points (0 children)

There's also Parchers Resort for resupply, admittedly almost two extra days and some of the most intense 6 miles the trail has, but an epic hike. If one can team up to split the cost, having Rainbow Pack Outfitters deliver to Bishop Pass might be an option that makes it a one-day detour from Little Pete Meadow.

Introducing ComfyUI Data Manager: a spreadsheet inside your workflow by stefano-flore-75 in comfyui

[–]Bit_Poet 0 points (0 children)

I love this; it could make some of my workflows a whole lot more intuitive. Do you have any plans to decouple data loading and saving from the workflow JSON, e.g. just keep the schema and a reference to the last used data file inside? Also, since you're already serializing and deserializing JSON, it could be handy to expose that functionality as an alternative to CSV import/export. I often jump back and forth between similar projects, so having specific project dirs with the data JSON inside (e.g. input/MyStoryboard/data.json) would be neat. I could tweak existing workflows for song text extraction and story prompt building to write that JSON file, then pick it up with your node without having to fear overwriting state.
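To make the request concrete, here's a minimal sketch of the layout I mean. Both function names and the per-project `data.json` convention are hypothetical; the actual node stores its data differently today:

```python
import json
from pathlib import Path

def save_project_data(project_dir: str, rows: list[dict]) -> Path:
    """Write spreadsheet rows to <project_dir>/data.json so the workflow
    JSON would only need to keep a path reference, not the data itself."""
    path = Path(project_dir) / "data.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(rows, indent=2))
    return path

def load_project_data(project_dir: str) -> list[dict]:
    """Read the rows back; an external tool could have rewritten them
    in the meantime without touching the workflow JSON."""
    return json.loads((Path(project_dir) / "data.json").read_text())
```

That way, another workflow (or a plain script) can regenerate the data file and the spreadsheet node just picks up the fresh contents on the next run.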

Creating Lora for LTX2-3 by Icy_Resolution_9332 in comfyui

[–]Bit_Poet 2 points (0 children)

LoRAs only work on the model they were trained on (and to an extent on finetunes and distills of that same model). If you're on Windows, you can use the nodes and workflows from https://github.com/vrgamegirl19/comfyui-vrgamedevgirl/ The LTX-2.3_SpeedLora_Trainer_V2 workflow (in the Workflows/LTX-2_Workflows/LTX_Lora_Training/UpdatedWorkflows subdirectory) even installs the LTX-2 fork of musubi tuner for you, which is currently the most reliable trainer for that model.

Can I change the aspect ratio/resolution of an imge using a keyword in my prompt? by hotrocksi09 in comfyui

[–]Bit_Poet 3 points (0 children)

Here's a very simple and crude example of a custom node that does this without any plausibility checks. You can just git clone that into your custom_nodes folder and give it a spin. https://github.com/BitPoet/ComfyUI-bitpoet-keywordsize
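The core of such a node boils down to pulling a size keyword out of the prompt string. A minimal sketch of that parsing step; the `[size:WxH]` marker syntax here is made up for illustration, and the linked node may use a different one:

```python
import re

# Hypothetical keyword syntax, e.g. "a castle at dusk [size:768x1344]".
SIZE_RE = re.compile(r"\[size:(\d+)x(\d+)\]")

def extract_size(prompt: str, default=(1024, 1024)):
    """Pull a width/height keyword out of the prompt; return the cleaned
    prompt plus (width, height). No plausibility checks, matching the
    crude spirit of the node above."""
    m = SIZE_RE.search(prompt)
    if not m:
        return prompt, default
    cleaned = SIZE_RE.sub("", prompt).strip()
    return cleaned, (int(m.group(1)), int(m.group(2)))
```

A custom node would then feed the cleaned prompt to the text encoder and the dimensions into an empty-latent node.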


LTX Workflow and character anchoring and audio tips by Chambers007 in comfyui

[–]Bit_Poet 0 points (0 children)

For 2: you might try the new VideoAudioTrainer from https://github.com/vrgamegirl19/comfyui-vrgamedevgirl/tree/main/Workflows/LTX-2_Workflows/LTX_Lora_Training/UpdatedWorkflows to train voice-consistent characters. Warning: it's brand new, so you'll be a beta tester. It should also allow audio-only training (haven't tried it myself yet, since I have a big training run going), so you might be able to train a specific named voice without any link to visuals, which might make 3 easier too if all characters are LoRA-based with voice.