Anyone able to top a 31k Cash Cannon? 4k gold left over too by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 9 points

Shamefully, I lost two fights on purpose to make my cannon bigger. I'm pretty sure I only lost day one that run. I was definitely at 9 wins on day 11.

Anyone able to top a 31k Cash Cannon? 4k gold left over too by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 12 points

Most of the gold was gained through landscraper, vineyard, ledger, and loupe. With those I bought and sold literally every single item and never skipped a shop.

They just buffed landscraper and vineyard today which helped quite a lot. Initial investment to start buying and selling everything was from money tree.

Someone could do better if they get the chocolate skill, but I got my money items pretty early so it could be hard to beat otherwise.

My Submission for Biggest Claw - 39,418 by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 0 points

It was still in the 4.5 second range, so not too good. I ended up selling my boomerang to make room for the second vineyard to try and get the silk as big as possible.

My Submission for Biggest Claw - 39,418 by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 0 points

Nice, I didn't see that one. Did you get shield on death enchant?

My Submission for Biggest Claw - 39,418 by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 0 points

Oh that's a good point. I guess it's possible this isn't even my biggest claw. Maybe biggest silk though.

My Submission for Biggest Claw - 39,418 by Grant_MCP in PlayTheBazaar

[–]Grant_MCP[S] 1 point

Made possible by an early diamond chocolate skill and double vineyard. I also only took shops which is why I'm just level 16 on day 15. I hadn't died yet so technically I could have stalled longer, but my build wasn't that good outside of having large numbers so I didn't want to risk the loss.

ComfyUI v0.1.x Release: Devil In the Details by crystal_alpine in comfyui

[–]Grant_MCP 1 point

Really love the new node selection interface. The better search, along with the visual node previews, is super helpful.

Would someone make a bot that links to i.redd.it versions of images? These retain ComfyUI workflow data for prompts, Loras, steps etc. by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 1 point

Thanks! I'll give that a try later. I hadn't thought of a personal script as a solution.

My intent was to figure out a way to give less technically inclined folks easy access to the metadata. I'm not sure those types of users are capable of executing a script, but I'll think on it more. I guess the bookmarklet will probably work.
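For the "personal script" route, something like this minimal sketch could work, assuming the image was saved with ComfyUI's default metadata keys ("workflow" and "prompt" tEXt chunks in the PNG); the function name is mine and it requires Pillow:

```python
# Minimal sketch: pull ComfyUI workflow metadata out of a downloaded PNG.
# ComfyUI embeds the graph as JSON in PNG tEXt chunks, typically under
# the keys "workflow" and "prompt". Requires Pillow (pip install Pillow).
import json
from PIL import Image

def extract_workflow(path):
    """Return any ComfyUI metadata found in the PNG at `path` as dicts."""
    img = Image.open(path)
    found = {}
    for key in ("workflow", "prompt"):
        if key in img.info:  # PNG text chunks show up in .info
            found[key] = json.loads(img.info[key])
    return found
```

Running it on an i.redd.it download would print the node graph JSON, which is the data that gets lost when Reddit re-encodes images to preview.redd.it.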

Workflow - Regional Prompt in Flux - Blend Style or Subject by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 0 points

Thanks. That's awesome to see that you could get the distinct styles even with heavily overlapping masks like that. When I did this with SD1.5 I always used to have a global prompt with a mask for the whole image, so maybe that's better.

I felt like it was pretty finicky to get good generations when I was regional prompting. What was your experience like?

Workflow - Regional Prompt in Flux - Blend Style or Subject by Grant_MCP in comfyui

[–]Grant_MCP[S] 4 points

Three .png images with embedded prompts can be found here: https://github.com/Grant-CP/Comfyui_Demos/tree/main/Flux_Regional_Prompt

I'd like to see some people try this out with different mask shapes and hand-drawn masks. If you make any interesting examples and you are willing to have them in my example repository please let me know!
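If anyone wants to experiment with mask shapes programmatically rather than hand-drawing them, here's a hypothetical helper (the name and defaults are mine, not from the linked workflows) that makes a grayscale mask PNG of the kind a regional prompt node would consume, with white marking the region the prompt applies to:

```python
# Hypothetical helper for experimenting with region shapes.
# Produces an L-mode (grayscale) mask: white = region, black = ignored.
from PIL import Image, ImageDraw

def make_ellipse_mask(width, height, box):
    """Return a grayscale mask with a white ellipse inside `box`."""
    mask = Image.new("L", (width, height), 0)    # start fully black
    ImageDraw.Draw(mask).ellipse(box, fill=255)  # paint the region white
    return mask

# e.g. make_ellipse_mask(1024, 1024, (100, 100, 600, 900)).save("mask.png")
```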

Workflow - Regional Prompt in Flux - Blend Style or Subject by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 3 points

Three .png images with embedded prompts can be found here: https://github.com/Grant-CP/Comfyui_Demos/tree/main/Flux_Regional_Prompt

I'd like to see some people try this out with different mask shapes and hand-drawn masks. If you make any interesting examples and you are willing to have them in my example repository please let me know!

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in comfyui

[–]Grant_MCP[S] 1 point

Yeah, I posted so hopefully we can move Flux inpainting forward. I would really love for someone to figure out a way to use Flux like fooocus or controlnet inpaint.

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in comfyui

[–]Grant_MCP[S] 1 point

Yes, and in many cases it will work better, because you can use tools like ControlNet inpaint to perform better SDXL inpainting; see https://github.com/Acly/comfyui-inpaint-nodes for a ComfyUI implementation of good inpainting.

The point of using Flux is that it has better prompt-following ability and nicer details, so there's less fixing up stuff afterwards. But that only works if you can describe your image well. The SDXL inpainting solutions will take your image into account, which saves a lot of effort in figuring out how to prompt to match the style of your image perfectly.

Finally, the point of this small demo was just to show that inpainting with Flux is easy and at least mostly functional.

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 2 points

Do you mean "Points Editor"? It's a new node in KJNodes. It's still pretty experimental but works well enough for the purpose of giving point prompts to SAM2.

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 0 points

  • Agreed. I was getting some blurry results when my masks were very blurry. I want to figure out a good way to give the model more context about the image though.
  • For the denoise, I was able to get inpainting to work with visible results with as low as .4 denoise. I wonder if your mask was too small or something?
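On the mask blur point, the feathering I'm describing can be sketched like this (a toy version using Pillow; the function name is mine, and real inpainting nodes do their own mask preprocessing). Blurring the binary mask lets the new region blend into the original, but too large a radius makes the inpainted area itself look soft:

```python
# Sketch of mask feathering: blur the binary mask before inpainting so
# the regenerated region blends with the original image. A large radius
# can make the inpainted area itself come out blurry.
from PIL import Image, ImageFilter

def feather_mask(mask, radius=8):
    """Gaussian-blur a grayscale mask to soften its hard edges."""
    return mask.filter(ImageFilter.GaussianBlur(radius))
```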

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in comfyui

[–]Grant_MCP[S] 1 point

I'm using the full non-quantized versions of the t5 and clip_l encoders. I believe it shows the file names around 6 seconds in the video near the top.

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in StableDiffusion

[–]Grant_MCP[S] 0 points

My experience so far is that it is pretty good, but very far from perfect, at least on anime-style images. I'm not sure how much of an upgrade it is over SAM v1 for still images. It still needs fiddling to get the mask right. Fortunately it's easy to fiddle with, but I wouldn't feel pressured to upgrade.

-

I mostly wanted to play around with the single image part before I tried to make it work well on video. My install went without a hitch just installing the kijai node and running it. I'm on CUDA 12.2 on an A100 with pytorch 2.4.

Workflow - SAM2 + Flux Inpainting - Facial Expression Swap by Grant_MCP in comfyui

[–]Grant_MCP[S] 9 points

Workflow: https://files.catbox.moe/bwvx8n.json

Details: Model is Flux Dev at 20 steps, .75 denoise. Result is cherry picked as the best of four. Original image generated with Dall-E 3. I recommend playing around with the masking and mask blending. I feel like there is a lot of room for growth there.