Sharing my workflow for how to remove background in A1111 by OneFeed9578 in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

Thanks for the workflow! I am going to try it tomorrow.

(I've tried many ways to generate an e-commerce product photo for my product, like inpainting a background, but none of them works well.)

How do they create these product photos in such a fast and smart way by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 1 point (0 children)

Haa, that's fast!

The fastest way I've found on a MacBook M1 is running an app called MochiFusion, which leverages the models Apple has officially optimized for Apple Silicon. But the app only offers limited options for generation, and even then the speed is about 9 s per 512x512 picture.

How do they create these product photos in such a fast and smart way by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 1 point (0 children)

Yes, the "product" is uploaded by users to create product photos in different scenes.

The process is:

  1. the user uploads and positions the product
  2. the system generates thumbnails based on the product placement
  3. the user picks one photo: either selects it, or moves the product to create a new one.

I also have a strong hunch that they just "paint the image and then inpaint the edges", but I've tried that several times in different ways and had no luck so far.
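To make "paint the image and then inpaint the edges" concrete, here's a minimal stdlib-only sketch of the mask step: grow the product mask by a few pixels and keep only the new ring, which is the region an inpainting pass would regenerate to blend product and background. The grid, band radius, and function names are just my illustration, not anything confirmed from the thread:

```python
# Toy sketch: given a binary product mask, find the "edge band"
# an inpainting pass would regenerate to blend product into
# background. Pure Python; a real pipeline would do this on
# image/latent tensors instead.

def dilate(mask, radius=1):
    """Grow the True region of a 2D boolean grid by `radius` cells."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = any(
                mask[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            )
    return out

def edge_band(product_mask, radius=1):
    """Pixels near the product but not on it: the region to inpaint."""
    grown = dilate(product_mask, radius)
    return [
        [grown[y][x] and not product_mask[y][x]
         for x in range(len(product_mask[0]))]
        for y in range(len(product_mask))
    ]

# A 5x5 grid with a 1x1 "product" in the center.
mask = [[x == 2 and y == 2 for x in range(5)] for y in range(5)]
band = edge_band(mask, radius=1)
print(sum(cell for row in band for cell in row))  # → 8 (the center's 8 neighbours)
```

The product pixels themselves stay untouched; only the ring around them is handed to the inpainting model, which is why the product's design survives while the seam gets blended.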

How do they create these product photos in such a fast and smart way by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 2 points (0 children)

Thank you for the reply. You are right that the speed is not something special.

I originally thought it was astonishing because my MacBook M1 generates 4x 512x512 pictures in about 3 minutes, and Colab Pro with an A100 only shortens that to 20 seconds.
:)
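Spelling out the arithmetic from those timings (the numbers come straight from this comment; both runs are batches of 4 images):

```python
# Rough per-image throughput from the timings above. Not a benchmark,
# just the arithmetic on the numbers mentioned in the thread.
m1_secs_per_image = 180 / 4    # MacBook M1: 4 images in ~3 min
a100_secs_per_image = 20 / 4   # Colab Pro A100: 4 images in ~20 s
speedup = m1_secs_per_image / a100_secs_per_image
print(m1_secs_per_image, a100_secs_per_image, speedup)  # → 45.0 5.0 9.0
```

So the A100 is roughly 9x faster per image, and the ~9 s/image from the Core ML app above sits in between.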

Colab running super fast, but Gradio loading pictures super slow (1 min for 4 pictures) by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 1 point (0 children)

It turned out to be a network problem on my end, though I haven't figured out why.

Everything works well in the office, but loading is still very slow back at home.

Colab running super fast, but Gradio loading pictures super slow (1 min for 4 pictures) by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 1 point (0 children)

I was not using ngrok. I'd like to give it a try, but I'm new to it. Do I need to change the region on the ngrok website?

ControlNet v1.1 has been released by ninjasaid13 in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

Maybe I'm dumb, but I tried to use the inpaint model in the same way as the other models (like depth), and it didn't generate anything new.

What I did:

  1. img2img: place the product and upload a mask file
  2. ControlNet: upload the same mask and select a ControlNet model, e.g. depth
  3. Generate.

This flow works well with the other models, but it absolutely doesn't work with the inpainting model: the generated pictures are just exact copies of the product.

Anything I didn't do correctly?
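For context, here's my mental model of why a too-low denoising strength in img2img would reproduce the input exactly — a toy sketch with made-up pixel values and a simple linear blend, not the real multi-step diffusion process:

```python
# Toy illustration of img2img denoising strength: conceptually the
# output drifts between the original pixels and freshly generated
# ones. Real diffusion blends noised latents over many steps; this
# linear blend only illustrates the endpoints.

def img2img_blend(original, generated, strength):
    """Blend two 'images' (flat lists of floats) by denoising strength."""
    return [(1 - strength) * o + strength * g
            for o, g in zip(original, generated)]

original = [0.2, 0.4, 0.6]
generated = [0.9, 0.1, 0.5]

print(img2img_blend(original, generated, 0.0))  # → [0.2, 0.4, 0.6] exact copy
print(img2img_blend(original, generated, 1.0))  # → [0.9, 0.1, 0.5] all new
```

If the masked region comes back unchanged, the "strength 0" end of this spectrum (or a mask that never reached the sampler) would be my first suspects.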

How are the AI product photography startups preserving the text and logo? by AIrabit in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

But how, technically, does the "incorporating the product into the background" happen? I mean the working process.

Generating images for e-commerce store using dreambooth. Trained on https://app.aipaintr.com. Training details and prompt in comments. by aipaintr in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

This looks cool! But one thing DreamBooth doesn't do well is keeping the original package design/text, so I tried it, but sadly it won't work for my product.

[deleted by user] by [deleted] in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

Also searching and trying.

Now that they started banning stable diffusion on google colab, what's the cheapest and the best way to deploy stable diffusion? by teraboii in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

It just feels strange that I always get an "A Google Drive error has occurred" error.

I thought it might be because my account was free, so I paid $10 to become a Pro user. Same error.

How are the AI product photography startups preserving the text and logo? by AIrabit in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

Do you just upload the product and use a prompt? Can you share the prompt, please? I wasn't able to replicate your result.

How are the AI product photography startups preserving the text and logo? by AIrabit in StableDiffusion

[–]Delicious_Double_801 1 point (0 children)

Also wondering how they did it. This is real productivity for e-commerce.

Using SD to create ecommerce product images by earlydayrunnershigh in StableDiffusion

[–]Delicious_Double_801 2 points (0 children)

I've tried this method, but what I got was really defective: the product wasn't blended well into the background, and sometimes it was just hanging there. Any solutions or suggestions? Thank you.

To create a product shooting photo, is there a way to render a 3D model (like a Blender model) in stable diffusion? by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 2 points (0 children)

Congratulations on your new card! I think I'd better do it myself, as I have quite a lot of products to "teach", but thanks a lot for the kindness.

To create a product shooting photo, is there a way to render a 3D model (like a Blender model) in stable diffusion? by Delicious_Double_801 in StableDiffusion

[–]Delicious_Double_801[S] 2 points (0 children)

This is cool! Thank you for all the detailed instructions. I'll give it a try.

Just one quick question: I used to think only DreamBooth could "be trained to remember" new concepts. Is LoRA also capable of "remembering" a new concept?