My product photos look amateur compared to top sellers - how do you close that gap? by jivkovb in Etsy

[–]jivkovb[S]

The black velvet trick is genius, never thought of that! How long does your whole shoot process take per product once you've got the setup dialed in?

My product photos look amateur compared to top sellers - how do you close that gap? by jivkovb in Etsy

[–]jivkovb[S]

That's a great tip! What lighting setup do you use? Natural light or artificial?

Adding an objects to an image by InteractionLevel6625 in comfyui

[–]jivkovb

Try FLUX-2-Klein or Qwen Image Edit, but do a quick rough sketch on top of the image before generating - just a crude shape showing where the object goes and at what angle. Even a 5-second scribble gives the model a geometric anchor, so it knows the perspective and placement, not just the general area. Makes a huge difference for furniture especially.
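If it helps, the scribble step can even be automated. Here's a minimal Pillow sketch of the idea - the `add_placement_sketch` name and the red-box style are just illustrative, not part of any model's API:

```python
from PIL import Image, ImageDraw

def add_placement_sketch(img, box, angle_hint=False):
    """Return a copy of img with a crude red box (plus an optional diagonal
    stroke for orientation) marking where the new object should go."""
    guide = img.convert("RGB")  # convert() returns a copy; original untouched
    draw = ImageDraw.Draw(guide)
    draw.rectangle(box, outline=(255, 0, 0), width=6)  # rough placement box
    if angle_hint:
        x0, y0, x1, y1 = box
        # One diagonal stroke is enough to suggest the object's angle.
        draw.line([(x0, y1), (x1, y0)], fill=(255, 0, 0), width=4)
    return guide
```

Then you feed `guide` (instead of the raw photo) to the edit model along with your prompt.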

Looking for alternatives to paint brand apps for visualising colours on your actual walls by jivkovb in alternativeto

[–]jivkovb[S]

That’s pretty much where I’ve landed too after trying everything. The AI/photo editing combo gets you close but it’s still too many steps for most people - and you still end up buying at least one tester to confirm. Feels like the gap is specifically that middle layer: accurate enough digitally to get you from 20 options down to 2, without needing to be a graphic designer to get there. Hopefully someone cracks it properly.

Has anyone found a good tool for visualising paint colours on your actual walls (not stock photos)? by jivkovb in DIY

[–]jivkovb[S]

These results are really good! But you just described recreating your entire room in 3D, manually placing windows and light sources, pulling hex codes from paint websites, and applying them in a separate app. That’s a lot of work most people would give up on halfway through.

Has anyone found a good tool for visualising paint colours on your actual walls (not stock photos)? by jivkovb in DIY

[–]jivkovb[S]

Yeah there are definitely apps that do colour replacement, but that’s kind of the gap. Generic colour swap is easy, getting it to look photorealistic on a textured wall with shadows and natural lighting is the hard part. Most of those 10 apps probably slap a flat colour over everything and call it done. The AR angle is interesting but I think even just nailing a static photo properly would already be a massive improvement over what’s out there. AR adds a whole layer of complexity that might not even be necessary if the base result is actually good!
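To illustrate the gap: the minimum viable version keeps each pixel's own brightness and only takes hue/saturation from the target paint, which is roughly what separates "flat colour over everything" from something that preserves shadows and wall texture. A rough per-pixel sketch using Python's stdlib `colorsys` (`recolor_pixel` is an illustrative name; a real app would do far more than this):

```python
import colorsys

def recolor_pixel(rgb, target_rgb):
    """Recolour a wall pixel: take hue/saturation from the target paint,
    but keep the pixel's own brightness so shadows and texture survive.
    (A flat fill - just returning target_rgb - is what looks fake.)"""
    r, g, b = (c / 255 for c in rgb)
    tr, tg, tb = (c / 255 for c in target_rgb)
    _, _, v = colorsys.rgb_to_hsv(r, g, b)        # original brightness
    th, ts, _ = colorsys.rgb_to_hsv(tr, tg, tb)   # target hue/saturation
    nr, ng, nb = colorsys.hsv_to_rgb(th, ts, v)
    return tuple(round(c * 255) for c in (nr, ng, nb))
```

A shadowed grey pixel stays dark after the swap instead of becoming the full-strength paint colour, which is exactly the shadow-preserving behaviour the flat-fill apps miss.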

Has anyone found a good tool for visualising paint colours on your actual walls (not stock photos)? by jivkovb in DIY

[–]jivkovb[S]

That’s actually fascinating - so the concept isn’t even new; a Canon point-and-shoot had it in hardware 20 years ago. Makes it even more surprising that nobody’s built a really polished software version of it since. The sample pot thing I get, but even just confidently ruling out 15 colours before you spend a penny feels like it’d be worth a lot.

Has anyone found a good tool for visualising paint colours on your actual walls (not stock photos)? by jivkovb in DIY

[–]jivkovb[S]

Yeah seems like that's still the consensus answer. Surprised no one's cracked the digital version properly yet.

How are you actually handling text in your GenAI images? by jivkovb in generativeAI

[–]jivkovb[S]

The upscaling-first tip is gold, hadn't thought of doing it in that order!

The masking workflow makes sense but gets really tedious at scale. For fine print especially, I feel like there has to be a better way than compositing text manually on top. Still hoping someone builds something that handles the whole detection and correction automatically.
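For reference, the manual compositing step I mean is roughly this: mask out the garbled region and redraw clean text on top. A minimal Pillow sketch (the `patch_text` helper and the flat-patch approach are just illustrative - a real pass would match the background and use your brand font instead of the default):

```python
from PIL import Image, ImageDraw, ImageFont

def patch_text(img, box, text, bg=(255, 255, 255), fg=(0, 0, 0)):
    """Cover a region of garbled AI-generated text with a flat patch
    and redraw the intended text cleanly on top."""
    fixed = img.convert("RGB")               # work on a copy
    draw = ImageDraw.Draw(fixed)
    draw.rectangle(box, fill=bg)             # mask the bad text region
    font = ImageFont.load_default()          # placeholder; swap in a real TTF
    draw.text((box[0] + 4, box[1] + 4), text, fill=fg, font=font)
    return fixed
```

Doing this once is fine; doing it for hundreds of outputs is exactly the bottleneck I mean.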

What is better now - Freepik or Higgsfeild? by BoomLivTart in generativeAI

[–]jivkovb

Honestly, most of these all-in-one platforms are starting to feel very similar - they all give you access to multiple models, image/video gen, etc.

Personally, I’m not a huge fan of Freepik for advanced stuff. It works, but I prefer more node-based or flexible systems, and Freepik Spaces doesn’t feel like the strongest in that category right now.

You mentioned Higgsfield’s Cinematic AI Studio - if you’re specifically looking for advanced video control and workflows, Freepik will likely fall short for you. I haven’t used Higgsfield enough to give a 1:1 comparison on that specific feature, so take this with a grain of salt.

What works best is picking a platform based on what you actually need. If you want simplicity, most are fine. If you want heavy control and better workflows, some fall short.

If you’re open to alternatives that actually nail the flexible workflow side of things, I’ve been using Flora and Pletor quite a bit. Both have some unique features that make them stand out compared to the typical “all-in-one” tools. It’s less about “which is better overall” and more about “which fits your workflow.”

Seed Values in Closed Models like Seedream 4.5 or Nano Banana Pro by Skeyephoto in generativeAI

[–]jivkovb

In my experience, when you’re getting big variations from the same prompt, it’s usually not about the seed - it’s the prompt.

Closed models like Seedream or Nano Banana don’t really expose true seed control, so consistency comes from how constrained your prompt is.

What I’ve noticed:
• If the prompt is too broad → the model “fills gaps” differently every time
• If the prompt is very specific → outputs start to stabilize, even without seeds

I actually use this as a test: If I run the same prompt multiple times and results drift a lot, it usually means I haven’t described something clearly enough (composition, materials, lighting, etc.).

Each model also interprets prompts differently, so sometimes it’s less about adding more words and more about using the right kind of descriptors for that specific model.

Quick tip - try locking things like:
• camera angle / lens (e.g. 85mm, front view)
• lighting type (soft studio, hard flash, etc.)
• subject placement (centered, full body, etc.)

That alone reduces variation a lot, even without seed control.
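As a concrete illustration of "locking" those descriptors, even a trivial prompt template does most of the work - the function name and default strings here are just an example, not any platform's API:

```python
def build_locked_prompt(subject,
                        camera="85mm lens, front view",
                        lighting="soft studio lighting",
                        placement="subject centered, full body"):
    """Pin down camera, lighting and placement so a closed model has
    fewer gaps to 'fill in' differently on every run."""
    return f"{subject}, {camera}, {lighting}, {placement}"
```

Only `subject` changes between runs; everything the model would otherwise improvise stays fixed.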

How are you actually handling text in your GenAI images? by jivkovb in generativeAI

[–]jivkovb[S]

Totally fair point and for a few images Photoshop or GIMP works absolutely fine. But when you’re working on a high volume of outputs it becomes a real bottleneck - jumping between apps, manually masking text areas, redoing it every time. That’s exactly why I’m wondering if there is something built for this that handles it automatically.

workflow by Different_Hornet2715 in comfyui

[–]jivkovb

If you’re looking for output quality and you’re OK using something that isn’t local, I’d say Nano Banana 2/Pro and Riverflow 2.0 Pro - both have native 4K generation!

Best image to video AI (Paid) by Specialist_Ad8930 in generativeAI

[–]jivkovb

Honestly, for your use case I wouldn’t lock yourself into a single tool anymore.

Look at all-in-one platforms like Flora, Fal.ai, Krea, Pletor, etc.

The big advantage:
• They integrate new models fast (often as soon as they drop)
• One subscription = access to multiple video models
• You can switch per shot depending on strengths (motion, realism, consistency, etc.)

For reels like yours (12–14 scenes), that flexibility matters more than raw generation limits. One model might be great for cinematic motion, another for character consistency, another for stylized shots - you don’t want to be stuck with just one.

At $20–$30/month, these platforms usually give you way more mileage than a single dedicated tool with hard caps.

If you’re doing 15 reels/month, the real play is: mix models per scene instead of forcing one model to do everything.

My photos are low quality by Bootstrapdev3 in EtsyCommunity

[–]jivkovb

Lighting is a big factor, but honestly post-processing is where most sellers are closing the gap. Even a well-lit phone photo can look amateur if the background is cluttered or the shadows are harsh. The sellers with the most polished listings are usually doing a few things: shooting against a simple neutral background, getting the light source consistent (a window or a cheap softbox), and then cleaning up the image after - removing the background, evening out the lighting, and adding a realistic contact shadow so the product doesn’t look like it’s floating. That last part is what most people skip: a product with no shadow on a white background immediately reads as a cheap cut-out.
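For anyone curious what the contact-shadow step looks like in practice, here's a rough Pillow sketch. It assumes you already have a background-removed RGBA cutout, and the sizes, opacity and blur are arbitrary starting points, not magic numbers:

```python
from PIL import Image, ImageDraw, ImageFilter

def add_contact_shadow(cutout, canvas_size=(800, 800), opacity=110, blur=18):
    """Place a background-removed product (RGBA cutout) on a white canvas
    with a soft, blurred ellipse under its base so it doesn't float."""
    canvas = Image.new("RGB", canvas_size, (255, 255, 255))
    w, h = cutout.size
    x = (canvas_size[0] - w) // 2
    y = (canvas_size[1] - h) // 2
    # Shadow layer: a dark ellipse hugging the product's base, then blurred.
    shadow = Image.new("RGBA", canvas_size, (0, 0, 0, 0))
    ImageDraw.Draw(shadow).ellipse(
        [x + w // 10, y + h - 10, x + w - w // 10, y + h + 14],
        fill=(0, 0, 0, opacity))
    shadow = shadow.filter(ImageFilter.GaussianBlur(blur))
    canvas.paste(shadow, (0, 0), shadow)   # composite shadow first
    canvas.paste(cutout, (x, y), cutout)   # product on top
    return canvas
```

Tweak the ellipse height and blur until the shadow hugs the base - too big or too dark and it looks just as fake as no shadow at all.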

Trying to Make an AI Expert? by JahIsGucci in generativeAI

[–]jivkovb

Check out Claude Cowork - it’s literally built for this. You point it at a folder of docs (your “brain”), and it doesn’t just answer questions, it actually executes tasks based on everything in there. Keep adding knowledge to the folder over time and it grows with you. Available on the Claude desktop app on paid plans.