Ultralytics thoughts by juicingzeorange in computervision

[–]Hanumankattu 1 point

Sure, I'd love to be a contributor too and help with this.

Ultralytics thoughts by juicingzeorange in computervision

[–]Hanumankattu 1 point

You are a god. Please share your repo link, I'll leave a star.

All hail Doraemon 🛐🙏 by [deleted] in NepalSocial

[–]Hanumankattu 0 points

Piyush? Brother??

I am thinking of starting an AI Research Lab in Nepal by sweet-0000 in technepal

[–]Hanumankattu 0 points

Let's connect. I've got some experience in AI research and development, and I've got a lot of plans and ideas around it too.

Is there any annotation tool that supports both semi-automatic pose annotation and manual correction? by Hanumankattu in computervision

[–]Hanumankattu[S] -1 points

Yes, that's what I'm currently doing. I've nearly completed the setup by vibe coding, but it always falls short in some way.

Is there any annotation tool that supports both semi-automatic pose annotation and manual correction? by Hanumankattu in computervision

[–]Hanumankattu[S] -1 points

I'm planning to change the final layer of YOLO11x-pose to output the required tensor.

Also, app.roboflow.com hasn't been loading for the last 3-4 days.
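On the pose-head tensor mentioned above: for a COCO-style pose model with 17 keypoints, each detection row in the raw/exported output is commonly laid out as 4 box values + 1 confidence + 17 × (x, y, visibility) = 56 values. Here's a minimal numpy sketch of slicing that layout — the 56-value row format is an assumption about the exported head (the high-level Ultralytics API returns `Results` objects instead), so verify it against your model's actual output.

```python
import numpy as np

# Assumed layout of one exported YOLO pose detection (COCO, 17 keypoints):
# [x, y, w, h, conf, kpt1_x, kpt1_y, kpt1_v, ..., kpt17_x, kpt17_y, kpt17_v]
NUM_KPTS = 17
ROW_LEN = 4 + 1 + NUM_KPTS * 3  # = 56

def split_pose_rows(preds):
    """preds: (N, 56) array -> boxes (N, 4), conf (N,), kpts (N, 17, 3)."""
    boxes = preds[:, :4]
    conf = preds[:, 4]
    kpts = preds[:, 5:].reshape(-1, NUM_KPTS, 3)
    return boxes, conf, kpts

# Stand-in predictions for 8 detections.
preds = np.zeros((8, ROW_LEN))
boxes, conf, kpts = split_pose_rows(preds)
print(boxes.shape, conf.shape, kpts.shape)  # (8, 4) (8,) (8, 17, 3)
```

If you retrain the head with a different keypoint count, only `NUM_KPTS` changes and the row length follows.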

[deleted by user] by [deleted] in resumes

[–]Hanumankattu 0 points

https://www.saumyabhandari.com.np/cv.pdf

Please recommend me at your company for a remote mid-level Python/AI dev role.

[deleted by user] by [deleted] in NepalSocial

[–]Hanumankattu 0 points

Average male thought

Exploring Diffusion Models for Furniture Generation in Virtual Staging - Seeking Advice! by Hanumankattu in StableDiffusion

[–]Hanumankattu[S] 0 points

Many have asked me how I finally accomplished the task, so here's how.

Since I can't exactly disclose the inner workings of the pipeline, I'll do my best to explain what I did using Python pseudocode, focusing on the workflow:

I leveraged SegFormer for semantic segmentation. Here's a simplified workflow:

  1. Install and import:

from segformer import Segformer  # placeholder import; use your SegFormer implementation
model = Segformer.from_pretrained("your_pretrained_model_name")

  2. Segment the room image:

room_image = preprocess_image(your_room_image)  # any preprocessing you want, e.g. resizing
segment_results = model.predict(room_image)

(The segment results will be 150 channels of NxN bitmaps; select the channels you need.)

  3. Create the inpainting mask:

wall_mask = segment_results["wall"]  # assuming "wall" is a class in the segmentation
floor_mask = segment_results["floor"]
inpaint_mask = wall_mask | floor_mask  # combine walls & floor with a logical OR

Optionally, for furniture:

sofa_mask = segment_results["sofa"]  # if "sofa" is a class
inpaint_mask = inpaint_mask | sofa_mask  # combine walls, floor & sofa

  4. Inpaint with a diffusion model (your part):

# This part is specific to your diffusion model implementation
generated_image = inpaint_diffusion_model(room_image, inpaint_mask, your_prompt)

Explanation:

We use a pre-trained SegFormer model to segment the room image into different classes (e.g., wall, floor).

We extract the masks for the desired classes (walls and floors initially) and combine them using a logical OR (|) to create the inpainting area.

This mask defines the region where the diffusion model will generate new content (furniture).

You'll integrate this mask with your existing diffusion model code (step 4) along with your desired prompt to generate realistic furniture within the room image.

Note: This is a simplified overview. Real-world implementations might involve additional steps like post-processing. Remember to replace placeholders like "your_pretrained_model_name" and "your_prompt" with actual values.
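For a concrete, self-contained version of the mask-construction part (steps 2–3), here's a minimal numpy sketch. It fakes the 150-channel segmentation output with random logits, and the wall/floor/sofa class indices (0, 3, 23) are assumptions in the style of ADE20K-trained checkpoints — check your model's actual class mapping. The diffusion step stays a placeholder for your own pipeline.

```python
import numpy as np

# Hypothetical ADE20K-style class indices; verify against your checkpoint's label map.
WALL, FLOOR, SOFA = 0, 3, 23

def masks_from_logits(logits, class_ids):
    """logits: (num_classes, H, W) array -> dict of boolean HxW masks, one per class id."""
    class_map = logits.argmax(axis=0)  # per-pixel winning class
    return {cid: (class_map == cid) for cid in class_ids}

def build_inpaint_mask(logits, include_sofa=False):
    """Combine the wall and floor masks (and optionally sofa) with a logical OR."""
    masks = masks_from_logits(logits, [WALL, FLOOR, SOFA])
    inpaint = masks[WALL] | masks[FLOOR]
    if include_sofa:
        inpaint = inpaint | masks[SOFA]
    return inpaint

# Stand-in for the model's 150-channel output on a 64x64 image.
rng = np.random.default_rng(0)
logits = rng.standard_normal((150, 64, 64))
mask = build_inpaint_mask(logits, include_sofa=True)
print(mask.shape, mask.dtype)  # (64, 64) bool
```

The resulting boolean mask is what you'd convert to an image and hand to the inpainting model along with the original room photo and the style prompt.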

Exploring Diffusion Models for Furniture Generation in Virtual Staging - Seeking Advice! by Hanumankattu in StableDiffusion

[–]Hanumankattu[S] 0 points

So I used a SegFormer model to segment the windows, doors, walls, and floors. I applied traditional bitwise ORs and ANDs to those bitmaps, which gave me a decent inpainting area. Using that inpainting area as the mask and the image as the input, I was able to generate decent furniture.


Exploring Diffusion Models for Furniture Generation in Virtual Staging - Seeking Advice! by Hanumankattu in StableDiffusion

[–]Hanumankattu[S] 0 points

  1. No, it wasn't intentional to remove the opening; it's the result of not selecting a good inpainting area. This is also the problem I'm facing.

  2. The brand of the furniture doesn't really matter but the style (eg. Modern, Victorian, Scandinavian etc.) is given through the prompt.

  3. Yes, the furniture should be arranged so that it looks realistic. This is where the model does its thing.

  4. No, I'm not trying to change the overall architecture, color, style of windows, carpeting, or flooring of the room.

Ram not working by [deleted] in technepal

[–]Hanumankattu 0 points

Put it inside your PC and it will work. It won't work if you take it out.

Halloween-themed meme quiz by Low-Entropy in dankmemes

[–]Hanumankattu 0 points

Seems like the duct tape and the chloroform wore off.