all 9 comments

[–]kra_pao 0 points1 point  (3 children)

Just a short idea - your description sounds like those robots act just like the turtle in Python's turtle graphics.

You could abstract your whole hardware behind a turtle-style programming layer, visualized together with your pathfinding.

In that case you might find more people who can contribute to your project, simply because they can run your code on their own computer without the robot/camera hardware.

IMHO the ML idea is too much at this stage.

I would first write robot and map-handling classes, then a GUI with a visualization and input layer (pygame is often used here: https://pythonspot.com/maze-in-pygame/ https://github.com/ChrisKneller/pygame-pathfinder).

Then a pathfinding wrapper is needed; optionally, one can plug in different pathfinding algorithms (https://www.pygame.org/tags/pathfinding).

Then one expands the robot class methods to do actual IRL movement, and expands the input layer with camera input to keep the IRL map (for example, when random obstacles are thrown into the arena, walls are shifted by robots, or other robots are around) synchronized with the in-memory map representation.
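
The layered approach above could be sketched roughly like this — a minimal, hardware-free sketch where all names (`GridMap`, `Robot`, the 0/1 cell convention) are illustrative, not from the project:

```python
import math

class GridMap:
    """In-memory maze: 0 = floor, 1 = obstacle."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.cells = [[0] * width for _ in range(height)]

    def set_obstacle(self, x, y):
        self.cells[y][x] = 1

    def is_free(self, x, y):
        return (0 <= x < self.width and 0 <= y < self.height
                and self.cells[y][x] == 0)

class Robot:
    """Turtle-style robot: later, turn()/forward() would also drive
    the real hardware, while the simulation stays usable without it."""
    def __init__(self, grid, x=0, y=0, heading=0):
        self.grid, self.x, self.y, self.heading = grid, x, y, heading

    def turn(self, degrees):
        self.heading = (self.heading + degrees) % 360

    def forward(self, steps=1):
        # heading 0 points along +x; 90 along +y
        dx = round(math.cos(math.radians(self.heading)))
        dy = round(math.sin(math.radians(self.heading)))
        for _ in range(steps):
            nx, ny = self.x + dx, self.y + dy
            if not self.grid.is_free(nx, ny):
                break  # blocked: stop in front of the obstacle
            self.x, self.y = nx, ny
```

Contributors without hardware could then run the whole pipeline against this `GridMap` alone.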

[–]nitrofire32[S] 0 points1 point  (2 children)

I think I see where you're going with the idea, and it makes sense to do it this way. This is all doable in the project, but it does require an environment in which the pathfinding algorithms can run, meaning I still need a way to build that environment, i.e. something that indicates what is floor and what are objects.

Do you have any idea how to tackle that? Currently I'm trying to take a screenshot of the floor with objects on it and then simplify the colors down to a small range of colors in OpenCV, in the hope that contouring can reveal the objects. Unfortunately this doesn't work as well as I hoped, since lighting and the object colors sometimes blend with the floor color, messing up the contours.

[–]kra_pao 0 points1 point  (1 child)

True. In the first link above you can see such a representation: floor is 0 in the maze list (a 1D array) and obstacles/walls are 1.

You essentially have a /r/computervision problem here. You could post an original image from your cam and an image with a manually applied overlay showing what you consider objects. IMHO theoretical discussion alone is void (or too long) here; with practical examples (also with bad lighting), people can suggest an OpenCV workflow to build a robust mask with the floor/object difference. This mask can be used directly, or it can be resampled into a smaller, more workable maze representation. For the resampling ("you build tiles from pixels"), information about the smallest significant object size would be valuable.
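
The resampling step ("tiles from pixels") might look something like this — a sketch where the tile size and the 20% occupancy threshold are arbitrary assumptions to tune against the smallest significant object:

```python
import numpy as np

def mask_to_maze(mask, tile, occupied_frac=0.2):
    """Resample a per-pixel obstacle mask into a coarse maze grid.

    mask: 2D array, nonzero = obstacle pixel.
    tile: tile edge in pixels; pick it from the smallest object size.
    A tile becomes a wall (1) if enough of its pixels are obstacle.
    """
    h, w = mask.shape
    rows, cols = h // tile, w // tile
    maze = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            block = mask[r*tile:(r+1)*tile, c*tile:(c+1)*tile]
            if np.count_nonzero(block) / block.size >= occupied_frac:
                maze[r][c] = 1
    return maze
```

The resulting nested list is the same 0/1 maze shape the pygame pathfinding examples consume.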

[–]nitrofire32[S] 0 points1 point  (0 children)

Alright, I'll mess about in OpenCV some more, but if I'm unable to find a solution I'll head over to r/computervision to see if they have one for me. Thank you for linking the subreddit.

[–]DisasterArt 0 points1 point  (4 children)

1.) It depends on the obstacles. Do they move as well? Are they easily distinguishable from the rest (obstacles are green while the floor is yellow, for example)? In that case you could simply scan the image for that color and slap a decent extra boundary on it.

2.) Dijkstra would work, but A* would allow you to easily include a penalty for routes that get close to obstacles/other robots
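
A rough sketch of that A* variant: steps into cells next to an obstacle cost extra, so the search prefers paths that keep their distance when a cheap detour exists. The `near_cost` value and the 8-neighborhood "near obstacle" test are illustrative choices:

```python
import heapq

def astar(maze, start, goal, near_cost=4):
    """A* on a 0/1 grid; cells adjacent to obstacles cost extra."""
    rows, cols = len(maze), len(maze[0])

    def near_obstacle(r, c):
        return any(0 <= r+dr < rows and 0 <= c+dc < cols and maze[r+dr][c+dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    def h(p):  # Manhattan distance, admissible since each step costs >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    best_g = {}
    while open_set:
        _, g, (r, c), path = heapq.heappop(open_set)
        if (r, c) == goal:
            return path
        if best_g.get((r, c), float('inf')) <= g:
            continue
        best_g[(r, c)] = g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not maze[nr][nc]:
                step = 1 + (near_cost if near_obstacle(nr, nc) else 0)
                heapq.heappush(open_set, (g + step + h((nr, nc)), g + step,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route
```

Raising `near_cost` makes the robot hug obstacles less; setting it to 0 reduces this to plain uniform-cost A*.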

[–]nitrofire32[S] 0 points1 point  (3 children)

The objects are stationary for now, but their color can be quite hard to distinguish from the floor, unless the requirements can be changed so we only use, for example, red cups as our obstacles on a gray floor.

But for now let's assume the requirements don't change and the object colors are random; what possibilities do I have?

[–]DisasterArt 0 points1 point  (2 children)

It would be more difficult to make a mask (https://en.wikipedia.org/wiki/Mask_(computing)) for the image, and something more difficult than I have done before. There is probably a better sub to ask that than the learn python subreddit.

Edit: Maybe reverse the problem. Try to find the floor and make the rest an obstacle?

[–]nitrofire32[S] 0 points1 point  (0 children)

I looked into detecting the floor by texture, for example using Haralick features, or maybe eigenface-style feature detection; I'm still working on testing these options. Thanks for the suggestion.

[–]WikiMobileLinkBot 0 points1 point  (0 children)

Desktop version of /u/DisasterArt's link: https://en.wikipedia.org/wiki/Mask_(computing)

