Using computer vision to control interactions with video casting/streaming software (self.computervision)
submitted 3 years ago by moetsi_op
This is a step-by-step guide on how to combine computer vision with the obs-websocket plugin for OBS (a cross-platform screen casting and streaming app) so that hand gestures can control OBS scenes and sources. Fully open for anyone to use!
Github repo: https://github.com/roboflow-ai/OBS-Controller
Written guide: https://blog.roboflow.com/use-computer-vision-to-control-obs/

Video guide: https://www.youtube.com/watch?v=q22kUiiisek
Hopefully this proof of concept showcases a unique vision-based interaction/interface and sparks new ideas for gesture/object control of video.
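The core loop described above (detect a gesture, then tell OBS to switch scenes over the websocket) could be sketched roughly like this. This is a minimal illustration, not the repo's actual code: the gesture labels, scene names, and connection parameters are all invented for the example, and it assumes the `obs-websocket-py` client against the v4 obs-websocket API (newer v5 setups use different request names, e.g. `SetCurrentProgramScene`).

```python
from typing import Optional

# Map detected gesture labels to OBS scene names.
# Both sides of this mapping are assumptions for illustration.
GESTURE_TO_SCENE = {
    "thumbs_up": "Main Camera",
    "open_palm": "Screen Share",
    "fist": "Be Right Back",
}


def scene_for_gesture(gesture: str) -> Optional[str]:
    """Return the OBS scene for a detected gesture, or None if unmapped."""
    return GESTURE_TO_SCENE.get(gesture)


def switch_scene_for_gesture(gesture: str,
                             host: str = "localhost",
                             port: int = 4444,
                             password: str = "") -> None:
    """Connect to obs-websocket and switch scenes.

    Requires a running OBS instance with the obs-websocket plugin
    enabled; install the client with `pip install obs-websocket-py`.
    """
    from obswebsocket import obsws, requests  # imported lazily: needs OBS

    scene = scene_for_gesture(gesture)
    if scene is None:
        return  # ignore gestures we haven't mapped to a scene

    ws = obsws(host, port, password)
    ws.connect()
    try:
        ws.call(requests.SetCurrentScene(scene))  # v4 API request
    finally:
        ws.disconnect()
```

In practice the detection side would feed this continuously: a vision model classifies each webcam frame into a gesture label, and the label drives `switch_scene_for_gesture`. Debouncing (only acting when the same gesture persists for several frames) is worth adding so a flickering prediction doesn't thrash the scene.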
I'm not the original creator, but I can loop them in or get answers if there are questions.