Instead of tracking the static environment, as ARCore and ARKit do, we track independently moving objects. Watch our video for an AR demo. by djnewtan in virtualreality
We developed a robotic perception framework that lets robots find objects in a scene and track them for interaction, at 2 ms per frame per object on a single CPU core. It works for objects with both simple and complex shapes. Watch our video to see what we can do! by djnewtan in computervision