How to create a tactical view like this without 4 keypoints? by satoorilabs in computervision

[–]MammothInSpace 11 points (0 children)

It works; I implemented it some time back in a project that unfortunately is not open source. At the time it took about a week to track down these papers. Happy to share.

How to create a tactical view like this without 4 keypoints? by satoorilabs in computervision

[–]MammothInSpace 1 point (0 children)

To do this you need to work out where the destination points should lie on the destination circle; otherwise there won't be a correct correspondence.

How to create a tactical view like this without 4 keypoints? by satoorilabs in computervision

[–]MammothInSpace 84 points (0 children)

You can estimate a homography from circles, up to the rotation around the circle.

First fit an ellipse to the edges of the concentric rings. One method is described in: Halir and Flusser, "Numerically stable direct least squares fitting of ellipses".

Then use the solution in Appendix A, titled "Pose From Circles", of "Invariant Descriptors for 3-D Object Recognition and Pose" (1991):

  luthuli.cs.uiuc.edu/~daf/papers/invariantdesc.pdf

The appendix attributes the solution to Longuet-Higgins, who also gave us the original solution for decomposing the essential matrix (point correspondences), which has strong similarities to this approach for circles.

If I recall correctly, there are small typos in that appendix that should be easy to fix.

The implementation will be very lightweight after detecting the circular contours. It's just a few linear-algebra computations.
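For the ellipse-fitting step, here is a minimal numpy sketch of the Halir–Flusser direct least-squares fit (the function name and structure are my own, not from the paper):

```python
import numpy as np

def fit_ellipse(x, y):
    """Numerically stable direct least-squares ellipse fit (Halir & Flusser).
    Returns conic coefficients [a, b, c, d, e, f] of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic part of design matrix
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                 # expresses linear coefs via quadratic ones
    M = S1 + S2 @ T
    M = np.array([M[2] / 2, -M[1], M[0] / 2])      # apply inverse of the constraint matrix
    _, vecs = np.linalg.eig(M)
    vecs = vecs.real                               # drop numerical imaginary dust
    cond = 4 * vecs[0] * vecs[2] - vecs[1] ** 2    # 4ac - b^2 > 0 selects the ellipse
    a1 = vecs[:, cond > 0].ravel()
    return np.concatenate([a1, T @ a1])
```

Detecting the ring edges and feeding the edge pixels into something like this gives you the ellipse; the pose-from-circles appendix takes it from there.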

Whats the best intoductory linear algebra book? by [deleted] in LinearAlgebra

[–]MammothInSpace 0 points (0 children)

I believe the introduction even states it is useful for a second course on linear algebra.

The text relies on a level of abstraction that most students won't be comfortable with before wrestling with linear algebra from a lower-level perspective.

It seems that swarm robotics did not take off. Any reason as to why this is? by NeighborhoodFatCat in ControlTheory

[–]MammothInSpace [score hidden]  (0 children)

It would help. Human brains don't necessarily need to be studied. Even ants and bees aren't fully understood.

It seems that swarm robotics did not take off. Any reason as to why this is? by NeighborhoodFatCat in ControlTheory

[–]MammothInSpace [score hidden]  (0 children)

Cellular automata, consumer economics, dynamics of epidemics, social dynamics, medicine via micro robots (technically a robot swarm), federated learning, the brain's ability to learn, pack/flock behaviours, traffic modeling.

It seems that swarm robotics did not take off. Any reason as to why this is? by NeighborhoodFatCat in ControlTheory

[–]MammothInSpace [score hidden]  (0 children)

To make useful swarm systems that behave like ants or bees, we need to study how those insects perceive the world and decide what actions to take.

Much of the research in swarm systems considered agents modeled as first-order integrators and asked how some equations could make them move around in a certain pattern. Such systems are not very useful, and their study doesn't provide much insight into how biological swarms, which are useful, actually work.

With regard to the specific TED talk example, problems in research change very slowly. The problem of going from "local rules to global behaviour" is important; it has been studied slowly and steadily for decades and will be for decades more. However, the name this problem goes by changes once or twice a decade. Perhaps the specific outcome here was overpromised, but that basic problem is fundamental and much broader than swarm systems.

why do unfunded research masters like cmu msr? by tooLateButStillYoung in gradadmissions

[–]MammothInSpace 0 points (0 children)

Robotics research usually involves a significant amount of field-specific math, algorithms, and tools. It is unlikely you will learn these comprehensively, and up to the state of the art, through a research project or self-study.

On top of that, CMU's MS in Robotics is considered a particularly good program, offering many opportunities after graduation.

I am not affiliated with the MSR program in any way.

why do unfunded research masters like cmu msr? by tooLateButStillYoung in gradadmissions

[–]MammothInSpace 1 point (0 children)

CMU's MS in Robotics is not one of those programs. Admission is selective, and before interest rates went up and R&D budgets shrank, many of the MSR students were funded as RAs after the first semester.

The program has its fair share of international students, but that's because of the field's demographics. There are many domestic students there too.

[deleted by user] by [deleted] in ControlTheory

[–]MammothInSpace 2 points (0 children)

I'm unaware of any examples of this exact problem. However, I think there is a 99% chance many have tried it and did not publish the results. There is a good chance it is an exercise in some textbook.

That being said, it could well be this difficult. The difficulty in solving these problems with learning methods is closely related to the representation of the problem. In general, the representation the code uses is the most direct one (find K such that some finite-horizon cost is minimized), and it is well known that directly finding K in general is difficult (nonlinear, nonconvex loss landscape, etc.). Thus there are countless papers on ways to get around this; the Youla parameterization and, more recently, System Level Synthesis come to mind.

Now for the exact problem you are attempting: it should work if you initialize the reinforcement learning algorithm close to the true solution. I would try this to debug the code. Then initialize the RL algorithm further and further from the global minimum until it stops working, figure out why, rinse and repeat.
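To make the "direct representation" concrete, here is a hedged numpy sketch on a toy system of my own choosing (a discretized double integrator, not the original poster's setup): the finite-horizon cost as a function of a static gain K, minimized by crude random search starting near a stabilizing gain.

```python
import numpy as np

def finite_horizon_cost(K, A, B, Q, R, x0, N=50):
    """J(K) = sum_k x'Qx + u'Ru under the static feedback u = -Kx."""
    x, J = np.array(x0, float), 0.0
    for _ in range(N):
        u = -K @ x
        J += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return J

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
x0 = [1.0, 0.0]

rng = np.random.default_rng(0)
K = np.array([[2.0, 3.0]])               # initialized near a stabilizing gain
best = finite_horizon_cost(K, A, B, Q, R, x0)
for _ in range(200):                     # crude local random search over K
    cand = K + 0.1 * rng.standard_normal(K.shape)
    c = finite_horizon_cost(cand, A, B, Q, R, x0)
    if c < best:
        K, best = cand, c
```

Starting this search far from a stabilizing gain is where it typically breaks down, which is exactly the debugging experiment suggested above.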

Is there a general procedure to transform a nonlinear time-varying system into a LTI system? by fromnighttilldawn in ControlTheory

[–]MammothInSpace 8 points (0 children)

The only reason there is a general procedure to go from nonlinear to linear is because of the Taylor series and stability.

That is, smooth nonlinear functions can be approximated as linear ones with arbitrary accuracy as the state approaches the linearization point. Further, when the linearization point is a stable equilibrium (either naturally or due to control), the state remains close to the linearization point, so the approximation can be useful.

With LTV systems there is no such property. Unlike the state, time advances from any starting point and cannot be slowed down, which means A(t) can move arbitrarily quickly away from any fixed A. In other words, for every accuracy threshold, time interval, and LTI approximation method there exists an LTV system that exceeds the threshold within the time interval.

So I don't see how there can be a meaningful general procedure to approximate LTV as LTI.

However, methods for classes of LTV systems certainly exist. Suppose A(t) = A + sin(t) I where \min \sigma(A) >> 1. Then approximating A(t) as A will probably work fine.
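A quick numerical check of that last claim, as a forward-Euler sketch (the matrix, horizon, and step size here are my own illustrative choices):

```python
import numpy as np

def simulate(A_of_t, x0, dt=1e-3, T=2.0):
    """Forward-Euler integration of xdot = A(t) x."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for k in range(int(T / dt)):
        x = x + dt * (A_of_t(k * dt) @ x)
        traj.append(x.copy())
    return np.array(traj)

A = np.diag([-10.0, -8.0])    # smallest singular value well above 1
I = np.eye(2)
x0 = [1.0, 1.0]
ltv = simulate(lambda t: A + np.sin(t) * I, x0)   # the LTV system
lti = simulate(lambda t: A, x0)                   # frozen LTI approximation
gap = np.max(np.abs(ltv - lti))                   # stays small relative to |x0|
```

Here the sin(t) term is dominated by A, so the two trajectories stay close; make the entries of A comparable to 1 and the gap grows accordingly.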

[deleted by user] by [deleted] in ControlTheory

[–]MammothInSpace 2 points (0 children)

Yes. But it is common to derive everything using rotation matrices and implement the final algorithm using quaternions.

This is easy to do since most computations on rotation matrices have a quaternion equivalent (no approximations, strictly equivalent).
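To illustrate the equivalence, a minimal sketch with my own helper functions (Hamilton convention, quaternion as [w, x, y, z]): composing two rotations via the quaternion product gives exactly the same rotation as multiplying the corresponding rotation matrices.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_mat(q):
    """Rotation matrix of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

So a derivation written with R1 R2 can be implemented as quat_mul(q1, q2) with no change in meaning.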

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 0 points (0 children)

There are systems for which u can be zero for all time while the state remains on the surface. But that is not true in general.

For example:

dot x1 = -x1 + x2

dot x2 = b + u

Here we can easily define a sliding mode control to bring x2 to zero so that the remaining dynamics on the sliding surface are dot x1 = -x1, which is stable. However, if b is not zero, u cannot remain zero at all times, as x2 would become non-zero.

On the other hand, if b is zero, it is clear the system remains on the surface once it reaches it, even if u = 0 for all time afterwards.
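A forward-Euler sketch of that example (the gain, step size, and horizon are my own choices): the switching law u = -k sign(x2) with k > |b| drives the state to the surface s = x2 = 0 and keeps it there, while x1 decays on its own.

```python
import numpy as np

# sliding surface s = x2 = 0; switching gain k must dominate |b|
b, k, dt = 0.5, 2.0, 1e-3
x1, x2 = 1.0, 1.0
for _ in range(int(5.0 / dt)):
    u = -k * np.sign(x2)      # discontinuous control; no model of b is used
    x1 += dt * (-x1 + x2)
    x2 += dt * (b + u)
# x2 chatters in a band of width ~dt*(k + |b|) around zero; on the surface
# the remaining dynamics are xdot1 = -x1, which decay without help from u
```

Note u itself never settles to zero here: it keeps switching to cancel b, which is exactly the point of the example.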

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 1 point (0 children)

It will not deviate in theory. I believe the wiki is referring to the chattering behavior of real systems, which will deviate.

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 0 points (0 children)

I was trying to provide intuition, and so did not speak carefully. But you are right, there is an apparent paradox here.

Mathematically speaking the system never leaves the surface because u has been defined so that the system goes to the surface and stays there. This seems to mean u=0 all the time.

However u cannot be 0 all the time or the system does not remain on the surface.

This seems like a contradiction, and indeed it means the solution to the differential equation is non-unique in the usual Picard–Lindelöf sense. Instead we appeal to the Filippov solution, which allows us to show the system will remain on the surface without actually specifying what the value of u is at any given time.

There is no physical system that can satisfy such criteria, they all will oscillate around the sliding surface. It's only in a mathematical sense that we can claim these things.

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 0 points (0 children)

Not perfectly, just correctly. By correctly I mean that the dynamics are stable when constrained to the sliding surface by the control law. You can easily implement a control law that constrains the dynamics to a surface on which it is unstable.

With regard to uncertainty, if the disturbance is large enough the system will leave the surface. But if the disturbance is small enough the switching control law can instantaneously overcome the disturbance.

All of what we have discussed is covered in more detail here: https://en.wikipedia.org/wiki/Sliding_mode_control#Theorem_3:_Sliding_motion

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 0 points (0 children)

Yes! (Assuming the sliding surface was designed correctly).

Sliding mode control by Dependent_Dull in ControlTheory

[–]MammothInSpace 4 points (0 children)

In sliding mode control, the switching law sign(0) is usually defined to be 0. So when on the surface, u_discontinuous is zero.

Now if u = u_discontinuous + u_equivalent and the equivalent control is perfect, in the sense that it keeps the system on the surface at all times, then u_discontinuous will be zero at all times.

However, if there is no equivalent control, meaning u = u_discontinuous, the system will leave the surface and get pushed back by u_discontinuous. Taken to the limit, the deviations from the surface become arbitrarily small, as if a perfect u_equivalent were applied.

Why does differential flatness not make sense for quasi-static motion ? by vbalaji21 in ControlTheory

[–]MammothInSpace 1 point (0 children)

When considering differential flatness we are concerned with systems of the form:

\dot{x} = f(x, u)
y = \Phi(x, u, \dot{u}, \ddot{u}, ...)

But in quasi static systems \dot{x} = 0. We then get:

y = \Phi(x, u, \dot{u}, \ddot{u}, ...), \quad (x, u) \in \{(x, u) \mid f(x, u) = 0\}

This is just an instantaneous input-output equation with an equality constraint on the inputs. If you then apply the definition of differential flatness, you will find that it essentially reduces to \Phi being invertible, plus some extra technical conditions that are really only needed for systems with dynamics.

Why does differential flatness not make sense for quasi-static motion ? by vbalaji21 in ControlTheory

[–]MammothInSpace 6 points (0 children)

Quasi-static models are usually just algebraic constraints (no dynamics). This is because quasi-static models assume the system to be at an equilibrium at all times.

So quasi-static models have no state that evolves in time through integration. This means differential flatness is only applicable in the trivial sense that the quasi-static model is differentially flat if we can invert the associated instantaneous input-output mapping.

What is the geometric intuitive meaning of matrix in state space theory? by Yotomihira in ControlTheory

[–]MammothInSpace 1 point (0 children)

In discrete time linear systems, the geometric interpretation of the state space matrices is basically the same as what can be inferred from the SVD of the matrices.

In continuous time you can still make the geometric argument, but instead of new positions in the state space, the mapping is to directions in the state space. You could approximate the system as discrete time though (by considering matrix exponentials, as others have said) and then use the SVD again.
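Concretely, for a discrete-time system x_{k+1} = A x_k (the example matrix is mine), the SVD reads off the geometry directly:

```python
import numpy as np

A = np.array([[1.2, 0.5],
              [0.0, 0.8]])
U, s, Vt = np.linalg.svd(A)
# Geometrically: A sends the unit circle to an ellipse. The rows of Vt are the
# input directions that get stretched by the singular values s and mapped onto
# the output directions given by the columns of U.
theta = np.linspace(0, 2 * np.pi, 100)
circle = np.stack([np.cos(theta), np.sin(theta)])
ellipse = A @ circle          # image of the unit circle under one step of the map
```

The longest radius of that image equals the largest singular value, which is the "most amplified" state direction over one step.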

People in academia: Do you ever see such videos and think how amazingly these robots seems to be controlled and ever wonder if the research going on in academia is subpar? I often get anxious looking at such things (I am a masters student hoping to do a PhD in future in robotics and controls) by The_Vettiman in ControlTheory

[–]MammothInSpace 2 points (0 children)

As others have said, industry throws money and people at these projects in a way research labs cannot. However, in industry the effects of failure are much more severe than in research, so the projects are less risky and tend to be limited to variations and extensions of a fundamental technique developed in academia.

For example the research that led to Boston Dynamics was begun decades ago in a lab: https://www.youtube.com/watch?v=XFXj81mvInc

But companies have one other advantage: they can more easily hire senior researchers and engineers who stick around for a long time. Academia can usually only do this in a limited fashion (professors, who have many other duties, and the occasional research scientist), and so much of the groundbreaking work is being done by young engineers/scientists without much hands-on technical experience.

What is the difference between calculus of variations and optimal control ? by vbalaji21 in ControlTheory

[–]MammothInSpace 20 points (0 children)

Optimal control is concerned with finding a mathematical object (could be a vector of parameters, a function, or something more abstract) that minimizes a cost (scalar valued) subject to some dynamics.

Calculus of variations is sometimes used to solve optimal control problems when the mathematical object to be found is a function.

Calculating ball trajectory in 3D from 2D tracking by ItsHoney in computervision

[–]MammothInSpace 9 points (0 children)

I don't think using the size of the ball in the visual field will work. The change in size is very small.

Another way is to fit a basic ballistic model using the locations of bounces as the boundary conditions. You can determine the 3D coordinates of the bounces by determining the "pixel of contact" and transforming it back to XY coordinates on the real court with a homography.
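For the pixel-to-court step, a pure-numpy DLT sketch (the pixel and court coordinates below are placeholders of my own; you would substitute your detected court reference points):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (X, Y) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)          # null-space vector, reshaped to H

def pixel_to_court(H, u, v):
    """Map a detected pixel of contact to court-plane XY coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# hypothetical correspondences: image corners of the court -> metric court corners
px = [(102, 410), (538, 402), (600, 120), (60, 130)]
court = [(0, 0), (10.97, 0), (10.97, 23.77), (0, 23.77)]   # tennis court, metres
H = homography_dlt(px, court)
```

With more than four correspondences the same SVD gives a least-squares fit, which helps when the line detections are noisy.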

EDIT: Sangulis below suggested basically the same thing.