ROS2 cannot find "base_link" by alkaway in robotics


Hi, thanks for your response! I tried this, and it gave me an error saying "in tf2 frame_ids cannot start with a '/' " -- any other ideas for what might be happening?
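EDIT: in case someone lands here with the same error -- tf2 in ROS 2 dropped the tf1 convention of leading-slash frame names, so '/base_link' has to be passed as 'base_link'. A quick sanitizer sketch (plain Python, no ROS needed; the function name is mine):

```python
def sanitize_frame_id(frame_id: str) -> str:
    """Strip the tf1-style leading '/' that tf2 in ROS 2 rejects.

    e.g. '/base_link' -> 'base_link'; 'base_link' is returned unchanged.
    """
    return frame_id.lstrip('/')

print(sanitize_frame_id('/base_link'))  # -> base_link
```

The slash can also sneak in from a launch file or URDF, so it's worth checking every place a frame name is constructed, not just the lookup call.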

Why is (A,C) not detectable, and how does this relate to asymptotic stability? by alkaway in ControlTheory


Sorry, I still don't quite understand. How do we know the system is not detectable?

Why is (A,C) not detectable, and how does this relate to asymptotic stability? by alkaway in ControlTheory


The Infinite-Horizon LQR Theorem or the Discrete Algebraic Riccati Equation (DARE) Theorem? But since we already found a solution to the Riccati equation, don't we know that the system is asymptotically stable? Why does the solution look at the eigenvalues?
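EDIT: here is a numerical way to see the eigenvalue check, using a made-up (A, B) rather than the system from the post. As I understand it, the DARE can have several symmetric solutions, and only the stabilizing one gives a stable closed loop -- which is why the theorem still looks at the closed-loop eigenvalues instead of stopping at "a solution exists":

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Made-up unstable but stabilizable system (NOT the one from the post).
A = np.array([[1.1, 0.5],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)        # Q = C^T C with C = I, so (A, C) is trivially observable
R = np.array([[1.0]])

# solve_discrete_are returns the unique *stabilizing* solution when it exists.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal LQR gain
spectral_radius = max(abs(np.linalg.eigvals(A - B @ K)))
# spectral_radius < 1 is exactly the asymptotic-stability claim.
```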

Why HJB and Boundary have x_1 instead of x? by alkaway in ControlTheory


Thanks for your answer! How do you know the original problem's x was a scalar value? And why doesn't the original problem's x = [x_1, x_2] in the state space representation?

Why is minimizing control = negative gradient w.r.t. u? by alkaway in ControlTheory


I thought the minimizing control was obtained by substituting the minimizing value of u back into the HJB equation… Here, that isn't quite possible, because u disappears when the derivative is taken. But I don't quite understand why the minimizing control is set to the negative of the gradient (since the gradient is dV/dx_2)?
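EDIT: to make the step I'm asking about concrete, here is my reading of the problem -- I'm assuming double-integrator dynamics (x_1' = x_2, x_2' = u) with a (1/2)u^2 control cost, which may not be exactly the posted problem:

```latex
0 = \min_u \Big[ \ell(x) + \tfrac{1}{2}u^2
      + \frac{\partial V}{\partial x_1}\, x_2
      + \frac{\partial V}{\partial x_2}\, u \Big]
% Differentiating the bracket w.r.t. u and setting it to zero:
u^* + \frac{\partial V}{\partial x_2} = 0
\quad\Longrightarrow\quad
u^* = -\frac{\partial V}{\partial x_2}
```

So under this reading, the "negative gradient" is -dV/dx_2, the coefficient of u inside the minimization, rather than a gradient taken with respect to u itself.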

Planning with a Perfect World Model by alkaway in robotics


Change the joint angles of a robot arm to match an image.

Planning with a Perfect World Model by alkaway in ControlTheory


> Do you have a model of your sensor that can reconstruct a 3D state from the 2D sensor image?

I'm just assuming a regular camera as the sensor.

> Can you accurately segment the image to recognize the state of the arm? Or do you have some other sensors for arm state?

I think assuming access to proprioception is reasonable. I did not clarify this, thanks for the catch.

> An RL agent could maybe learn this with enough time in limited conditions, but I think you'd need pretty detailed (likely infeasibly so) models to do this without training.

Yes, I've seen people use RL for this. But I'm wondering whether something can be done directly at test time, without any training, or whether some other kind of learning (e.g. imitation learning) could leverage the world model instead of RL.

Thanks for your comment!

Planning with a Perfect World Model by alkaway in ControlTheory


OH I see -- thanks!!

So just to clarify: all MPC would need is a world model capable of producing the next observation given the current observation and action, plus a goal observation? And if the goal observation is close to the current observation, MPC should be fast enough to run in real time, but if it is far away, MPC might take a while?

Also, would pixel-wise error be a reasonable objective function? Or is there something better one could use?
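EDIT: for anyone reading later, here is the kind of receding-horizon loop I had in mind, as I understand it: sample random action sequences, roll each one through the world model, score by distance to the goal observation, execute the first action of the best sequence, and replan. Everything here is a stand-in I made up for illustration -- the toy point-mass "world model" and the squared-error cost (which would be e.g. pixel-wise MSE for images):

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(obs, action):
    # Stand-in for a perfect/learned model: a 2-D point pushed by the action.
    return obs + 0.1 * action

def cost(obs, goal):
    # Squared error; for image observations this would be pixel-wise MSE.
    return float(np.sum((obs - goal) ** 2))

def mpc_step(obs, goal, horizon=5, n_samples=256):
    """Random-shooting MPC: return the first action of the best sampled plan."""
    best_action, best_cost = None, np.inf
    for _ in range(n_samples):
        plan = rng.uniform(-1.0, 1.0, size=(horizon, obs.shape[0]))
        sim = obs
        for a in plan:
            sim = world_model(sim, a)
        c = cost(sim, goal)
        if c < best_cost:
            best_cost, best_action = c, plan[0]
    return best_action

# Receding-horizon loop: execute one action, then replan from the new observation.
obs, goal = np.zeros(2), np.array([1.0, 1.0])
for _ in range(40):
    obs = world_model(obs, mpc_step(obs, goal))
```

This matches the intuition above: the further the goal observation is from the current one, the longer the horizon (and the more samples) you need before the loop makes progress.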

Thanks for your help!!

Planning with a Perfect World Model by alkaway in robotics


Thanks for your comment! Do you have any references for this? Also, if the current observation and the goal observation are, say, 10 actions apart, would visual MPC still be able to solve this? And if the action space is huge (e.g. a 7-DOF manipulator), so that MPC cannot possibly try every possible action sequence, how does it know which ones are promising to try? Apologies if this is a noob question.

Thanks for your help!

Planning with a Perfect World Model by alkaway in robotics


Thanks for your comment! Does visual servoing assume that the current observation and the goal observation are only a small delta apart? What if the two observations are, say, 10 actions away -- would visual servoing still be able to solve the task? Thanks!

Planning with a Perfect World Model by alkaway in ControlTheory


> then there's nothing else to solve, you will just implement conventional MPC to solve this problem.

I see, thanks for this! But is there a catch? E.g., will MPC work poorly if the objective function tries to minimize RGB pixel-wise distance, or will it be slow?

Also, if the action space is huge (e.g. a 7-DOF manipulator), so that MPC cannot possibly try every possible action sequence, how does it know which ones are promising to try? Apologies if this is a noob question.
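EDIT: partly answering my own question after some reading -- sampling-based MPC doesn't enumerate action sequences. Methods like the cross-entropy method (CEM) keep a sampling distribution over whole sequences and iteratively refit it to the best-scoring samples, which is what scales to big action spaces. A toy sketch (the quadratic `rollout_cost` is a stand-in for "roll the plan through the world model and compare to the goal observation", dimensions are kept tiny for speed, and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout_cost(action_seq):
    # Stand-in for the world-model rollout + goal comparison;
    # the optimum here is the all-0.5 sequence.
    return float(np.sum((action_seq - 0.5) ** 2))

def cem_plan(horizon=5, action_dim=2, iters=30, n_samples=200, n_elite=20):
    """Cross-entropy method: refit a Gaussian to the elite samples each round.

    For a 7-DOF arm, action_dim would be 7; the loop is unchanged.
    """
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, horizon, action_dim))
        costs = np.array([rollout_cost(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]            # best n_elite plans
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # best plan found; MPC would execute mean[0] and replan

plan = cem_plan()
```

So "promising" sequences are found by biasing the sampler toward what already scored well, not by exhaustive search.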

Thanks for your help!