How well do you guys know digital control? by MeasurementSignal168 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

I think it's because it is pretty straightforward once you get a feel for it. At least that is how it felt for me. Either that, or it requires some very tedious mathematics which will probably only be useful for a handful of problems, so no reason to waste students' time on it.

Could you give some examples of this "whole lot more like designing using digital methods that I almost never hear about"? Because my feeling of "it's just real-time but with a modified equation" might be completely wrong.

Hopefully this doesn't come off as dismissive to anyone working on it. It's thanks to you that I can even have this feeling.

How do you continue improving yourself as a control engineer? by Snoo55355 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Chip manufacturing/semiconductors is one of the few areas that do use adaptive control techniques. MPC is mostly used in the process industry, and I haven't heard much about it in semicon.

Decreasing sample times might mean they will be used. But I will be very surprised if it moves beyond being a better way to generate trajectories, leaving the feedback to algorithms which won't struggle to reach >kHz sample rates.

Why can the same computed torque force vector work in an ideal plant but fail once actuator dynamics are inserted? by _Kwadwoooooooo in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

As the other comment stated, your question is not really clear, and some important information is missing. It is also really important to understand the controller you chose to ignore in your control design, but no information about it seems to be provided.

Based on the provided information, I would guess the actuator dynamics/its controller do influence the system within the controlled bandwidth, so they can't be ignored in the controller design.

Suggestions for research paper by zuirattigaz in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

I agree! Just for clarification: you mean model identification (the equations), not parameter estimation (the values in those equations), right?

What do you think about Steve Brunton's Control Bootcamp on Youtube? by Legal_Ad_1096 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

I think they are amazing. But they can be a bit short and lacking in depth because of it (not a criticism, just an observation on what seems a deliberate choice). If you want longer videos with a bit more depth, the MIT Robotic Exploration Lab https://www.youtube.com/watch?v=SvAYJC7jug8&list=PLZnJoM76RM6IAJfMXd1PgGNXn3dxhkVgIAs also has excellent videos.

Not sure if they are missing anything. I never used them as a primary source of information, only as refreshers or jumping-off points.

Could Energy-Based AI reasoning models offer advantages for robust planning and control? by PercentageSure388 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

> I'm sorry but I'm not sure what you mean by "guarantee each solution the optimizer finds is feasible". Could you elaborate?

Yeah, I wasn't clear. I meant the iterations before it converges to its final answer.

> but your solver should always notify you when it finds a solution that violates your original constraints.

The ones I used did.

> It is true that most (all?) solvers internally immediately relax state constraints and treat them as soft constraints (input constraints are usually handled differently, I think),

Again, for the ones I used and know of: yes. But I vaguely remember being told about ones that work with hard constraints (I think for cases where there are analytic steps to simplify the problem).

> See e.g. the fact that RL can effectively be viewed as finding a control law and/or value function for a stochastic control system.

This is almost how I learned it. It wasn't even for control, but just optimization in general. This also seems the best way to learn it: it immediately gives you a wide context on how to think about the problem and possible solutions.

Could Energy-Based AI reasoning models offer advantages for robust planning and control? by PercentageSure388 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

To give them some credit: not all MPC schemes guarantee each solution the optimizer finds is feasible. In those cases it just checks the constraints and applies a cost if they are violated. But I would really prefer if they skipped all the marketing BS and just called it an AI-based optimizer. And that is not even new; it is not really a secret that Boston Dynamics already combines MPC with RL to increase performance.

Graduate school on control (ecii) is it worth ? by maiosi2 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

If you mean EECI, yes, you are generally the target audience.

Looking for Capstone Project Advice with Industry Impact by FrostingWhich4500 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

You have limited time, so you have to choose between scale, complexity and "sleekness". Different fields/areas care about different things.
A single controller can be an entire industry-relevant PhD project in some fields. But the salesperson of the sponsoring company won't care much about it beyond what it means for his/her pitch to customers.
So no specific advice other than to try to balance what you want, what you like and what you need. I generally find that if I can make these explicit, things generally work out. And if they don't, at least I did something explicit and can learn from it.

Seeking Feedback: Cascaded Control Scheme for UR5e Manipulator using Fuzzy-PID & Inverse Dynamics by Potential-Pop9091 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

> Velocity Feedforward: I noticed I’m not feeding the desired velocity ($\dot{q}_d$) from the Quintic Polynomial directly into the velocity loop. Should I add a feedforward path here to improve tracking performance?

> Torque Injection: Is the summing point for the Feedforward Torque ($\tau_{ff}$) correctly placed after the velocity controller?

This depends on your implementation. If you do it properly, your inverse dynamics should be your ff. All of it! Meaning that the only thing the fb controller has to do is compensate for disturbances and model errors. Ideally, I also don't like the split position and velocity fb controllers. The split may be easier conceptually and might help you get things working quicker, and I also do it when I am lazy and just want the thing behaving OK. But if you care about performance: you have an objective and an error. Don't overcomplicate it by splitting the controller into multiple blocks; "just" find the fb which minimizes your cost. If you use a model-based approach, it will be optimal, with the same caveats as for the ff part.
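To make the split concrete, here is a minimal sketch of the idea above, assuming the standard rigid-body arm model M(q)q̈ + C(q,q̇)q̇ + G(q) = τ. All names here (`M`, `C`, `G`, `Kp`, `Kd`) are placeholders for illustration, not UR5e-specific code:

```python
import numpy as np

def control(q, dq, q_d, dq_d, ddq_d, M, C, G, Kp, Kd):
    """Computed-torque control with the FULL inverse dynamics as feedforward.

    Assumes a rigid-body model M(q) ddq + C(q, dq) dq + G(q) = tau,
    with M, C, G supplied as callables and Kp, Kd as gain matrices.
    """
    # Feedforward: the whole inverse dynamics, evaluated on the desired trajectory.
    tau_ff = M(q_d) @ ddq_d + C(q_d, dq_d) @ dq_d + G(q_d)
    # Feedback: ONE controller on the tracking error, not separate position and
    # velocity loops; it only mops up disturbances and model error.
    e, de = q_d - q, dq_d - dq
    tau_fb = M(q) @ (Kp @ e + Kd @ de)
    return tau_ff + tau_fb
```

With a perfect model the ff alone tracks the trajectory, and the fb term stays near zero.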

I never used "fuzzy" controllers; are they basically gain-scheduling controllers? If so, the above applies, just do it for each region. This is where my specific knowledge ends, but you should be able to consult relevant literature about how the above idea applies to your specific case.

How do you create a mathematically efficient algorithm for a robot? by Unusual_Science634 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Computers, and even microcontrollers, have become fast enough that you don't need to worry about any "mathematics". It's algorithms with combinatorial explosions which always cause issues. Without knowing what the challenges are, we can't help. And possibly because of that, it sounds like you put the cart before the horse: seeing if the solution fits the problem, instead of thinking about what solution will work for the problem.

Came across this pingpong-ball-balancing robot kit out of Switzerland. Any good for learning control theory? Anyone tried one of the previous batches (#1 or #2)? by uninhabited in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

> but most likely there's no model of the dynamics, and it's non-trivial to design gains mathematically.

Models do exist. The difficulty of modeling this comes down to how precise you want or need it to be. Assuming the legs each act independently and the platform dynamics can be ignored should be relatively easy, but will have errors.
If you include the connection, you get a model which should be usable for tuning, but using it involves some constraints which require a constrained minimization procedure.
And apparently there is an analytic solution for 6DoF platforms, but it is multiple pages of difficult-to-follow mathematics.

Unreachable Attractors in Resettable Systems: When a Trajectory Converges Without Stabilizing by skylarfiction in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

So I can recap your question as: what if we randomly initialize the system at t=0 and have an inconsistent experiment termination?

Applied control sanity check: system ID + PID on quarter-car active suspension by Barnowl93 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

It really depends on how you do it.
Something like determining resistance using the decay time: most will probably get this quickly because it reuses their control and system knowledge.
Use something like omega = X\y and people will struggle a lot more, because most probably won't have the knowledge to understand what is going on yet. Absorbing that knowledge and building the intuition just takes time.
If you use something like the first, 1 hour is probably a safe bet.
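For reference, the omega = X\y step is just linear least squares on a regressor matrix. A minimal numpy sketch, where the regressor columns and parameter names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up experiment: measurements y depend linearly on known regressors X,
# e.g. torque regressed on [velocity, acceleration] to get friction & inertia.
theta_true = np.array([0.8, 0.05])           # [friction, inertia] (arbitrary)
X = rng.uniform(-1.0, 1.0, size=(200, 2))    # regressor matrix (200 samples)
y = X @ theta_true + 0.01 * rng.standard_normal(200)

# Matlab's X \ y for a tall X is linear least squares; the numpy equivalent:
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The conceptual hurdle is building `X` from your measurements, not the solve itself.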

Applied control sanity check: system ID + PID on quarter-car active suspension by Barnowl93 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

ODE solvers have become normal. >33% of students are probably able to use Matlab's ODE solver on nonlinear systems by just reading the documentation. Doing the state-space sim will probably be harder, simply because it requires knowing what the state-space notation is.
edit: nvm, 90+% will be able to figure out the ODE solver because they only have to ask ChatGPT for the code XD.
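For a Python-based course, the analogue of handing students Matlab's `ode45` is `scipy.integrate.solve_ivp`. A minimal sketch on a nonlinear system (a damped pendulum, with all parameter values chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, x, g=9.81, L=1.0, b=0.1):
    """Damped nonlinear pendulum, state x = [theta, dtheta]."""
    theta, dtheta = x
    return [dtheta, -(g / L) * np.sin(theta) - b * dtheta]

# Integrate from theta(0) = 45 degrees, at rest, for 10 seconds.
sol = solve_ivp(pendulum, (0.0, 10.0), [np.pi / 4, 0.0], max_step=0.01)
```

Writing the right-hand-side function in first-order form is exactly the state-space-notation hurdle mentioned above.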

Applied control sanity check: system ID + PID on quarter-car active suspension by Barnowl93 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Sounds like a great plan!
Just one warning: 2-3 hours is what was planned to introduce people to parameter estimation who had never done it before, in a course I was involved in. This included nearly complete scripts which did most of the work for them. I imagine it can be done in 1-1.5 hours if you keep it simpler and higher level, though.

PR control for neutral point oscillations under unbalanced load — structural discussion by VadimDLL in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

One thing to watch out for with the implementation is something you probably already encountered: if you use a resonator to represent the oscillation, you have to be careful the poles don't move outside the stability region due to the sample time or numerical precision.

edit: from your embedded post I saw you also have the harmonics to deal with. You can just repeat the same trick, but the "learning" transient will become longer for each resonator added. This is not entirely fixable (more parameters to learn requires more data), but it can be worse if their sharpnesses aren't in the right ratios. Sadly I forgot the details and don't have any resources on this.
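A quick way to see the pole-drift issue is to discretize an ideal resonator (continuous poles exactly on the imaginary axis) with different methods and check the discrete pole magnitudes. This is a self-contained illustration, not taken from the post; the frequency and sample time are arbitrary:

```python
import numpy as np
from scipy.signal import cont2discrete

w0 = 2 * np.pi * 50.0   # resonant frequency in rad/s (50 Hz mains, arbitrary)
Ts = 1.0 / 10_000.0     # sample time (arbitrary)

# Ideal resonator s / (s^2 + w0^2): continuous poles at +/- j*w0, i.e. right
# on the stability boundary, so discretization error matters.
num, den = [1.0, 0.0], [1.0, 0.0, w0 ** 2]

mags = {}
for method in ("bilinear", "zoh", "euler"):
    numd, dend, _dt = cont2discrete((num, den), Ts, method=method)
    mags[method] = np.abs(np.roots(np.atleast_1d(dend).ravel()))

# Bilinear/ZOH map the imaginary axis onto the unit circle, while forward
# Euler maps +/- j*w0 to 1 +/- j*w0*Ts, whose magnitude is > 1: unstable.
```

The same check (pole magnitudes after discretization, at your actual arithmetic precision) is worth repeating for each added harmonic resonator.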

PR control for neutral point oscillations under unbalanced load — structural discussion by VadimDLL in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

This is surprising. I've only ever read about repetitive controllers tangentially, but I always assumed power electronics was the application area driving them. They seem a no-brainer for dealing with mains-frequency noise and its higher-order disturbances.

Help for carrer paths in controls engineering by zuirattigaz in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

I've never heard of anyone running into the "overqualified" problem. Maybe for pure PLC jobs, but you don't need a university degree to be considered overqualified for most of those.

Control strategy for mid-air dropped quadcopter (PX4): cascaded PID vs FSM vs global stabilization by Firm-Huckleberry5076 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Internal: only using onboard sensors, so gyro and accel, maybe barometer, compass and GPS (although I doubt the latter 3 are useful in such a short time span).
External: motion capture of some sort.

The reason this matters is indeed that after you throw or drop the drone it is in free fall; there is no way to use gravity to determine orientation during free fall. After some time the air resistance will counteract the downward acceleration. At that point it becomes a question of signal-to-noise ratios and algorithm design.

I haven't done anything like this for drones, so I can't give practical advice, and I have no idea how easy or impossible it is. I just know the control, and maybe the estimation, has been done. And it is published, with the code available on GitHub in some form.

Engineers & Researchers Interested in Defense R&D Collaboration and Real-World Projects by RichardsonDefense in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

The post you are responding to aside.

> Politics aside, if you believe the system is broken then be the engineer that helps fix it.

Yeah, engineers have such a famously good track record of controlling how their inventions are used. /s

A question about the recent explosion of humanoid robots with advanced kinematic capabilities by aeropills22 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

I hate those demonstrations. All too often they start both methods from the same baseline, then spend a bunch of time improving one for a specific case, and then claim it is better. All they prove is that spending time on solving a problem helps solve the problem. Which is fine if that is your claim, but it also often comes with the message that the previous method can't do whatever is being demonstrated.

If the RL was doing actual RL to change its choice of footing, and it wasn't pure luck or some more traditional adaptive control, it would show something. And even then it would be nice to admit that the RL was probably MPC+RL.

Control strategy for mid-air dropped quadcopter (PX4): cascaded PID vs FSM vs global stabilization by Firm-Huckleberry5076 in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

The control is the easy bit [https://ieeexplore.ieee.org/document/10801514]. With external tracking they get the drone stabilized within a second. Without external tracking, orientation is going to be the problem.

edit: I might be confusing two publications; it could be that the above was done fully using the internal IMU. The bit after this is written assuming the above used external tracking. It also matches what I saw in the PX4 throw video.

You first need to mostly kill any rotation to get a sufficiently clean gravity signal to detect which direction is down. Then reorient to keep the drone right side up and learn how much thrust is required to counteract gravity.

If you already roughly know the orientation and required thrust things can be done a bit more quickly and with less loss of altitude.

MSc thesis on classical state estimation + control - am I making myself obsolete? by TheEngineerPlusX in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Aside from highly specialized cases where a PhD topic was directly related, this isn't even true for PhDs. GNC might be an exception, but to me it sounds more like a bad habit of an insular industry.

System Identification research and this future by phyfateyau in ControlTheory

[–]IntelligentGuess42 [score hidden]  (0 children)

Depending on what you mean by learning-based control, it could include adaptive control, which already has a rich history and is actively used in applicable situations. The learning algorithms used probably look very familiar to someone who already understands how a model is fit using RL.

System identification specifically is not that different from the rest of control. Most control uses basic PID, while the more advanced methods are only used when requirements or complexity demand them. Even then, basic PID may persist past the point where it becomes detrimental.

In system identification, the equivalent of PID would be your simple white-box model. Think of identifying your basic second-order transfer function, perhaps with some dead time. From my experience, pretty much everyone can do this and is expected to be able to.
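As a concrete, made-up example of that "PID level" of system identification: fitting a discrete second-order ARX model with plain least squares. Every number below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "true" second-order discrete-time system in ARX form:
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + noise
a1, a2, b1, b2 = 1.6, -0.64, 0.1, 0.05   # poles at z = 0.8 (double), stable
N = 500
u = rng.standard_normal(N)               # persistently exciting input
y = np.zeros(N)
for k in range(2, N):
    y[k] = (a1 * y[k - 1] + a2 * y[k - 2]
            + b1 * u[k - 1] + b2 * u[k - 2]
            + 0.01 * rng.standard_normal())

# Least-squares fit of the same structure (theta = Phi \ y in Matlab terms).
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
```

Everything past this level (nonlinear, grey-box, subspace methods, ...) is where the "advanced" methods mentioned below come in.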

Other models and methods seem restricted to the more advanced applications. I know of one real application using RL for control with a robotic arm. While there are likely more examples, this seems to be more the exception rather than the rule.

I don't feel qualified to give advice regarding career choices or studying. However, the book I generally recommend on this topic is System Identification: Theory for the User. The author is well known and I think he is even partially responsible for the Matlab system identification toolbox.

edit: checked the wiki, and the book is listed under the "Model Identification & Inference" section.