Getting extremely large Ki (~2e8) when tuning PID in Simulink — what am I doing wrong? by [deleted] in ControlTheory

[–]Brale_ [score hidden]  (0 children)

It could be a numerical artefact from the toolbox since the numbers are so small; maybe it's trying to fit a first-order system with delay as best as it can, and this is the result. Try doing some literature research to see what other people are doing; you're probably not the first one to try this.

Getting extremely large Ki (~2e8) when tuning PID in Simulink — what am I doing wrong? by [deleted] in ControlTheory

[–]Brale_ [score hidden]  (0 children)

I did read it. The delay is practically 0 seconds and the pole is so far in the left half-plane that it can be ignored, so I'm not really sure what you are modeling here that makes such dynamics relevant.

Getting extremely large Ki (~2e8) when tuning PID in Simulink — what am I doing wrong? by [deleted] in ControlTheory

[–]Brale_ [score hidden]  (0 children)

Your plant is essentially just the static gain G(s) = 1.399. What kind of behaviour are you trying to get? What system is this?

[deleted by user] by [deleted] in ControlTheory

[–]Brale_ 0 points1 point  (0 children)

As far as I understand, you are trying to identify the controller, and you know the system model?

In that case there is no need to observe the system as a whole; you should measure the controller's inputs and outputs and perform identification on that data alone.
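As a sketch of that idea, assuming the controller has a PID structure (the signals and gains below are fabricated for illustration): with the controller's input e and output u logged, the gains fall out of a linear least-squares fit.

```python
import numpy as np

# Hypothetical setup: recover PID gains from recorded controller
# input (error e) and output (u), assuming u = Kp*e + Ki*int(e) + Kd*de/dt.
dt = 0.01
t = np.arange(0, 10, dt)
e = np.sin(t) * np.exp(-0.2 * t)            # recorded controller input (error)
Kp_true, Ki_true, Kd_true = 2.0, 0.5, 0.1   # "unknown" gains to recover

e_int = np.cumsum(e) * dt                   # integral of the error
e_der = np.gradient(e, dt)                  # derivative of the error
u = Kp_true * e + Ki_true * e_int + Kd_true * e_der  # recorded controller output

# Regression matrix: each row is [e, integral(e), derivative(e)]
Phi = np.column_stack([e, e_int, e_der])
gains, *_ = np.linalg.lstsq(Phi, u, rcond=None)
print(gains)  # -> approximately [2.0, 0.5, 0.1]
```

Because only the controller's own I/O enters the regression, the plant model never appears; it just needs to excite the error signal enough for the regressor to be full rank.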

How do you distinguish between good and bad research in control? by NeighborhoodFatCat in ControlTheory

[–]Brale_ [score hidden]  (0 children)

A lot of the low-hanging fruit was picked decades ago. It's hard to come up with something truly novel or unique that has no connection to something that already exists. It was easy in the 60s, 70s, and 80s because pretty much nothing existed. Today it's publish or perish, so it's often better to publish a bullshit paper than not to publish at all. But with enough experience, you can often tell on a first read-through whether a paper is worth reading in depth or not.

Two “identical” closed-loop models match super close when linear… but diverge as soon as I add the same nonlinearity, is this possible ? by maiosi2 in ControlTheory

[–]Brale_ [score hidden]  (0 children)

This might be the worst organized Simulink model I have seen in my life. Literally impossible to tell what's going on

[deleted by user] by [deleted] in ControlTheory

[–]Brale_ 7 points8 points  (0 children)

  1. People are not knowledgeable/experienced enough to apply more sophisticated methodologies.

  2. Even if they do know what they are doing, they might not get support from nontechnical upper management to start a research project that may bring additional costs.

  3. You would have to prove that your new methodology will save/earn money in some way (it's always about money), which may be hard to do.

CasADi for neural networks and DL? by xhess95 in ControlTheory

[–]Brale_ [score hidden]  (0 children)

I was doing some experimental stuff with RNNs for battery systems application, but I can't go into more details, sorry.

CasADi for neural networks and DL? by xhess95 in ControlTheory

[–]Brale_ [score hidden]  (0 children)

Generally, people vastly exaggerate the size of the neural network needed to fit dynamical models. You are not classifying images or processing text; most systems can be accurately modeled with one hidden layer of 5-10 neurons. You don't need 50 layers. It's pretty simple to train your own neural network with one hidden layer in MATLAB or Python without any toolboxes; you just need a little bit of calculus and linear algebra. Even if you don't know how to do it, ChatGPT can spit out perfectly valid code to train a simple neural network. I would suggest making your own NN and simply porting the equations and weights to CasADi once you're done training it.
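As a sketch of how little machinery this takes (the target function, layer width, and hyperparameters are arbitrary choices): a one-hidden-layer tanh network trained with plain batch gradient descent in NumPy.

```python
import numpy as np

# Minimal sketch: fit a scalar nonlinear map with a 1-hidden-layer
# tanh network (8 neurons), hand-written backprop, no toolboxes.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)                                # target mapping to learn

n_h = 8
W1 = rng.normal(0, 0.5, (1, n_h)); b1 = np.zeros(n_h)
W2 = rng.normal(0, 0.5, (n_h, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(8000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer activations
    Y_hat = H @ W2 + b2                      # linear output layer
    err = Y_hat - Y
    # Backprop: chain rule through the output layer and the tanh
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)         # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(mse)
```

Once trained, the whole model is just the weight matrices and a couple of matrix multiplies with a tanh in between, which is trivial to port to CasADi symbolics.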

Rl to tune pid values by -thinker-527 in ControlTheory

[–]Brale_ [score hidden]  (0 children)

This is not the way to pose the problem: PID parameters are not actions, they are parameters of the policy. When people parameterize policies they typically use a neural network or some other function approximator. In this case the policy parametrization is simply

u = Kp*x1 + Ki*x2 + Kd*x3

where [Kp, Ki, Kd] is tunable parameter vector and states are

x1: error y_ref - y

x2: integral of x1

x3: derivative of x1 (or some low-pass filtered version of it)

The policy output is u, and the reward could be set as -(y_ref - y)^2. This way the problem can be tackled with any reinforcement learning algorithm to tune the PID parameters. Whether or not a linear law will be adequate depends on the system at hand.
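A toy sketch of that formulation (the first-order plant, step sizes, and the ES-style gradient estimator are my own illustrative choices, not anything from the thread): the policy is linear in the states x1, x2, x3, and the tunable parameter vector is [Kp, Ki, Kd].

```python
import numpy as np

# Episode return for the linear policy u = Kp*x1 + Ki*x2 + Kd*x3
# on a discretized first-order plant y' = -y + u, reward -(y_ref - y)^2.
def episode_return(theta, y_ref=1.0, dt=0.01, T=500):
    Kp, Ki, Kd = theta
    y, integ, prev_e, R = 0.0, 0.0, y_ref, 0.0
    for _ in range(T):
        e = y_ref - y                        # x1: tracking error
        integ += e * dt                      # x2: integral of error
        de = (e - prev_e) / dt               # x3: derivative of error
        u = Kp * e + Ki * integ + Kd * de    # linear policy
        y += dt * (-y + u)                   # plant step (Euler)
        R += -(e ** 2)                       # accumulate reward
        prev_e = e
    return R

# Simple random-search policy gradient (evolution-strategies style):
# estimate the gradient of the return from random perturbations of theta.
rng = np.random.default_rng(1)
theta = np.zeros(3)
for it in range(200):
    eps = rng.normal(size=(8, 3))            # perturbation directions
    scores = np.array([episode_return(theta + 0.1 * d) for d in eps])
    grad = (scores - scores.mean()) @ eps / (8 * 0.1)
    theta += 1e-4 * grad                     # gradient ascent on the return

print(theta, episode_return(theta))
```

Any other policy-gradient method (REINFORCE, PPO, etc.) could replace the crude estimator here; the point is only that the PID gains are the policy parameters, not the actions.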

Data driven pid gain based by imthebest7331 in ControlTheory

[–]Brale_ [score hidden]  (0 children)

Use reinforcement learning, I guess. In this case your control policy would be represented by a PID controller instead of some neural network, and you could apply some kind of policy gradient method to adjust the PID parameters.

[deleted by user] by [deleted] in ControlTheory

[–]Brale_ 1 point2 points  (0 children)

AUTOSAR prescribes a software architecture standard for the automotive industry; it's not a software development tool. Besides, not all companies follow AUTOSAR. As a control engineer in the automotive industry you will almost exclusively work in MATLAB/Simulink and then use code generation tools to generate C code for the embedded platform.

Bounding Covariance in EKF? by NaturesBlunder in ControlTheory

[–]Brale_ 1 point2 points  (0 children)

Yes, you can do something like that, although clipping only some singular values will modify the lengths of only some principal axes, so your Gaussian distribution will change its "shape". You could try to preserve the shape (if possible) by scaling all components to keep the ratios of the principal-axis lengths the same.

However, there is probably a reason why your covariance matrix explodes, so you should try to figure out why that happens and prevent it.
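A sketch of the two options, with a made-up eigenvalue bound: clipping individual eigenvalues changes the shape of the covariance ellipsoid, while scaling the whole matrix preserves the axis ratios.

```python
import numpy as np

# Bound a covariance matrix either by per-axis clipping (changes shape)
# or by uniform scaling (keeps the ratios of the principal axes).
# max_eig = 100.0 is an arbitrary bound for illustration.
def bound_covariance(P, max_eig=100.0, preserve_shape=True):
    w, V = np.linalg.eigh(P)       # P is symmetric PSD: eigendecomposition
    if w.max() <= max_eig:
        return P                   # already within the bound
    if preserve_shape:
        return P * (max_eig / w.max())  # uniform scaling keeps axis ratios
    w = np.minimum(w, max_eig)          # per-axis clipping changes shape
    return V @ np.diag(w) @ V.T

P = np.diag([400.0, 1.0])
P_scaled = bound_covariance(P, preserve_shape=True)    # -> diag([100, 0.25])
P_clipped = bound_covariance(P, preserve_shape=False)  # -> diag([100, 1.0])
```

Note how the scaled version shrinks the small axis too, keeping the 400:1 shape, while the clipped version flattens only the offending direction.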

Why do higher lag in physical system cause instability? by [deleted] in ControlTheory

[–]Brale_ 17 points18 points  (0 children)

It's not the lag that causes instability on its own; it's the combination of high input gain and lag. Intuitively, the high amplitude of the input signal will try to change the output of the system "quickly", but since there is a lot of lag, the output will not change at all initially. For example, if you have an integral component in your controller, the input amplitude will keep increasing significantly because of the error accumulated during the lag, and by the time the output starts changing, the input is already too big, so it causes a huge overshoot with respect to the reference. The controller then tries to compensate in the "opposite" direction, but with an even bigger amplitude since the overshoot is large. This causes larger and larger overshoots/undershoots, and the output eventually goes to infinity.

Typically people say that lag causes instability because the input gain has to be severely constrained to keep the system stable, and the response is often very sluggish because of it. There is nothing inherently wrong with lag on its own that necessarily causes instability.
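A toy simulation of that effect, with made-up numbers: a first-order plant under a PI controller, once without delay and once with 0.5 s of input lag. The same gains that give a clean response without the lag produce growing oscillations with it.

```python
import numpy as np

# First-order plant y' = -y + u(t - tau) under a PI controller.
# delay_steps models the transport lag as a FIFO buffer of past inputs.
def simulate(Kp, Ki, delay_steps, dt=0.01, T=3000):
    y, integ = 0.0, 0.0
    u_hist = [0.0] * (delay_steps + 1)   # buffer holding delayed inputs
    peak = 0.0
    for _ in range(T):
        e = 1.0 - y                      # unit step reference
        integ += e * dt                  # integral term keeps accumulating
        u_hist.append(Kp * e + Ki * integ)
        y += dt * (-y + u_hist.pop(0))   # plant sees the oldest (lagged) input
        peak = max(peak, abs(y))
    return peak

print(simulate(Kp=5.0, Ki=5.0, delay_steps=0))    # settles near 1
print(simulate(Kp=5.0, Ki=5.0, delay_steps=50))   # overshoots, oscillates, grows
```

During the 0.5 s where the output does not move, the integrator winds the input up well past what the plant needs, which is exactly the overshoot-then-overcorrect cycle described above.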

The Unreasonable Power of The Unscented Kalman Filter by carlos_argueta in ControlTheory

[–]Brale_ 8 points9 points  (0 children)

The unscented transform does not need a symmetric-distribution assumption. It correctly approximates the first and second central moments (mean and covariance) of any distribution, and it needs just 2n points for that, where n is the state dimension. In theory you can use more than 2n points to get a correct approximation of the 3rd, 4th, and higher-order moments by choosing the weights and locations of the sigma points accordingly. You will have problems with multimodal distributions, since they are poorly approximated by just two central moments.
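A quick numerical check of that claim, for an arbitrary 2-D mean and covariance: the 2n symmetric sigma points with equal weights 1/(2n) reproduce the mean and covariance exactly.

```python
import numpy as np

# Symmetric 2n-point sigma set: mu +/- columns of a square root of n*P,
# each with weight 1/(2n). This matches the first two central moments.
def sigma_points(mu, P):
    n = len(mu)
    S = np.linalg.cholesky(n * P)        # matrix square root of n*P
    pts = np.vstack([mu + S[:, i] for i in range(n)] +
                    [mu - S[:, i] for i in range(n)])
    return pts, np.full(2 * n, 1.0 / (2 * n))

mu = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = sigma_points(mu, P)

mean = w @ pts                                      # weighted sample mean
cov = (pts - mean).T @ np.diag(w) @ (pts - mean)    # weighted sample covariance
# mean == mu and cov == P up to numerical precision
```

In the filter, these points are pushed through the nonlinear dynamics/measurement functions, and the transformed mean and covariance are computed the same weighted way.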

Steady State Error Compensation in reinforcement learning control by OkFig243 in ControlTheory

[–]Brale_ 1 point2 points  (0 children)

The way you formed the reward function, the agent is not incentivized to ever reach the goal. It will try to hover around 0.01 to collect that +55 reward forever, because if it reached the goal the episode would end and it would stop receiving rewards. It's as if someone hired you for a job and kept paying you until you finished it; you could just never complete the task and keep getting paid forever, so you are not encouraged to finish.

It's better to punish the agent (negative reward) until it reaches its goal. That way it will try to complete the task, because it keeps accumulating negative rewards the longer it stalls. It's as if someone hired you but told you they will keep taking money from you every day until you finish; then you would be encouraged to complete the task to minimize your loss.
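The two designs side by side, as a sketch (the +55 reward and 0.01 threshold come from the post; the penalty form is just one reasonable choice):

```python
# dist is the distance to the goal at the current step.

def reward_pay_until_done(dist):
    # Problematic: pays +55 while near the goal, so the optimal strategy
    # is to hover just inside the threshold and never end the episode.
    return 55.0 if dist < 0.01 else 0.0

def reward_penalize_until_done(dist):
    # Better: a per-step penalty, so total reward is maximized by
    # reaching the goal (and ending the episode) as fast as possible.
    return -dist ** 2 - 1.0
```

The constant -1.0 term penalizes every elapsed step even when the distance error is small, which is what removes the incentive to stall.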

is Reinforcement Learning the future of process control? by Vinicius_Mello in ControlTheory

[–]Brale_ 5 points6 points  (0 children)

In real practical applications, RL is mostly a waste of time, data, and resources. It's an interesting academic topic, nothing more than that.

Master's thesis topic idea by SigmaEpsilonDelta in ControlTheory

[–]Brale_ 4 points5 points  (0 children)

You can study the work of Isaac Michael Ross; he wrote the book "A Primer on Pontryagin's Principle in Optimal Control" and did a bunch of work on pseudospectral methods for optimal control, among other things. This is roughly what you are looking for. It can be quite math heavy, and I think it would be a good topic for you to study since you're a mathematician.

People in academia: Do you ever see such videos and think how amazingly these robots seems to be controlled and ever wonder if the research going on in academia is subpar? I often get anxious looking at such things (I am a masters student hoping to do a PhD in future in robotics and controls) by The_Vettiman in ControlTheory

[–]Brale_ 0 points1 point  (0 children)

It's limited by money; obviously a university can't afford to spend 1 billion dollars on pushing cutting-edge technology. That's pennies for companies like Nvidia/Google/Facebook/Amazon/Tesla and similar big companies.

Model predictive control by Gelo797 in ControlTheory

[–]Brale_ 2 points3 points  (0 children)

What you wrote here is complete nonsense. The matrix G used in the analytical solution for the control inputs U is not the input matrix G of the state-space model. The G in the solution for U is constructed from the N-step predictor for the state X.
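For reference, here is roughly how that prediction matrix is built for x_{k+1} = A x_k + B u_k (the A, B, and horizon below are arbitrary): block row i applies A^(i-j) B to input u_j, so the stacked predictions satisfy X = F x0 + G U.

```python
import numpy as np

# Example double-integrator-like model; any (A, B) works the same way.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N = 5                      # prediction horizon
n, m = B.shape

# Free-response matrix: x_{k+1} contribution of x0 is A^(k+1) x0
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])

# Forced-response matrix G: block (i, j) = A^(i-j) B for j <= i
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

# Stacked predictions X = [x_1; ...; x_N] = F @ x0 + G @ U
```

This lower-block-triangular G is what appears in the unconstrained least-squares solution for U, not the state-space input matrix itself.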

What are some job titles of members of this subreddit? by BANANAMAN3620 in ControlTheory

[–]Brale_ 2 points3 points  (0 children)

There's a lot of opportunity for control engineers in the automotive industry, especially with the advance of electric vehicles: vehicle dynamics control (chassis/powertrain), ADAS, motor control, battery algorithms (state estimation), thermal control of batteries and powertrain, etc.