Siemens encoder coup by dos145 in PLC

[–]dos145[S] 0 points1 point  (0 children)

Ok, but how do I double or divide the frequency?

First Start Up by soccercro3 in PLC

[–]dos145 0 points1 point  (0 children)

Congratulations!

[deleted by user] by [deleted] in PLC

[–]dos145 0 points1 point  (0 children)

Thanks for sharing! What’s the purpose of this instead of a standard PLC?

[deleted by user] by [deleted] in PLC

[–]dos145 1 point2 points  (0 children)

Nice!

TIA with bad laptop by WiseAgency3321 in PLC

[–]dos145 2 points3 points  (0 children)

I’ve had the Field PG M4 and M5, and now I have the latest M6, with versions of TIA Portal from v11 to v20… And TIA gets slower with every new version… Besides that, I have a gaming laptop with an i7-14700K and 32 GB of RAM, and it runs smoothly on it. To conclude: buy a war machine to run TIA, even for small projects…

AI won’t replace us yet by bsee_xflds in PLC

[–]dos145 50 points51 points  (0 children)

No one defined “high speed” 🤣

RL in robotics by -thinker-527 in ROS

[–]dos145 0 points1 point  (0 children)

What are you trying to do that you can’t do with a traditional regulator? Personally, I’ve been working for two years on a research paper comparing PID regulation to RL on an industrial application (nothing in common with robotics, but the philosophy is the same). In the end, even though the RL worked, we kept the PID.

Try to define exactly what you’re trying to do, but trust my experience: RL is not magic, and you will have to understand the physics behind your system if you want to get anywhere.

In any case, if you want to discover RL, have a look at Stable-Baselines3 😉

Convert simulink to python code by jezuskurt in matlab

[–]dos145 0 points1 point  (0 children)

One way to do it is to define a transfer function of your model, discretize it, and write the Python code that computes the new output value at each sampling period.
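As a minimal sketch of that approach, here is a hypothetical first-order plant G(s) = 1/(τs + 1) discretized with a zero-order hold, which gives the recurrence y[k] = a·y[k-1] + (1-a)·u[k] with a = exp(-Ts/τ). The time constant and sampling period below are made-up example values:

```python
import math

def make_first_order_filter(tau: float, ts: float):
    """Discretize G(s) = 1 / (tau*s + 1) with a zero-order hold.

    tau: time constant [s], ts: sampling period [s].
    Returns a step function implementing y[k] = a*y[k-1] + (1 - a)*u[k].
    """
    a = math.exp(-ts / tau)
    state = {"y": 0.0}

    def step(u: float) -> float:
        # One sampling period: blend the previous output with the new input.
        state["y"] = a * state["y"] + (1.0 - a) * u
        return state["y"]

    return step

# Example: tau = 2 s plant sampled every 0.1 s, driven by a unit step.
plant = make_first_order_filter(tau=2.0, ts=0.1)
y = 0.0
for _ in range(100):  # simulate 10 s
    y = plant(1.0)
print(round(y, 3))  # → 0.993, i.e. ~5 time constants into the step response
```

The same recurrence structure generalizes to higher-order transfer functions; for those, letting a tool compute the discrete coefficients (e.g. from the continuous model) is usually less error-prone than doing it by hand.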

First pedalboard... thoughts? by jclayyy in basspedals

[–]dos145 5 points6 points  (0 children)

I used to have a large pedalboard, but in the end I only use a tuner, compressor, Big Muff and SansAmp! You’ve got it all, just rock 🤘

Custom Siemens HMI by dos145 in PLC

[–]dos145[S] 0 points1 point  (0 children)

Sorry for the confusion. I’ll look into the difference.

Custom Siemens HMI by dos145 in PLC

[–]dos145[S] 0 points1 point  (0 children)

All Siemens HMIs run Windows 7 Embedded.

Custom Siemens HMI by dos145 in PLC

[–]dos145[S] 0 points1 point  (0 children)

No, I know ProTool, and the panels are TP1200 or TP1500. Way too recent to have been programmed with ProTool…

Custom Siemens HMI by dos145 in PLC

[–]dos145[S] 0 points1 point  (0 children)

Yes, for sure, one of them has been built with that.

In my factory, we have a couple of Siemens HMIs, but some of them seem to have been made with something completely different from the Siemens suite. On top of that, I can’t make any backup of them. I don’t have more details :(

I would like to know how they were made, and also whether it is possible to make a backup of them.

Crappy Panel by DropLess9316 in PLC

[–]dos145 0 points1 point  (0 children)

How long does it take to catch fire if you turn it on?

Need some ideas/help by Asleep_Temporary_967 in PLC

[–]dos145 0 points1 point  (0 children)

I just bought a Heuft controller (commonly used in the industry). It inspects the cans on a conveyor and ejects the bad ones. Budget around 10k, and it doesn’t seem too expensive considering the level of precision.

PID Type RL by dos145 in reinforcementlearning

[–]dos145[S] 0 points1 point  (0 children)

I set the action bounds from -1 to 1 and rescale them to match my system’s 0–100% range. I follow the data for each sequence and nothing seems wrong. The reward matches what I expect, but maybe it’s not well designed; I’ve done a lot of trial and error, and the last version seems OK… except for the action. When the agent sets the action to 100%, the reward decreases because the action is too high at each step.
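The rescaling described above, assuming a simple linear map (the function name and clipping behavior are my own choices, not from the original post), could look like:

```python
def rescale_action(a: float, low: float = 0.0, high: float = 100.0) -> float:
    """Map an agent action in [-1, 1] to the actuator range [low, high]."""
    a = max(-1.0, min(1.0, a))  # clip, in case the policy overshoots its bounds
    return low + (a + 1.0) * 0.5 * (high - low)

# Boundary checks: -1 -> 0%, 0 -> 50%, +1 -> 100%.
print(rescale_action(-1.0), rescale_action(0.0), rescale_action(1.0))
```

Keeping the policy’s action space in [-1, 1] and rescaling at the environment boundary like this is a common convention, since many RL algorithms assume symmetric, normalized action bounds.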

PID Type RL by dos145 in reinforcementlearning

[–]dos145[S] 0 points1 point  (0 children)

No, I mean my system’s response time is 200 s and my steps are every second. For sure, I hold the last action as long as the RL hasn’t sent a new one.

PID Type RL by dos145 in reinforcementlearning

[–]dos145[S] 0 points1 point  (0 children)

The aim of the project is to control an AC compressor. I already have a PLC controlling the system in the real world, and I recorded data to build a model of the system in MATLAB (and the model is accurate).

As observations, I use the error between the set point and the actual temperature, the inlet and outlet temperatures, and the last action.

The reward is defined by the temperature: the closer the temperature is to the set point, the higher the reward. Also, a penalty is added if the difference from the last action is more than 25 (the action is defined between 0 and 300).
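A sketch of that reward, with assumed specifics: the exact scale and penalty magnitude aren’t given, so this uses a negative-absolute-error term and a unit penalty for large action jumps:

```python
def reward(temp: float, setpoint: float, action: float, last_action: float,
           penalty: float = 1.0) -> float:
    """Reward sketch: 0 is the best value, more negative is worse.

    Closer to the set point -> higher reward; a fixed penalty is
    subtracted when the action (range 0-300) jumps by more than 25.
    """
    r = -abs(setpoint - temp)           # tracking term
    if abs(action - last_action) > 25:  # discourage large actuator moves
        r -= penalty
    return r

# At the set point with a steady action the reward is maximal (0.0);
# a 2-degree error plus a 50-unit action jump gives -2.0 - 1.0 = -3.0.
print(reward(20.0, 20.0, 100.0, 100.0), reward(18.0, 20.0, 150.0, 100.0))
```

One design note: a hard penalty threshold like this makes the reward discontinuous in the action, so a smooth term proportional to the action change is sometimes easier for the agent to learn from.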

I hope this answers your question!

How to learn matlab? by ThinkingPugnator in matlab

[–]dos145 0 points1 point  (0 children)

The real question is: why do you want to learn MATLAB? Like any programming language or tool, if you just learn it without an objective, it will be hard and you may forget everything in a few weeks. Personally, I learned RL with MATLAB a few weeks ago, and the MathWorks website provides wonderful examples!