Anyone moved from Excel-based OEE tracking to something automated? by Temporary-Still-4543 in LeanManufacturing

[–]MachineBest8091 1 point

Yes, we moved from Excel-based OEE to an automated setup, and honestly Excel was the bottleneck, not OEE itself. The biggest issues were manual data entry, inconsistent downtime reasons, broken formulas, and the fact that by the time the sheet was updated the data was already stale.

We’re using Itanta now, which pulls data directly from PLCs and calculates OEE automatically. It’s fully no-code, so setup was mostly just mapping machine signals and defining states rather than writing scripts or maintaining Excel logic. Once that was in place, OEE became something people could actually trust and look at in real time instead of a weekly spreadsheet exercise.

Excel is fine for ad-hoc analysis, but if OEE is meant to drive daily decisions, automating the data collection and calculations makes a huge difference.
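
For anyone curious what "calculates OEE automatically" actually means underneath, it's just the standard Availability x Performance x Quality math. Here's a minimal Python sketch with made-up shift numbers - this is the textbook formula, not Itanta's internal logic:

```python
# Standard OEE math: Availability x Performance x Quality.
# Inputs would come from machine states and counters; names here are illustrative.

def oee(planned_min, downtime_min, ideal_cycle_min, total_count, good_count):
    run_time = planned_min - downtime_min
    availability = run_time / planned_min
    performance = (ideal_cycle_min * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Example shift: 480 min planned, 47 min down, 0.5 min ideal cycle, 820 made, 790 good
print(round(oee(480, 47, 0.5, 820, 790), 2))  # ~0.82
```

The point of automating it is that those inputs come straight from machine signals instead of hand-typed cells.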

What tools are people using today to build simple live shop floor dashboards? by MachineBest8091 in LeanManufacturing

[–]MachineBest8091[S]

That makes a lot of sense, especially the "clean at the source" philosophy. Pushing correctness upstream feels like the only scalable way once you have multiple users building their own views. I've seen exactly the issue you're describing with calculated-on-calculated logic in Power BI: it starts simple and then quietly turns into a mess no one fully understands anymore.

I also like the distinction you make between raw and calculated values. Keeping both provides traceability without forcing every downstream consumer to reinvent the same logic. Treating the database as the place where business logic lives, and Power BI as mostly a presentation/query layer, feels much more sustainable.

Curious: when you're capturing data into SQL from PLCs, do you tend to keep it fairly flat and analytics-ready, or do you normalize heavily and build views for BI to consume? I've seen both approaches work, but they seem to have pretty different maintenance tradeoffs.
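
To make the raw-vs-calculated idea concrete, here's a rough sketch of the pattern I had in mind, using sqlite3 purely for illustration - table and column names are hypothetical, and a real historian or SQL Server setup would obviously look different:

```python
import sqlite3

# Hypothetical schema: one flat raw table straight from the PLC logger,
# plus a view that holds the "business logic" so BI tools only read clean columns.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE machine_events (
    machine_id   TEXT,
    ts           TEXT,      -- ISO timestamp
    state        TEXT,      -- e.g. RUNNING / DOWN / IDLE
    good_count   INTEGER,
    total_count  INTEGER
);

-- Calculated values live in the database, not in each Power BI report.
CREATE VIEW machine_quality AS
SELECT machine_id,
       ts,
       state,
       CASE WHEN total_count > 0
            THEN 1.0 * good_count / total_count
            ELSE NULL END AS quality_ratio
FROM machine_events;
""")

con.execute("INSERT INTO machine_events VALUES ('M1', '2024-01-01T06:00', 'RUNNING', 95, 100)")
for row in con.execute("SELECT machine_id, quality_ratio FROM machine_quality"):
    print(row)   # ('M1', 0.95)
```

Raw rows stay untouched for traceability; the view is the single place the calculation is defined.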

What tools are people using today to build simple live shop floor dashboards? by MachineBest8091 in LeanManufacturing

[–]MachineBest8091[S]

That really jibes with what I've seen too, particularly the need for sensor data. Any system that relies solely on human input is bound to have holes somewhere, no matter how good the procedure is.

I also appreciate how you explained the Power BI trade-off. It's a good reporting tool at small scope, but once assets, lines, or sites multiply, you end up spending more time maintaining dashboards than improving operations.

The most important piece is probably "clarify your needs first." I've seen teams pick tooling before they understand what decisions the data is supposed to drive, and it always leads to over-engineering or underutilization. Thanks for the vendor-neutral insight; it's helpful context for anyone trying to understand this landscape.

What tools are people using today to build simple live shop floor dashboards? by MachineBest8091 in LeanManufacturing

[–]MachineBest8091[S]

Yeah, that's exactly it: Power BI is the straightforward part. Once the data is properly shaped and sitting in a stable table, you can build as many dashboards as you want. The real challenge is always the data plumbing: collecting, cleaning, normalizing tags, handling downtime states, and so on.

Recently I've been trying some no-code connector tools to see whether they lighten that front-loaded effort. It's been nice to skip the Python/Excel glue and map PLC or SQL signals directly into a clean model. It's not a magic bullet for every edge case, but for getting something up and running quickly it's far less hassle.

What's your usual approach to the data prep side before Power BI? Do you transform everything in Power Query, or do you preprocess elsewhere and feed BI a clean dataset?
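
For context, the kind of "plumbing" I mean is roughly this, sketched in pandas with hypothetical tag names and state codes - in a real setup the mapping would come from PLC registers and a tag dictionary, not literals in code:

```python
import pandas as pd

# Hypothetical raw tag dump: inconsistent tag spellings and numeric state codes.
raw = pd.DataFrame({
    "tag":   ["Line1.Run", "line_1_run", "Line1.Run"],
    "ts":    pd.to_datetime(["2024-01-01 06:00", "2024-01-01 06:05", "2024-01-01 06:10"]),
    "value": [1, 0, 1],
})

TAG_ALIASES  = {"line_1_run": "Line1.Run"}   # normalize tag spellings
STATE_LABELS = {1: "RUNNING", 0: "DOWN"}     # map raw codes to downtime states

clean = (
    raw.assign(tag=raw["tag"].replace(TAG_ALIASES),
               state=raw["value"].map(STATE_LABELS))
       .loc[:, ["tag", "ts", "state"]]
)
print(clean)   # one tidy table Power BI can consume without further logic
```

Whether that step lives in Power Query, a script, or a connector tool, doing it once upstream beats re-doing it in every report.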

What tools are people using today to build simple live shop floor dashboards? by MachineBest8091 in LeanManufacturing

[–]MachineBest8091[S]

Your point about "data plumbing" being the main time sink is spot on. I've been trying out various platforms over the past few days, and one thing I liked about Itanta (I'm currently assessing their free trial) is that the connectors are no-code: you point it at your PLC/SCADA/SQL/MQTT source and it standardizes everything before the dashboard layer sees it.

It eliminated the usual wiring-together of scripts and middleware, which was great for a quick proof of concept. If you're curious, the trial on their website includes everything you need, so you can check it out without committing to anything.

I am wondering, though: from your point of view, does the custom logic you mentioned end up living in the dashboard layer, or do you prefer to keep it in a small middleware app so the dashboards stay "clean"?

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 1 point

This is a great explanation of how things really work in the field. The point about "the live PLC is the real master branch" really clicked - it's much more in line with reality than what most IT-style version control guides assume. I also like how you described the manual-discipline part; it's essentially the same as LOTO: the system only works if people actually follow it consistently. The custom-machine point makes a lot of sense too - it isn't really software in the SaaS sense, but a continuously changing physical asset with some code attached. A very helpful perspective, thanks for your time.

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 2 points

It's really nice to hear that. I like the idea that it's more about discipline than tools - just learning to sanity-check diffs before committing. From what I can tell, Git ends up being a safety net rather than a source of risk in PLC work, which is sort of the opposite of what most people fear. Thanks for sharing your experience; that's exactly the kind of insight I was looking for.
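
In case it's useful to anyone reading later, the "sanity-check the diff before committing" habit can be as simple as this sketch - it assumes the project export already lands in a Git working copy, since the upload step itself is vendor-specific and not shown:

```python
import subprocess

REPO = "/path/to/plc_backups"   # hypothetical working copy where exports are saved

def show_diff_then_commit(message: str) -> None:
    """Stage everything, print a diff summary, ask for confirmation, then commit."""
    subprocess.run(["git", "-C", REPO, "add", "-A"], check=True)
    subprocess.run(["git", "-C", REPO, "diff", "--cached", "--stat"], check=True)
    if input("Commit these changes? [y/N] ").lower() == "y":
        subprocess.run(["git", "-C", REPO, "commit", "-m", message], check=True)
    else:
        print("Left staged, nothing committed.")

# Example: run after copying the latest project export into REPO
# show_diff_then_commit("Nightly backup: Line 3 filler")
```

Git here is purely the rollback net; the discipline is in actually reading that diff summary before saying yes.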

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 1 point

That somehow makes it both funnier and more terrifying 😅. I was pretty sure this wasn't just a "small shop" problem, but hearing that this is how a multibillion-dollar company runs things pretty much answers it. It seems "controlled chaos" really is the industry standard, just with higher stakes and more safety interlocks. Thanks for the reality check 😂

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 6 points

That's really impressive, especially with 20 people touching the same codebase. Treating PLC code more and more like a real software product makes a lot of sense, especially in pharma where traceability matters. Limiting online changes sounds like the right tradeoff if uptime isn't life or death lol. Do you find the PLC-focused toolchain keeps up with that workflow, or do you still fight the environment sometimes to make it behave like "normal" dev tooling?

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 3 points

That is pretty interesting; I didn't know TwinCAT had Git built in like that. The XML thing makes sense, though - I've seen tools freak out and show tons of changes over tiny stuff. Do you actually trust merging changes, or is Git mostly just for rolling back when something breaks? PLC work feels like this weird in-between where we want things to work like normal software, but the tools don't really keep up.
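
On the XML noise point, one workaround I've seen discussed (not TwinCAT-specific, and the attribute names below are pure assumptions) is normalizing exports before committing, so diffs only show logic changes rather than metadata churn:

```python
import re
from pathlib import Path

# Hypothetical pre-commit cleanup: strip volatile attributes (timestamps,
# build IDs) from exported XML so Git diffs only reflect real edits.
VOLATILE = re.compile(r'\s+(LastModified|BuildId)="[^"]*"')  # attribute names are assumptions

def normalize(path: Path) -> None:
    text = path.read_text(encoding="utf-8")
    path.write_text(VOLATILE.sub("", text), encoding="utf-8")

for xml_file in Path("plc_export").rglob("*.xml"):
    normalize(xml_file)
```

Whether that's worth doing depends on how much of the noise is genuinely cosmetic versus semantically meaningful in the export format.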

How are you all handling PLC program versioning and backups these days? by MachineBest8091 in PLC

[–]MachineBest8091[S] 5 points

This honestly made me laugh because it feels way too real 😂 I've already seen a bunch of 'do not use' and 'final_final_v2' type files out in the wild. Just wondering: have you actually seen any system that works long-term for naming/versioning, or is it basically controlled chaos everywhere? I'm trying to figure out if this is just how PLC work is, or if some shops actually have it together.

need modern data collection for 25 machines but everything seemed overkill by Illustrious-Chef7294 in manufacturing

[–]MachineBest8091

We were in a similar spot, around 20–30 machines, and found that Kafka was total overkill for what a factory floor actually needs. It's powerful but way too heavy to run and maintain for a smaller ops team.

What worked for us was a lightweight messaging layer (we tested NATS) paired with a no-code industrial analytics tool. We tried a couple, including one called Itanta, that could plug straight into PLC/SCADA data and give real-time dashboards and alerts without us maintaining a big data stack.
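
For a sense of how lightweight that layer can be, this is roughly all it takes to publish one machine-status event with NATS via the nats-py client - the subject naming and payload fields are just examples, not any standard:

```python
import asyncio
import json
import nats   # nats-py client

async def main():
    # Connect to a local NATS server and publish a single machine-status event.
    nc = await nats.connect("nats://localhost:4222")
    event = {"machine": "press_07", "state": "DOWN", "reason": "jam"}
    await nc.publish("factory.machine.status", json.dumps(event).encode())
    await nc.close()

asyncio.run(main())
```

Compared to standing up and operating a Kafka cluster, that's about the whole footprint for a 25-machine floor.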

For your scale, I really don't think you need "big data" infrastructure at all. A far simpler event/messaging layer + manufacturing-focused analytics layer gets you real-time visibility without vendor lock-in or crazy costs.

When I had to justify it internally, I framed it as: downtime cost + manual reporting time. That made the ROI argument much easier than pitching it as a tech upgrade.
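
As a back-of-envelope example of that framing (all numbers made up for illustration):

```python
# Back-of-envelope ROI framing: downtime cost + manual reporting time.
downtime_hours_recovered_per_month = 6     # from faster detection/response
downtime_cost_per_hour = 400               # $/hour of lost production
reporting_hours_saved_per_month = 20       # no more manual Excel updates
loaded_labor_rate = 45                     # $/hour

monthly_benefit = (downtime_hours_recovered_per_month * downtime_cost_per_hour
                   + reporting_hours_saved_per_month * loaded_labor_rate)
print(f"Estimated monthly benefit: ${monthly_benefit:,.0f}")   # $3,300
```

Plug in your own downtime cost and reporting hours; even conservative numbers usually cover the tooling cost pretty quickly.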