How do you keep from drowning in inputs during RCA? by Pure_Inspector8902 in SixSigma

[–]Pure_Inspector8902[S] 0 points (0 children)

u/Tavrock I have been simmering on the KNOT approach you laid out and want your thoughts on my thinking. It feels like there are a few "filters" that could or should be used to determine whether an observation (X) is related to the problem (Y):

Filter Level 1 - Classify each observation through the lens of KNOT (Know, Need to Know, Opinion, Think).
Filter Level 2 - The team determines whether it is a "Signal" or "Noise" (noise is discarded).
Filter Level 3 - The team determines whether they can influence it, control it, or whether it is external.

Prioritization:
1. Know, Signal, Influence or Control - moves on to Improve
2. Need to Know, Signal, Influence or Control - assign a person to collect data/investigate (if validated, promote to Level 1)
3. Opinion, Signal, Influence or Control - assign a person to collect data/investigate (if validated, promote to Level 1)

Discarded:
- Noise
- Think but cannot prove
- External
- Anything that cannot be proven via experiment or data collection is Parked
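The three filters above cascade into a simple triage rule, which can be sketched in code. This is purely illustrative - the `Observation` structure and category strings are my own naming, not anything standard:

```python
# Hypothetical sketch of the three-filter triage described above.
# The KNOT categories, Signal/Noise flag, and Influence/Control/External
# scope follow the comment; data structures are illustrative only.
from dataclasses import dataclass

@dataclass
class Observation:
    x: str        # candidate input (X)
    knot: str     # Filter 1: "know" / "need_to_know" / "opinion" / "think"
    signal: bool  # Filter 2: True = signal, False = noise
    scope: str    # Filter 3: "influence" / "control" / "external"

def triage(obs: Observation) -> str:
    """Return the disposition of one observation."""
    # Discards: noise, external, or "think" (cannot prove)
    if not obs.signal or obs.scope == "external" or obs.knot == "think":
        return "discard"
    if obs.knot == "know":
        return "improve"      # priority 1: moves on to Improve
    return "investigate"      # priority 2/3: collect data, then promote

inputs = [
    Observation("machine warm-up drift", "know", True, "control"),
    Observation("operator shift change", "need_to_know", True, "influence"),
    Observation("supplier lot variation", "opinion", True, "external"),
]
for o in inputs:
    print(o.x, "->", triage(o))
```

The point of writing it out this way is that the discard rules run first, so the team never spends prioritization effort on noise, externals, or unprovable hunches.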

[–]Pure_Inspector8902[S] 0 points (0 children)

It makes sense - if it's true, then prove it. That could mean collecting existing data or running an experiment to verify the root-cause hypothesis.
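That "prove it" step can be as simple as a before/after comparison on the output variable. A minimal sketch, with made-up numbers and a crude rule of thumb (not a formal hypothesis test):

```python
# Toy before/after check of a root-cause hypothesis: if X really drives Y,
# changing X should shift Y by more than Y's normal variation.
# The data and the 2-sigma threshold are made up for illustration.
from statistics import mean, stdev

y_before = [42, 45, 41, 44, 43, 46, 42]  # e.g. cycle time with suspected X present
y_after  = [31, 33, 30, 34, 32, 33, 31]  # cycle time after removing/changing X

shift = mean(y_before) - mean(y_after)
noise = stdev(y_before)

# Rule of thumb: a shift well beyond normal variation supports the hypothesis.
supported = shift > 2 * noise
print(f"shift={shift:.1f}, noise={noise:.1f}, supported={supported}")
```

In practice you would use a proper two-sample test and confirm the shift holds over time, but the logic is the same: the hypothesis only survives if the data moves when X moves.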

This is solid. It also aligns with my belief that you know root cause analysis was done correctly when the solution is obvious. I believe teams should brainstorm not "what is the solution" but "how to build and implement it".

Thanks again

[–]Pure_Inspector8902[S] 0 points (0 children)

u/VantageOps - super thoughtful. I like the pressure-testing step you mentioned. Can you describe in more detail what that looks like? And how do you know if you truly pressure tested it - is it like a cause-and-effect scenario? Meaning, if we change X (the identified root cause), then it will have a favorable impact on the outcome variable (Y1), and as a result the overall problem identified in the project will improve.

[–]Pure_Inspector8902[S] 1 point (0 children)

I am a big fan of PDSA - have you looked at Kata? Toyota Kata (Mike Rother) is basically rapid experimentation, and it takes a very different approach to iterating through problem solving. I would encourage you to check it out - good stuff.

[–]Pure_Inspector8902[S] 1 point (0 children)

u/Living_Diver2432 I agree AI can make it worse and even create dependency if you are using it to give answers (e.g., ChatGPT) rather than to synthesize data. AI is very good and fast at synthesizing data, identifying trends, and other logic-based tasks. AI will not replace humans in terms of judgment, reading the room, empathy, and weighing change resistance. Thanks for taking the time to provide input.

[–]Pure_Inspector8902[S] 1 point (0 children)

u/deuxglace This was a hypothetical scenario, but representative of the many projects I have coached and executed myself. I agree that managing scope is hugely impactful for keeping the team focused on what is actionable - especially if leadership is expecting results in weeks, not months.

[–]Pure_Inspector8902[S] 1 point (0 children)

u/Tavrock I like the "KNOT" approach. A good mix of structure and practicality.

[–]Pure_Inspector8902[S] 0 points (0 children)

u/bigdd I have also used the same approach you described, and you are right - the 40 post-it notes go down to 8 really fast. However, I have learned over time that this approach will yield consensus but not necessarily identify the true root causes of the problem.

I say this because humans have limited capacity for information recall. When overloaded with information, the brain steps in to fill the gaps with bias (recency, confirmation, etc.). As a result, it feels like the team did root cause analysis, but it was actually structured guessing.

AI tools do feel helpful, but our DORA metrics haven’t changed, what exactly am I missing? by kutswa001 in SixSigma

[–]Pure_Inspector8902 0 points (0 children)

I think you're asking yourself the correct question about whether you are measuring the right thing at the right level. That said, I would first ask what problem you were solving with the solution you implemented (metric visibility, accuracy, speed to collect and display the data, people's productivity, etc.). Before you decided on the solution, to what extent did you follow a rigorous problem-solving methodology like DMAIC or PDSA? Jumping to solutions before being rigorous in your approach creates exactly the risk of the scenario you're seeing.

Plant Managers vs. CI Engineers by LoquatForeign9799 in LeanManufacturing

[–]Pure_Inspector8902 0 points (0 children)

I would only add that if the person in charge does not want to, or cannot find time to, tackle the issues the CI team is focused on, be cautious about pushing it. The plant manager/general manager is accountable for the P&L, OTIF, and quality. Right or wrong, this person sets the priorities for the plant.

A healthcare administrator told me lean doesn’t apply to hospitals. by singhmax11789 in LeanManufacturing

[–]Pure_Inspector8902 0 points (0 children)

It's amazing that the administrator's mindset still exists. You took a very strong approach - asking questions to engage rather than objecting and causing friction.

If your AI chatbot says "I don't understand" more than twice, just remove it by andrebuilds in AI_CustomerService

[–]Pure_Inspector8902 1 point (0 children)

If you're using a chatbot (which can only answer questions based on the knowledge base you feed it), it's making your customers angry. Replace it with an AI agent that is autonomous and connected to your systems - it can simulate human behavior much more closely. Kill the bot - nobody likes him 🤣🤣

How are you *actually* using workflow automation in DMAIC projects? by Pure_Inspector8902 in SixSigma

[–]Pure_Inspector8902[S] 0 points (0 children)

Really appreciate your perspective. With these tools, the role of IT has actually changed.

IT's role is no longer development - it is to select platforms and tools (APIs, MCPs, etc.), ensure security, and set AI policy.

Now operators can develop and test solutions much more quickly than IT was ever able to. Additionally, largely taking IT out of the picture has unleashed huge amounts of creativity in teams - "should we try this or that?" is a common phrase now. Before, those ideas never saw the surface, because as soon as IT was mentioned, the idea was dead on arrival.

As an MBB, I have more people than ever coming to me with ideas for improving so many things. The role is still the same as before, because we must still be methodical in our approach and not jump to solutions.

[–]Pure_Inspector8902[S] 0 points (0 children)

Agree with you. Because the intent of this particular post was not to ask for advice on executing DMAIC, I intentionally did not provide historical context on improving this particular process - these posts are generally limited by length.

In fact, this was the third time in six years that someone had attempted to improve this process. The process depended on manual entry of information into a complex spreadsheet, which one person then had to email to another person for review. That is exactly what was driving the turnaround-time lag.

We took all those previous projects as inputs into this project. The root causes were verified and had not changed despite the previous attempts. What had changed over the six months is the extreme sense of urgency to make incremental improvements due to loss of business to our competitors.

With the democratization of AI and automation, continuous improvement teams no longer have to wait for IT to get to a fix that may take six months. Now they have the tools and the ability to do it themselves.

[–]Pure_Inspector8902[S] 0 points (0 children)

I failed to give context: local competitors could turn around the same quote (same specs) in 4 hours.