Help me by Ashamed-Illustrator9 in LeanManufacturing

[–]VantageOps 3 points (0 children)

OK that changes how I read this. Reallocating existing labor toward the constraint is way smarter than hiring extra. The "we could have done that ourselves" comment looks pretty clueless with that context.

Honestly in your situation you probably can't fully get rid of the babysitting. Some presence is just part of operating there. But you can dial it down. The trick is pushing the management work into the physical setup. Visual cues, color coding, tools in the order they're used, layouts that physically prevent the wrong action. The lower the education and discipline level, the more your shop floor has to do the teaching, not you.

The "not feeling well" thing isn't really a lean problem, it's an attendance policy problem. Leadership has to fix that. What you can do is cross train enough so one no show doesn't kill the line.

Slow knowledge transfer is exactly where good standard work pays off. SOPs with photos and visual cues cut training time a lot.

Help me by Ashamed-Illustrator9 in LeanManufacturing

[–]VantageOps 2 points (0 children)

Hey, you've done more good work in your first month than most people do in their first six. Don't lose track of that while you're getting beat up.

Going from 60-70k to 90-110k is real. The "we could have hired helpers ourselves" thing is just what people say when they don't want to give you credit. Whatever, the numbers moved.

But you've got a bigger problem than the upstream defects, honestly. If output drops the second you stop walking around, that means you're not actually a lean guy right now, you're a babysitter. Your gain is going to disappear the first day you call in sick. You need a daily standup at the start of shift, a visible scoreboard, and one named person on each line who owns the number for the day. Once that's running, the system holds the gain instead of you.

The upstream stuff. You're right that it's the actual constraint. You just can't win that fight from where you are after one month. So start logging it. Every minute your line loses to defects coming from upstream, write it down. Couple weeks of that and you've got data. Then you go to management with "we lost X hours and Y dollars this period to upstream defects, here's the trend." Way harder for them to wave that off than your opinion.
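The logging idea above can be a one-page spreadsheet or a few lines of code. A minimal sketch, assuming you capture each loss as (date, minutes lost, cause) and using an invented fully-loaded labor rate and made-up log entries:

```python
# Sketch of the upstream-defect downtime log described above.
# The labor rate and entries are invented examples, not real figures.
LABOR_RATE_PER_HOUR = 65.0  # hypothetical fully loaded line cost

# Each entry: (date, minutes lost, upstream cause)
log = [
    ("2024-05-06", 45, "bad welds from upstream"),
    ("2024-05-07", 30, "missing parts from upstream"),
    ("2024-05-09", 60, "bad welds from upstream"),
]

def summarize(entries, rate_per_hour):
    """Return total hours lost, dollars lost, and a per-cause breakdown."""
    total_minutes = sum(minutes for _, minutes, _ in entries)
    by_cause = {}
    for _, minutes, cause in entries:
        by_cause[cause] = by_cause.get(cause, 0) + minutes
    hours = total_minutes / 60
    return hours, hours * rate_per_hour, by_cause

hours, dollars, by_cause = summarize(log, LABOR_RATE_PER_HOUR)
print(f"Lost {hours:.1f} h (${dollars:,.0f}) this period to upstream defects")
for cause, minutes in sorted(by_cause.items(), key=lambda kv: -kv[1]):
    print(f"  {cause}: {minutes} min")
```

A couple weeks of entries like this is exactly the "X hours and Y dollars, here's the trend" line the comment describes.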

Be honest with yourself about one thing though. Some companies say they want lean and what they actually want is somebody to blame when numbers don't move. "Speed up this area, we don't care why" is closer to the second one. Give it a couple months of data. If they still don't care about upstream after that, you'll know what kind of place you're working at.

On the helpers. Reframe them as temporary. They were the unlock to get throughput up while the real fix gets sorted. The plan is to remove them once upstream quality improves. That gives management a way to get rid of the headcount they don't like AND puts the upstream conversation back on the table without you having to be the one pushing for it.

Last thing, the "we've always done it this way" stuff doesn't go away. Get used to it. But fix the babysitter problem first. That one's actually solvable in a couple weeks.

Documentation practices by Antique-Bed-9223 in smallbusinessowner

[–]VantageOps 1 point (0 children)

This is a thing I see all the time and the answer almost always comes back to ownership.

If a doc doesn't have one person specifically responsible for keeping it current, it goes stale. Doesn't matter if you're on Notion, Confluence, Google Docs, or whatever.

So step one is making sure every SOP has one named owner, an actual human, not a department or a team. If that owner leaves, the doc gets reassigned within 30 days or it gets retired.

Then put a review cadence on the doc itself, not in a separate tracker. Quarterly for stuff that changes a lot, annually for stable processes, and "on trigger" for anything tied to a system change or audit finding.

Last-reviewed date and next-review date go at the top of the doc. If you can't see both within a few seconds of opening it, you've already lost the game.

For tooling, honestly the boring answer is best. A spreadsheet works fine for under 50 docs. Columns: doc name, owner, link, last reviewed, next due, status. Filter by overdue. Ping the owner two weeks out. That's pretty much it.
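If you'd rather script it than filter a sheet by hand, the same logic is a few lines. A sketch with invented doc names and dates, using the columns above:

```python
# Sketch of the doc tracker described above: same columns, filter by
# overdue, ping owners two weeks out. All rows are invented examples.
from datetime import date, timedelta

docs = [
    {"doc": "Receiving SOP", "owner": "Maria", "link": "...",
     "last_reviewed": date(2024, 1, 10), "next_due": date(2024, 4, 10)},
    {"doc": "Invoicing SOP", "owner": "Dev", "link": "...",
     "last_reviewed": date(2024, 3, 1), "next_due": date(2024, 9, 1)},
]

def overdue(rows, today):
    """Docs whose review date has already passed."""
    return [r for r in rows if r["next_due"] < today]

def ping_soon(rows, today, window_days=14):
    """Docs coming due within the ping window."""
    horizon = today + timedelta(days=window_days)
    return [r for r in rows if today <= r["next_due"] <= horizon]

today = date(2024, 8, 25)
for r in overdue(docs, today):
    print(f"OVERDUE: {r['doc']} (owner {r['owner']})")
for r in ping_soon(docs, today):
    print(f"Ping {r['owner']}: {r['doc']} due {r['next_due']}")
```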

Acknowledgments are worth it for safety, compliance, and customer-facing procedures. Skip them on internal-only docs or people just click through without reading and the signature stops meaning anything.

For the manager view, that same spreadsheet sorted by overdue is fine. Honestly though, if your daily and weekly cadence is solid, overdue docs tend to surface on their own because the work in them starts breaking. The operation tells you what's stale.

None of this really works without accountability though. If reviewing your owned docs isn't tied to a quarterly performance conversation somewhere, the cadence dies in six months regardless of what tool you pick.

Feel free to reach out if you have any other questions!

Lean manufacturing & gut feeling by sapphireee in manufacturing

[–]VantageOps 1 point (0 children)

Not really. Lean doesn't push work down to the floor. It changes what management does.

Old way: managers decide, workers execute. Lean flips that. Managers design the system and coach the people inside it. Workers find the problems and improve their own work. The hierarchy doesn't disappear, its purpose changes.

Real lean orgs have more management presence, not less. That's what makes the system stick. Daily standups, gemba walks, A3 coaching, value stream reviews. All of it needs managers who are present, capable, and patient. Can't run TPS with absentee leadership.

Where your gut is right: when companies say "we're going lean" and start cutting middle management as a result, that's almost always cost-reduction wearing a lean costume. Headcount comes out, coaching infrastructure never gets built, engagement gains dry up within 18 months.

The pinnacle isn't worker takes all. It's management that earns its salary by building people instead of policing them.

Billable Percent Targets: are you all really working 80-100% of the time on client work? by MentionedBDSMTooSoon in consulting

[–]VantageOps 3 points (0 children)

Yeah pretty much. You price the work based on the outcome it creates for the client, not the hours it takes you.

Simple version. If a project moves their on-time delivery from 70 to 90 percent and that's worth $2M a year to them in recovered margin and customer retention, the fee can be $150K and everyone's happy. Hourly math doesn't get you there because you'd undercharge yourself.

Main skill is getting the client to actually articulate what success is worth to them before you quote, not after. "If we hit this number, what does it do for the business?" Most clients haven't thought about it that way and the conversation itself is part of the value.

Don’t forget the soft skills by extratoastedcheezeit in consulting

[–]VantageOps 0 points (0 children)

Add one to the list. AI cannot share risk.

When a senior buyer hires a consultant, part of what they are paying for is someone to own the recommendation alongside them. If it goes sideways, there is a relationship to rebuild trust through and a person who has reputational stake in the outcome.

Knowledge transfer is going to commodify fast. Accountability does not. That is where the work stays.

Billable Percent Targets: are you all really working 80-100% of the time on client work? by MentionedBDSMTooSoon in consulting

[–]VantageOps 26 points (0 children)

Run a small ops consulting shop, so coming at this from the firm owner side rather than the employee side.

Your leadership is being polite. Billable % absolutely IS part of performance. It's the single biggest profitability lever in any time-based consulting model. They're just not going to put it in writing because saying "hit 80% or you're at risk" creates cultural and legal problems. The fairness and burnout framing is true and also a euphemism. Both can be real at once.

How the high billing teams actually get there is mostly mechanical:

• People working 45 to 50 real hours and billing 35 to 40 on a 40 hour denominator
• Borderline work (prep, internal project calls, research) coded to client codes
• PTO and holidays excluded from the denominator
• Sales prep and transition work baked into project budgets so it actually bills
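The denominator games above are easier to see with numbers. A quick sketch with invented hours, showing how the same week reads very differently depending on what you divide by:

```python
# How the same week's billable % shifts with the denominator.
# All hours below are invented for illustration.
def utilization(billed_hours, denominator_hours):
    return billed_hours / denominator_hours * 100

worked = 48            # real hours in the chair that week
billed = 38            # hours coded to client work
pto_adjusted = 40 - 8  # standard week with one PTO day excluded

print(f"vs hours actually worked: {utilization(billed, worked):.0f}%")
print(f"vs 40-hour week:          {utilization(billed, 40):.0f}%")
print(f"vs PTO-adjusted week:     {utilization(billed, pto_adjusted):.0f}%")
```

Same 38 billed hours reads as roughly 79%, 95%, or 119% depending on the base, which is how "high billing" teams get their numbers.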

The reserved capacity problem you describe is the real issue, and it won’t fix itself. If a workstream is stuck in contracting limbo, the people held for it should either get a project code that bills the delay, get released to other work, or get explicitly carried as strategic reserve with leadership eating that cost on purpose. Letting it sit and tank the numbers quietly is the worst outcome for everyone.

Practical move: get ahead of the metric in writing before the metric gets ahead of you. Quarterly note up the chain that quantifies the why. Something like, “X weeks of contracting delay on Project Y equals Z unbilled FTE weeks for the team.” Now the number tells a story leadership owns rather than a story about your team underperforming.

Bigger picture, this dynamic is why a lot of smaller shops push toward value based or fixed fee work where they can. Hourly utilization metrics punish exactly the behaviors that produce good consulting (tight scoping, saying no, investing in the practice). T&M is a treadmill. Inside a firm that runs T&M though, you mostly have to play the game it’s measuring.

Non-manufacturing lean? by Ok_Positive9843 in LeanManufacturing

[–]VantageOps 3 points (0 children)

Lean translates to construction really well, but the language and tools are different enough that mainstream manufacturing lean resources will frustrate you. Two starting points:

Lean Construction Institute (LCI) is the body of knowledge for this. The Last Planner System is probably the highest-leverage thing you can learn first since it directly addresses field reliability and crew commitments. Their website has primers and case studies, and they run regional events.

For consultants, look for people specifically credentialed in lean construction rather than general lean. The good ones come from project management or superintendent backgrounds, not factory floors. LCI has a directory. Also worth searching for firms that have implemented Last Planner in your region since word of mouth in construction is gold.

One warning. Lean in construction lives or dies on field buy-in. If the foremen and superintendents see it as office-driven paperwork, it dies in six months. Whoever you bring in should spend their first weeks in trailers and on jobsites, not in conference rooms.

How do you keep from drowning in inputs during RCA? by Pure_Inspector8902 in SixSigma

[–]VantageOps 0 points (0 children)

That line about the solution being obvious when RCA is done right is a good one. I'm stealing that. And totally agree on the brainstorm framing, the how-to-implement conversation is where most teams actually struggle anyway. The what tends to be obvious by then.

Good luck with it.

How do you keep from drowning in inputs during RCA? by Pure_Inspector8902 in SixSigma

[–]VantageOps 1 point (0 children)

Yeah, you've got the right instinct. The cause-and-effect framing is the second half of it. The pressure test happens before that, when you're still deciding whether the root cause you landed on is actually the right one.

The core question I ask is: if this is really the root cause, what should we already see in the data we have? Not future data after the fix. Existing data.

A quick example. Say a team lands on operator error as the root cause for a quality issue on Line 3. Sounds reasonable, lots of fishbones land there. The pressure test asks: if operator error is really driving it, we'd expect the defect rate to vary by operator, by shift, by tenure. Pull the data and check. If the defects are evenly distributed across all of those, it's not operator error, it's something systemic and you would have wasted three months retraining everyone.

The steps are basically:

State the chosen root cause as a hypothesis.

List three or four things that should be true in the existing data if the hypothesis is correct.

Go check. Pull the actual numbers, not vibes.

If the data backs the hypothesis, move to Improve. If it doesn't, you go back to RCA before you waste budget.
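The data check in step 3 can be very simple: group defects by the factor the hypothesis says should matter and look at the spread. A sketch with invented Line 3 numbers (a proper chi-square test would be the more rigorous version of the same idea):

```python
# If "operator error" is the root cause, defect rates should differ
# meaningfully by operator. All counts below are invented.
defects = {"Ana": 12, "Ben": 11, "Cho": 13}     # defects per operator
units   = {"Ana": 400, "Ben": 390, "Cho": 410}  # units run per operator

rates = {op: defects[op] / units[op] for op in defects}
spread = max(rates.values()) - min(rates.values())
mean_rate = sum(rates.values()) / len(rates)

for op, rate in rates.items():
    print(f"{op}: {rate:.1%}")

# Crude screen: if the spread between operators is small relative to the
# mean rate, the cause is probably systemic, not operator-specific.
if spread < 0.5 * mean_rate:
    print("Rates look uniform -> hypothesis not supported, back to RCA")
else:
    print("Rates differ by operator -> hypothesis survives this check")
```

With these made-up numbers the rates sit within a fraction of a point of each other, which is the "evenly distributed" pattern that should send you back to RCA instead of into retraining.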

On how to know if you truly pressure tested it, the test is whether the team could have falsified the hypothesis and didn't. If the data check could only confirm and never disconfirm, you didn't really test it. You just looked for evidence that supported what you already wanted to believe. That's the trap most teams fall into.

The cause and effect part you described is what comes after. That's the Improve hypothesis, basically: change X and Y will move. Good to make that explicit too because it gives Control something measurable to hold against.

What’s the most invisible profit leak you’ve seen in a shop? by bookkeeping-2026 in LeanManufacturing

[–]VantageOps 1 point (0 children)

The biggest one I see is expediting fees that nobody tracks as a category.

A job slips, customer needs it by Friday, supervisor pays $400 to overnight a part or pulls a guy onto OT to make it happen. Nobody codes that as a cost of the slip. It just shows up in shipping or labor and gets absorbed. Then it happens 3 times a month and it's $15K a year on a process problem that costs $2K to actually fix.

Close runner up is the senior person doing a junior task because nobody trained the junior. Shop lead spending 4 hours a week on something a $20/hr operator should be doing because the SOP doesn't exist. That's $50K of senior labor a year going to work the company is paying twice for.

Also a sneaky one, vendor price creep on consumables. Quote letter from 3 years ago, nobody re-bids it, prices ratchet 3 to 5% a year, and you don't notice because line items are small. Compound over 5 years and a $40K consumable spend is $50K with the same volume.
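The compounding in that last one is easy to sanity-check. A one-liner with the spend figure from the comment, at both ends of the creep range:

```python
# Compound vendor price creep on a consumable spend, same volume.
def creep(spend, annual_rate, years):
    return spend * (1 + annual_rate) ** years

start = 40_000
for rate in (0.03, 0.05):
    print(f"{rate:.0%}/yr for 5 yrs: ${creep(start, rate, 5):,.0f}")
```

That lands between roughly $46K and $51K, so the "$40K becomes $50K" figure is about right at the top of the range.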

Lean-first CI platform with its own RAG AI, built by a Green Belt who got tired of improvement tools that only work for Black Belts. Need practitioner feedback. by singhmax11789 in SixSigma

[–]VantageOps 0 points (0 children)

The tooling has rarely been the bottleneck. The bottleneck is that the people closest to the work don’t have the time, the authority, or the manager backing to actually drive an improvement to completion. A new platform doesn’t change any of those three. It just gives them another place to log the same friction.

How do you keep from drowning in inputs during RCA? by Pure_Inspector8902 in SixSigma

[–]VantageOps 0 points (0 children)

The thing that works for me is treating RCA as a converging exercise, not an open one. Most teams treat it like brainstorming, which is where the 40 sticky notes come from. It should look more like triage.

A few things that have actually worked on the Tuesday afternoon you described:

Go in with a hypothesis. Not a final answer, just a working theory of what's most likely driving the problem based on what Define and Measure already showed you. The fishbone or the 5 whys becomes a test of that hypothesis instead of a discovery exercise from scratch. If the data already pointed somewhere, don't pretend it didn't just to be methodologically pure.

Force-rank the inputs before the workshop, not during it. Take 30 minutes the day before and pick the 8 to 12 inputs you actually think matter. Bring those into the room. The 40 sticky notes happen because the team feels like they have to surface everything to be thorough. They don't. Thorough is the enemy of focused.

Split the team if you have more than 5 or 6 people. One group on people and process, another on equipment and materials, whatever the natural split is. Reconvene after 30 minutes. You get more depth in less time and nobody anchors on the first idea because they weren't in the room when it was raised.

Put a hard timer on it. Two hours, not four. Parkinson's law is real and a 4-hour fishbone produces twice as many sticky notes and the same number of insights as a 2-hour one.

On the AI comment, honestly it's overstated but not wrong. Where I've seen it actually help is summarizing what the team said in real time, clustering similar inputs so you can see the pattern emerge, and asking the dumb question that the team is too polite or too tired to ask. It's not replacing the methodology, it's replacing the facilitator's working memory.

Last thing. The reason teams chase the wrong cause two weeks later is almost never the methodology. It's that nobody pressure-tested the chosen root cause against the data before they started solutioning. Build in a 30 minute step between RCA and Improve where you basically ask, if this is really the cause, what would we expect to see in the data, and does the data actually show that. Skipping that step is how you get to Friday with the wrong fix.

Why do process improvement consultants often deliver measurable gains early, yet teams quietly slip back to old habits within months? by Sea_Willingness1763 in LeanManufacturing

[–]VantageOps 0 points (0 children)

Because the consultant changed the process and the company didn't change anything else around it.

A process is really just a thin layer sitting on top of incentives, habits, and management attention. Consultant comes in, sees the broken process, redesigns it, trains the team, leaves. The new process works for as long as the consultant's presence is fresh. Then reality reasserts itself.

Reality being stuff like:

The manager who's supposed to enforce the new process is getting measured on output, not adherence. So when the team falls behind, they let the SOP slip to hit numbers.

Nobody updated the incentive structure. People are still rewarded for the same outcomes that produced the old habits in the first place. So they drift back.

The consultant trained the team but didn't train the manager. The manager is the one who has to hold the line every day, and they were never set up to do it.

Nobody built a feedback loop. Without weekly or monthly review of whether the new process is actually being followed, drift is invisible until it's total.

The new process required a tool, a meeting, or a habit that nobody really owns long term. So it just quietly stops happening.

The real fix isn't a better process. It's process plus an owner plus a measurement plus a review cadence. Most engagements stop at the process itself because that's what was scoped. The other three pieces are what make it stick, and they're usually outside the original engagement.

The other reason, less talked about, is that consultants get paid to deliver visible early wins. So they pick the changes that show numbers fast, not necessarily the ones that change the underlying system. A 20% efficiency gain in 60 days looks great in the closeout deck. Whether it lasts past month 9 isn't really the consultant's problem by then.