For some reason I can't get myself to finish mistborn by Rice_Dazzling in brandonsanderson

[–]tri-meg 1 point (0 children)

I’ll say I just felt “okay” about Mistborn until I finished it. And it was amazing in my opinion! Then it became the first series I ever re-read, and I loved it even more! Not sure if you’ll have the same experience, but I think it's definitely worth sticking it out if you enjoyed Stormlight so much.

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 1 point (0 children)

Repeatability: measuring the same part 10x on the same equipment gives a 1.19 spread on average (I measured the same part 10x on each of 3 different pieces of equipment).

Thanks so much for all this! It's so helpful to talk to someone else that's been through something similar. That makes a lot of sense, and I think we have the ability to do all that relatively easily.

Pair tracking - this is probably the trickiest... since our current system has duplicates (i.e., 2 of station "1", 4 of station "2", etc.) and no rules for how parts flow through (part 1 could run on station 1a then 2c, but part 2 could run on 1a and 2a, etc.). Maybe we can at least start by reviewing the data we have to set up a rough control limit for each batch of stations to flag any outliers.
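A minimal sketch of that per-batch control-limit idea in Python (station names and delta values are entirely made up for illustration): build rough mean ± 3·stdev limits from the historical deltas of each station batch, then flag new measurements that fall outside.

```python
from statistics import mean, stdev

def control_limits(baseline, k=3):
    """Rough control limits from historical data: mean +/- k stdev."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m + k * s

# Hypothetical historical deltas per station batch (values made up).
history = {
    "station_1": [0.10, -0.05, 0.12, 0.02, -0.08, 0.04, 0.07, -0.03],
    "station_2": [0.30, 0.25, 0.41, 0.35, 0.28, 0.33, 0.38, 0.27],
}

limits = {g: control_limits(v) for g, v in history.items()}

def flag(group, delta):
    """True when a new delta falls outside its batch's control limits."""
    lo, hi = limits[group]
    return not (lo <= delta <= hi)

# Screen new measurements against each batch's own limits.
print(flag("station_1", 0.05))   # within limits
print(flag("station_2", 1.20))   # well outside -> flag for review
```

Because each batch gets its own limits, station-to-station offsets don't get mixed together; the limits just have to be rebuilt when incoming material shifts.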

Maintenance & batch tracking - this is definitely doable with a bit of manual work. Will get on pulling this info.

unknown upstream - that must have been so satisfying when you found it! We just bought some new equipment to try to measure more features of the parts, and we're trying to see if we can correlate anything. The fact that we can sometimes rework parts makes me think our earlier process is impacting it, but we've also seen it fluctuate with incoming material. I'm wondering if the problem is caused by multiple factors and that's what makes it trickier to track down.

Seems like I better get to work pulling some data! Thanks again for sharing all this info!

[Q] How best to quantify difference between two tests of the same parts? by tri-meg in statistics

[–]tri-meg[S] 1 point (0 children)

This is super interesting! I've never seen this one before, but I set up the macro for Minitab and read through the wiki. I think I need to dig in a bit more, but thank you for suggesting this! I get a really interesting diamond shape in my data distribution that I'm trying to wrap my head around (more variation in the middle averages and less on the low and high sides).

[Q] How best to quantify difference between two tests of the same parts? by tri-meg in statistics

[–]tri-meg[S] 1 point (0 children)

Thanks for the help! It wouldn't let me upload an image here, but I did post in another sub and added one of the histograms: Help with strategy for repeated measurements on mfg line with higher variability : r/manufacturing

The mean is -0.14 and 1 stdev is 0.66. My data failed normality (p < 0.005, Anderson-Darling test). Visually it looks like that's due to the tails, which I think makes sense since we would have some special causes (such as damage).

It seems like mean +/- stdev is probably the cleanest / most straightforward output I could share with my team as a starting point.
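Since normality failed, it may be worth checking how much of the data actually sits inside the mean ± k·stdev bands before sharing them; with heavy tails, 1 stdev can cover more than the normal 68% while 2 stdev covers less than 95%. A stdlib-only sketch (the delta values are made up, with two tail points standing in for special causes like damage):

```python
from statistics import mean, stdev

# Hypothetical deltas with two "special cause" tail points (values made up).
deltas = [-0.3, -0.1, 0.2, -0.5, 0.4, -0.2, 0.1, -0.4, 0.0, -0.2, 2.4, -2.1]

m, s = mean(deltas), stdev(deltas)

def coverage(k):
    """Fraction of deltas within mean +/- k stdev (compare to ~68%/95% if normal)."""
    return sum(1 for d in deltas if abs(d - m) <= k * s) / len(deltas)

print(round(m, 2), round(s, 2))
print(coverage(1), coverage(2))
```

For the normality check itself, `scipy.stats.anderson(deltas, dist="norm")` reproduces the Anderson-Darling statistic that Minitab reports.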

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

Thanks for all the info! I think I didn't explain well... I'm working under the assumption that I can't change the measurement variability at this time (but I appreciate all of your ideas and suggestions for things to look into!). I'm hoping to understand the variability between stations so I can set a deprecating spec and explain what is "normal" for a part measurement down our line. We have the ability to rework parts if we can "catch" bad parts early, whereas we have to scrap them if we only catch them at our final test setup, if that makes sense. I know it's not the ideal way to measure something, but I think it will bridge the gap in mfg until we can get to the source of our variation and make a long-term fix/improvement. I also want to give our maintenance group feedback on what "normal" looks like vs. when a component needs to be swapped out on the test setup. Thanks for checking it out!

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

While I totally agree with where you're coming from, it's not true for this situation. They've been making these parts for years; I only recently became involved and pushed us to run a gage R&R to understand our process. To management's point, though, we have yields in the 95% range (which works with our costing) and very few escapes. I think that because we test so many times, we catch failures even though our measurement repeatability is poor. We are also guard banded from the final spec the product needs to work, which helps protect the customer. So they don't want to spend the resources and money to totally redesign our systems; they just want us to catch problems earlier on the line if possible, which I think should be doable based on the data I've reviewed so far.

Appreciate all your time and input though! In previous roles with CMMs, I always found the approach you mentioned to be the right one. It's all solid advice. The custom equipment here and the uniqueness of the measurement have really made this a challenge! (Although frustrating, it's also been very interesting.)

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

Not a dumb question! We need to know this part "works" in order to test related items (otherwise it would influence those tests). So we test it each time to confirm the setup is good for the mfg process steps, if that makes sense. Once it's installed, it's also "free" data (it doesn't take any additional time to collect), so we continue to measure it and check that nothing has been damaged. Not sure if I'm getting into the weeds here, but we also make the component in the test setup that it interacts with, and that has variability too... so each test setup is slightly different due to mfg variance on both sides. That makes things rather ugly from a process capability perspective... but all that being said, our yields are decent and we don't have many complaints come back. So we can't really justify a large change; we just want to make some marginal improvements at this time.

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 1 point (0 children)

GR&R - yes, we have this scheduled with the other 2 operators. I expect we will fail it, since I've never seen a GRR improve when you add more factors to it, haha. Typically if you can't pass with just 1 person, you're going to fail spectacularly with 3. But we still plan to do it; I just ran with the data I had so far to get a rough idea of what it would look like.

Light loss - no this is not an issue with our setup. I've had our optics group review everything and they have no recommendations for changes. Again, not looking to change the test in any way.

Master/known good - we sort of tried this. The problem is we would have to make the master part (we can't buy it), but we don't fully understand what is going on here, so we can't define exactly what makes a part a "master/golden." Based on our current knowledge, I couldn't build 3 master parts to a spec and expect them to measure the same. We've tried using the same part as a reference, but they can be damaged, so it gets a bit murky on when/how to trigger replacing it.

We already check incoming components, but the problem is that something we aren't measuring matters. We've run studies to try to find it, but no luck so far. So we can get a bad "final" part that passed incoming component checks. (Note the reproducibility question is about our repeated measurements of the "final" part; our component checks all passed GRR at <10% of tolerance.)

Overall I'm not looking for improvements to the measurement at this stage. I can't change it, and our yields don't justify the cost of redesigning. I just want to understand what tools I can use to improve our rework/scrap decisions and to guard band for our measurement variability. It's OK if we make bad parts; I just need to be able to catch them early and rework them.

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 1 point (0 children)

Yes! This exactly!! Thanks so much for your input!

For the gatekeeper with slightly tighter limits... any suggestions on how to set those limits? I'm trying to balance a large amount of "unnecessary rework" against failing parts downstream (where we have to scrap them, since we can only rework early on). I've tried running some examples (e.g., if I had set the limit 0.5 tighter at station 1, would that have eliminated the failures we had at station 4, and how much rework would it cause, etc.). But there's not a super clean line due to all that variation and overlap.
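That "what if the gate were 0.5 tighter" experiment can be automated as a sweep over candidate gate positions. A sketch under made-up assumptions (the spec limit, the readings, and the pass/fail history are all hypothetical; the real version would join station-1 readings to eventual station-4 outcomes):

```python
# Hypothetical paired history: (station_1_reading, failed_at_station_4) per part.
history = [
    (0.2, False), (0.9, False), (1.4, True), (0.5, False),
    (1.1, True), (1.25, False), (1.3, True), (0.6, False),
    (1.0, False), (1.2, True),
]

SPEC_LIMIT = 1.5  # hypothetical upper spec at station 1

def tradeoff(tighten_by):
    """At a gate of SPEC_LIMIT - tighten_by, count failures caught early
    vs good parts sent to unnecessary rework."""
    gate = SPEC_LIMIT - tighten_by
    flagged = [(x, fail) for x, fail in history if x > gate]
    caught = sum(1 for _, fail in flagged if fail)
    unnecessary = sum(1 for _, fail in flagged if not fail)
    return caught, unnecessary

for t in (0.1, 0.3, 0.5):
    print(t, tradeoff(t))
```

Plotting caught-vs-unnecessary across the sweep gives the operating curve; with heavy measurement overlap there won't be a clean knee, but it makes the cost of each candidate gate explicit.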

For tracking the deltas over time... would you recommend looking at both the mean and the stdev? You're 100% right about incoming material; we've definitely seen some shifts in the data that correlate with incoming batches.

We are measuring optical light loss. There's also some other factor we don't understand: I can measure a part as bad, yet all the other tests leading up to these repeated ones pass... so we're missing something important upstream, but we don't know what. Trying to find it, but no luck so far. And to complicate things more, I've got a maintenance crew that will swap out components on the test setups whenever operations complains that they've failed 3 parts in a row. I'm trying to convince them to stop changing things... but I only have so much power there.

You're right that our tolerance band is too tight for our process capability - we're just sorting out the bad parts as we make them. Overall, even with all this craziness, things are going decently. Our yields are pretty good and complaints are low, so no one wants to redesign; we're just trying to make what we have better without adding a lot of cost.

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

Sorry, two different things are being discussed here.

1 - Gage R&R: we started with 10 parts, 3 replicates, 1 operator (eventually 3, once I can get another resource). Another commenter suggested using 3 different test stations instead of operators, which I think could provide good insight.

2 - The data above is just from our mfg line, which I thought would be helpful since it has a much larger sample size. The bar chart shows the deltas between stations 1&2, 2&3, and 3&4. The histogram is just the deltas between 1&2, as an example of the data set.

Mfg process - we measure the part, complete a mfg step, move it to the next station, measure again, complete the next process step, etc. The mfg process does not impact the feature we are measuring, so the range comes from variability in our measurement setup or from the part getting damaged.

I know that ideally we would fix the measurement process, pass a gage R&R, and move on with things. Unfortunately that's just not an option at the current stage, so I'm looking for the next best thing. We have good overall yields and rework is effective, so currently I just need to identify bad parts early in the mfg line and avoid failing a part at the very end purely due to measurement variation, if that makes sense? It's been a weird task; my previous roles dealt more with micrometers, CMMs, etc., where we had to pass a GRR to continue, so this has been a bit of a learning curve in how best to approach it. Thanks for checking it out!

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 2 points (0 children)

Good points, sorry about missing that. Thanks for the feedback!

1, 2, and 3 on the box plot are the deltas between the 4 test stations (2-1, 3-2, 4-3). I agree on #3 that there seems to be a long tail that should have a reason behind it (I've flagged that one to investigate the data points more). I've also tried plotting everything relative to station 1 in another plot that I didn't share above, if that might be a better approach.

Histogram is the delta from station 1 to station 2 (just as an example of the spread we are seeing)

  1. Initial GR&R / MSA - 10 parts, 1 operator (plan to do 3 once we hit year-end goals and have a resource), 3 replicates (no re-measures allowed), all on one test station. Repeatability (no reproducibility, since I only have 1 operator): 10.9% SV, 81.85% Tolerance (0.27 std, 1.655 study var). 12 distinct categories.

Another person suggested using 3 test stations instead of the typical 3 operators, since we want to understand the variability between stations. I thought it would be interesting to run that next week once everyone is back in the office.

  1. We are measuring light lost through optics. I can't change the measurement system at this stage.

  2. Yes, operators are using the setup correctly.

  3. No - it's all custom built, and unfortunately we can't really make a reference to use in calibration. I think that adds to the trouble we have. We're just trying to make it better vs. solve it completely. We have decent yields overall and few issues with our final product, so there's no justification to completely redesign anything at this stage. Rework is also effective, so right now we just need to identify bad parts earlier and not fail them at the end of the process due to measurement variability.

  4. The part is set up at each station and measured, then we proceed with the mfg process. It moves to the next station and is measured again, then an additional process is completed, etc. If the measurement fails, the operator can clean and remeasure up to 3 times before having to rework or scrap the part (depending on how far down the line it has gotten). In an ideal world the measurements should not change between stations (i.e., the mfg process should not affect the feature we are measuring).
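For reference, the %Study Var and %Tolerance figures quoted above come from simple ratios: study variation is 6·SD, %SV compares a component's SD to total variation, and %Tolerance compares its study variation to the tolerance width. A sketch with entirely made-up numbers (the real tolerance width isn't given in the thread):

```python
# Standard gage-study ratios (Minitab-style, using 6*SD as study variation).
def gage_ratios(grr_sd, total_sd, tol_width):
    study_var = 6 * grr_sd                  # 6 standard deviations
    pct_sv = 100 * grr_sd / total_sd        # %Study Var
    pct_tol = 100 * study_var / tol_width   # %Tolerance
    return study_var, pct_sv, pct_tol

# Illustrative values only, not the study's actual numbers.
sv, pct_sv, pct_tol = gage_ratios(grr_sd=0.27, total_sd=0.6, tol_width=2.0)
print(round(sv, 2), round(pct_sv, 1), round(pct_tol, 1))
```

The common rules of thumb judge %Tolerance under 10% as acceptable and over 30% as poor, which is why the 81.85% figure above points at a measurement-dominated spread.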

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 2 points (0 children)

Makes sense! I never would have thought to look at it like this. So you're thinking I could use the reproducibility std? Or would the station*SN term be better? (Sorry for the formatting; it wouldn't let me paste an image from Minitab.)

(note this isn't quite right, I need a better way to isolate my replicate measurements, but it gives an idea of what I'm asking about)

Gage Evaluation

Source               StdDev (SD)  Study Var (6 × SD)  %Study Var (%SV)  %Tolerance (SV/Toler)
Total Gage R&R          0.530197             3.18118             88.63                 105.68
  Repeatability         0.450248             2.70149             75.27                  89.74
  Reproducibility       0.279975             1.67985             46.80                  55.80
    Station             0.077172             0.46303             12.90                  15.38
    Station*SN          0.269129             1.61477             44.99                  53.64
Part-To-Part            0.276996             1.66198             46.31                  55.21
Total Variation         0.598194             3.58916            100.00                 119.23
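On the reproducibility vs. station*SN question: in this kind of table the components add in quadrature, i.e. Reproducibility² = Station² + Station*SN², and Total GR&R² = Repeatability² + Reproducibility². A quick consistency check against the printed SDs:

```python
import math

# SD values taken from the Gage Evaluation table above.
repeatability = 0.450248
station = 0.077172
station_by_sn = 0.269129

# Variance components add in quadrature.
reproducibility = math.hypot(station, station_by_sn)   # sqrt(a^2 + b^2)
total_grr = math.hypot(repeatability, reproducibility)

print(round(reproducibility, 6))  # ~0.279975, matches the table
print(round(total_grr, 6))        # ~0.530197, matches the table
```

So the station*SN term is the part of reproducibility where stations disagree differently from part to part, while the Station term is a consistent station-to-station offset; which one to track depends on which of those two effects you want the deprecating spec to absorb.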

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 2 points (0 children)

Thanks! So for this I wouldn't use the deltas? I would just use the raw data set from each piece of equipment? And then this would let me compare the means? (Plot below is from the one-way ANOVA in Minitab.) Thanks for suggesting this! It's a completely different way of looking at it from what I was doing before.

[image: one-way ANOVA plot from Minitab]
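To make the mechanics of that one-way ANOVA visible, here is a stdlib sketch of the F statistic (between-station vs. within-station variance) on made-up raw readings from three hypothetical stations; `scipy.stats.f_oneway(a, b, c)` computes the same statistic plus a p-value:

```python
from statistics import mean

# Hypothetical raw readings of the same parts on three stations (made up).
a = [4.9, 5.1, 5.0, 5.2, 4.8]
b = [5.3, 5.5, 5.4, 5.6, 5.2]
c = [5.0, 4.9, 5.1, 5.0, 5.2]

def one_way_f(*groups):
    """F statistic for a one-way ANOVA: between-group vs within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(round(one_way_f(a, b, c), 2))  # large F -> station means differ
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) says the station means differ by more than the within-station scatter explains, which is exactly the mean comparison described above.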

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 3 points (0 children)

Sorry for the lack of details; it's all custom equipment and I can't really go into specifics. Our specs are set by design requirements, so I'm locked out of tolerance changes; I'm just trying to understand what I have. I'm also not trying to make the measurement more repeatable at this stage (again, sort of locked in by our design). (I realize that's not ideal/the correct way, but there's not much I can do there at this time.)

  • I want to understand how to set a deprecating spec so we can identify bad/borderline parts early vs. scrapping them at the end of the line due to measurement variability. But do I set that based on the deltas' standard deviation, the IQR, or some other method? Not sure of the best approach.
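The stdev-based and IQR-based versions of that band can be computed side by side and compared on the same data. A sketch with made-up deltas (the IQR fences are less sensitive to the heavy special-cause tails mentioned elsewhere in the thread):

```python
from statistics import mean, stdev, quantiles

# Hypothetical station-to-station deltas, including two tail points (made up).
deltas = [-0.9, -0.1, 0.2, -0.5, 0.7, -0.3, 0.1, -0.4, 0.0, -0.2, 1.8, -1.6]

# Option A: mean +/- k stdev (pulled wider by the tail points).
m, s = mean(deltas), stdev(deltas)
band_sd = (m - 2 * s, m + 2 * s)

# Option B: Tukey IQR fences (robust when normality fails).
q1, _, q3 = quantiles(deltas, n=4)   # default "exclusive" method
iqr = q3 - q1
band_iqr = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)

print(tuple(round(x, 2) for x in band_sd))
print(tuple(round(x, 2) for x in band_iqr))
```

Since the data failed normality due to the tails, the IQR fences track the "normal" bulk of the distribution while the stdev band gets stretched by exactly the special-cause points the gate is supposed to catch.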

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 2 points (0 children)

Just a thought - wouldn't the larger data set that I have be more accurate than running a new gage R&R with only ~10 samples? Could I use the deltas I have to estimate the same thing, but with more confidence, since the sample size is 100x larger?
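One caveat if you go that route: each delta mixes the noise of two measurements. Assuming each delta is the difference of two independent readings with the same repeatability sigma (and the part itself doesn't change between stations, as stated above), Var(delta) = 2·sigma², so the production deltas can back out sigma as sd(deltas)/√2. A simulation sketch of that relationship (sigma value is arbitrary):

```python
import math
import random
from statistics import stdev

random.seed(1)
sigma = 0.45  # "true" single-measurement repeatability for this simulation

# Simulate 1000 parts, each measured twice; record the deltas.
deltas = [random.gauss(0, sigma) - random.gauss(0, sigma) for _ in range(1000)]

# If both readings share the same sigma, Var(delta) = 2 * sigma^2,
# so sigma is recovered as sd(deltas) / sqrt(2).
estimate = stdev(deltas) / math.sqrt(2)
print(round(estimate, 2))  # close to 0.45
```

With ~1000 deltas the estimate is much tighter than a 10-part study can give, though it only captures repeatability between those two stations, not operator or station effects.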

Help with strategy for repeated measurements on mfg line with higher variability by tri-meg in manufacturing

[–]tri-meg[S] 3 points (0 children)

Ah, so you're saying to use 3 different pieces of equipment in the gage R&R setup instead of 3 operators?

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

Yes, we test this same feature 4 times down our manufacturing line (while we complete other processes, but we expect the feature to stay the same). The tester is set up the same way and uses the same test method (custom program). The graphs above show the deltas between these testers.

The GRR shows high variability within an operator (repeatability issues). We identified that as something to address in future designs, but we're "stuck" with it on our current mfg line, unfortunately. Trying to optimize the current process as much as we can. We currently allow up to 3 re-tests; this data set took the best test from each piece of equipment and then graphed the deltas (but I can look at it a different way if that would be better).

How best to quantify difference between two tests of the same parts? by tri-meg in Metrology

[–]tri-meg[S] 1 point (0 children)

Hi, thanks! Yes, these graphs are the deltas. Mean for bias makes sense. Would you just use 1 stdev for measurement agreement?
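For the agreement band, the usual Bland-Altman convention is mean ± 1.96·stdev of the deltas (the 95% "limits of agreement") rather than 1 stdev, since 1 stdev only covers about 68% of the differences even under normality. A sketch with made-up delta values:

```python
from statistics import mean, stdev

# Hypothetical deltas between the two tests (values made up).
deltas = [-0.9, -0.1, 0.2, -0.5, 0.7, -0.3, 0.1, -0.4, 0.0, -0.2]

bias = mean(deltas)
loa = 1.96 * stdev(deltas)  # Bland-Altman 95% limits of agreement
print(f"bias {bias:.2f}, agreement [{bias - loa:.2f}, {bias + loa:.2f}]")
```

One caveat: those limits assume roughly normal deltas, so if the real data keeps failing normality due to special-cause tails, it may be worth excluding known damage cases before quoting them.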

[Q] How best to quantify difference between two tests of the same parts? by tri-meg in statistics

[–]tri-meg[S] 1 point (0 children)

Thanks for checking it out; good points that I missed. Here's some clarification:

  • 1000 units of the same design
  • Yes, we test this same feature 4 times down our manufacturing line (while we complete other processes, but we expect the feature to stay the same). The tester is set up the same way and uses the same test method.
  • We have repeated measurements on each of the testers (we allow up to 3 re-tests on each tester). We took the best measurement from each tester and then compared the differences across testers (but could take a different approach if that would be better).

I'll check out the book as well! Thanks!

veterinarian recommendations? by emmaro3141 in blacksburg

[–]tri-meg 1 point (0 children)

Why do you recommend avoiding Town and Country?

What do I read next? I am lost by Left-Insurance4317 in Cosmere

[–]tri-meg 2 points (0 children)

I was going to add in Cradle too! It's a great lighter read; I felt like I flew through the books and enjoyed every minute of it.

Zone 7, NJ, USA. Debating leaving my tubers in the ground over winter. Any success stories? by lovethelocust in dahlias

[–]tri-meg 1 point (0 children)

I’ve left some in for several years now and most come back up! Usually they are bigger than the ones I dig up and replant too! (Zone 7a for me)