Is SPC obsolete for real-time monitoring in electronics manufacturing?

Statistical Process Control (SPC) has long been an important tool for companies looking to ensure high product quality. In modern electronics manufacturing, however, the complexities involved violate the fundamental SPC assumption of process stability. Combined with the ever-increasing amount of data collected, this makes traditional SPC all but worthless as a high-level approach to quality management. An approach aligned with the Lean Six Sigma philosophy, with a wider scope than SPC, is superior at identifying and prioritizing relevant improvement initiatives.

Fundamental Limitations
SPC still appears to hold an important position at Original Equipment Manufacturers (OEMs). It is found in continuous manufacturing processes, calculating control limits and attempting to detect out-of-control process parameters. A fundamental assumption of SPC is that you have been able to remove or account for the special cause variations in the process, so that all remaining variation is common cause [1]. Any signal the chart then raises points to a new special cause: a parameter you need to worry about because it has started to drift.

An electronics product today can contain hundreds of components. It will experience many design modifications due to factors such as component obsolescence. It will be tested at various stages during the assembly process, across multiple firmware revisions, test software versions, test operators, varying environmental conditions, and so forth. You never have a stable process.

A method in SPC developed by the Western Electric Company back in 1956 is known as the Western Electric Rules, or WECO. It specifies a set of rules whose violation justifies investigation, based on how far observations fall from the process mean, measured in standard deviations [2]. One problematic feature of WECO is that, on average, it will trigger a false alarm every 91.75 measurements even when the process is perfectly healthy.
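To make the rules concrete, here is a minimal sketch of the four classic WECO zone rules applied to a series of measurements, assuming the in-control mean and standard deviation are already known; the function name and structure are illustrative, not taken from any particular SPC library:

```python
# Minimal sketch of the four basic Western Electric (WECO) zone rules.
# Assumes mu and sigma describe the in-control process.
def weco_violations(samples, mu, sigma):
    """Return indices where any of the four basic WECO rules fire."""
    z = [(x - mu) / sigma for x in samples]  # distance from mean, in sigmas
    hits = set()
    for i in range(len(z)):
        # Rule 1: one point beyond 3 sigma
        if abs(z[i]) > 3:
            hits.add(i)
        # Rule 2: two of three consecutive points beyond 2 sigma, same side
        if i >= 2:
            w = z[i - 2:i + 1]
            if sum(v > 2 for v in w) >= 2 or sum(v < -2 for v in w) >= 2:
                hits.add(i)
        # Rule 3: four of five consecutive points beyond 1 sigma, same side
        if i >= 4:
            w = z[i - 4:i + 1]
            if sum(v > 1 for v in w) >= 4 or sum(v < -1 for v in w) >= 4:
                hits.add(i)
        # Rule 4: eight consecutive points on the same side of the mean
        if i >= 7:
            w = z[i - 7:i + 1]
            if all(v > 0 for v in w) or all(v < 0 for v in w):
                hits.add(i)
    return sorted(hits)
```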

False Alarms Everywhere!
Let’s say you have an annual production output of 10,000 units. Each gets tested through 5 different processes, and each process takes an average of 25 measurements. Assuming 220 working days per year, combining these figures gives you roughly 62 false alarms per working day.
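A quick back-of-envelope script reproduces that number from the figures above:

```python
# Back-of-envelope estimate of the daily false alarm load, using the
# production figures quoted above and the WECO false alarm rate of
# one alarm per 91.75 in-control measurements.
units_per_year = 10_000
processes_per_unit = 5
measurements_per_process = 25
working_days_per_year = 220
weco_false_alarm_rate = 1 / 91.75

measurements_per_year = units_per_year * processes_per_unit * measurements_per_process
false_alarms_per_day = measurements_per_year * weco_false_alarm_rate / working_days_per_year
print(f"~{false_alarms_per_day:.0f} false alarms per working day")  # ~62
```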

People receiving 62 emails per day from a single source would likely block the sender, leaving potentially important announcements unacknowledged, with no follow-up. SPC-savvy users will likely argue that newer and improved analytical methods can reduce the number of false alarms. Even so, false alarms will limit your ability to follow up on your biggest quality concerns.

Enter KPIs
What most people do is make assumptions about a limited set of important parameters to monitor, and carefully track these by plotting them in Control Charts, X-mR Charts, or whatever they use to try and find issues. In reality, these KPIs are very often captured and analyzed well downstream in the manufacturing process, after multiple units have been combined into a system.

An obvious consequence of this is that problems are not detected where they happen, as they happen. The origin could easily be one of the components or processes upstream, manufactured a month ago in a batch that by now has reached 50,000 units. A cost-failure relationship known as the 10x Rule says that for each step in the manufacturing process a failure is allowed to continue, the cost of fixing it increases by a factor of 10. A failure found at system level can mean that technicians need to pick the product apart, an act that in itself creates opportunities for new defects. Should the failure be allowed to reach the field, the cost implications can be catastrophic.
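To make the compounding explicit, here is a toy illustration of the 10x Rule; the stage names and the normalized base cost of 1 are made up for the example:

```python
# Toy illustration of the 10x Rule: each stage a defect escapes multiplies
# the cost of fixing it by roughly ten. Stage names and the base cost are
# illustrative assumptions, not data from a specific factory.
stages = ["component test", "board test", "system test", "field"]
base_cost = 1  # relative cost of fixing the defect where it originates
for escaped, stage in enumerate(stages):
    print(f"caught at {stage}: ~{base_cost * 10 ** escaped}x")
```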

One of the big inherent flaws of SPC, judged by the standards of modern approaches such as Lean Six Sigma, is that it makes assumptions about where problems are coming from. This is an obvious consequence of assuming stability in what are, in reality, highly dynamic factors, as mentioned earlier. Trending and tracking a limited set of KPIs only amplifies this flaw.

A Modern Approach
In electronics manufacturing, this starts with an honest recognition and monitoring of your First Pass Yield (FPY). True FPY, to be more precise: the share of units that pass every test on the first attempt, with no retests and no rework. Every test after the first represents waste, resources the company could have spent better elsewhere. True FPY is perhaps your single most important KPI, yet most OEMs have no real clue what theirs is.
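As an illustration, here is a minimal sketch of how True FPY could be computed from raw test records; the record fields (serial, attempt, passed) are assumptions for the example, not the schema of any particular MES:

```python
# Minimal sketch: a unit counts toward True FPY only if every one of its
# tests passed on the first attempt. Record fields are assumed for the example.
def true_fpy(test_records):
    first_pass = {}
    for rec in test_records:
        ok = rec["passed"] and rec["attempt"] == 1
        first_pass[rec["serial"]] = first_pass.get(rec["serial"], True) and ok
    return sum(first_pass.values()) / len(first_pass) if first_pass else 0.0

records = [
    {"serial": "A1", "test": "ICT", "attempt": 1, "passed": True},
    {"serial": "A1", "test": "FCT", "attempt": 1, "passed": True},
    {"serial": "A2", "test": "ICT", "attempt": 1, "passed": False},
    {"serial": "A2", "test": "ICT", "attempt": 2, "passed": True},  # retest, so not first-pass
]
print(f"True FPY: {true_fpy(records):.0%}")  # -> 50%
```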

Real Time Dashboards and drill-down capabilities allow you to quickly identify the contributors to poor performance.

Live Dashboards
Knowing your FPY, you can break it down in parallel across different products, product families, factories, stations, fixtures, operators, and test operations. Having this data available in real time as Dashboards gives you a powerful “Captain’s View”. It lets you quickly drill down to the real origin of poor performance and make interventions based on economic reasoning. A good rule of thumb for dashboards is that unless the information is brought to you, it won’t be acted on. We simply don’t have time to go looking for trouble.
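As a sketch of the data side of such a drill-down, here is a first-pass breakdown by station using pandas; the column names and values are assumptions for illustration:

```python
# Sketch: surface the lowest-yielding stations first. Column names are
# illustrative; real data would come from your test data backend.
import pandas as pd

df = pd.DataFrame({
    "station":    ["ICT-1", "ICT-1", "ICT-2", "FCT-1", "FCT-1", "FCT-1"],
    "first_pass": [True,    False,   True,    True,    True,    False],
})
fpy_by_station = df.groupby("station")["first_pass"].mean().sort_values()
print(fpy_by_station)  # worst performers at the top of the list
```

The same groupby works for any of the other dimensions: product, factory, fixture, or operator.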

As a next step, it is critical that you can quickly drill down to a Pareto view of your most frequently occurring failures, across any of these dimensions. At this point, tools from SPC may well become relevant for learning more of the details. But now you know you are applying them to something of high relevance, not based on educated guesses. You suddenly find yourself in a situation where you can prioritize initiatives based on a realistic cost-benefit ratio.
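A Pareto view is straightforward once failures are logged with a code; this hypothetical sketch assumes one failure code per failed test:

```python
# Sketch of a failure Pareto with cumulative share, assuming each failed
# test logs a failure code. Codes here are made up for illustration.
from collections import Counter

failure_codes = ["F-SOLDER", "F-SOLDER", "F-FW", "F-SOLDER", "F-CONN", "F-FW"]
total = len(failure_codes)
cumulative = 0
for code, count in Counter(failure_codes).most_common():
    cumulative += count
    print(f"{code}: {count} ({cumulative / total:.0%} cumulative)")
```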

In short, quality-influencing actions come from informed decisions. Unless you have a data management approach that can give you the full picture across multiple operational dimensions, you can never optimize your product and process quality, or your company’s profits.

You can’t fix what you don’t measure.
