Super-Human Quality Control

AI finds manufacturing flaws better than human inspectors.

Original vs. processed images used to check for leaks on a compressor

A computer vision model, continually trained and automatically updated, can boost quality control in factories.

What’s new: Landing AI, a machine learning platform company led by Andrew Ng, helped a maker of refrigeration compressors check them for leaks. The manufacturer fills each compressor with air and submerges it in water while an inspector looks for telltale bubbles. Landing AI’s system outperformed the inspectors.

Problem: When a visual inspection model detects a flaw where none exists, an engineer adds the example to the training set. When enough new examples have accrued, the engineer retrains the model, compares it with its predecessor and, if the new model shows improved performance, puts it into production — a laborious process that may introduce new errors.

Solution: An automated machine-learning pipeline can accelerate all of these tasks and execute them more consistently.

How it works: The Landing AI team aimed a camera at the water tank and sent the footage to a MIIVII Apex Xavier computer. The device ran two models: one looked for bubbles and classified each compressor as okay or flawed; the other watched indicator lights as an inspector activated a robot arm to sort good compressors into one area and defective ones into another, and classified the inspector’s decision.

  • The system compared machine and human decisions and sent disagreements to an off-site expert, who reviewed the video and rendered a judgment.
  • If the expert declared the model incorrect, the system added the example to the training set (if it showed a familiar sort of bubble) or to a test set (if the problem was unfamiliar, such as an oddly shaped bubble). It retrained the model weekly. (The first sketch after this list illustrates this routing.)
  • Before deploying a new model, the system ran it in parallel with the previous version and logged its output to audit its performance. If the new model performed better, it replaced the old one. (See the second sketch below.)
  • As they iterated on the model, the engineers used a data-centric approach to reduce the percentage of inaccurate inferences. For instance, they placed QR codes on the corners of the water tank, enabling a model to detect issues in the camera’s framing, and lit the tank so another model could detect murky water that needed to be changed. To help the system differentiate between metal beads (artifacts of manufacturing) and bubbles, the team highlighted bubble motion by removing the original colors from three consecutive frames and compositing them into the red, green, and blue channels of an RGB image (see the third sketch below). Bubbles lit up like a Christmas tree.
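A minimal sketch of the disagreement-routing loop in Python. Every name here (bubble_model, expert.review, the buffer structure) is a hypothetical stand-in for illustration, not Landing AI's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class DataBuffers:
    train: list = field(default_factory=list)  # familiar bubble types: retrain weekly
    test: list = field(default_factory=list)   # unfamiliar cases: held out for evaluation

def handle_compressor(frames, bubble_model, human_decision, expert, buffers):
    """Compare the model's call with the inspector's and route disagreements."""
    machine_label = bubble_model.classify(frames)  # "ok" or "flawed"
    if machine_label == human_decision:            # human_decision comes from the
        return machine_label                       # model watching the indicator lights
    # Disagreement: an off-site expert reviews the video and renders a judgment.
    verdict = expert.review(frames)
    if verdict.label != machine_label:             # the model was wrong
        example = (frames, verdict.label)
        if verdict.is_familiar_bubble:
            buffers.train.append(example)
        else:
            buffers.test.append(example)
    return verdict.label
```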
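The parallel audit amounts to a shadow deployment. A rough sketch, again with assumed names and a simple accuracy comparison as the promotion criterion:

```python
def shadow_evaluate(logged_cases, production_model, candidate_model):
    """Decide whether a retrained model should replace the production model.

    logged_cases: iterable of (frames, verified_label) pairs collected while
    the candidate runs in parallel with production.
    """
    prod_correct = cand_correct = 0
    for frames, verified_label in logged_cases:
        prod_correct += production_model.classify(frames) == verified_label
        cand_correct += candidate_model.classify(frames) == verified_label
    # Promote only if the candidate strictly outperforms the incumbent.
    return cand_correct > prod_correct
```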
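The motion-compositing trick takes a few lines with OpenCV. This is an illustrative reconstruction from the description above, not the team's code: converting each frame to grayscale removes the original colors, and stacking three consecutive grayscale frames as the channels of one image leaves static objects gray while moving bubbles appear in color:

```python
import cv2

def composite_motion(frame_t0, frame_t1, frame_t2):
    """Combine three consecutive BGR frames into one motion-highlighting image."""
    # Drop the original colors: one grayscale plane per timestep.
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
             for f in (frame_t0, frame_t1, frame_t2)]
    # Stack the planes as the three channels of a single image. Stationary
    # pixels (tank, compressor, metal beads) get equal values in every channel
    # and render as gray; rising bubbles differ across frames and show up as
    # color fringes.
    return cv2.merge(grays)
```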

Results: After two months of iteration, the team put the system to a test. Of 50,000 cases in which the system expressed certainty, it disagreed with human experts in only five, and it was correct in four of those. In 3 percent of cases, it was insufficiently certain to render a decision and deferred to humans.

Why it matters: Human inspectors are expensive and subject to errors. Shifting some of their responsibility to a machine learning system — especially one that performs better than humans — would enable manufacturers to reallocate human attention elsewhere.

We’re thinking: A human-in-the-loop deployment that maintains a feedback loop between human experts and algorithms is a powerful way to learn — for both people and machines.
