Breaking down barriers for quality inspection.
By: Jim Wilmot
Vision systems for quality control have historically been complex, finely tuned deployments with extensive design, set-up and scaling costs. Before the rise of industrial AI, quality control in manufacturing relied heavily on rule-based machine vision. These traditional solutions used fixed algorithms and predefined image-processing rules such as edge detection, thresholding and pattern matching to identify defects in parts or products. Engineers manually configured inspection criteria based on visual tolerances, often requiring significant effort to fine-tune lighting, camera positioning and software thresholds.
While effective for consistent and well-defined tasks, legacy systems struggled with variability. Minor deviations in product appearance due to lighting, surface texture or natural variation could trigger false positives or false negatives. Adapting to new product designs or changing inspection conditions often meant rewriting or recalibrating the rules, leading to downtime and engineering overhead.
These bespoke deployments required expensive consultants for the software, the lighting and even the cameras themselves, which drove up costs and increased downtime, even for relatively small product changes. The systems lacked contextual understanding: they couldn’t learn from new examples or adapt to edge cases, and they were often standalone solutions with limited integration into broader data ecosystems or feedback loops for continuous improvement.
This combination of complexity, cost and inflexibility has limited the cost-effectiveness of machine-vision systems in quality control deployments. As a result, rigid legacy vision systems paved the way for AI-enabled vision, which uses deep learning to detect complex patterns, generalize from data and adapt dynamically. That capability reduces manual set-up, improves accuracy and lowers deployment costs.
AI-enabled vision systems are changing the calculus for many industrial machine-vision applications. For many quality control needs, AI vision systems replace the complex rule-based software, the expensive consultants and the carefully tuned hardware and lighting. By leveraging trainable AI models, plug-and-play vision systems can now be deployed with little downtime and low overhead, and taught to complete quality control tasks beyond the reach of expensive legacy solutions.
The adaptability of AI models means that common obstacles to vision system deployment, such as scaling production or varied lighting, are far less of a concern. Legacy rule-based systems struggled with variability and dynamic environments; trained AI systems excel at ignoring insignificant data points and detecting only what they have learned to look for.
How AI systems are deployed and configured illustrates how differently they operate. Instead of carefully calibrating hardware and lighting before programming a rules-based system, once the hardware for a trained AI system is connected, a wizard-based set-up guides the process, and any engineer can complete it.
The various models within the system include the acquisition AI, which obtains the optimal image; the recognition AI, which identifies the part; and the inspection AI, which identifies defects. All of these can be trained by showing the system “passing” parts, mapping the product, and directing the model to areas of concern. From there, the models learn by passing and failing parts, becoming more accurate over time with feedback.
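To make the division of labor between those models concrete, the following is a minimal Python sketch of the flow under very simplified assumptions. The class names, the thresholding logic and the sample frames are illustrative inventions, not any vendor's actual API; a real system would run trained deep-learning models where the stubs sit.

```python
# Illustrative sketch of the three-stage flow: acquisition, recognition,
# then inspection, with a simple feedback rule. All names and data here
# are assumptions for the example, not a specific product's interface.

from dataclasses import dataclass


@dataclass
class InspectionResult:
    part_id: str
    passed: bool
    defect_score: float


class AcquisitionModel:
    """Returns a usable image; a real model would tune exposure and focus."""
    def capture(self, frame):
        return frame


class RecognitionModel:
    """Identifies which part (and region of interest) is in view."""
    def identify(self, image):
        return image.get("part_id", "unknown")


class InspectionModel:
    """Scores a part against what it learned from known-good examples."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def score(self, image):
        # A trained model would predict a defect probability from pixels;
        # this stub reads one straight from the sample data.
        return image.get("defect_score", 0.0)

    def feedback(self, image, operator_says_pass):
        # Nudge the decision threshold toward operator verdicts so the
        # system becomes more accurate over time.
        s = self.score(image)
        if operator_says_pass and s >= self.threshold:
            self.threshold = min(1.0, self.threshold + 0.05)
        elif not operator_says_pass and s < self.threshold:
            self.threshold = max(0.0, self.threshold - 0.05)


def inspect(frame, acquire, recognize, inspector):
    image = acquire.capture(frame)
    part_id = recognize.identify(image)
    s = inspector.score(image)
    return InspectionResult(part_id, passed=s < inspector.threshold, defect_score=s)


if __name__ == "__main__":
    acquire, recognize, inspector = AcquisitionModel(), RecognitionModel(), InspectionModel()
    for frame in ({"part_id": "bracket-A", "defect_score": 0.1},
                  {"part_id": "bracket-A", "defect_score": 0.8}):
        print(inspect(frame, acquire, recognize, inspector))
```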
The benefits of being able to deploy machine-vision quality control systems in-house, without expensive consultants or extensive downtime, are obvious. AI-powered machine vision excels at detecting subtle, complex defects that traditional systems miss. By learning from examples, it adapts to natural variation and performs well in dynamic environments with minimal manual tuning. It reduces false positives, improves accuracy, and enables rapid deployment across varied product lines.
This first generation of AI vision systems is limited in the complexity of applications it can learn out of the box; extension or customization beyond that can negate the cost advantage. These systems may struggle with edge cases not represented in training data, and they can raise concerns about explainability and traceability in regulated industries. Ongoing maintenance, retraining and integration with legacy infrastructure remain common challenges.
Currently available AI-enabled vision systems excel at filling common low-level needs at a fraction of the cost of bespoke legacy solutions. Because they are simple to train, easier to install and carry a much lower total cost of ownership, AI vision systems can be employed in a broad range of industrial applications where they were previously not cost-effective.
In the dynamic modern business environment, having the agility to meet changes in customer demand or the marketplace is a crucial part of maintaining a competitive advantage. Adaptive AI vision systems allow seamless repurposing for the new parts, product variations and input changes required to stay competitive.
Systems can be scaled by simply replicating the models, installing systems in new environments and retraining as needed. The plug-and-play adaptability of trainable systems is augmented by their ability to be packaged solutions instead of customized deployments. Even more flexibility can be delivered with software AI solutions that are completely hardware agnostic. Allowing any camera system of a given resolution to be deployed, with the model retrained to perform with the new hardware, provides sourcing redundancy that secures the supply chain if critical parts fail or rapid scaling is demanded.
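A rough sketch of what that replication might look like, assuming a hypothetical trained-model object and a simple per-line configuration (the names, the resolution check and the retrain() call are invented for illustration, not a specific product's interface):

```python
# Illustrative sketch of replicating one trained model across lines that
# use cameras from different vendors at or above a required resolution.

from copy import deepcopy


class TrainedInspectionModel:
    def __init__(self, required_resolution=(1920, 1080)):
        self.required_resolution = required_resolution

    def retrain(self, sample_images):
        # Fine-tune on a handful of images from the new camera so the
        # model adapts to its optics and lighting.
        print(f"  retrained on {len(sample_images)} sample images")


deployments = [
    {"line": "line-1", "camera": "vendor-A", "resolution": (1920, 1080)},
    {"line": "line-2", "camera": "vendor-B", "resolution": (2048, 1536)},
]

base_model = TrainedInspectionModel()

for site in deployments:
    width, height = site["resolution"]
    min_w, min_h = base_model.required_resolution
    if width < min_w or height < min_h:
        raise ValueError(f"{site['line']}: camera below required resolution")
    replica = deepcopy(base_model)  # same model, new environment
    print(f"deploying to {site['line']} ({site['camera']} camera)")
    replica.retrain(sample_images=["good_01.png", "good_02.png", "good_03.png"])
```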
“AI” has been implemented in various business roles over the past few years, and that broad set of applications has muddied the perception of AI-enabled applications and toolsets, both in what they are and in what their limitations are.
In an industrial context, AI models are algorithms trained on a small, application-specific dataset. The algorithms adapt based on feedback. These models can ignore “noise” in the data they collect, and because of that, trained models excel at completing tasks that previously required complex programming and fine-tuning to accomplish.
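As a toy illustration of what “trained on a small, application-specific dataset and adapted with feedback” means, here is a hedged sketch using scikit-learn on made-up feature vectors. An actual vision system would train a deep network on images rather than a logistic regression on two hand-picked measurements; the point is only the train-then-refit-on-feedback loop.

```python
# Toy example: small application-specific dataset plus a feedback step.
# The features and values are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Small labeled dataset: [surface_roughness, edge_deviation]
X = [[0.10, 0.00], [0.20, 0.10], [0.15, 0.05],   # known-good parts
     [0.80, 0.90], [0.70, 0.80], [0.90, 0.70]]   # known defects
y = [0, 0, 0, 1, 1, 1]                            # 0 = pass, 1 = defect

model = LogisticRegression().fit(X, y)

# Feedback: an operator flags a borderline part the model passed, so the
# example is added to the dataset and the model is refit.
X.append([0.50, 0.60])
y.append(1)
model = LogisticRegression().fit(X, y)

print(model.predict([[0.12, 0.03], [0.60, 0.70]]))   # likely [0, 1]
```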
By implementing AI models and training them for specific applications, companies can unlock efficiencies that were previously difficult and costly to achieve. This adaptability also means the same models can be trained to handle similar processes, giving companies a degree of in-house self-reliance in the planning, installation, configuration and operation of new machine-learning applications that they previously could not achieve. Evaluating processes and implementing AI models to deliver efficiency improvements will soon be the minimum standard for maintaining competitive advantage in industrial processes.
About the Author:
Jim Wilmot is Product Manager, SIMATIC, Software and Motion Control at Siemens, where he works with the company’s industrial automation products across the U.S. in all areas of industry. Jim has been in his current role for more than 11 years, and with Siemens for more than 27 years in total.