
Video data required for meaningful machine vision

The first step a machine vision system takes to understand images collected by cameras is to adjust them through processes such as sharpening, cropping or zooming. This processing turns raw images into meaningful information that computers can read.

As humans, we have a set of eyes capturing images, which are then sent to the brain for image identification. For machines, cameras and other visual sensors perform the function of the eyes, with software, artificial intelligence, field-programmable gate array (FPGA) chips, CPUs and GPUs filling in for the brain.

“Image processing can be seen as the first step in analyzing video data, before it is fed to the system’s computer vision algorithms,” said Jerome Gigot, senior director of marketing at Ambarella.

Processing software can sharpen an image to improve readability, change the exposure for a clearer shot, or zoom in and crop specific information, such as a barcode or address on a package.
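The sharpening and cropping steps described above can be sketched in a few lines. This is a minimal pure-Python illustration, not any vendor's pipeline; a production system would run these operations in an image signal processor or a library such as OpenCV, and the 3x3 kernel shown is just the textbook sharpen filter.

```python
# Minimal sketch: sharpening a grayscale image with a 3x3 convolution
# kernel, and cropping a region of interest (e.g. where a barcode sits).
# Illustrative only; real pipelines use hardware ISPs or OpenCV.

SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

def sharpen(img):
    """Apply the sharpen kernel to a 2-D list of pixel values (0-255)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(SHARPEN[j][i] * img[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, acc))  # clamp to the valid range
    return out

def crop(img, top, left, height, width):
    """Extract a rectangular region, e.g. around a barcode or label."""
    return [row[left:left + width] for row in img[top:top + height]]
```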

“The type of data that will be analyzed heavily depends on the manufacturing function that needs to be performed,” said Gigot.

Industrial objects, for instance, can be inspected by size, shape, color, and texture. These same variables can be also used to recognize agricultural or biological objects.

The second step is an algorithm that first distinguishes between the many different parts of an image, then identifies their edges and models their subcomponents.
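One common way to perform the edge-identification part of this step is a gradient filter. The sketch below uses the standard Sobel kernels to mark pixels where intensity changes sharply; the threshold value is an illustrative assumption, and real systems tune it per application.

```python
# Hedged sketch of the edge-finding step: a Sobel-style gradient
# estimate marks pixels where brightness changes sharply, a common
# first pass before segmenting an image into its components.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def edges(img, threshold=128):
    """Return a binary edge map for a 2-D list of grayscale pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 1 if (gx * gx + gy * gy) ** 0.5 > threshold else 0
    return out
```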

In manufacturing, computer vision isn’t limited to a single niche purpose. Some systems decode barcodes, while others inspect for defects. The latter are powered by neural networks that compare how a piece of equipment looks against how it is supposed to look; when the algorithm finds an anomaly, it flags the issue for the user. Other possibilities include monitoring, predictive maintenance, safety inspection and inventory management.

Gigot offers the example of food processing. At a food processing plant, a neural network detects bad apples in real time as they speed through the scanner and instructs the system to remove them before they are shipped out to stores.
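The "looks versus supposed-to-look" decision can be made concrete with a toy sketch. A real inspection line uses a trained neural network, but the same decision shape appears in this assumed-simple version: score each item by its deviation from a known-good reference image and flag the ones that exceed a threshold. The threshold of 20.0 is an arbitrary illustrative value.

```python
# Toy illustration of defect inspection by comparison to a reference.
# A production system would use a trained neural network; this sketch
# flags an item when its image deviates too far from a known-good
# reference image.

def anomaly_score(reference, sample):
    """Mean absolute pixel difference between reference and sample."""
    diffs = [abs(r - s) for r_row, s_row in zip(reference, sample)
             for r, s in zip(r_row, s_row)]
    return sum(diffs) / len(diffs)

def flag_defects(reference, samples, threshold=20.0):
    """Return indices of samples whose score exceeds the threshold."""
    return [i for i, s in enumerate(samples)
            if anomaly_score(reference, s) > threshold]
```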

Seeing beyond vision with predictive capacity

“In addition to cameras, machine learning-based machine vision can also incorporate data collected from various sensors, including LiDAR, radar, ultrasound, and magnetic field sensors. The rich set of data will provide further insight into other aspects of production processes,” said Lian Jye Su, Principal Analyst at ABI Research.

Conventional machine vision only detects product defects and quality issues predefined by humans. With the help of machine learning algorithms, machine vision can pick up unexpected product abnormalities or defects, providing flexibility and valuable insights for manufacturers.

Machine vision-powered predictive maintenance uses machine learning and other connected devices to monitor data and components so that corrective action can be taken before machinery breaks down. The goal is zero downtime for manufacturers, which translates into cost savings.
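The monitor-then-act-early idea can be sketched with a simple drift detector. This is not any specific product's method: it assumes a stream of numeric sensor readings (vibration, temperature, and so on) and raises an alert when the rolling mean strays from a baseline band, so corrective action can be scheduled before failure. The window size and tolerance are illustrative assumptions.

```python
# Hedged sketch of the predictive-maintenance loop: watch a stream of
# sensor readings and alert when the recent average drifts beyond a
# baseline band, giving time to act before the machine fails.

from collections import deque

def monitor(readings, baseline, tolerance, window=5):
    """Yield (index, rolling_mean) whenever the rolling mean of the
    last `window` readings strays more than `tolerance` from baseline."""
    recent = deque(maxlen=window)
    for i, r in enumerate(readings):
        recent.append(r)
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(mean - baseline) > tolerance:
                yield i, mean
```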

Another use of machine learning-equipped machine vision systems is for monitoring worker safety. Devices can track people and predict the movement of equipment, helping to prevent dangerous interactions between people and machines.
