The technology that allows robots to perceive
There are quite a few technologies that enable robots to perceive an object and find out how far away it is. Here is an overview of the major ones.

Date: 2019/08/06
Source: Prasanth Aby Thomas, Consultant Editor
Robots are becoming a ubiquitous part of factories and the physical security sector. According to a report published by the International Federation of Robotics (IFR) last year, the use of robots in industrial automation had doubled over the previous five years. Another report, from Mordor Intelligence, projects that the security robot market will grow at a CAGR of almost 8 percent between 2018 and 2024.

But robots have been in factories for decades. In fact, a FANUC manufacturing plant near Mt. Fuji in Japan that opened in 2001 is a “lights out” factory, where robots work alongside other robots to make more robots. So what has changed in recent years to boost interest and investment in this field?

The answer lies in recent innovations that allow robots to see, perceive, or understand what is in front of and around them. Early robots were mostly blind and were given a single task, or a very limited set of tasks, that they could perform without interruption. Now we see robots that can collaborate with people, deliver packages, proactively secure premises, and even make a cup of coffee!

There are quite a few technologies that enable robots to perceive an object and measure how far away it is. Here is an overview of the major ones.

Light and laser-based systems

Light Detection and Ranging (LiDAR) is a key technology in this sector. It allows machines to perceive their environment by illuminating an object and measuring the time it takes for the reflected light to return. By sending light pulses in quick succession, robots and autonomous vehicles can build a detailed map of the environment they are in.
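As a rough illustration of the time-of-flight idea behind LiDAR, the sketch below converts a measured round-trip time into a distance. The function and variable names are illustrative, not taken from any particular sensor SDK.

```python
# Minimal time-of-flight sketch: distance is half the round-trip time
# multiplied by the speed of light.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time (seconds) into a distance (metres)."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# Example: a pulse that returns after about 66.7 nanoseconds corresponds to ~10 m.
print(f"{tof_distance_m(66.7e-9):.2f} m")  # -> 10.00 m
```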

At present, several companies offer LiDAR-based solutions for autonomous systems, but the sensors they use mostly fall into three categories: single-beam, multi-beam, and rotational.
  1. Single-beam sensors: As the name suggests, single-beam sensors send one beam of light and are usually preferred for large or flat objects such as walls. They can be further divided into two categories: highly collimated beams and LED/pulsed-diode beams. The former, like a laser pointer, emits a beam that stays small and pointed all the way to its target. The latter works like a flashlight, with the light spreading out as it leaves the sensor.
  2. Multi-beam sensors: Multi-beam sensors emit multiple beams of light simultaneously, helping robots and autonomous vehicles detect objects and avoid collisions. With working ranges typically from one meter to several dozen meters, these sensors become crucial when robots have to move among people or other moving objects.
  3. Rotational sensors: Rotational sensors, like single-beam sensors, produce one beam, but they do so while the device rotates. This is useful for Simultaneous Localisation and Mapping (SLAM) and for obstacle avoidance, as the sketch after this list illustrates.
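To show how a rotating single-beam scan becomes something a SLAM or obstacle-avoidance routine can use, here is a minimal sketch that converts (angle, range) pairs from one revolution into 2D points. The angles, ranges, and range limit are illustrative assumptions; a real driver would supply these values from the sensor.

```python
import math

def scan_to_points(angles_rad, ranges_m, max_range_m=30.0):
    """Convert (angle, range) pairs from one revolution into (x, y) points,
    discarding returns beyond the sensor's usable range."""
    points = []
    for theta, r in zip(angles_rad, ranges_m):
        if 0.0 < r <= max_range_m:
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Example: four beams spaced 90 degrees apart.
angles = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
ranges = [2.0, 1.5, 35.0, 0.8]  # the 35 m return is dropped as out of range
print(scan_to_points(angles, ranges))
```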

Sensors to detect parts  

Factory robots often have to pick up objects of various shapes and sizes. This requires them to look for an object, recognize it, and know when it's ready to be picked up. Several sensors are at work here, because it's not just about detecting an object but knowing its position, orientation, or even color.
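As a rough illustration of that decision, the sketch below checks whether a detected part's position, orientation, and color match what the robot expects before a pick is attempted. The field names and thresholds are hypothetical, not from any specific vision system.

```python
from dataclasses import dataclass

@dataclass
class DetectedPart:
    x_m: float        # position on the work surface, robot-relative
    y_m: float
    yaw_deg: float    # in-plane orientation
    colour: str

def ready_to_pick(part: DetectedPart, expected_colour: str,
                  reach_m: float = 1.2, max_yaw_deg: float = 30.0) -> bool:
    """True when the part is within reach, roughly aligned, and the right colour."""
    within_reach = (part.x_m ** 2 + part.y_m ** 2) ** 0.5 <= reach_m
    aligned = abs(part.yaw_deg) <= max_yaw_deg
    return within_reach and aligned and part.colour == expected_colour

print(ready_to_pick(DetectedPart(0.6, 0.3, 12.0, "red"), expected_colour="red"))  # True
```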

3D vision

As robots begin to collaborate and work alongside humans, building machines that can learn on the go has become important. 3D machine vision, which works through laser triangulation, is now a commonly used technology and is often considered the future of robotic vision. In fact, according to MarketsandMarkets, the global 3D machine vision market is set to grow at a CAGR of over 11 percent through 2022.
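The triangulation principle itself is simple to sketch: a laser projects a spot on the part, a camera offset by a known baseline observes it, and the spot's shift in the image reveals the depth via the pinhole relation z = f·b/x. The numbers below are made-up example values, not real sensor parameters.

```python
def triangulated_depth_m(baseline_m: float, focal_px: float, spot_offset_px: float) -> float:
    """Depth from a laser-camera pair: z = f * b / x (pinhole camera model)."""
    return focal_px * baseline_m / spot_offset_px

# Example: 10 cm baseline, 800 px focal length, spot seen 160 px off-centre.
print(f"{triangulated_depth_m(0.10, 800, 160):.2f} m")  # -> 0.50 m
```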

Simply put, the robot is taught a CAD model that enables it to identify a part and its orientation. Using a laser, the robot determines its own position and orientation along its x, y, and z axes. Once it knows where it is in relation to the object, it can pick the part up or take whatever action is necessary.
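As a rough sketch of that last step, the code below chains two homogeneous transforms, the camera's pose in the robot's base frame and the part's pose in the camera frame, to express the part's position along the robot's own x, y, and z axes. The frames and numeric values are illustrative assumptions.

```python
import numpy as np

def pose_matrix(rotation_3x3, translation_xyz):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_xyz
    return T

# Assumed example values: camera mounted 0.5 m above the robot base,
# part detected 0.2 m in front of the camera.
base_T_camera = pose_matrix(np.eye(3), [0.0, 0.0, 0.5])
camera_T_part = pose_matrix(np.eye(3), [0.2, 0.0, 0.0])

base_T_part = base_T_camera @ camera_T_part
print("Part position in robot frame:", base_T_part[:3, 3])  # -> [0.2, 0.0, 0.5]
```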

Much research is currently being done on 3D machine vision. Enabling robots to perceive objects at a single glance is not easy; it requires algorithms that help the machine infer even the parts of an object that are not visible. Nevertheless, as robots look set to take over many industries, more development can be expected soon.

Related Articles
Machine vision market goes to the next level
What role does the vision system in ADAS play?
What are some industrial automation trends to watch for?