Among the many non-security applications of surveillance technology today are systems designed to aid crop cultivation and the rearing of livestock.
Research in agricultural robotics has recently picked up again thanks to advances in enabling technologies, said Professor Yael Edan of the Department of Industrial Engineering and Management at Israel's Ben-Gurion University of the Negev. A multidisciplinary group of scientists at the university has recently been engaged in two projects, one of which is the development of a prototype robot for spraying and pollinating date palm trees.
Spraying and pollinating date palm trees is currently done manually by a team of three workers from a platform lifted 18 meters or more above the ground. This method is extremely unsafe, and many accidents have occurred because the raised platform lacks stability. Alternatively, date clusters are occasionally sprayed by a large pressurized sprayer directly from the ground, a method that is highly unselective and environmentally harmful.
Edan's team is developing an automatic apparatus that can effectively and accurately spray and pollinate date clusters from a robotic device mounted on a standard tractor operated by a single driver. The apparatus consists of a robotic arm and a computer-controlled sprayer, guided by a computer-vision system that detects and localizes date clusters with a camera. This system will minimize the risk of injury, significantly save manpower (from three persons to one per team) and deliver the spray with maximum accuracy, thereby reducing chemical exposure. A small-scale prototype has been built and is currently undergoing preliminary experiments.
The advantage of robotic spraying is that material can be sprayed only onto selected targets. Precise application of herbicides directly and only onto the crowns will significantly reduce the amount of spraying material, thereby reducing costs and environmental pollution. The shape and density of the plant canopy will be detected by a machine vision system using thermal and RGB (color) cameras. This information will be fed in real time to a robotic arm, which will position the spraying nozzles' air exhaust direction according to the detected shape of the canopy.
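As a rough sketch of the idea -- and not the team's actual system -- canopy-guided spraying can be reduced to two steps: segment the canopy in an RGB frame, then aim the nozzle at the segmented region. The green-dominance test and the centroid-based aiming below are illustrative assumptions, not the project's algorithms.

```python
import numpy as np

def canopy_mask(rgb):
    """Mark pixels where green dominates red and blue -- a crude
    stand-in for the RGB/thermal canopy segmentation described above."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (g > r) & (g > b)

def nozzle_target(mask):
    """Centroid (row, col) of the detected canopy, which a controller
    could use to aim the sprayer's air exhaust; None if no canopy."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 4x4 frame with a green patch in the top-left corner.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2, :2] = (30, 180, 40)
target = nozzle_target(canopy_mask(frame))
```

In a real system the centroid would be replaced by the full detected canopy shape and density, but the data flow -- segmentation feeding an actuator command in real time -- is the same.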
Imaging in Robotics
The uncertainties in object location, size, shape and maturity necessitate a sophisticated sensory system that can identify the fruits' locations -- partially occluded, under constantly changing illumination conditions (clouds and sun direction, for example) -- and decide whether a specific fruit is ripe. Vision sensors tend to be the most suitable technique for dealing with the wide range of sizes, shapes and colors of partially occluded targets. Several sensing technologies have been investigated for fruit detection: intensity, spectral and range. Different recognition algorithms have been developed: shape-based and location-based. Fine positioning is usually achieved using a second image sensor, tactile sensors, laser ranging or ultrasonic sensors.
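One minimal illustration of the ripeness decision mentioned above is a color-ratio test: many fruits shift from green toward red or orange as they ripen, so the mean red-to-green intensity ratio over a fruit's pixels separates ripe from unripe. This is an assumed toy heuristic (including the 1.2 threshold), not one of the published recognition algorithms.

```python
import numpy as np

def ripeness_score(pixels):
    """Mean red-to-green intensity ratio over a fruit's (R, G, B) pixels.
    Ratios well above 1 suggest a reddish, ripe fruit; well below 1,
    a green, unripe one. The small epsilon avoids division by zero."""
    p = np.asarray(pixels, dtype=np.float64)
    return float((p[:, 0] / (p[:, 1] + 1e-9)).mean())

ripe_patch = [(190, 60, 40), (200, 70, 50)]     # reddish pixels
unripe_patch = [(70, 150, 60), (60, 160, 55)]   # greenish pixels
is_ripe = ripeness_score(ripe_patch) > 1.2      # illustrative threshold
```

Real systems combine such spectral cues with shape and range information precisely because illumination and occlusion make any single cue unreliable.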
Farmers need to improve yield and increase productivity to increase profits. Aerial imaging is an alternative to highly targeted, high-resolution imaging, explained Wojciech Majewski, Managing Director of Vision Asia Technology. "Plants' spectral reflectivity varies widely in the near-infrared (NIR) spectrum. With a multispectral camera, we can separate green, red and NIR light levels and, by processing the data, produce an image that shows plant health. We can clearly see indications of water stress; nitrogen deficiency in the soil; the amount of vegetative biomass, which can be related to the number, size and layers of leaves; vegetative vigor, related to the thickness of the spongy layer of a leaf; and greenness of the leaves (related to fertility and chlorophyll levels in a leaf)." The variation in the plant health index throughout the field shows areas of the healthiest crop and problem areas. With this information, efforts can be made to eliminate problems and to increase crop yields and profits.
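The standard plant health index derived from the red and NIR bands Majewski describes is NDVI, the Normalized Difference Vegetation Index: NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects strongly in NIR and absorbs red light, so the index approaches +1 over vigorous canopy and sits near 0 over bare soil. The sketch below computes it per pixel; the reflectance values and the 0.4 health threshold are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Computed per pixel; eps guards against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Synthetic 2x2 reflectance patch: left column healthy crop, right column bare soil.
nir = np.array([[0.60, 0.30],
                [0.55, 0.28]])
red = np.array([[0.08, 0.25],
                [0.10, 0.24]])
index = ndvi(nir, red)
healthy = index > 0.4   # simple health mask; threshold is an assumption
```

Mapping `index` over a whole aerial survey yields exactly the field-wide health map described in the article: high-NDVI regions mark the healthiest crop, low-NDVI regions flag problem areas.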
A recent application involved the aisles and common areas of an indoor livestock facility, illustrated Fred Turek, Chief Operating Officer of FSI Technologies. The objective was to detect the presence of unsecured livestock in secured areas, distinguished from human presence; the presence of people in the same areas, distinguished from livestock; and the turning on of brighter daytime lights. Predator detection was not part of the mission but could easily have been included.
For higher reliability, color imaging was chosen over standard gray-scale imaging and analysis. The solution included a CVS-700 vision unit with four cameras and optimized lenses. The geometric design -- the placement and angling of the cameras within the space -- aimed to minimize variations in spatial resolution (distance from the camera) and, where such variations did occur within the image, to keep them nearly constant for each area of the image.
The CVP (Central Vision Processor) implemented the image analysis and automated output solution. The software tools were given three training sets of three colors. An image transformation was done for each, resulting in an image which, pixel by pixel, represented the distance in color space from the training set. Analysis consisted of an image-math comparison of each of these against a stored image of the same space, processed in the same manner when the space was empty. For each of the three analyses, the program identified any areas where there was a difference and turned each deviation area into an object for analysis. Various features were computed (height, width, elongation, fiber length, area on the image) and fed into a neural net for classification into human, livestock, or objects and variations too small to be either.
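The pipeline Turek describes -- a per-pixel color distance against a stored empty-space image, thresholding, grouping deviations into objects, then extracting features for a classifier -- can be sketched as follows. This is a simplified assumption of the approach, not FSI's implementation; it extracts only height, width and area, and omits the neural-net stage.

```python
import numpy as np
from collections import deque

def color_distance(frame, reference):
    """Per-pixel Euclidean distance in RGB space from a stored reference image."""
    diff = frame.astype(np.float64) - reference.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def deviation_objects(dist, thresh):
    """Threshold the distance image and group deviating pixels into
    4-connected blobs; return (height, width, area) per blob -- the kind
    of features a classifier would separate into human vs. livestock."""
    mask = dist > thresh
    seen = np.zeros_like(mask)
    h, w = mask.shape
    feats = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                q, pix = deque([(r, c)]), []
                seen[r, c] = True
                while q:                      # flood fill one blob
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pix]
                xs = [p[1] for p in pix]
                feats.append((max(ys) - min(ys) + 1, max(xs) - min(xs) + 1, len(pix)))
    return feats

# Demo: empty-space reference vs. a frame with one bright intruding blob.
reference = np.zeros((8, 8, 3))
frame = reference.copy()
frame[2:5, 3:5] = 200            # a 3x2-pixel "object" appears
feats = deviation_objects(color_distance(frame, reference), thresh=50.0)
```

In the production system the feature vectors (including elongation and fiber length) would go to the trained neural net; here they are simply returned for inspection.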
Upon detection of humans or livestock, the solution provides two contact closures (one for the presence of people, one for the presence of livestock), which are used to annunciate the presence at three monitor locations. Upon detection, the unit also stores a copy of the actual images that triggered it, readily accessible by simple methods over a standard PC/Ethernet network.