Developer(s) | Mark A. O'Neill |
---|---
Stable release | 2.00 / August 18, 2016 |
Written in | C |
Operating system | Linux |
Platform | IA-32, x86-64, ARM
License | Proprietary commercial software |
Website | www |
The Rana motion vision system is a motion detection system that uses computer vision to detect the presence of objects within its visual field. Rana is based on the open-source Motion package for Linux, but has significantly enhanced motion detection capabilities. It has been designed to operate as an efficient camera trap system for recording the movements of small invertebrates, and is capable of operating autonomously in the field for extended periods. To date, Rana has been used in a number of projects studying eusocial Hymenoptera, including studies of bumblebee and hornet activity in the vicinity of their nests [1] and of the behaviour of hoverflies and other pollinators at flowers, [2] [3] and as a general-purpose e-ecology tool for the automated remote observation of plant-pollinator interactions in the field. [4] [5] [6] [7]
A typical Rana setup for observing bumblebees in the vicinity of their colony works as follows. The colony is mounted on cork stilts inside an outer (plastic) weatherproof housing. Bees are channelled in and out of the nest via a one-way system. Each channel is monitored by a Philips SPC1330N autofocus USB camera, connected to an Acer Aspire One data-logging computer via USB 2.0. The logging computer runs C code implementing a motion detector loosely modelled on the frog visual system: a blob detector capable of detecting and tracking blobs of a user-determined size. The motion code runs on top of a Linux kernel, which offers reasonably good real-time performance on the relatively slow Atom N450 processor, chosen to keep power consumption low so that the logger can operate stand-alone with solar panels in remote field locations. The data logger is connected to the outside world via 100 Mbit/s Ethernet (Wi-Fi or a mobile phone dongle could be substituted in remote field locations). The system is controlled via a web interface on a remote monitoring computer, smartphone or tablet. With high-end cameras such as the Philips SPC1330N or the Logitech C270 it is also possible to point and focus the cameras from the monitoring station.
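The sketch below illustrates, in general terms, the kind of size-filtered blob detection described above: successive frames are differenced, the difference is thresholded into a binary motion mask, and connected regions whose area falls within a user-chosen range are reported. It is not taken from the Rana source; the types and function names (`frame_t`, `blob_t`, `detect_blobs`) are hypothetical.

```c
/*
 * Hedged sketch of a frame-differencing, size-filtered blob detector.
 * NOT the Rana implementation: names and structure are illustrative only.
 */
#include <stdlib.h>

typedef struct { int width, height; unsigned char *pixels; } frame_t; /* greyscale frame     */
typedef struct { int min_x, min_y, max_x, max_y, area; }     blob_t;  /* bounding box + size */

/* Flood-fill one connected region of the binary motion mask, accumulating its extent.
 * (Recursion depth is bounded by blob size; a production detector would use an explicit stack.) */
static void flood_fill(unsigned char *mask, int w, int h, int x, int y, blob_t *b)
{
    if (x < 0 || y < 0 || x >= w || y >= h || !mask[y * w + x])
        return;

    mask[y * w + x] = 0;                      /* mark pixel as visited */
    b->area++;
    if (x < b->min_x) b->min_x = x;
    if (y < b->min_y) b->min_y = y;
    if (x > b->max_x) b->max_x = x;
    if (y > b->max_y) b->max_y = y;

    flood_fill(mask, w, h, x + 1, y, b);      /* 4-connected neighbours */
    flood_fill(mask, w, h, x - 1, y, b);
    flood_fill(mask, w, h, x, y + 1, b);
    flood_fill(mask, w, h, x, y - 1, b);
}

/*
 * Difference the current frame against the previous one, threshold it, and
 * report connected regions whose area lies between min_area and max_area
 * (the "user-determined size" mentioned in the text).
 * Returns the number of blobs written into out[].
 */
int detect_blobs(const frame_t *prev, const frame_t *cur,
                 int diff_threshold, int min_area, int max_area,
                 blob_t *out, int max_blobs)
{
    int w = cur->width, h = cur->height, n = 0;
    unsigned char *mask = malloc((size_t)w * h);
    if (!mask)
        return 0;

    /* 1. Binary motion mask from the inter-frame difference. */
    for (int i = 0; i < w * h; i++) {
        int d = abs((int)cur->pixels[i] - (int)prev->pixels[i]);
        mask[i] = (d > diff_threshold) ? 1 : 0;
    }

    /* 2. Connected-component pass with a size filter. */
    for (int y = 0; y < h && n < max_blobs; y++) {
        for (int x = 0; x < w && n < max_blobs; x++) {
            if (!mask[y * w + x])
                continue;
            blob_t b = { x, y, x, y, 0 };
            flood_fill(mask, w, h, x, y, &b);
            if (b.area >= min_area && b.area <= max_area)
                out[n++] = b;                 /* keep only blobs of the wanted size */
        }
    }

    free(mask);
    return n;
}
```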
In addition to detecting moving blobs, the Rana system can also track the path of these blobs through its visual field. If required, the visual field can be split into a number of sub-fields within which blobs are tracked independently. This permits a single camera to monitor several visual channels, reducing system hardware complexity and expense.
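A minimal sketch of such sub-field tracking is shown below, assuming a fixed grid of sub-fields and nearest-neighbour matching of blob centroids between frames. The names (`track_t`, `assign_subfield`, `update_tracks`) are hypothetical and not taken from Rana.

```c
/*
 * Hedged sketch of sub-field blob tracking: the visual field is divided into
 * a grid of independent sub-fields, and within each sub-field a new blob
 * centroid is appended to the nearest existing track.  Illustrative only.
 */
#include <math.h>

#define SUBFIELDS_X 2            /* e.g. a 2 x 2 grid of independent channels */
#define SUBFIELDS_Y 2
#define MAX_TRACKS  64
#define MAX_PATH    256

typedef struct { float x, y; } point_t;                          /* blob centroid        */
typedef struct { point_t path[MAX_PATH]; int len; int subfield; } track_t;

/* Map a centroid to the index of the sub-field (channel) that contains it. */
static int assign_subfield(point_t p, int frame_w, int frame_h)
{
    int sx = (int)(p.x * SUBFIELDS_X / frame_w);
    int sy = (int)(p.y * SUBFIELDS_Y / frame_h);
    if (sx >= SUBFIELDS_X) sx = SUBFIELDS_X - 1;                 /* clamp edge pixels    */
    if (sy >= SUBFIELDS_Y) sy = SUBFIELDS_Y - 1;
    return sy * SUBFIELDS_X + sx;
}

/*
 * Append each new centroid to the nearest existing track in the same
 * sub-field, or start a new track if none lies within max_jump pixels.
 */
void update_tracks(track_t *tracks, int *n_tracks,
                   const point_t *centroids, int n_centroids,
                   int frame_w, int frame_h, float max_jump)
{
    for (int i = 0; i < n_centroids; i++) {
        int sf = assign_subfield(centroids[i], frame_w, frame_h);
        int best = -1;
        float best_d = max_jump;

        for (int t = 0; t < *n_tracks; t++) {
            if (tracks[t].subfield != sf || tracks[t].len == 0)
                continue;
            point_t last = tracks[t].path[tracks[t].len - 1];
            float d = hypotf(centroids[i].x - last.x, centroids[i].y - last.y);
            if (d < best_d) { best_d = d; best = t; }
        }

        if (best < 0 && *n_tracks < MAX_TRACKS) {                /* start a new track    */
            best = (*n_tracks)++;
            tracks[best].len = 0;
            tracks[best].subfield = sf;
        }
        if (best >= 0 && tracks[best].len < MAX_PATH)
            tracks[best].path[tracks[best].len++] = centroids[i];
    }
}
```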
Subsequently, Rana has been ported to a number of low-power ARM-based devices, such as the Raspberry Pi and the Odroid C1 and C2, which can be operated off-grid in remote field locations.
Remote control of Rana (and live video streaming from the camera(s) it monitors) is accomplished via a web services interface. These systems have recently been used by researchers at the Royal Botanic Gardens, Kew to monitor the activities of pollinators at floral patches both within Kew and in the field, in order to determine the pollinators of the rare pasque flower in the Chiltern Hills. It has also been used with near-infrared night vision cameras to monitor the activities of nocturnal invertebrates, including cockroaches, moths and bed bugs. A recent study by Red Butte Garden and Arboretum (University of Utah) used the system to observe thousands of hours of plant-pollinator interactions in the Utah desert, and some representative footage from that study has been published online. Rana was also showcased in the Red Butte Garden newsletters in 2016 and 2017.