AVM Navigator

AVM Navigator is a plugin module for RoboRealm that provides object recognition and autonomous robot navigation, using a single video camera mounted on the robot as the main navigation sensor.

Associative Video Memory

This is made possible by the Associative Video Memory (AVM) algorithm, which is based on multilevel decomposition of recognition matrices and provides image recognition with a low false acceptance rate (about 0.01%). Visual navigation then amounts to memorizing a sequence of images (landmarks) with associated coordinates in the AVM tree during route training. The navigation map is represented as the set of data (such as X and Y coordinates and azimuth) associated with the images in the AVM tree. When the robot recognizes memorized images (marks) in the camera view, it confirms its current location.
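
As a rough illustration of that landmark-to-pose association (a minimal sketch only; the names Landmark, NavigationMap and confirm_location are hypothetical and not part of the RoboRealm or AVM API):

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    landmark_id: int   # identifier of the memorized image inside the AVM tree
    x: float           # X coordinate recorded during route training
    y: float           # Y coordinate recorded during route training
    azimuth: float     # robot heading (in degrees) when the image was memorized

class NavigationMap:
    """Associates recognized landmark IDs with the pose stored at training time."""

    def __init__(self):
        self._landmarks = {}

    def remember(self, landmark):
        """Called during route training: store the pose for a memorized image."""
        self._landmarks[landmark.landmark_id] = landmark

    def confirm_location(self, recognized_ids):
        """Return stored poses for landmarks recognized in the current frame."""
        return [self._landmarks[i] for i in recognized_ids if i in self._landmarks]

# Training associates image IDs with poses; at run time the recognition
# results (IDs of matched images) confirm the robot's current location.
nav_map = NavigationMap()
nav_map.remember(Landmark(landmark_id=7, x=1.5, y=0.2, azimuth=90.0))
print(nav_map.confirm_location([7, 42]))  # -> [Landmark(landmark_id=7, ...)]
```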

The navigator builds a route from the current location to the target position as a chain of waypoints. If the robot's current orientation does not point toward the next waypoint, the navigator turns the robot's body. When the robot reaches a waypoint, the navigator changes direction to the next waypoint in the chain, and so on until the target position is reached.
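
The waypoint-chain logic can be sketched as follows (this is not the actual AVM Navigator implementation; get_pose, turn and drive are hypothetical callbacks assumed to be supplied by the robot platform):

```python
import math

def follow_waypoints(get_pose, turn, drive, waypoints,
                     reach_tol=0.1, heading_tol=5.0):
    """Drive through a chain of (x, y) waypoints; a sketch, not production code."""
    for wx, wy in waypoints:
        while True:
            x, y, heading = get_pose()          # current pose, heading in degrees
            if math.hypot(wx - x, wy - y) < reach_tol:
                break                           # waypoint reached, take the next one
            bearing = math.degrees(math.atan2(wy - y, wx - x))
            error = (bearing - heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
            if abs(error) > heading_tol:
                turn(error)                     # rotate the body toward the waypoint
            else:
                drive(0.1)                      # advance a small step
```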

Related Research Articles

Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do.

Celestial navigation

Celestial navigation, also known as astronavigation, is the ancient and modern practice of position fixing that enables a navigator to determine their position without relying solely on estimated calculations, or dead reckoning. Celestial navigation uses "sights", or angular measurements taken between a celestial body and the visible horizon. The Sun is most commonly used, but navigators can also use the Moon, a planet, Polaris, or one of 57 other navigational stars whose coordinates are tabulated in the nautical and air almanacs.

A waypoint is an intermediate point or place on a route or line of travel, a stopping point or point at which course is changed; the first recorded use of the term dates to 1880. In modern terms, it most often refers to coordinates which specify one's position on the globe at the end of each "leg" (stage) of an air flight or sea passage, the generation and checking of which are generally done computationally.

Motion capture

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision and robotics. In filmmaking and video game development, it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Robotic mapping is a discipline related to computer vision and cartography. The goal for an autonomous robot is to be able to construct a map or floor plan and to localize itself and its recharging bases or beacons in it. Robotic mapping is the branch that deals with the robot's ability to localize itself within a given map or plan and, in some cases, to construct that map or floor plan itself.

Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used in manufacturing as a part of quality control, a way to navigate a mobile robot, or as a way to detect edges in images.
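
A minimal, illustrative form of template matching is a brute-force sum-of-squared-differences search over the image (practical systems typically use normalized cross-correlation and optimized routines such as OpenCV's matchTemplate; the function below is only a sketch):

```python
import numpy as np

def match_template_ssd(image, template):
    """Return the top-left (row, col) of the best match by sum of squared differences.

    Both arguments are 2-D grayscale NumPy arrays; brute-force sketch only.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw].astype(float)
            ssd = np.sum((patch - template.astype(float)) ** 2)
            if ssd < best_score:
                best_score, best_pos = ssd, (r, c)
    return best_pos
```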

Course (navigation)

In navigation, the course of a watercraft or aircraft is the cardinal direction in which the craft is to be steered. The course is to be distinguished from the heading, which is the compass direction in which the craft's bow or nose is pointed.

Motion detection is the process of detecting a change in the position of an object relative to its surroundings or a change in the surroundings relative to an object. It can be achieved by either mechanical or electronic methods. When it is done by natural organisms, it is called motion perception.

Video tracking is the process of locating a moving object over time using a camera. It has a variety of uses, some of which are: human-computer interaction, security and surveillance, video communication and compression, augmented reality, traffic control, medical imaging and video editing. Video tracking can be a time-consuming process due to the amount of data that is contained in video. Adding further to the complexity is the possible need to use object recognition techniques for tracking, a challenging problem in its own right.

Motion planning, also known as path planning, is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics and computer games.
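
As a toy worked example of the idea (a breadth-first search over a 2-D occupancy grid; practical planners work in continuous configuration spaces, handle kinematic constraints, and use methods such as A*, RRT or PRM):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Find a path on a 2-D occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back through predecessors
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# Usage: a 3x3 grid with one obstacle in the middle of the top row.
print(plan_path([[0, 1, 0], [0, 0, 0], [0, 0, 0]], (0, 0), (0, 2)))
```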

The stereo camera approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision.

An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.

Robot localization denotes the robot's ability to establish its own position and orientation within a frame of reference. Path planning is effectively an extension of localization, in that it requires determining both the robot's current position and the position of a goal location within the same frame of reference or coordinate system. Map building can take the form of a metric map or any notation describing locations in the robot's frame of reference.

Short baseline acoustic positioning system

A short baseline (SBL) acoustic positioning system is one of three broad classes of underwater acoustic positioning systems that are used to track underwater vehicles and divers. The other two classes are ultra short baseline systems (USBL) and long baseline systems (LBL). Like USBL systems, SBL systems do not require any seafloor mounted transponders or equipment and are thus suitable for tracking underwater targets from boats or ships that are either anchored or under way. However, unlike USBL systems, which offer a fixed accuracy, SBL positioning accuracy improves with transducer spacing. Thus, where space permits, such as when operating from larger vessels or a dock, the SBL system can achieve a precision and position robustness that is similar to that of sea floor mounted LBL systems, making the system suitable for high-accuracy survey work. When operating from a smaller vessel where transducer spacing is limited, the SBL system will exhibit reduced precision.

RoboLogix

RoboLogix is a robotics simulator which uses a physics engine to emulate robotics applications. The advantages of using robotics simulation tools such as RoboLogix are that they save time in the design of robotics applications and they can also increase the level of safety associated with robotic equipment since various "what if" scenarios can be tried and tested before the system is activated. RoboLogix provides a platform to teach, test, run, and debug programs that have been written using a five-axis industrial robot in a range of applications and functions. These applications include pick-and-place, palletizing, welding, and painting.

Robotic sensing is a subarea of robotics science intended to give robots sensing capabilities, so that robots are more human-like. Robotic sensing mainly gives robots the ability to see, touch, hear and move and uses algorithms that require environmental feedback.

Moving map display

A moving map display is a type of navigation system output that, instead of numerically displaying the current geographical coordinates determined by the navigation unit or showing the heading and distance to a certain waypoint, displays the unit's current location at the center of a map. As the unit moves and new coordinates are determined, the map moves to keep the unit's position at the center of the display. Mechanical moving map displays using paper charts were first introduced in the 1950s and became common in some roles during the 1960s. Mechanically moved paper maps were replaced by projected map displays and digital maps during the 1970s and 80s, with resolution and detail improving along with computer imagery and the computer memory systems that held the data.

An autonomous aircraft is an aircraft which flies under the control of automatic systems and needs no intervention from a human pilot. Most autonomous aircraft are unmanned aerial vehicles, or drones; however, autonomous control systems are reaching a point where several air taxis and associated regulatory regimes are being developed.

Visage SDK

visage|SDK is a multiplatform software development kit (SDK) created by Visage Technologies AB. visage|SDK allows software programmers to build a wide variety of face and head tracking and eye tracking applications for various operating systems, mobile and tablet environments, and embedded systems, using computer vision and machine learning algorithms.

Air-Cobot

Air-Cobot is a French research and development project, begun in 2013, of a wheeled collaborative mobile robot able to inspect aircraft during maintenance operations. This multi-partner project involves research laboratories and industry. Research around this prototype was developed in three domains: autonomous navigation, human-robot collaboration and nondestructive testing.