Robot navigation


Robot localization denotes the robot's ability to establish its own position and orientation within a frame of reference. Path planning is effectively an extension of localization, in that it requires determining both the robot's current position and the position of a goal location within the same frame of reference or coordinate system. Map building can take the form of a metric map or any other notation describing locations in the robot's frame of reference.


For any mobile device, the ability to navigate in its environment is important. Avoiding dangerous situations such as collisions and unsafe conditions (temperature, radiation, exposure to weather, etc.) comes first, but if the robot has a purpose that relates to specific places in its environment, it must be able to find those places. This article presents an overview of the skill of navigation, identifies the basic building blocks of a robot navigation system, describes types of navigation systems, and takes a closer look at their components.

Robot navigation means the robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate in its environment, the robot or any other mobile device requires a representation of the environment, i.e. a map, and the ability to interpret that representation.

Navigation can be defined as the combination of three fundamental competences (a minimal code sketch follows the list): [1]

  1. Self-localization
  2. Path planning
  3. Map-building and map interpretation
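
The split into these competences can be made concrete with a small example. The following is a minimal sketch of the path-planning step only, assuming self-localization and map-building have already supplied a start cell, a goal cell, and an occupancy-grid map; the grid contents and the breadth-first planner are illustrative assumptions, not drawn from the cited sources.

```python
# Minimal path planning on an occupancy grid via breadth-first search.
# 0 = free cell, 1 = obstacle; start/goal come from self-localization.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parents):
                parents[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(plan_path(grid, start=(0, 0), goal=(3, 3)))
```

Breadth-first search returns a step-optimal path on a uniform grid; practical planners substitute A* or sampling-based methods when maps are large or continuous.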

Some robot navigation systems use simultaneous localization and mapping to generate 3D reconstructions of their surroundings. [2]

Vision-based navigation

Vision-based navigation or optical navigation uses computer vision algorithms and optical sensors, including laser-based range finders and photometric cameras using CCD arrays, to extract the visual features required for localization in the surrounding environment. Techniques for navigation and localization using vision information vary, but the main components of each technique are a representation of the environment, a sensing model, and a localization algorithm.
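
As an illustration of the sensing side, the sketch below extracts ORB features from a camera frame, assuming the OpenCV library is installed; the file name frame.png is a hypothetical input image.

```python
# Extract visual features for localization from a single camera frame.
# Assumes OpenCV (pip install opencv-python); frame.png is hypothetical.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)            # ORB detector/descriptor
keypoints, descriptors = orb.detectAndCompute(img, None)
print(f"extracted {len(keypoints)} visual features")
```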

In order to give an overview of vision-based navigation and its techniques, we classify these techniques under indoor navigation and outdoor navigation.

Indoor navigation

Egomotion estimation from a moving camera

The easiest way of making a robot go to a goal location is simply to guide it to that location. This guidance can be done in different ways: burying an inductive loop or magnets in the floor, painting lines on the floor, or placing beacons, markers, bar codes, etc. in the environment. Such automated guided vehicles (AGVs) are used in industrial scenarios for transportation tasks. Indoor navigation of robots is also possible using IMU-based indoor positioning devices. [3] [4]
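
For a line-guided AGV, the navigation problem collapses to a feedback law that keeps the line sensor centred. The sketch below is a hypothetical proportional controller; read_line_offset, set_steering, and the gain are placeholders for real sensor and actuator interfaces.

```python
# Proportional line-following control for a guided AGV (hypothetical I/O).
K_P = 0.8  # steering gain, an assumed tuning value

def follow_line_step(read_line_offset, set_steering):
    offset = read_line_offset()    # metres left (-) or right (+) of the line
    set_steering(-K_P * offset)    # steer back toward the painted line

# toy usage with a simulated reading and a printing actuator
follow_line_step(lambda: 0.05, lambda cmd: print(f"steering: {cmd:+.3f}"))
```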

There is a wide variety of indoor navigation systems. The basic reference for indoor and outdoor navigation systems is "Vision for mobile robot navigation: a survey" by Guilherme N. DeSouza and Avinash C. Kak.

Also see "Vision based positioning" and AVM Navigator.

Autonomous flight controllers

Typical open-source autonomous flight controllers can fly in full automatic mode and perform operations such as autonomous take-off, waypoint navigation, and landing.

The onboard flight controller relies on GPS for navigation and stabilized flight, and often employs additional satellite-based augmentation systems (SBAS) and an altitude (barometric pressure) sensor. [5]
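
The barometric sensor contributes an altitude estimate through the standard-atmosphere relationship between static pressure and height. A minimal sketch, assuming the common barometric formula and a nominal sea-level reference pressure:

```python
# Convert barometric pressure to approximate altitude (standard atmosphere).
SEA_LEVEL_PA = 101325.0  # assumed reference pressure at sea level, Pa

def pressure_to_altitude(pressure_pa, p0=SEA_LEVEL_PA):
    """Approximate altitude in metres implied by a static-pressure reading."""
    return 44330.0 * (1.0 - (pressure_pa / p0) ** (1.0 / 5.255))

print(round(pressure_to_altitude(89875.0), 1))  # roughly 1000 m
```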

Inertial navigation

Some navigation systems for airborne robots are based on inertial sensors. [6]

Acoustic navigation

Autonomous underwater vehicles can be guided by underwater acoustic positioning systems. [7] Navigation systems using sonar have also been developed. [8]
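
At its core, acoustic ranging is a time-of-flight computation. A minimal sketch, assuming a nominal sound speed of 1500 m/s in seawater:

```python
# Sonar ping: two-way travel time to one-way range.
SOUND_SPEED_WATER = 1500.0  # m/s, assumed nominal value for seawater

def ping_to_range(round_trip_s, c=SOUND_SPEED_WATER):
    """One-way distance in metres to the reflecting object."""
    return c * round_trip_s / 2.0

print(ping_to_range(0.04))  # 30.0 m
```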

Radio navigation

Robots can also determine their positions using radio navigation. [9]
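
A common radio-navigation scheme is trilateration from measured ranges to beacons at known positions. The sketch below solves the linearised range equations by least squares, assuming NumPy; the beacon layout and the ranges are made-up numbers.

```python
# Trilateration: position from ranges to beacons at known 2-D positions.
import numpy as np

def trilaterate(beacons, ranges):
    """beacons: (n, 2) known positions; ranges: n measured distances."""
    b0, r0 = beacons[0], ranges[0]
    # subtracting the first range equation from the others linearises them
    A = 2.0 * (beacons[1:] - b0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)  # noiseless test ranges
print(trilaterate(beacons, ranges))  # approximately [3. 4.]
```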


Related Research Articles

Computer vision

Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Dead reckoning

In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and incorporating estimates of speed, heading, and elapsed time. The corresponding term in biology, to describe the processes by which animals update their estimates of position or heading, is path integration.
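
In code, one dead-reckoning update is a single trigonometric step; the sketch below and its sample inputs are illustrative.

```python
# Dead reckoning: advance a 2-D position fix by speed, heading, and time.
import math

def dead_reckon(x, y, speed, heading_rad, dt):
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt
    return x, y

# 1.5 m/s at a 90-degree heading for 2 s moves the fix 3 m along y
print(dead_reckon(0.0, 0.0, speed=1.5, heading_rad=math.radians(90), dt=2.0))
```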

Avinash Kak

Avinash C. Kak is a professor of Electrical and Computer Engineering at Purdue University who has conducted pioneering research in several areas of information processing. His most noteworthy contributions deal with algorithms, languages, and systems related to networks, robotics, and computer vision. Born in Srinagar, Kashmir, he earned his bachelor's degree from the University of Madras and his PhD from the Indian Institute of Technology Delhi. He joined the faculty of Purdue University in 1971.

Robotic mapping is a discipline related to computer vision and cartography. The goal for an autonomous robot is to be able to construct a map or floor plan and to localize itself, its recharging bases, or beacons within it.

Simultaneous localization and mapping

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
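
The predict/update cycle underlying the Kalman-filter family of SLAM solvers can be reduced to one dimension for illustration (a robot's position along a corridor). A minimal sketch; all noise variances are assumed values.

```python
# One-dimensional Kalman filter: the predict/update core of EKF-style SLAM.
def kf_predict(mean, var, motion, motion_var):
    return mean + motion, var + motion_var          # motion adds uncertainty

def kf_update(mean, var, measurement, meas_var):
    k = var / (var + meas_var)                      # Kalman gain
    return mean + k * (measurement - mean), (1.0 - k) * var

mean, var = 0.0, 1.0                                # initial belief
mean, var = kf_predict(mean, var, motion=1.0, motion_var=0.5)
mean, var = kf_update(mean, var, measurement=1.2, meas_var=0.4)
print(mean, var)  # belief pulled toward the measurement, variance reduced
```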

Gregory Dudek

Gregory L. Dudek is a Canadian computer scientist specializing in robotics, computer vision, and intelligent systems. He is a chaired professor at McGill University where he has led the Mobile Robotics Lab since the 1990s. He was formerly the director of McGill's school of computer science and before that director of McGill's center for intelligent machines.

Mobile robot

A mobile robot is an automatic machine that is capable of locomotion. Mobile robotics is usually considered to be a subfield of robotics and information engineering.

A positioning system is a system for determining the position of an object in space. One of the most well-known and commonly used positioning systems is the Global Positioning System (GPS).

In robotics, obstacle avoidance is the task of satisfying some control objective subject to non-intersection or non-collision position constraints. Obstacle avoidance has become increasingly important with the growing use of unmanned aerial vehicles in urban areas, particularly for military applications in urban warfare. Normally obstacle avoidance is considered distinct from path planning, in that the former is usually implemented as a reactive control law while the latter involves the pre-computation of an obstacle-free path along which a controller then guides the robot. With recent advances in the autonomous vehicle sector, a dependable obstacle avoidance capability for a driverless platform also requires a robust obstacle detection module.
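
A reactive control law of the kind described above can be as simple as a threshold on a forward range reading. In the sketch below, the sensor and actuator functions are hypothetical placeholders and the safety distance is an assumed value.

```python
# Reactive obstacle avoidance: stop and turn when something is too close.
SAFE_DISTANCE = 0.5  # metres, assumed safety threshold

def avoid_step(read_front_range, set_velocity):
    if read_front_range() < SAFE_DISTANCE:
        set_velocity(linear=0.0, angular=0.8)   # stop and turn in place
    else:
        set_velocity(linear=0.3, angular=0.0)   # clear ahead: drive forward

# toy usage with a simulated reading and a printing actuator
avoid_step(lambda: 0.3, lambda linear, angular: print(linear, angular))
```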

Indoor positioning system

An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.

Wi-Fi positioning system is a geolocation system that uses the characteristics of nearby Wi-Fi hotspots and other wireless access points to discover where a device is located.
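
One ingredient of Wi-Fi positioning is converting received signal strength into an approximate distance with the log-distance path-loss model. In the sketch below, the reference transmit power and the path-loss exponent are assumed values that must be calibrated per environment.

```python
# Distance to an access point implied by an RSSI reading (log-distance model).
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Estimated distance in metres; both defaults are assumed calibrations."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(round(rssi_to_distance(-65.0), 2))  # 10.0 m under these assumptions
```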

Visual odometry

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.
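
Two-view visual odometry can be sketched with OpenCV: match features between consecutive frames, estimate the essential matrix, and recover the relative pose. The frame file names and the camera intrinsics K below are illustrative assumptions, and the recovered translation is determined only up to scale.

```python
# Two-view visual odometry sketch (assumes OpenCV and NumPy).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # assumed pinhole camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```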

The Guidance, Control and Decision Systems Laboratory (GCDSL) is situated in the Department of Aerospace Engineering at the Indian Institute of Science in Bangalore, India. The Mobile Robotics Laboratory (MRL) is its experimental division. Both are headed by Debasish Ghose, a full professor.

Mobile Robot Programming Toolkit

The Mobile Robot Programming Toolkit (MRPT) is a cross-platform, open-source C++ library aimed at helping robotics researchers design and implement algorithms related to simultaneous localization and mapping (SLAM), computer vision, and motion planning. Different research groups have employed MRPT to implement projects reported in some of the major robotics journals and conferences.

Inertial navigation system

An inertial navigation system is a navigation device that uses motion sensors (accelerometers), rotation sensors (gyroscopes) and a computer to continuously calculate by dead reckoning the position, the orientation, and the velocity of a moving object without the need for external references. Often the inertial sensors are supplemented by a barometric altimeter and sometimes by magnetic sensors (magnetometers) and/or speed measuring devices. INSs are used on mobile robots and on vehicles such as ships, aircraft, submarines, guided missiles, and spacecraft. Older INS systems generally used an inertial platform as their mounting point to the vehicle and the terms are sometimes considered synonymous.

Inertial measurement unit

An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. When the magnetometer is included, IMUs are referred to as IMMUs.
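
A common way to fuse an IMU's gyroscope and accelerometer into an orientation estimate is a complementary filter: the gyroscope is trusted over short intervals and the gravity direction sensed by the accelerometer corrects long-term drift. A minimal pitch-only sketch; the blend factor and the sample inputs are assumed.

```python
# Complementary filter: fuse gyro rate and accelerometer gravity into pitch.
import math

ALPHA = 0.98  # assumed blend factor: gyro short-term, accelerometer long-term

def fuse_pitch(pitch, gyro_rate, ax, az, dt):
    pitch_gyro = pitch + gyro_rate * dt    # integrate angular rate (rad)
    pitch_accel = math.atan2(ax, az)       # pitch implied by gravity vector
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_accel

pitch = 0.0
pitch = fuse_pitch(pitch, gyro_rate=0.01, ax=0.17, az=9.8, dt=0.01)
print(pitch)
```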

Air-Cobot

Air-Cobot (Aircraft Inspection enhanced by smaRt & Collaborative rOBOT) is a French research and development project of a wheeled collaborative mobile robot able to inspect aircraft during maintenance operations. This multi-partner project involves research laboratories and industry. Research around this prototype was developed in three domains: autonomous navigation, human-robot collaboration and nondestructive testing.

Intrinsic localization is a method used in mobile laser scanning to recover the trajectory of the scanner, either during or after the measurement. Specifically, it is a way to recover the spatial coordinates and the rotation of the scanner without the use of any other sensors, i.e., extrinsic information. To function in practice, intrinsic localization relies on two things: first, a priori knowledge of the scanning instruments, and second, sensor data overlap exploited by simultaneous localization and mapping (SLAM) methods.

Pose tracking

In virtual reality (VR) and augmented reality (AR), a pose tracking system detects the precise pose of head-mounted displays, controllers, other objects or body parts within Euclidean space. Pose tracking is often referred to as 6DOF tracking, for the six degrees of freedom in which the pose is often tracked.

Margarita Chli

Margarita Chli is an assistant professor and leader of the Vision for Robotics Lab at ETH Zürich in Switzerland. Chli is a leader in the field of computer vision and robotics and was on the team of researchers to develop the first fully autonomous helicopter with onboard localization and mapping. Chli is also the Vice Director of the Institute of Robotics and Intelligent Systems and an Honorary Fellow of the University of Edinburgh in the United Kingdom. Her research currently focuses on developing visual perception and intelligence in flying autonomous robotic systems.

References

  1. Stachniss, Cyrill. Robotic Mapping and Exploration. Vol. 55. Springer, 2009.
  2. Fuentes-Pacheco, Jorge; Ruiz-Ascencio, José; Rendón-Mancha, Juan Manuel. "Visual simultaneous localization and mapping: a survey". Artificial Intelligence Review 43.1 (2015): 55–81.
  3. Chen, C.; Chai, W.; Nasir, A. K.; Roth, H. (April 2012). "Low cost IMU based indoor mobile robot navigation with the assist of odometry and Wi-Fi using dynamic constraints". Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium. pp. 1274–1279. doi:10.1109/PLANS.2012.6236984. ISBN 978-1-4673-0387-3. S2CID 19472012.
  4. GT Silicon (2017-01-07). An awesome robot with cool navigation and real-time monitoring. Archived from the original on 2021-12-12; retrieved 2018-04-04.
  5. "Flying | AutoQuad".
  6. Siciliano, Bruno; Khatib, Oussama (20 May 2008). Springer Handbook of Robotics. Springer Science & Business Media. pp. 1020–. ISBN 978-3-540-23957-4.
  7. Seto, Mae L. (9 December 2012). Marine Robot Autonomy. Springer Science & Business Media. pp. 35–. ISBN 978-1-4614-5659-9.
  8. Leonard, John J.; Durrant-Whyte, Hugh F. (6 December 2012). Directed Sonar Sensing for Mobile Robot Navigation. Springer Science & Business Media. ISBN 978-1-4615-3652-9.
  9. Sergiyenko, Oleg (2019). Machine Vision and Navigation. Springer Nature. pp. 172–. ISBN 978-3-030-22587-2.
