Obstacle avoidance

In robotics, obstacle avoidance is a critical aspect of autonomous navigation and control systems. It is the capability of a robot or autonomous system to detect and circumvent obstacles in its path on the way to a predefined destination. This technology plays a pivotal role in fields including industrial automation, self-driving cars, drones, and space exploration. Obstacle avoidance enables robots to operate safely and efficiently in dynamic and complex environments, reducing the risk of collisions and damage.

For a robot or autonomous system to navigate successfully through obstacles, it must first be able to detect them. This is most commonly done with sensors, which let the robot perceive its environment, decide what it must do to avoid an obstacle, and carry out that decision using its effectors, the tools that allow a robot to act on its environment. [1]

Approaches

There are several methods by which robots and autonomous machines detect and avoid obstacles in real time, including sensor-based approaches, path planning algorithms, and machine learning techniques.

Sensor-based

Example of obstacle avoidance using sensors.

One of the most common approaches to obstacle avoidance is the use of sensors such as ultrasonic, LiDAR, radar, sonar, and cameras. These sensors let an autonomous machine follow a simple three-step process: sense, think, and act. They measure distances to surrounding objects and supply the robot with data about its environment, enabling it to detect obstacles and calculate how far away they are. The robot can then adjust its trajectory to navigate around these obstacles while staying on its intended path. All of this happens in real time, and the approach is practical and effective in most applications of obstacle avoidance. [1] [2]
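
The sense-think-act cycle described above can be sketched in a few lines. The safety threshold, the `decide` helper, and the simulated distance readings below are illustrative assumptions, not a specific robot's API:

```python
SAFE_DISTANCE_CM = 30.0  # illustrative threshold for "obstacle too close"

def decide(distance_cm):
    """Think step: choose an action from a single range reading."""
    if distance_cm < SAFE_DISTANCE_CM:
        return "turn"      # obstacle too close: steer away
    return "forward"       # path clear: keep going

def control_loop(readings):
    """Sense-think-act over a stream of simulated sensor readings."""
    return [decide(d) for d in readings]

# Simulated ultrasonic readings in centimetres:
print(control_loop([120.0, 45.0, 25.0, 80.0]))
# -> ['forward', 'forward', 'turn', 'forward']
```

On real hardware, the readings would come from a sensor driver and the returned actions would be mapped to motor commands, but the loop structure is the same.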

While this method works well under most circumstances, there are situations where more advanced techniques are useful and appropriate for reaching an endpoint efficiently.

An example of A*, a path planning algorithm

Path planning algorithms

Path planning algorithms are critical for computing optimal, collision-free routes. These algorithms take into account the robot's position, its destination, and the locations of obstacles in the environment. They store this information to build a map of an area, then use that map to calculate the fastest possible route to a specific destination. Such algorithms are commonly used in maze solving and autonomous vehicles. Popular path planning algorithms include A* (A-star), Dijkstra's algorithm, and Rapidly-exploring Random Trees (RRT). These algorithms help the robot find the quickest path to its goal while avoiding collisions, all in real time. [3]
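
As a sketch of how such a planner works, the following is a minimal A* implementation on an occupancy grid with a Manhattan-distance heuristic. The grid layout is an illustrative assumption; production planners add costs, diagonal moves, and robot geometry:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns the cells of a shortest collision-free path, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {start: None}
    best_g = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:     # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    came_from[(r, c)] = cell
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None                         # goal unreachable

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
path = a_star(grid, (0, 0), (3, 3))
print(len(path) - 1)  # -> 6 moves on the shortest obstacle-free path
```

The heuristic never overestimates the true cost on this grid, so A* is guaranteed to return a shortest path; Dijkstra's algorithm is the special case where the heuristic is zero everywhere.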

Machine learning techniques

Machine learning greatly widens the range of possibilities for obstacle avoidance. With artificial intelligence (AI), an autonomous machine can not only work out a path to its destination but also learn to adapt to a rapidly changing environment. It does this by being put through many stages of testing that expose it to obstacles and environmental changes. By giving an AI a task and rewarding it for performing the task correctly, it can, over time, learn to carry out that task efficiently and effectively. This allows the machine to recognize its obstacles and devise an efficient path around them. It also lets the machine learn how to handle specific cases, such as water, hills, high winds, or extreme temperatures. AI thus allows the autonomous machine to react appropriately to a wide range of expected or unexpected situations. This form of obstacle avoidance is especially valuable in autonomous vehicles, as it removes possible human errors. [4]
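
The reward-driven training loop described above can be sketched with tabular Q-learning on a tiny grid. The grid layout, reward values, and hyperparameters below are all illustrative assumptions; real systems use far richer state representations and deep networks:

```python
import random

ROWS, COLS = 3, 4
START, GOAL, OBSTACLE = (0, 0), (2, 3), (1, 1)
ACTIONS = ((1, 0), (-1, 0), (0, 1), (0, -1))   # down, up, right, left

def step(state, action):
    """Environment: move, clamp to the grid, and hand out the reward."""
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    if (r, c) == GOAL:
        return (r, c), 10.0, True     # reward for reaching the destination
    if (r, c) == OBSTACLE:
        return (r, c), -5.0, False    # penalty for entering the obstacle
    return (r, c), -0.1, False        # small cost per move

random.seed(1)
q = {}

def best(s):
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

for _ in range(3000):                  # training episodes
    s, done = START, False
    while not done:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < 0.2 else best(s)
        s2, reward, done = step(s, a)
        # Q-learning update toward reward + discounted best future value
        target = reward + 0.9 * max(q.get((s2, x), 0.0) for x in ACTIONS)
        q[(s, a)] = q.get((s, a), 0.0) + 0.5 * (target - q.get((s, a), 0.0))
        s = s2

# Roll out the greedy learned policy; it should detour around the obstacle.
s, path = START, [START]
while s != GOAL and len(path) < 20:
    s, _, _ = step(s, best(s))
    path.append(s)
print(path)
```

After training, the greedy policy reaches the goal without entering the penalized cell, mirroring how reward shaping teaches an agent which terrain or situations to avoid.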

Applications

Obstacle avoidance can be found in a variety of different fields, including but not limited to:

Autonomous vehicles
Vehicles with the ability to drive themselves have been around since the 1980s and have been especially popularized in modern culture due to companies such as Tesla and Nvidia. [5]
Satellites
Because debris surrounds Earth's orbit, satellites must be able to avoid it. They do this by detecting objects and calculating whether and when one will hit the satellite. The satellite can then use drag to decelerate and change its trajectory to avoid the impact. [6]
Drones
Drones can operate autonomously for a variety of purposes, including mail delivery, military operations, mapping, and agriculture. [7]
Public Transport
The rise of autonomous vehicles has also led to their use in public transport as a cheaper alternative to hiring drivers, while also removing possible human error. [8]
Industrial Systems
Large corporations use obstacle avoidance in their automated industrial systems, which reduce the need for manual labor and the number of mistakes made.

Challenges

Although these strategies for obstacle avoidance work, they face challenges that still require further development. For one, when a machine is moving fast, it is difficult for its sensors to take in information about the environment, for the machine to process that information, and for it to decide how to avoid an obstacle, all in time. This problem is hard to solve, and if the machine cannot act quickly enough, the result can endanger or destroy the machine and any people around it. It is also extremely difficult to account for every possible obstacle an autonomous machine may encounter. Satellites, for example, share Earth's orbit with millions of pieces of debris, so it is difficult to predict whether, when, and from where one may strike. [9]
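
The speed constraint can be made concrete with a back-of-the-envelope calculation: the sensing range must cover the distance travelled during the sense-and-decide delay plus the braking distance. The numbers below are illustrative assumptions:

```python
def min_sensing_range(speed_mps, latency_s, decel_mps2):
    """Minimum range at which an obstacle must be detected to stop in time:
    distance covered while sensing and deciding, plus braking distance."""
    reaction = speed_mps * latency_s              # travels "blind" while deciding
    braking = speed_mps ** 2 / (2 * decel_mps2)   # kinematic stopping distance
    return reaction + braking

# A vehicle at 20 m/s (72 km/h) with 0.5 s of total pipeline latency
# and 6 m/s^2 of braking needs to detect obstacles well over 40 m out:
print(round(min_sensing_range(20.0, 0.5, 6.0), 1))  # -> 43.3 (metres)
```

Doubling the speed quadruples the braking term, which is why fast platforms need long-range sensors and low-latency processing pipelines.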

Related Research Articles

Computer vision: computerized information extraction from images

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots were Elmer and Elsie, constructed in the late 1940s by W. Grey Walter. They were the first robots in history programmed to "think" the way biological brains do and were meant to have free will. Elmer and Elsie were often called tortoises because of their shape and the manner in which they moved. They were capable of phototaxis, movement in response to a light stimulus.

Roomba: series of autonomous robotic vacuum cleaners sold by iRobot

Roomba is a series of autonomous robotic vacuum cleaners made by the company iRobot. Introduced in September 2002, they have a set of sensors that enable them to navigate the floor area of a home. These sensors can detect the presence of obstacles, particularly dirty spots on the floor, and steep drops. Amazon announced the intent to purchase the parent company of the Roomba in August 2022, though the acquisition has been delayed by regulators in Europe.

Robotic mapping is a discipline related to computer vision and cartography. The goal is for an autonomous robot to construct a map or floor plan and to localize itself, its recharging bases, or beacons within it. The field covers both the robot's ability to localize itself within a given map or plan and, in some cases, its ability to construct that map or floor plan itself.

Swarm robotics: coordination of multiple robots as a system

Swarm robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots. ″In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act.″ A desired collective behavior is supposed to emerge from the interactions among the robots and between the robots and their environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants, and other systems in nature where swarm behaviour occurs.

Survey vessel: type of research vessel

A survey vessel is any type of ship or boat that is used for underwater surveys, usually to collect data for mapping or planning underwater construction or mineral extraction. It is a type of research vessel, and may be designed for the purpose, modified for the purpose or temporarily put into the service as a vessel of opportunity, and may be crewed, remotely operated, or autonomous. The size and equipment vary to suit the task and availability.

Stanley (vehicle): autonomous car

Stanley is an autonomous car created by Stanford University's Stanford Racing Team in cooperation with the Volkswagen Electronics Research Laboratory (ERL). It won the 2005 DARPA Grand Challenge, earning the Stanford Racing Team a $2 million prize.

Automated guided vehicle: type of portable robot

An automated guided vehicle (AGV), as distinct from an autonomous mobile robot (AMR), is a portable robot that follows marked lines or wires on the floor, or uses radio waves, vision cameras, magnets, or lasers for navigation. AGVs are most often used in industrial applications to transport heavy materials around a large industrial building, such as a factory or warehouse. Applications of the automated guided vehicle broadened during the late 20th century.

Mobile robot: type of robot

A mobile robot is an automatic machine that is capable of locomotion. Mobile robotics is usually considered to be a subfield of robotics and information engineering.

Robot navigation: a robot's ability to navigate

Robot localization denotes the robot's ability to establish its own position and orientation within the frame of reference. Path planning is effectively an extension of localisation, in that it requires the determination of the robot's current position and a position of a goal location, both within the same frame of reference or coordinates. Map building can be in the shape of a metric map or any notation describing locations in the robot frame of reference.

In robotics, Vector Field Histogram (VFH) is a real time motion planning algorithm proposed by Johann Borenstein and Yoram Koren in 1991. The VFH utilizes a statistical representation of the robot's environment through the so-called histogram grid, and therefore places great emphasis on dealing with uncertainty from sensor and modeling errors. Unlike other obstacle avoidance algorithms, VFH takes into account the dynamics and shape of the robot, and returns steering commands specific to the platform. While considered a local path planner, i.e., not designed for global path optimality, the VFH has been shown to produce near optimal paths.

The Guidance, Control and Decision Systems Laboratory (GCDSL) is situated in the Department of Aerospace Engineering at the Indian Institute of Science in Bangalore, India. The Mobile Robotics Laboratory (MRL) is its experimental division. They are headed by Dr. Debasish Ghose, Full Professor.

CajunBot refers to the autonomous ground vehicles developed by the University of Louisiana at Lafayette for the DARPA Grand Challenges. CajunBot was featured on CNN and on the Discovery Channel science series Robocars.

National Robotics Engineering Center: operating unit within the Robotics Institute of Carnegie Mellon University

The National Robotics Engineering Center (NREC) is an operating unit within the Robotics Institute (RI) of Carnegie Mellon University. NREC works closely with government and industry clients to apply robotic technologies to real-world processes and products, including unmanned vehicle and platform design, autonomy, sensing and image processing, machine learning, manipulation, and human–robot interaction.

LEDDAR is a proprietary technology owned by LeddarTech. It uses the time of flight of light signals and signal processing algorithms to detect, locate, and measure objects in its field of view.

Driverless tractor: autonomous farm vehicle

A driverless tractor is an autonomous farm vehicle that delivers a high tractive effort at slow speeds for tillage and other agricultural tasks. It is considered driverless because it operates without a human inside the tractor itself. Like other unmanned ground vehicles, driverless tractors are programmed to independently observe their position, decide speed, and avoid obstacles such as people, animals, or objects in the field while performing their task. Driverless tractors are split into fully autonomous and supervised-autonomy designs. The idea of the driverless tractor appeared as early as 1940, but the concept has evolved significantly in recent years. The tractors use GPS and other wireless technologies to farm land without requiring a driver, operating simply with the aid of a supervisor monitoring progress at a control station or from a manned tractor in the lead.

An autonomous aircraft is an aircraft that flies under the control of automatic systems and needs no intervention from a human pilot. Most autonomous aircraft are unmanned aerial vehicles, or drones. However, autonomous control systems are reaching a point where several air taxis and associated regulatory regimes are being developed.

Air-Cobot: French research and development project (2013–)

Air-Cobot (Aircraft Inspection enhanced by smaRt & Collaborative rOBOT) is a French research and development project of a wheeled collaborative mobile robot able to inspect aircraft during maintenance operations. This multi-partner project involves research laboratories and industry. Research around this prototype was developed in three domains: autonomous navigation, human-robot collaboration and nondestructive testing.

Real-time path planning is a term used in robotics for motion planning methods that can adapt to real-time changes in the environment. This includes everything from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously take in information from the surroundings and create a plan to avoid obstacles.

The Robotics Collaborative Technology Alliance (R-CTA) was a research program initiated and sponsored by the US Army Research Laboratory. The purpose was to "bring together government, industrial, and academic institutions to address research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles." Collaborative Technology and Research Alliances was a term for partnerships between Army laboratories and centers, private industry and academia for performing research and technology development intended to benefit the US Army. The partnerships were funded by the US Army.

References

  1. Wang J, Herath D (2022). "What Makes Robots? Sensors, Actuators, and Algorithms". In Herath D, St-Onge D (eds.). Foundations of Robotics: A Multidisciplinary Approach with Python and ROS. Singapore: Springer Nature. pp. 177–203. doi:10.1007/978-981-19-1983-1_7. ISBN 978-981-19-1983-1.
  2. Discant A, Rogozan A, Rusu C, Bensrhair A (May 2007). "Sensors for obstacle detection - a survey". 2007 30th International Spring Seminar on Electronics Technology (ISSE). IEEE. pp. 100–105. doi:10.1109/ISSE.2007.4432828.
  3. Véras LG, Medeiros FL, Guimaráes LN (March 2019). "Systematic literature review of sampling process in rapidly-exploring random trees". IEEE Access. 7: 50933–50953. Bibcode:2019IEEEA...750933V. doi:10.1109/ACCESS.2019.2908100. S2CID 133481997.
  4. Bachute MR, Subhedar JM (December 2021). "Autonomous Driving Architectures: Insights of Machine Learning and Deep Learning Algorithms". Machine Learning with Applications. 6: 100164. doi:10.1016/j.mlwa.2021.100164. S2CID 240502983.
  5. Bimbraw K (2015). "Autonomous Cars: Past, Present and Future - A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology". Proceedings of the 12th International Conference on Informatics in Control, Automation and Robotics. Scitepress - Science and Technology Publications. pp. 191–198. doi:10.5220/0005540501910198. ISBN 978-989-758-122-9. S2CID 9957244.
  6. "Mitigating space debris generation". European Space Agency. Retrieved 2023-11-09.
  7. "Autonomous Aerial Vehicles & Drones". Unmanned Systems Technology. October 20, 2023. Retrieved November 9, 2023.
  8. Iclodean C, Cordos N, Varga BO (2020-06-06). "Autonomous Shuttle Bus for Public Transportation: A Review". Energies. 13 (11): 2917. doi:10.3390/en13112917. ISSN 1996-1073.
  9. Kohout B (2000). "Challenges in Real-Time Obstacle Avoidance". AAAI Spring Symposium on Real-Time Autonomous Systems.