| Developer(s) | José Luis Blanco Claraco, contributors |
|---|---|
| Stable release | 2.11.5 / December 21, 2023 [1] |
| Repository | github |
| Written in | C++ |
| Operating system | Linux, Windows, macOS |
| Platform | Cross-platform |
| Type | Robotics suite |
| License | New BSD License |
| Website | www |
The Mobile Robot Programming Toolkit (MRPT) is a cross-platform, open-source C++ library aimed at helping robotics researchers design and implement algorithms related to Simultaneous Localization and Mapping (SLAM), computer vision and motion planning (obstacle avoidance). Several research groups have employed MRPT to implement projects reported in some of the major robotics journals and conferences. [2] [3] [4] [5] [6]
MRPT is open source and distributed under the New BSD License. Versions prior to 1.0.0 were released under the GPL.
Some features are shipped with the project as ready-to-use user applications. Building upon a selection of the individual libraries provided by MRPT, [7] users can also develop new applications.
MRPT participated in Google Summer of Code in 2016–2018. [8] [9]
Through polymorphism in the design of metric-map classes, observations (such as laser scans) can be inserted into a grid map or a map of points, or both simultaneously, transparently to the user.
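As a rough illustration of this design (a minimal sketch only; the class names MetricMap, GridMap, PointsMap, MultiMap and the LaserScan type below are hypothetical stand-ins, not MRPT's actual API), a common base class lets the same observation be fanned out to several map representations at once:

```cpp
#include <memory>
#include <vector>

// Illustrative stand-ins for an observation type and a metric-map hierarchy;
// the names are hypothetical, not the real MRPT API.
struct LaserScan { std::vector<float> ranges; };

class MetricMap {                      // common interface for all map types
public:
    virtual ~MetricMap() = default;
    virtual void insertObservation(const LaserScan& scan) = 0;
};

class GridMap : public MetricMap {     // occupancy-grid representation
public:
    void insertObservation(const LaserScan& scan) override {
        // ... ray-trace each range reading into grid cells ...
    }
};

class PointsMap : public MetricMap {   // point-cloud representation
public:
    void insertObservation(const LaserScan& scan) override {
        // ... project each range reading to a 2D/3D point ...
    }
};

// A container that fans the same observation out to several maps at once,
// so the caller never needs to know which concrete map types it holds.
class MultiMap : public MetricMap {
public:
    void add(std::unique_ptr<MetricMap> m) { maps_.push_back(std::move(m)); }
    void insertObservation(const LaserScan& scan) override {
        for (auto& m : maps_) m->insertObservation(scan);
    }
private:
    std::vector<std::unique_ptr<MetricMap>> maps_;
};

int main() {
    MultiMap world;
    world.add(std::make_unique<GridMap>());
    world.add(std::make_unique<PointsMap>());
    world.insertObservation(LaserScan{{1.0f, 1.2f, 0.9f}});  // both maps updated
}
```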
The following representations of metric maps are implemented:
Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds, and related group motion. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object. Reynolds' boid model is one example of a larger general concept, for which many other variations have been developed since. The closely related work of Ichiro Aoki is noteworthy because it was published in 1982 — five years before Reynolds' boids paper.
Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology.
Robotic mapping is a discipline related to computer vision and cartography. The goal for an autonomous robot is to be able to construct a map or floor plan and to localize itself and its recharging bases or beacons within it. The discipline thus covers both the robot's ability to localize itself within a given map or plan and, in some cases, its ability to construct that map or floor plan on its own.
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
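As a toy illustration of the graph-based ("GraphSLAM") formulation mentioned above, and not any particular library's implementation, a one-dimensional pose graph with two odometry constraints and one loop closure can be solved as a linear least-squares problem (sketch using Eigen; the measurement values are made up):

```cpp
#include <Eigen/Dense>
#include <iostream>

// Toy 1-D "graph SLAM": three robot poses x0, x1, x2 on a line, linked by
// two odometry measurements and one loop-closure-style measurement.  Each
// measurement becomes one row of a linear least-squares problem A x = b.
int main() {
    Eigen::MatrixXd A(4, 3);
    Eigen::VectorXd b(4);

    A <<  1,  0, 0,   // prior:        x0      = 0.0 (anchors the map)
         -1,  1, 0,   // odometry:     x1 - x0 = 1.0 m
          0, -1, 1,   // odometry:     x2 - x1 = 1.1 m
         -1,  0, 1;   // loop closure: x2 - x0 = 2.0 m
    b << 0.0, 1.0, 1.1, 2.0;

    // Solve the over-determined system in the least-squares sense.
    Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
    std::cout << "estimated poses: " << x.transpose() << std::endl;
}
```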
Iterative closest point (ICP) is an algorithm employed to minimize the difference between two clouds of points. ICP is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and achieve optimal path planning, to co-register bone models, etc.
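A minimal sketch of one point-to-point ICP variant in 2D is given below (brute-force nearest neighbours plus an SVD-based rigid alignment, using Eigen); real implementations add outlier rejection, kd-tree lookups and convergence tests, and MRPT's own ICP differs in detail:

```cpp
#include <Eigen/Dense>
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

using Pt = Eigen::Vector2d;

// One ICP iteration: for each source point find its closest target point,
// then compute the rigid transform (R, t) that best aligns the pairs.
void icpStep(std::vector<Pt>& src, const std::vector<Pt>& tgt) {
    // 1. Correspondences by nearest neighbour (brute force).
    std::vector<Pt> matched(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (const auto& q : tgt) {
            double d = (src[i] - q).squaredNorm();
            if (d < best) { best = d; matched[i] = q; }
        }
    }
    // 2. Centroids of both point sets.
    Pt cs = Pt::Zero(), ct = Pt::Zero();
    for (std::size_t i = 0; i < src.size(); ++i) { cs += src[i]; ct += matched[i]; }
    cs /= src.size(); ct /= src.size();
    // 3. Cross-covariance and SVD -> optimal rotation (Kabsch/Umeyama).
    Eigen::Matrix2d H = Eigen::Matrix2d::Zero();
    for (std::size_t i = 0; i < src.size(); ++i)
        H += (src[i] - cs) * (matched[i] - ct).transpose();
    Eigen::JacobiSVD<Eigen::Matrix2d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix2d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {                 // guard against reflections
        Eigen::Matrix2d V = svd.matrixV();
        V.col(1) *= -1;
        R = V * svd.matrixU().transpose();
    }
    Pt t = ct - R * cs;
    // 4. Apply the estimated transform to the source cloud.
    for (auto& p : src) p = R * p + t;
}

int main() {
    // Target cloud and a rotated/translated copy as the source.
    std::vector<Pt> tgt = {{0, 0}, {1, 0}, {1, 1}, {0, 1}, {0.5, 2}};
    Eigen::Rotation2Dd rot(0.3);
    std::vector<Pt> src;
    for (const auto& p : tgt) src.push_back(rot * p + Pt(0.4, -0.2));

    for (int it = 0; it < 20; ++it) icpStep(src, tgt);  // iterate to convergence
    std::cout << "first aligned point: " << src[0].transpose() << std::endl;
}
```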
Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of their source code or metrics of their runtime behavior—and their development process by means of static, interactive or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution.
The Hilbert curve is a continuous fractal space-filling curve first described by the German mathematician David Hilbert in 1891, as a variant of the space-filling Peano curves discovered by Giuseppe Peano in 1890.
Collaborative mapping, also known as citizen mapping, is the aggregation of Web mapping and user-generated content, from a group of individuals or entities, and can take several distinct forms. With the growth of technology for storing and sharing maps, collaborative maps have become competitors to commercial services, as in the case of OpenStreetMap, or components of them, as in Google Map Maker, Waze and Yandex Map Editor.
An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.
Robot localization denotes the robot's ability to establish its own position and orientation within the frame of reference. Path planning is effectively an extension of localisation, in that it requires the determination of the robot's current position and a position of a goal location, both within the same frame of reference or coordinates. Map building can be in the shape of a metric map or any notation describing locations in the robot frame of reference.
In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.
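As a sketch of one typical two-view visual-odometry step (not MRPT's pipeline; the file names and camera intrinsics below are placeholders), OpenCV can be used to match features between consecutive frames and recover the relative camera motion from the essential matrix:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Two consecutive frames from the robot's camera (placeholder file names).
    cv::Mat img1 = cv::imread("frame0.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);

    // 1. Detect and describe ORB keypoints in both frames.
    auto orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat des1, des2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, des1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, des2);

    // 2. Match descriptors (cross-check as a crude outlier filter).
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(des1, des2, matches);

    std::vector<cv::Point2f> p1, p2;
    for (const auto& m : matches) {
        p1.push_back(kp1[m.queryIdx].pt);
        p2.push_back(kp2[m.trainIdx].pt);
    }

    // 3. Essential matrix with RANSAC, then decompose into R and t
    //    (translation is recovered only up to scale from a monocular pair).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320, 0, 700, 240, 0, 0, 1);
    cv::Mat mask, R, t;
    cv::Mat E = cv::findEssentialMat(p1, p2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::recoverPose(E, p1, p2, K, R, t, mask);

    std::cout << "R =\n" << R << "\nt (unit length) =\n" << t << std::endl;
}
```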
In photography, an omnidirectional camera, also known as 360-degree camera, is a camera having a field of view that covers approximately the entire sphere or at least a full circle in the horizontal plane. Omnidirectional cameras are important in areas where large visual field coverage is needed, such as in panoramic photography and robotics.
In robotics, the exploration problem deals with the use of a robot to maximize the knowledge over a particular area. The exploration problem arises in robotic mapping and search & rescue situations, where an environment might be dangerous or inaccessible to humans.
Ant robotics is a special case of swarm robotics. Swarm robots are simple robots with limited sensing and computational capabilities. This makes it feasible to deploy teams of swarm robots and take advantage of the resulting fault tolerance and parallelism. Swarm robots cannot use conventional planning methods due to their limited sensing and computational capabilities. Thus, their behavior is often driven by local interactions. Ant robots are swarm robots that can communicate via markings, similar to ants that lay and follow pheromone trails. Some ant robots use long-lasting trails. Others use short-lasting trails including heat and alcohol. Others even use virtual trails.
A software map represents static, dynamic, and evolutionary information of software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis. Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance.
The Robotics Toolbox is MATLAB toolbox software that supports research and teaching into arm-type and mobile robotics. While the Robotics Toolbox is free software, it requires the proprietary MATLAB environment in order to execute. The Toolbox forms the basis of the exercises in several textbooks.
Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents. Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities while reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of data centers, knowledge bases, task planners, deep learning, information processing, environment models, communication support, and so on.
In computer vision, the inverse depth parametrization is a parametrization used in methods for 3D reconstruction from multiple images, such as simultaneous localization and mapping (SLAM). Given a point in 3D space observed by a monocular pinhole camera from multiple views, the inverse depth parametrization of the point's position is a 6D vector that encodes the optical centre of the camera from which the point was first observed, and the position of the point along the ray passing through that optical centre.
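A common way to write this parametrization (a sketch of the standard formulation; sign conventions for the direction vector vary between papers) is:

```latex
% Inverse depth parametrization of a 3-D point:
% (x_0, y_0, z_0): optical centre of the camera at the first observation
% (\theta, \phi):  azimuth and elevation of the ray towards the point
% \rho:            inverse of the distance to the point along that ray
\[
\mathbf{y} = (x_0,\; y_0,\; z_0,\; \theta,\; \phi,\; \rho)^\top ,
\qquad
\mathbf{p} = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix}
           + \frac{1}{\rho}\, \mathbf{m}(\theta,\phi),
\qquad
\mathbf{m}(\theta,\phi) = \begin{pmatrix}
  \cos\phi \,\sin\theta \\ -\sin\phi \\ \cos\phi \,\cos\theta
\end{pmatrix}.
\]
```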
Margarita Chli is an assistant professor and leader of the Vision for Robotics Lab at ETH Zürich in Switzerland. Chli is a leader in the field of computer vision and robotics and was on the team of researchers to develop the first fully autonomous helicopter with onboard localization and mapping. Chli is also the Vice Director of the Institute of Robotics and Intelligent Systems and an Honorary Fellow of the University of Edinburgh in the United Kingdom. Her research currently focuses on developing visual perception and intelligence in flying autonomous robotic systems.