List of SLAM methods

This is a list of simultaneous localization and mapping (SLAM) methods. The KITTI Vision Benchmark Suite website has a more comprehensive list of Visual SLAM methods.

List of methods

Related Research Articles

<span class="mw-page-title-main">Simultaneous localization and mapping</span> Computational navigational technique used by robots and autonomous vehicles

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.
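
As a toy illustration of the filtering approach, the sketch below runs a linear Kalman filter on a one-dimensional SLAM problem with a single static landmark. The setup, noise values, and all names are illustrative, not drawn from any published system; a real EKF-SLAM additionally linearizes nonlinear motion and measurement models.

```python
import numpy as np

# State stacks the robot position and one landmark position: x = [robot, landmark].
x = np.array([0.0, 0.0])
P = np.diag([0.0, 1e6])           # robot known exactly, landmark unknown
F = np.eye(2)                     # motion model: the landmark is static
H = np.array([[-1.0, 1.0]])       # measurement: z = landmark - robot
Q = np.diag([0.1, 0.0])           # motion noise affects only the robot
R = np.array([[0.05]])            # range-measurement noise

def step(x, P, u, z):
    # Predict: robot moves by the commanded u; uncertainty grows by Q.
    x = F @ x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    # Update: fuse the relative measurement z.
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Robot drives toward a landmark at 5.0, measuring its range each step.
true_robot, true_lm = 0.0, 5.0
for _ in range(10):
    true_robot += 1.0
    x, P = step(x, P, u=1.0, z=np.array([true_lm - true_robot]))
```

After a few updates the landmark estimate converges near 5.0 while the robot estimate tracks the commanded motion; because this toy problem is linear, the plain Kalman filter is exact.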

<span class="mw-page-title-main">Swarm robotics</span> Coordination of multiple robots as a system

Swarm robotics is an approach to the coordination of multiple robots as a system that consists of large numbers of mostly simple physical robots. "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act." The desired collective behavior is supposed to emerge from these interactions. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, such as ants, and other areas of nature where swarm behaviour occurs.

<span class="mw-page-title-main">Wolfram Burgard</span> German roboticist

Wolfram Burgard is a German roboticist. He is a full professor at the University of Technology Nuremberg where he heads the Laboratory for Robotics and Artificial Intelligence. He is known for his substantial contributions to the simultaneous localization and mapping (SLAM) problem as well as diverse other contributions to robotics.

<span class="mw-page-title-main">Indoor positioning system</span> Network of devices used to wirelessly locate objects inside a building

An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.

<span class="mw-page-title-main">Rapidly exploring random tree</span> Search algorithm

A rapidly exploring random tree (RRT) is an algorithm designed to efficiently search nonconvex, high-dimensional spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem. RRTs were developed by Steven M. LaValle and James J. Kuffner Jr. They easily handle problems with obstacles and differential constraints and have been widely used in autonomous robotic motion planning.
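
The incremental construction described above fits in a few lines. The sketch below is a minimal 2D version; the step size, goal bias, and function names are illustrative choices, not part of LaValle and Kuffner's original formulation:

```python
import math
import random

def rrt(start, goal, is_free, n_iters=2000, step=0.5, goal_tol=0.5,
        bounds=((0.0, 10.0), (0.0, 10.0))):
    """Grow a tree from `start` until a vertex lands within `goal_tol` of `goal`."""
    nodes = [start]        # tree vertices
    parent = {0: None}     # index of each vertex's parent
    for _ in range(n_iters):
        # Sample a random point (biased toward the goal 5% of the time).
        q = goal if random.random() < 0.05 else (
            random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Find the nearest existing vertex and steer one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d == 0.0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not is_free(new):   # reject points inside obstacles
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk parent links back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

path = rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)
```

Because samples are drawn uniformly, the nearest-vertex rule pulls growth toward large unexplored regions, which gives RRTs their space-filling behaviour.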

<span class="mw-page-title-main">3D reconstruction</span> Process of capturing the shape and appearance of real objects

In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.

<span class="mw-page-title-main">Visual odometry</span> Determining the position and orientation of a robot by analyzing associated camera images

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.

Johann Borenstein is an Israeli roboticist and Professor at the University of Michigan. Borenstein is well known for his work in autonomous obstacle avoidance, and is credited with the development of the Vector Field Histogram.

In robotics, the exploration problem deals with the use of a robot to maximize the knowledge over a particular area. The exploration problem arises in robotic mapping and search & rescue situations, where an environment might be dangerous or inaccessible to humans.

<span class="mw-page-title-main">Mobile Robot Programming Toolkit</span>

The Mobile Robot Programming Toolkit (MRPT) is a cross-platform, open-source C++ library aimed at helping robotics researchers design and implement algorithms related to simultaneous localization and mapping (SLAM), computer vision, and motion planning. Different research groups have used MRPT to implement projects reported in some of the major robotics journals and conferences.

<span class="mw-page-title-main">Dmitri Dolgov</span> Russian-American businessman (born 1977/1978)

Dmitri Dolgov is a Russian-American engineer who is the co-chief executive officer of Waymo. Previously, he worked on self-driving cars at Toyota and Stanford University for the DARPA Grand Challenge (2007). Dolgov then joined Waymo's predecessor, Google's Self-Driving Car Project, where he served as an engineer and head of software. He has also been Google X's lead scientist.

<span class="mw-page-title-main">Tactile sensor</span>

A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain. Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing.

GridLAB-D is an open-source simulation and analysis tool that models emerging smart grid energy technologies. It couples power flow calculations with distribution automation models, building energy use and appliance demand models, and market models. It is used primarily to estimate the benefits and impacts of smart grid technology.

In computer vision, the term cuboid describes a small spatiotemporal volume extracted for the purpose of behavior recognition. The cuboid is regarded as a basic geometric primitive and is used to depict three-dimensional objects within a three-dimensional representation of a flat, two-dimensional image.

<span class="mw-page-title-main">Point-set registration</span> Process of finding a spatial transformation that aligns two point clouds

In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model, and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, motion estimation and 3D reconstruction, object detection and pose estimation, robotic manipulation, simultaneous localization and mapping (SLAM), panorama stitching, virtual and augmented reality, and medical imaging.
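
At the core of many registration methods, including iterative closest point (ICP), is a closed-form rigid alignment of two point sets whose correspondences are known. The sketch below shows that single step for 2D points using the SVD-based Kabsch/Procrustes solution; the function name and example data are illustrative:

```python
import numpy as np

def align(src, dst):
    """Return rotation R and translation t such that dst ≈ src @ R.T + t,
    given row-wise corresponding 2D points (Kabsch/Procrustes step)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    W = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(W)
    # Guard against a reflection (det = -1) in the recovered transform.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: rotate a triangle by 30 degrees, shift it, then recover the motion.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
dst = src @ R_true.T + np.array([0.5, -0.3])
R, t = align(src, dst)   # R ≈ R_true, t ≈ (0.5, -0.3)
```

Full ICP alternates this alignment step with re-estimating correspondences by nearest-neighbour search until the transform converges.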

Cloud robotics is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centered on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents. Humans can also delegate tasks to robots remotely through networks. Cloud computing enables robot systems to be endowed with powerful capabilities while reducing costs. It thus becomes possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud, consisting of data centers, knowledge bases, task planners, deep learning, information processing, environment models, communication support, and so on.

<span class="mw-page-title-main">Inverse depth parametrization</span> Computational method for constructing 3D models

In computer vision, the inverse depth parametrization is a parametrization used in methods for 3D reconstruction from multiple images such as simultaneous localization and mapping (SLAM). Given a point in 3D space observed by a monocular pinhole camera from multiple views, the inverse depth parametrization of the point's position is a 6D vector that encodes the optical centre of the camera when it first observed the point, the direction of the observation ray, and the inverse of the point's depth along that ray.
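
Concretely, such a 6D vector can be unpacked back into a Euclidean point by travelling a depth of 1/ρ along the stored ray. The sketch below uses one common azimuth/elevation convention; sign and axis conventions differ between implementations, so treat the details as illustrative:

```python
import math

def inverse_depth_to_point(x, y, z, azimuth, elevation, rho):
    """Convert an inverse-depth parametrized point (optical centre (x, y, z),
    ray angles, inverse depth rho) to a Euclidean 3D point."""
    # Unit direction of the observation ray from the angles.
    m = (math.cos(elevation) * math.sin(azimuth),
         -math.sin(elevation),
         math.cos(elevation) * math.cos(azimuth))
    # Travel a depth of 1/rho along the ray from the optical centre.
    d = 1.0 / rho
    return (x + d * m[0], y + d * m[1], z + d * m[2])

# A point straight ahead (azimuth = elevation = 0) at depth 4 from the origin:
p = inverse_depth_to_point(0.0, 0.0, 0.0, 0.0, 0.0, 0.25)  # → (0.0, 0.0, 4.0)
```

The appeal of the representation is that distant points (ρ near 0) and their uncertainty are handled gracefully, whereas a plain depth coordinate would blow up toward infinity.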

Daniel Cremers is a German computer scientist, Professor of Informatics and Mathematics and Chair of Computer Vision & Artificial Intelligence at the Technische Universität München. His research foci are computer vision, mathematical image analysis, partial differential equations, convex and combinatorial optimization, machine learning, and statistical inference.

<span class="mw-page-title-main">Margarita Chli</span> Greek computer vision and robotics researcher

Margarita Chli is an assistant professor and leader of the Vision for Robotics Lab at ETH Zürich in Switzerland. Chli is a leader in the field of computer vision and robotics and was on the team of researchers to develop the first fully autonomous helicopter with onboard localization and mapping. Chli is also the Vice Director of the Institute of Robotics and Intelligent Systems and an Honorary Fellow of the University of Edinburgh in the United Kingdom. Her research currently focuses on developing visual perception and intelligence in flying autonomous robotic systems.

A continuum robot is a type of robot characterised by infinite degrees of freedom and an infinite number of joints. These characteristics allow continuum manipulators to adjust and modify their shape at any point along their length, granting them the ability to work in confined spaces and complex environments where standard rigid-link robots cannot operate. In particular, a continuum robot can be defined as an actuatable structure whose constitutive material forms curves with continuous tangent vectors. This definition distinguishes continuum robots from snake-arm robots and hyper-redundant manipulators: the rigid links and joints of the latter allow them only to approximate curves with continuous tangent vectors.

References

  1. Zikos, Nikos; Petridis, Vassilios (2014). "6-DoF Low Dimensionality SLAM (L-SLAM)". Journal of Intelligent & Robotic Systems. 79: 1–18. doi:10.1007/s10846-014-0029-6. ISSN 0921-0296. S2CID 40486562.
  2. "SLAM". The Telegraph. 3 September 2019. Retrieved 2 May 2021.
  3. Thrun, S.; Burgard, W.; Fox, D. (2005). Probabilistic Robotics. Cambridge: The MIT Press. ISBN 0-262-20162-3.
  4. Klein, G.; Murray, D. (2007). "Parallel Tracking and Mapping for Small AR Workspaces". 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. pp. 1–10. doi:10.1109/ISMAR.2007.4538852. ISBN 978-1-4244-1749-0. S2CID 206986664.
  5. Engel, J.; Schöps, T.; Cremers, D. (2014). "LSD-SLAM: Large-Scale Direct Monocular SLAM". European Conference on Computer Vision (ECCV).
  6. Pire, Taihú; Fischer, Thomas; Castro, Gastón; De Cristóforis, Pablo; Civera, Javier; Jacobo Berlles, Julio (2017). "S-PTAM: Stereo Parallel Tracking and Mapping". Robotics and Autonomous Systems. 93: 27–42. doi:10.1016/j.robot.2017.03.019. hdl:11336/59974. ISSN 0921-8890.
  7. Mur-Artal, R.; Montiel, J. M. M.; Tardós, J. D. (2015). "ORB-SLAM: A Versatile and Accurate Monocular SLAM System". IEEE Transactions on Robotics. 31 (5): 1147–1163. arXiv:1502.00956. Bibcode:2015arXiv150200956M. doi:10.1109/TRO.2015.2463671. ISSN 1552-3098. S2CID 206775100.
  8. Zou, D.; Tan, P. (2013). "CoSLAM: Collaborative Visual SLAM in Dynamic Environments". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (2): 354–366. doi:10.1109/TPAMI.2012.104. PMID 22547430. S2CID 9517281.
  9. Milford, Michael J.; Wyeth, Gordon F. "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights". Proceedings of the International Conference on Robotics and Automation.
  10. "QSLAM". Retrieved 18 April 2021.
  11. "iSAM: Incremental Smoothing and Mapping". people.csail.mit.edu. Retrieved 2018-02-14.
  12. Bosse, M.; Zlot, R. (2009). "Continuous 3D scan-matching with a spinning 2D laser". 2009 IEEE International Conference on Robotics and Automation. pp. 4312–4319. doi:10.1109/ROBOT.2009.5152851. ISBN 978-1-4244-2788-8. ISSN 1050-4729. S2CID 2819117.
  13. Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. (2013). "3-D mapping with an RGB-D camera". IEEE Transactions on Robotics: 177–187.
  14. "Rgbdslamv2". Retrieved 2019-09-20.
  15. Rosinol, A.; Abate, M.; Chang, Y.; Carlone, L. (2020). "Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping". IEEE International Conference on Robotics and Automation. arXiv:1910.02490.