AVM Navigator

AVM Navigator is an add-on module (plugin) for RoboRealm that provides object recognition and autonomous robot navigation, using a single video camera mounted on the robot as the main navigation sensor.

Associative Video Memory

This is possible thanks to the "Associative Video Memory" (AVM) algorithm, based on multilevel decomposition of recognition matrices, which provides image recognition with a low false acceptance rate (about 0.01%). In this approach, visual navigation is simply a sequence of images (landmarks) with associated coordinates that were memorized inside the AVM tree during route training. The navigation map is represented as a set of data (such as X and Y coordinates and azimuth) associated with the images inside the AVM tree. When the robot recognizes landmark images from its camera, it confirms its current location.
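
As a rough illustration only (the internal AVM tree and its recognition matrices are not public), such a landmark-based navigation map could be sketched as memorized images keyed to training coordinates; every class and method name below is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a landmark-based navigation map; the real AVM tree
# and its recognition matrices are not public, so the names here are illustrative.

@dataclass
class Landmark:
    image_id: int      # key of the memorized image in the recognition store
    x: float           # position associated with the image during route training
    y: float
    azimuth: float     # robot heading (degrees) when the image was memorized

class NavigationMap:
    def __init__(self):
        self.landmarks: dict[int, Landmark] = {}

    def memorize(self, image_id: int, x: float, y: float, azimuth: float) -> None:
        """Associate a trained image with route coordinates."""
        self.landmarks[image_id] = Landmark(image_id, x, y, azimuth)

    def localize(self, recognized_ids: list[int]) -> tuple[float, float] | None:
        """Confirm the current location by averaging coordinates of recognized landmarks."""
        hits = [self.landmarks[i] for i in recognized_ids if i in self.landmarks]
        if not hits:
            return None
        return (sum(l.x for l in hits) / len(hits),
                sum(l.y for l in hits) / len(hits))
```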

The navigator builds a route from the current location to the target position as a chain of waypoints. If the robot's current orientation does not point toward the next waypoint, the navigator turns the robot body. When the robot reaches a waypoint, the navigator changes direction toward the next waypoint in the chain, and so on until the target position is reached.
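
A minimal sketch of such a waypoint-following loop is shown below; the robot.pose(), robot.turn(), and robot.drive_forward() calls and the thresholds are assumptions for illustration, not the navigator's actual interface.

```python
import math

# Illustrative controller loop for following a chain of waypoints; the robot
# interface and thresholds are assumed, not the navigator's real API.

def follow_waypoints(robot, waypoints, reach_radius=0.2, heading_tolerance=5.0):
    for wx, wy in waypoints:
        while True:
            x, y, heading = robot.pose()                       # current X, Y, azimuth (degrees)
            if math.hypot(wx - x, wy - y) < reach_radius:      # waypoint reached, go to the next one
                break
            bearing = math.degrees(math.atan2(wy - y, wx - x))
            error = (bearing - heading + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
            if abs(error) > heading_tolerance:
                robot.turn(error)        # rotate the body toward the next waypoint
            else:
                robot.drive_forward()    # heading is acceptable, keep moving
```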

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Waypoint

A waypoint is a point or place on a route or line of travel, a stopping point, an intermediate point, or point at which course is changed, the first use of the term tracing to 1880. In modern terms, it most often refers to coordinates which specify one's position on the globe at the end of each "leg" (stage) of an air flight or sea passage, the generation and checking of which are generally done computationally.

Motion capture

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, and for validation of computer vision and robots. In films, television shows and video games, motion capture refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Robotic mapping is a discipline related to computer vision and cartography. The goal for an autonomous robot is to be able to construct a map or floor plan and to localize itself and its recharging bases or beacons within it. The field covers both the robot's ability to localize itself within a given map or plan and, in some cases, to construct that map or floor plan on its own.
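
As one common (assumed) representation for illustration, a map can be kept as an occupancy grid in which each cell stores whether that patch of floor is free, occupied, or unknown; the sketch below is not tied to any particular mapping system.

```python
import numpy as np

# Minimal occupancy-grid sketch; robotic maps may also be topological or
# feature-based, so this representation is only one illustrative choice.

class OccupancyGrid:
    def __init__(self, width_m, height_m, resolution_m=0.05):
        self.res = resolution_m
        self.grid = np.full((int(height_m / resolution_m),
                             int(width_m / resolution_m)), 0.5)  # 0.5 = unknown

    def to_cell(self, x, y):
        return int(y / self.res), int(x / self.res)

    def mark_occupied(self, x, y):
        self.grid[self.to_cell(x, y)] = 1.0   # obstacle observed at (x, y)

    def mark_free(self, x, y):
        self.grid[self.to_cell(x, y)] = 0.0   # free space observed at (x, y)
```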

Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used for quality control in manufacturing, navigation of mobile robots, or edge detection in images.
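
For example, a brief sketch using OpenCV's matchTemplate; the file names are placeholders and the chosen similarity metric is just one of several available options.

```python
import cv2

# Find the best placement of a small template inside a larger scene image.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # placeholder file names
template = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation scores for every candidate placement.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_top_left = cv2.minMaxLoc(scores)

h, w = template.shape
print("best match at", best_top_left, "score", best_score)
cv2.rectangle(image, best_top_left,
              (best_top_left[0] + w, best_top_left[1] + h), 255, 2)
```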

Motion estimation

In computer vision and image processing, motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
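
A toy example of block-based motion estimation, assuming two grayscale frames given as NumPy arrays of equal shape; exhaustive search over a small window is shown purely for clarity, not efficiency.

```python
import numpy as np

# Toy exhaustive block-matching motion estimation between two frames.
def block_motion_vector(prev, curr, top, left, block=16, search=8):
    """Return (dy, dx) minimizing the sum of absolute differences for one block."""
    ref = curr[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```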

Yeoman plotter

The Yeoman Plotter was a plotter used on ships and boats to transfer GPS coordinates or radar echo locations onto a paper navigation chart and to read coordinates from the chart. It was manufactured from 1985 to 2014/2015 and was an intermediate step between traditional paper-chart navigation and full electronic chart displays. It was easy to understand for people who were accustomed to paper charts and much cheaper than the electronic chart displays available at the time. The continuing fall in prices of electronic chart displays, their increase in functionality such as radar overlay, and the advent of cheap tablets eventually made the Yeoman plotter uncompetitive.

In the fields of computing and computer vision, pose represents the position and orientation of an object, usually in three dimensions. Poses are often stored internally as transformation matrices. The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not.
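
A small sketch of a pose stored as a 4x4 homogeneous transformation matrix (rotation plus translation, no scale); restricting the rotation to yaw about the vertical axis is an illustrative simplification.

```python
import numpy as np

# Build a pose as a homogeneous transform and use it to map a point
# from the object frame into the world frame.
def pose_matrix(yaw_rad, x, y, z):
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0],
                 [s,  c, 0],
                 [0,  0, 1]]   # rotation about the vertical axis
    T[:3, 3] = [x, y, z]       # translation
    return T

point_object = np.array([1.0, 0.0, 0.0, 1.0])          # homogeneous coordinates
point_world = pose_matrix(np.pi / 2, 2.0, 0.0, 0.0) @ point_object
print(point_world[:3])
```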

Motion planning, also known as path planning, is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics, and computer games.
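
As a minimal illustration, the sketch below plans a path by breadth-first search over a 4-connected occupancy grid (0 = free, 1 = obstacle); practical planners work in richer configuration spaces and use algorithms such as A*, RRT, or D*.

```python
from collections import deque

# Breadth-first search over a grid of free (0) and blocked (1) cells.
def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parent links back to the start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None   # no valid sequence of configurations reaches the goal
```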

Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating the camera that produced a given photograph or video; it determines which incoming light ray is associated with each pixel on the resulting image. Basically, the process determines the pose of the pinhole camera.
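
A brief sketch of the pinhole model whose parameters resectioning estimates; the intrinsic matrix, pose, and 3D point below are made-up values for illustration.

```python
import numpy as np

# Project a world point through a pinhole camera: the intrinsics K and the
# pose [R | t] are exactly what camera resectioning (calibration) estimates.
K = np.array([[800.0,   0.0, 320.0],    # focal lengths and principal point (pixels)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera orientation (world -> camera)
t = np.array([0.0, 0.0, 5.0])            # camera translation

point_world = np.array([0.5, -0.2, 10.0])
p = K @ (R @ point_world + t)            # project onto the image plane
u, v = p[0] / p[2], p[1] / p[2]          # pixel hit by the incoming light ray
print(u, v)
```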

Indoor positioning system

An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.

Robot navigation

Robot localization denotes the robot's ability to establish its own position and orientation within a frame of reference. Path planning is effectively an extension of localization, in that it requires determining the robot's current position and the position of a goal location, both within the same frame of reference or coordinate system. Map building can take the form of a metric map or any notation describing locations in the robot's frame of reference.

D* is any one of three related incremental search algorithms: the original D*, Focused D*, and D* Lite.

A vision-guided robot (VGR) system is essentially a robot fitted with one or more cameras used as sensors to provide a secondary feedback signal to the robot controller, allowing it to move more accurately to a variable target position. VGR is rapidly transforming production processes by enabling robots to be highly adaptable and more easily implemented, while dramatically reducing the cost and complexity of the fixed tooling previously associated with the design and set-up of robotic cells, whether for material handling, automated assembly, agricultural applications, or life sciences.

Short baseline acoustic positioning system

A short baseline (SBL) acoustic positioning system is one of three broad classes of underwater acoustic positioning systems that are used to track underwater vehicles and divers. The other two classes are ultra short baseline systems (USBL) and long baseline systems (LBL). Like USBL systems, SBL systems do not require any seafloor mounted transponders or equipment and are thus suitable for tracking underwater targets from boats or ships that are either anchored or under way. However, unlike USBL systems, which offer a fixed accuracy, SBL positioning accuracy improves with transducer spacing. Thus, where space permits, such as when operating from larger vessels or a dock, the SBL system can achieve a precision and position robustness that is similar to that of sea floor mounted LBL systems, making the system suitable for high-accuracy survey work. When operating from a smaller vessel where transducer spacing is limited, the SBL system will exhibit reduced precision.

RoboLogix

RoboLogix is a robotics simulator which uses a physics engine to emulate robotics applications. The advantages of using robotics simulation tools such as RoboLogix are that they save time in the design of robotics applications and they can also increase the level of safety associated with robotic equipment since various "what if" scenarios can be tried and tested before the system is activated. RoboLogix provides a platform to teach, test, run, and debug programs that have been written using a five-axis industrial robot in a range of applications and functions. These applications include pick-and-place, palletizing, welding, and painting.

Moving map display

A moving map display (MMD) or projected map display (PMD) is a type of navigation system output that, instead of numerically displaying the current geographical coordinates determined by the navigation unit or a heading and distance indication to a certain waypoint, displays the unit's current location at the center of a map. As the unit moves and new coordinates are determined, the map moves to keep the unit's position at the center of the display.

An autonomous aircraft is an aircraft that flies under the control of on-board autonomous robotic systems and needs no intervention from a human pilot or remote control. Most contemporary autonomous aircraft are unmanned aerial vehicles (drones) with pre-programmed algorithms to perform designated tasks, but advances in artificial intelligence are bringing autonomous control systems to the point where several air taxis and associated regulatory regimes are being developed.

Air-Cobot

Air-Cobot (Aircraft Inspection enhanced by smaRt & Collaborative rOBOT) is a French research and development project of a wheeled collaborative mobile robot able to inspect aircraft during maintenance operations. This multi-partner project involves research laboratories and industry. Research around this prototype was developed in three domains: autonomous navigation, human-robot collaboration and nondestructive testing.

Geopositioning

Geopositioning is the process of determining or estimating the geographic position of an object or a person.