VXL

VXL, the Vision-something-Libraries, is a large collection of open-source C++ libraries for computer vision. The idea behind the name is that X is replaced by one of many letters to obtain the names of the smaller libraries, e.g. G (VGL) is a geometry library, N (VNL) is a numerics library, and I (VIL) is an image-processing library. These libraries can be used for general scientific computing as well as computer vision. Some examples of usage can be seen online. [1]
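
As a brief illustration of this naming scheme (a minimal sketch, not part of the original article), the following program uses VNL for basic numerics and VGL for basic geometry. It assumes a standard VXL build with the core vnl and vgl headers on the include path; the specific values are arbitrary.

    // Minimal sketch: a taste of VXL's VNL (numerics) and VGL (geometry) libraries.
    #include <iostream>
    #include <vnl/vnl_matrix.h>
    #include <vnl/vnl_vector.h>
    #include <vgl/vgl_point_2d.h>
    #include <vgl/vgl_distance.h>

    int main()
    {
      // VNL: multiply a 2x2 matrix by a vector.
      vnl_matrix<double> A(2, 2);
      A(0, 0) = 2.0; A(0, 1) = 0.0;
      A(1, 0) = 0.0; A(1, 1) = 3.0;
      vnl_vector<double> x(2);
      x[0] = 1.0; x[1] = 1.0;
      vnl_vector<double> y = A * x;
      std::cout << "A*x = " << y << '\n';

      // VGL: Euclidean distance between two 2D points.
      vgl_point_2d<double> p(0.0, 0.0), q(3.0, 4.0);
      std::cout << "distance(p,q) = " << vgl_distance(p, q) << '\n';
      return 0;
    }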

VXL is a large-scale software engineering project with roots dating back to traditional computer vision environments of the 1990s. It has libraries at multiple levels of complexity, many of which list OpenCV as one of many dependencies. A similar approach at an even larger scale is taken by Kitware's KWIVER. [2] The VXL core libraries are extremely stable and have been used in larger projects, both public and within companies, notably ITK.

Related Research Articles

Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do.

The scale-invariant feature transform (SIFT) is a computer vision algorithm to detect, describe, and match local features in images, invented by David Lowe in 1999. Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.
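
As a minimal sketch of keypoint detection and description with SIFT, the following uses OpenCV's implementation (available in the main features2d module since OpenCV 4.4, after the original patent expired); the input path "image.png" is a placeholder.

    // Minimal sketch: detect SIFT keypoints and compute their descriptors with OpenCV.
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/features2d.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
      cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
      if (img.empty()) { std::cerr << "could not read image\n"; return 1; }

      cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
      std::vector<cv::KeyPoint> keypoints;
      cv::Mat descriptors;   // one 128-dimensional row per keypoint
      sift->detectAndCompute(img, cv::noArray(), keypoints, descriptors);

      std::cout << keypoints.size() << " keypoints, descriptor matrix "
                << descriptors.rows << "x" << descriptors.cols << '\n';
      return 0;
    }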

OpenCV

OpenCV is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then Itseez. The library is cross-platform and free to use under the open-source Apache 2 License. Since 2011, OpenCV has featured GPU acceleration for real-time operations.

The core idea of Artificial Intelligence systems integration is making individual software components, such as speech synthesizers, interoperable with other components, such as common sense knowledge bases, in order to create larger, broader and more capable A.I. systems. The main methods that have been proposed for integration are message routing, or communication protocols that the software components use to communicate with each other, often through a middleware blackboard system.

Image rectification

Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane.

Bundle adjustment

In photogrammetry and computer stereo vision, bundle adjustment is the problem of simultaneously refining the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, given a set of images depicting a number of 3D points from different viewpoints. Its name refers to the geometrical bundles of light rays originating from each 3D feature and converging on each camera's optical center, which are adjusted according to an optimality criterion involving the corresponding image projections of all points.
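
In the usual formulation (sketched here with standard notation, not tied to any particular library), for n 3D points seen in m views, bundle adjustment minimizes the total reprojection error:

    \min_{\mathbf{a}_j,\,\mathbf{b}_i}\;\sum_{i=1}^{n}\sum_{j=1}^{m} v_{ij}\, d\!\left(Q(\mathbf{a}_j,\mathbf{b}_i),\,\mathbf{x}_{ij}\right)^{2}

where Q(a_j, b_i) is the predicted projection of point i in image j given camera parameters a_j and point parameters b_i, x_ij is its observed image position, d(·,·) is the Euclidean distance between image points, and v_ij equals 1 if point i is visible in image j and 0 otherwise.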

The outline of object recognition is an overview of and topical guide to object recognition.

In computer vision, maximally stable extremal regions (MSER) are used as a method of blob detection in images. This technique was proposed by Matas et al. to find correspondences between image elements from two images with different viewpoints. This method of extracting a comprehensive number of corresponding image elements contributes to wide-baseline matching, and it has led to better stereo matching and object recognition algorithms.
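
As a minimal sketch of MSER-based blob detection, the following uses OpenCV's MSER implementation with its default parameters; the input path "image.png" is a placeholder.

    // Minimal sketch: extract maximally stable extremal regions with OpenCV.
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/features2d.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
      cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
      if (img.empty()) { std::cerr << "could not read image\n"; return 1; }

      cv::Ptr<cv::MSER> mser = cv::MSER::create();
      std::vector<std::vector<cv::Point>> regions;  // pixel lists of the stable regions
      std::vector<cv::Rect> boxes;                  // their bounding boxes
      mser->detectRegions(img, regions, boxes);

      std::cout << regions.size() << " maximally stable extremal regions\n";
      return 0;
    }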

Object detection

Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class in digital images and videos. Well-researched domains of object detection include face detection and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.

Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two panels. This is similar to the biological process of stereopsis.
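
For the common textbook case of two rectified cameras with parallel optical axes and a horizontal baseline, the depth of a scene point follows directly from its disparity (a standard relation, stated here only for illustration):

    Z = \frac{f\,B}{d}, \qquad d = x_{\text{left}} - x_{\text{right}}

where Z is the depth of the point, f is the focal length in pixels, B is the baseline between the two camera centres, and d is the disparity between the point's horizontal coordinates in the left and right images.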

AForge.NET is a computer vision and artificial intelligence library originally developed by Andrew Kirillov for the .NET Framework.

Motion tracking using Java is the process of locating a moving object over time. An algorithm analyses successive video frames and outputs the location of moving targets within each frame.

Google Test is a unit testing library for the C++ programming language, based on the xUnit architecture. The library is released under the BSD 3-clause license. It can be compiled for a variety of POSIX and Windows platforms, allowing unit-testing of C sources as well as C++ with minimal source modification.
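
A minimal sketch of a Google Test case is shown below; the Add() function is a hypothetical example introduced only for this illustration, and the test binary is assumed to be linked against the gtest library.

    // Minimal sketch: a Google Test unit test for a hypothetical Add() function.
    #include <gtest/gtest.h>

    // Code under test (hypothetical example, not from any real library).
    int Add(int a, int b) { return a + b; }

    TEST(AddTest, HandlesPositiveNumbers)
    {
      EXPECT_EQ(Add(2, 3), 5);
    }

    TEST(AddTest, HandlesNegativeNumbers)
    {
      EXPECT_EQ(Add(-2, -3), -5);
    }

    int main(int argc, char** argv)
    {
      ::testing::InitGoogleTest(&argc, argv);
      return RUN_ALL_TESTS();  // returns 0 if all tests pass
    }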

Adrian Kaehler

Adrian Kaehler is an American scientist, engineer, entrepreneur, inventor and author. He is best known for his work on the OpenCV Computer Vision library, as well as two books on that library.

Chessboards arise frequently in computer vision theory and practice because their highly structured geometry is well-suited for algorithmic detection and processing. The appearance of chessboards in computer vision can be divided into two main areas: camera calibration and feature extraction. This article provides a unified discussion of the role that chessboards play in the canonical methods from these two areas, including references to the seminal literature, examples, and pointers to software implementations.
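
As a minimal sketch of the calibration-oriented use of chessboards, the following relies on OpenCV's chessboard corner detector, which is the usual first step of chessboard-based camera calibration; the 9x6 inner-corner pattern size and the "board.png" input path are placeholders.

    // Minimal sketch: locate the inner corners of a chessboard pattern with OpenCV.
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/calib3d.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
      cv::Mat gray = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
      if (gray.empty()) { std::cerr << "could not read image\n"; return 1; }

      cv::Size pattern(9, 6);            // inner corners per row and per column
      std::vector<cv::Point2f> corners;  // detected corner locations
      bool found = cv::findChessboardCorners(gray, pattern, corners);

      std::cout << (found ? "found " : "pattern not found, got ")
                << corners.size() << " corners\n";
      return 0;
    }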

A Vision Transformer (ViT) is a transformer that is targeted at vision processing tasks such as image recognition.

References

  1. "The VXL Project".
  2. "KWIVER". http://www.kwiver.org