Morphogenetic robotics

Morphogenetic robotics [1] generally refers to methodologies that address challenges in robotics by taking inspiration from biological morphogenesis. [2] [3]

Background

Differences from epigenetic robotics

Morphogenetic robotics is related to, but differs from, epigenetic robotics. The main difference is that the former focuses on the self-organization, self-reconfiguration, self-assembly and self-adaptive control of robots using genetic and cellular mechanisms inspired by biological early morphogenesis (activity-independent development), during which the body and controller of an organism develop simultaneously, whereas the latter emphasizes the development of robots' cognitive capabilities, such as language, emotion and social skills, through experience during the lifetime (activity-dependent development). Morphogenetic robotics is closely connected to developmental biology and systems biology, whilst epigenetic robotics is related to developmental cognitive neuroscience, which emerged from cognitive science, developmental psychology and neuroscience.

Topics

Morphogenetic robotics encompasses several main topics, including the self-organization of robot swarms, the self-reconfiguration of modular robots, and the simultaneous development of robot bodies and controllers.
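Reference [2] below concerns gene networks capable of reaction-diffusion pattern formation, one of the morphogenetic mechanisms such methods borrow. As an illustrative sketch only (not an implementation from any cited work), the following one-dimensional Gray-Scott reaction-diffusion simulation shows how purely local chemistry plus diffusion can self-organize spatial structure; all parameter values are common textbook choices, assumed here for illustration:

```python
# Minimal 1-D Gray-Scott reaction-diffusion sketch (illustrative parameters).
# u is a substrate and v an activator; local reactions plus diffusion can
# self-organize spatial patterns from a small central seed.

def gray_scott_1d(n=200, steps=2000, Du=0.16, Dv=0.08, F=0.04, k=0.06, dt=1.0):
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 5, n // 2 + 5):   # small central seed of activator
        v[i] = 0.25
        u[i] = 0.75
    for _ in range(steps):
        # discrete Laplacians with periodic boundaries
        lap_u = [u[(i - 1) % n] + u[(i + 1) % n] - 2 * u[i] for i in range(n)]
        lap_v = [v[(i - 1) % n] + v[(i + 1) % n] - 2 * v[i] for i in range(n)]
        uvv = [u[i] * v[i] * v[i] for i in range(n)]
        u = [u[i] + dt * (Du * lap_u[i] - uvv[i] + F * (1.0 - u[i])) for i in range(n)]
        v = [v[i] + dt * (Dv * lap_v[i] + uvv[i] - (F + k) * v[i]) for i in range(n)]
    return u, v
```

Plotting v after a few thousand steps typically reveals spots or pulses that no single cell's rule describes, which is the sense in which morphogenesis is self-organized.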

Related Research Articles

An emergent algorithm is an algorithm that exhibits emergent behavior. In essence, an emergent algorithm implements a set of simple building-block behaviors that, when combined, exhibit more complex behavior. One example of this is the implementation of fuzzy motion controllers used to adapt robot movement in response to environmental obstacles.
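A classic minimal illustration of emergence (a textbook example, not the fuzzy controllers mentioned above) is an elementary cellular automaton: each cell's next state depends only on itself and its two neighbors, yet the global pattern becomes highly complex:

```python
# Elementary cellular automaton Rule 30: a three-cell local rule that
# produces a complex, seemingly chaotic global pattern from a single seed.

def rule30_step(row):
    n = len(row)
    # new cell = left XOR (center OR right), with fixed zero boundaries
    return [
        (row[i - 1] if i > 0 else 0) ^ (row[i] | (row[i + 1] if i < n - 1 else 0))
        for i in range(n)
    ]

def run(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1          # single seed cell in the middle
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history
```

After only two steps the single seed already yields the asymmetric block 1 1 0 0 1 around the center; printing more rows shows the familiar irregular triangle that no individual cell rule hints at.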

Ant colony optimization algorithms

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. "Artificial ants" refers to multi-agent methods inspired by the behavior of real ants; the pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated solely to artificial ants, and to numerous commercial applications by specialized companies such as AntOptima.

Multi-agent system

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.

Dario Floreano

Dario Floreano is director of the Laboratory of Intelligent Systems (LIS) at the École Polytechnique Fédérale de Lausanne in Switzerland as well as the Swiss National Centre of Competence in Research (NCCR) Robotics.

Evolutionary robotics (ER) is a methodology that uses evolutionary computation to develop controllers and/or hardware for autonomous robots. Algorithms in ER frequently operate on populations of candidate controllers, initially selected from some distribution. This population is then repeatedly modified according to a fitness function. In the case of genetic algorithms, a common method in evolutionary computation, the population of candidate controllers is repeatedly grown according to crossover, mutation and other GA operators and then culled according to the fitness function. The candidate controllers used in ER applications may be drawn from some subset of the set of artificial neural networks, although some applications use collections of "IF THEN ELSE" rules as the constituent parts of an individual controller. It is theoretically possible to use any set of symbolic formulations of a control law as the space of possible candidate controllers. Artificial neural networks can also be used for robot learning outside the context of evolutionary robotics. In particular, other forms of reinforcement learning can be used for learning robot controllers.
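The population loop described above can be sketched in a few lines. The task here, evolving a two-parameter saturated proportional controller that drives a simulated one-dimensional point robot to a target, is an illustrative assumption, as are the encoding and GA parameters:

```python
import random

# Toy evolutionary-robotics loop: evolve (gain, bias) of a saturated
# controller u = clip(gain * error + bias, -1, 1) for a 1-D point robot.

def fitness(genome, target=10.0, steps=50):
    gain, bias = genome
    x = 0.0
    for _ in range(steps):
        u = max(-1.0, min(1.0, gain * (target - x) + bias))  # actuator limits
        x += u
    return -abs(target - x)            # higher is better (closer to target)

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # averaging crossover plus Gaussian mutation
            child = tuple((ga + gb) / 2 + rng.gauss(0, 0.1)
                          for ga, gb in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Full ER systems replace the genome with neural-network weights (or body plans) and the simulator with a physics engine or real robot, but the select-vary-evaluate loop is the same.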

Developmental robotics (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists in starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. Experimenting with those models in robots allows researchers to confront them with reality, and as a consequence, developmental robotics also provides feedback and novel hypotheses on theories of human and animal development.

The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.

Cognitive robotics is concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition.

In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm. It uses a local search technique to reduce the likelihood of premature convergence.
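The GA-plus-local-search combination can be illustrated on a simple continuous minimization problem; the objective, the hill-climbing refinement and all parameters below are illustrative assumptions:

```python
import random

# Toy memetic algorithm: a genetic algorithm whose offspring are refined by a
# greedy coordinate-step local search before rejoining the population.

def f(p):                                   # function to minimize (assumed)
    x, y = p
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def local_search(p, step=0.05, iters=20):
    best = p
    for _ in range(iters):                  # greedy moves along each axis
        for d in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (best[0] + d[0], best[1] + d[1])
            if f(cand) < f(best):
                best = cand
    return best

def memetic(pop_size=20, generations=30, seed=2):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = tuple((ga + gb) / 2 + rng.gauss(0, 0.2)
                          for ga, gb in zip(a, b))
            children.append(local_search(child))   # the "memetic" refinement
        pop = parents + children
    return min(pop, key=f)

best = memetic()
```

The local refinement is what distinguishes this loop from the plain GA sketch: each child is pulled toward a nearby local optimum before selection acts on it.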

The outline of artificial intelligence provides an overview of and topical guide to artificial intelligence.

Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software. The word "robot" was introduced to the public by Czech writer Karel Čapek in his play R.U.R., published in 1920. The term "robotics" was coined by Isaac Asimov in his 1941 science fiction short-story "Liar!"

Modular self-reconfiguring robotic systems or self-reconfigurable modular robots are autonomous kinematic machines with variable morphology. Beyond conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage.

Physicomimetics is physics-based swarm (computational) intelligence. The word is derived from the Greek physike (nature) and mimesis (imitation).

Artificial development, also known as artificial embryogeny or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype-phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models.

Evolutionary developmental robotics refers to methodologies that systematically integrate evolutionary robotics, epigenetic robotics and morphogenetic robotics to study the evolution, physical and mental development and learning of natural intelligent systems in robotic systems. The field was formally proposed and discussed in the research literature.

Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.

Intelligent systems must be able to evolve, self-develop and self-learn continuously in order to reflect a dynamically evolving environment. The concept of evolving intelligent systems (EISs) was conceived around the turn of the century, when the phrase itself was coined and later expanded upon in the literature. EISs develop their structure, functionality and internal knowledge representation through autonomous learning from data streams generated by a possibly unknown environment and from system self-monitoring. EISs consider a gradual development of the underlying system structure and differ from evolutionary and genetic algorithms, which consider phenomena such as chromosome crossover, mutation, selection and reproduction, and parents and offspring. Evolutionary fuzzy and neuro systems are sometimes also called "evolving", which leads to some confusion; this usage was more typical of the first works on the topic in the late 1990s.
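The gradual growth of structure from a data stream can be illustrated with a minimal evolving-clustering sketch, a simplified stand-in for EIS-style methods: a new cluster is created whenever a sample falls far from all existing ones, otherwise the nearest center is updated by a running mean. The distance measure, radius and data are illustrative assumptions:

```python
# Minimal evolving-structure sketch: clusters are created on the fly from a
# data stream, and each center is updated by a recursive (running) mean.

class EvolvingClusterer:
    def __init__(self, radius=1.0):
        self.radius = radius
        self.centers = []      # one [x, y] center per cluster
        self.counts = []       # samples absorbed by each cluster

    def update(self, point):
        if self.centers:
            dists = [max(abs(point[0] - c[0]), abs(point[1] - c[1]))
                     for c in self.centers]
            i = min(range(len(dists)), key=dists.__getitem__)
            if dists[i] <= self.radius:            # absorb into nearest cluster
                self.counts[i] += 1
                k = self.counts[i]
                self.centers[i] = [c + (p - c) / k  # incremental mean update
                                   for c, p in zip(self.centers[i], point)]
                return i
        self.centers.append(list(point))           # structure grows: new cluster
        self.counts.append(1)
        return len(self.centers) - 1

stream = [(0.1, 0.0), (0.3, 0.2), (5.0, 5.0), (5.2, 4.9), (0.2, 0.1)]
model = EvolvingClusterer(radius=1.0)
labels = [model.update(p) for p in stream]
```

Unlike a genetic algorithm, nothing is recombined or selected here; the model's structure simply grows and adapts online as data arrive, which is the distinction the paragraph above draws.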

Swarm robotic platforms apply swarm robotics in multi-robot collaboration. They take inspiration from nature, and the main goal is to control a large number of robots to accomplish a common task. Hardware limitations and the cost of robot platforms mean that current research in swarm robotics is mostly performed in simulation. On the other hand, simulating swarm scenarios that need large numbers of agents is extremely complex and often inaccurate due to poor modelling of external conditions and limits on computation.

Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics.

References

  1. Y. Jin and Y. Meng. Morphogenetic robotics: An emerging new field in developmental robotics. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 41(2):145-160, 2011
  2. I. Salazar-Ciudad, H. Garcia-Fernandez, and R. V. Sole. Gene networks capable of pattern formation: from induction to reaction-diffusion. Journal of Theoretical Biology, 205:587-603, 2000
  3. L. Wolpert. Principles of Development. Oxford University Press, 2002
  4. H. Guo, Y. Meng, and Y. Jin. A cellular mechanism for multi-robot construction via evolutionary multi-objective optimization of a gene regulatory network. BioSystems, 98(3):193-203, 2009
  5. M. Mamei, M. Vasirani, F. Zambonelli, Experiments in morphogenesis in swarms of simple mobile robots. Applied Artificial Intelligence, 18, 9-10: 903-919, 2004
  6. W. Shen, P. Will and A. Galstyan. Hormone-inspired self-organization and distributed control of robotic swarms. Autonomous Robots, 17, pp.93-105, 2004
  7. H. Hamann, H. Wörn, K. Crailsheim, T. Schmickl: Spatial macroscopic models of a bio-inspired robotic swarm algorithm. IROS 2008: 1415-1420
  8. Y. Jin, H. Guo, and Y. Meng. A hierarchical gene regulatory network for adaptive multi-robot pattern formation. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(3):805-816, 2012
  9. H. Guo, Y. Jin, and Y. Meng. A morphogenetic framework for self-organized multi-robot pattern formation and boundary coverage. ACM Transactions on Autonomous and Adaptive Systems, 7(1), Article No. 15, April 2012. doi:10.1145/2168260.2168275
  10. T. Schmickl, J. Stradner, H. Hamann, and K. Crailsheim. Major Feedbacks that Support Artificial Evolution in Multi-Modular Robotics. Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Exploring New Horizons in Evolutionary Design of Robots Workshop, Oct. 11-15 2009, St. Louis, MO, USA, pp. 65-72
  11. Y. Meng, Y. Zheng and Y. Jin. Autonomous self-reconfiguration of modular robots by evolving a hierarchical mechanochemical model. IEEE Computational Intelligence Magazine, 6(1):43-54, 2011
  12. G.S. Hornby and J.B. Pollack. Body-brain co-evolution using L-systems as a generative encoding. Artificial Life, 8:3, 2002
  13. J.A. Lee and J. Sitte. Morphogenetic Evolvable Hardware Controllers for Robot Walking. In: 2nd International Symposium on Autonomous Minirobots for Research and Edutainment (AMiRE 2003), Feb. 18-20, 2003, Brisbane, Australia
  14. G. Gomez and P. Eggenberger. Evolutionary synthesis of grasping through self-exploratory movements of a robotic hand. Congress on Evolutionary Computation, 2007
  15. L. Schramm, Y. Jin, B. Sendhoff. Emerged coupling of motor control and morphological development in evolution of multi-cellular animats. 10th European Conference on Artificial Life, Budapest, September 2009
  16. Y. Meng, Y. Jin and J. Yin. Modeling activity-dependent plasticity in BCM spiking neural networks with application to human behavior recognition. IEEE Transactions on Neural Networks, 22(12):1952-1966, 2011
  17. J. Yin, Y. Meng and Y. Jin. A developmental approach to structural self-organization in reservoir computing. IEEE Transactions on Autonomous Mental Development, 2012