Latent learning

Latent learning is the subconscious retention of information in the absence of reinforcement or motivation. In latent learning, the learner's behavior changes only later, when sufficient motivation arises to use the information that was subconsciously retained. [1]

In latent learning, observing something, rather than experiencing it directly, can affect later behavior. Observational learning takes many forms: a human observes a behavior and repeats it at a later time (rather than imitating it on the spot), even though no one rewards them for doing so.

In social learning theory, humans observe others receiving rewards or punishments, which evokes feelings in the observer and motivates them to change their behavior.

In latent learning particularly, no reward or punishment is observed. Animals simply take in their surroundings with no particular motivation to learn their layout; at a later date, when motivation arises, such as the biological need to find food or escape danger, they are able to exploit this knowledge.

The absence of reinforcement, stimulus associations, or motivation is what differentiates this type of learning from other forms of learning such as operant conditioning and classical conditioning. [2]

Latent learning is used by animals to navigate a maze more efficiently.

Comparison to other types of learning

Classical conditioning

In classical conditioning, an animal comes to subconsciously anticipate a biologically significant stimulus, such as food, whenever it experiences an otherwise neutral stimulus, because the two have repeatedly been presented together. One significant example is Ivan Pavlov's experiment, in which dogs developed a conditioned response to a bell that the experimenters had deliberately paired with feeding time. After conditioning, the dogs no longer salivated only to the food, which was a biological need and therefore an unconditioned stimulus; they began to salivate at the sound of the bell, the bell being a conditioned stimulus and the salivation now being a conditioned response to it. They salivated at the sound of the bell because they were anticipating food.

Latent learning, on the other hand, occurs when an animal learns something even though no motivation or reward is associated with learning it; mere exposure to the information is enough for it to be retained. One significant example is rats subconsciously forming mental maps of a maze and later using that information to reach a biological stimulus such as food more quickly once a reward is present. [3] These rats already knew the layout of the maze, even though there had been no motivation to learn it before the food was introduced.
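
The cognitive-map idea described above can be illustrated with a small simulation. The Python sketch below is purely illustrative and not drawn from the cited studies; the maze size, wall positions, and step counts are invented assumptions. An agent first wanders a tiny grid maze with nothing at stake, recording only which cells it has seen to be connected (its "mental map"), and later uses that stored map alone to plan a route once a goal is introduced.

```python
# Illustrative sketch of latent learning as unreinforced map-building (assumed toy maze).
import random
from collections import deque

random.seed(0)

SIZE = 3                                        # 3x3 grid of cells (invented)
WALLS = {((0, 1), (1, 1)), ((1, 0), (1, 1))}    # hypothetical walls between cells

def neighbors(cell):
    """Cells reachable in one step from `cell`, respecting the maze walls."""
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (x + dx, y + dy)
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE:
            if (cell, nxt) not in WALLS and (nxt, cell) not in WALLS:
                yield nxt

def explore(start, steps):
    """Unreinforced exploration: wander randomly and remember observed connections."""
    mental_map = {}          # cell -> set of cells seen to be adjacent
    cell = start
    for _ in range(steps):
        nxt = random.choice(list(neighbors(cell)))
        mental_map.setdefault(cell, set()).add(nxt)
        mental_map.setdefault(nxt, set()).add(cell)
        cell = nxt
    return mental_map

def plan(mental_map, start, goal):
    """Once a reward appears, plan a route using only the stored map (breadth-first search)."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        for nxt in mental_map.get(cell, ()):
            if nxt not in parents:
                parents[nxt] = cell
                frontier.append(nxt)
    return None              # goal not yet represented in the mental map

mental_map = explore(start=(0, 0), steps=200)       # learning with nothing at stake
print(plan(mental_map, start=(0, 0), goal=(2, 2)))  # knowledge exploited only now
```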

Operant conditioning

Operant conditioning shapes an animal's behavior using rewards and punishments. Latent learning shapes an animal's behavior by giving it time to form a mental map before any reinforcing stimulus is introduced.

Social learning theory

Social learning theory suggests that behaviors can be learned through observation, but through actively attentive, conscious observation. In this theory, observation leads to a change in behavior more often when rewards or punishments associated with specific behaviors are observed. Latent learning is similar in its reliance on observation, but again it differs in that no reinforcement is needed for learning.

Early studies

In a classic study by Edward C. Tolman, three groups of rats were placed in mazes and their behavior was observed each day for more than two weeks. The rats in Group 1 always found food at the end of the maze; the rats in Group 2 never found food; and the rats in Group 3 found no food for 10 days, but then received food on the eleventh day. The Group 1 rats quickly learned to rush to the end of the maze; the Group 2 rats wandered in the maze but did not preferentially go to the end. The Group 3 rats acted the same as the Group 2 rats until food was introduced on day 11; they then quickly learned to run to the end of the maze and did as well as the Group 1 rats by the next day. This showed that the Group 3 rats had learned about the organisation of the maze without the reinforcement of food. Until this study, it was largely believed that reinforcement was necessary for animals to learn such tasks. [4] Other experiments showed that latent learning can occur over shorter durations, e.g. 3–7 days. [5] Among other early studies, it was also found that animals allowed to explore a maze and then detained for one minute in the empty goal box learned the maze much more rapidly than groups not given such goal orientation. [6]
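
As a rough illustration of the three-group design, the toy simulation below is a sketch only; the number of maze junctions, the mapping rate per day, and the error model are invented assumptions rather than Tolman and Honzik's procedure or data. It assumes that a simulated rat maps a few junctions per day from exposure alone, rewarded or not, and that on rewarded days it makes wrong turns only at junctions it has not yet mapped. The group first rewarded on day 11 then shows the abrupt drop in errors described above.

```python
# Toy illustration of the three-group schedule; all numbers are invented assumptions.
import random

random.seed(1)
DAYS, FULL_MAP = 14, 20   # days of daily runs; junctions in the maze (both assumed)

def run_group(reward_from_day):
    """Wrong turns per day for a simulated rat first rewarded on `reward_from_day` (None = never)."""
    learned = 0               # junctions already in the rat's mental map
    errors_by_day = []
    for day in range(1, DAYS + 1):
        rewarded = reward_from_day is not None and day >= reward_from_day
        if rewarded:
            # Heading for the food, the rat errs only at junctions it has not yet mapped.
            errors = sum(random.random() < 0.5 for _ in range(FULL_MAP - learned))
        else:
            # With no food, the rat just wanders, so wrong turns stay near chance level.
            errors = sum(random.random() < 0.5 for _ in range(FULL_MAP))
        errors_by_day.append(errors)
        # Latent-learning assumption: the map grows from exposure alone, rewarded or not.
        learned = min(FULL_MAP, learned + 4)
    return errors_by_day

for label, start_day in (("always rewarded", 1), ("never rewarded", None), ("rewarded from day 11", 11)):
    print(label + ":", run_group(start_day))
```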

In 1949, John Seward conducted studies in which rats were placed in a T-maze with one arm coloured white and the other black. One group of rats had 30 minutes to explore this maze with no food present, and the rats were not removed as soon as they had reached the end of an arm. Seward then placed food in one of the two arms. Rats in this exploratory group learned to go down the rewarded arm much faster than another group of rats that had not previously explored the maze. [7] Similar results were obtained by Bendig in 1952, who trained rats to escape from water in a modified T-maze in which food was present while the rats were satiated, and then tested them while hungry. Upon being returned to the maze while food-deprived, the rats learned where the food was located at a rate that increased with the number of pre-exposures given during the training phase, indicating varying degrees of latent learning. [8]

Most early studies of latent learning were conducted with rats, but a study by Stevenson in 1954 explored this method of learning in children. [9] Stevenson required children to explore a series of objects to find a key, and then he determined the knowledge the children had about various non-key objects in the set-up. [9] The children found non-key objects faster if they had previously seen them, indicating they were using latent learning. Their ability to learn in this way increased as they became older. [9]

In 1982, Wirsig and co-researchers used the taste of sodium chloride to explore which parts of the brain are necessary for latent learning in rats. Decorticate rats were just as able as normal rats to accomplish the latent learning task. [10]

More recent studies

Latent learning in infants

The human capacity for latent learning appears to be a major reason why infants can later use knowledge they acquired before they had the skills to act on it. For example, infants do not gain the ability to imitate until they are about six months old. In one experiment, one group of infants was exposed to hand puppets A and B simultaneously at the age of three months, while a control group of the same age was presented with puppet A only. All of the infants were then periodically shown puppet A until six months of age, at which point the experimenters performed a target behavior on puppet A while all the infants watched. All of the infants were then presented with puppets A and B. The infants who had seen both puppets at three months imitated the target behavior on puppet B at a significantly higher rate than the control group, which had not seen the two puppets paired. This suggests that the pre-exposed infants had formed an association between the puppets without any reinforcement, exhibiting latent learning: infants can learn by observation even when they give no indication of having learned until they are older. [11]

The impact of different drugs on latent learning

Many drugs abused by humans increase dopamine, the neurotransmitter that motivates humans to seek rewards. [12] Mice that cannot produce dopamine have nevertheless been shown to latently learn about rewards if they are given caffeine: when given caffeine before learning, they could use that knowledge to find the reward when they were given dopamine at a later time. [13]

Alcohol may impede latent learning. Some zebrafish were exposed to alcohol before exploring a maze and continued to be exposed to alcohol after a reward was introduced into the maze. These zebrafish took much longer to find the reward than a control group that had not been exposed to alcohol, even though they showed the same amount of motivation. However, the longer the zebrafish had been exposed to alcohol, the less it affected their latent learning. Another experimental group modelled alcohol withdrawal: the zebrafish that performed worst were those that had been exposed to alcohol for a long period and then had it removed before the reward was introduced. These fish lacked motivation, showed motor dysfunction, and appeared not to have latently learned the maze. [14]

Other factors impacting latent learning

Though the specific brain area responsible for latent learning has not been pinpointed, patients with medial temporal amnesia were found to have particular difficulty with a latent learning task that required representational processing. [15]

Another study, conducted with mice, found evidence that the absence of prion protein disrupts latent learning and other memory functions in a water maze latent learning task. [16] Phencyclidine was also found to impair latent learning in a water-finding task. [17]

Related Research Articles

Observational learning is learning that occurs through observing the behavior of others. It is a form of social learning which takes various forms, based on various processes. In humans, this form of learning seems to not need reinforcement to occur, but instead, requires a social model such as a parent, sibling, friend, or teacher with surroundings. Particularly in childhood, a model is someone of authority or higher status in an environment. In animals, observational learning is often based on classical conditioning, in which an instinctive behavior is elicited by observing the behavior of another, but other processes may be involved as well.

Operant conditioning, also called instrumental conditioning, is a learning process where voluntary behaviors are modified by association with the addition of reward or aversive stimuli. The frequency or duration of the behavior may increase through reinforcement or decrease through punishment or extinction.

An operant conditioning chamber is a laboratory apparatus used to study animal behavior. The operant conditioning chamber was created by B. F. Skinner while he was a graduate student at Harvard University. The chamber can be used to study both operant conditioning and classical conditioning.

Classical conditioning is a behavioral procedure in which a biologically potent stimulus is paired with a neutral stimulus. The term classical conditioning refers to the process of an automatic, conditioned response that is paired with a specific stimulus.

In behavioral psychology, reinforcement refers to consequences that increase the likelihood of an organism's future behavior, typically in the presence of a particular antecedent stimulus. For example, a rat can be trained to push a lever to receive food whenever a light is turned on. In this example, the light is the antecedent stimulus, the lever pushing is the operant behavior, and the food is the reinforcer. Likewise, a student who receives attention and praise when answering a teacher's question will be more likely to answer future questions in class. The teacher's question is the antecedent, the student's response is the behavior, and the praise and attention are the reinforcement.

Animal cognition encompasses the mental capacities of non-human animals including insect cognition. The study of animal conditioning and learning used in this field was developed from comparative psychology. It has also been strongly influenced by research in ethology, behavioral ecology, and evolutionary psychology; the alternative name cognitive ethology is sometimes used. Many behaviors associated with the term animal intelligence are also subsumed within animal cognition.

Edward Chace Tolman was an American psychologist and a professor of psychology at the University of California, Berkeley. Through Tolman's theories and works, he founded what is now a branch of psychology known as purposive behaviorism. Tolman also promoted the concept known as latent learning first coined by Blodgett (1929). A Review of General Psychology survey, published in 2002, ranked Tolman as the 45th most cited psychologist of the 20th century.

Behaviorism is a systematic approach to understanding the behavior of humans and other animals. It assumes that behavior is either a reflex evoked by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, they focus primarily on environmental events. The cognitive revolution of the late 20th century largely replaced behaviorism as an explanatory theory with cognitive psychology, which, unlike behaviorism, examines internal mental states.

Motivational salience is a cognitive process and a form of attention that motivates or propels an individual's behavior towards or away from a particular object, perceived event or outcome. Motivational salience regulates the intensity of behaviors that facilitate the attainment of a particular goal, the amount of time and energy that an individual is willing to expend to attain a particular goal, and the amount of risk that an individual is willing to accept while working to attain a particular goal.

Clark Leonard Hull was an American psychologist who sought to explain learning and motivation by scientific laws of behavior. Hull is known for his debates with Edward C. Tolman. He is also known for his work in drive theory.

Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself as the fading of a non-reinforced conditioned response over time. When operant behavior that has previously been reinforced no longer produces reinforcing consequences, the behavior gradually stops occurring. In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders, such as post-traumatic stress disorder, are believed to reflect, at least in part, a failure to extinguish conditioned fear.

Latent inhibition (LI) is a technical term in classical conditioning, where a familiar stimulus takes longer to acquire meaning than a new stimulus. The term originated with Lubow and Moore in 1973. The LI effect is latent in that it is not exhibited in the stimulus pre-exposure phase, but rather in the subsequent test phase. "Inhibition", here, simply connotes that the effect is expressed in terms of relatively poor learning. The LI effect is extremely robust, appearing in both invertebrate and mammalian species that have been tested and across many different learning paradigms, thereby suggesting some adaptive advantages, such as protecting the organism from associating irrelevant stimuli with other, more important, events.

The reward system is a group of neural structures responsible for incentive salience, associative learning, and positively-valenced emotions, particularly ones involving pleasure as a core component. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward". In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding.

Discrimination learning is defined in psychology as the ability to respond differently to different stimuli. This type of learning is used in studies regarding operant and classical conditioning. Operant conditioning involves the modification of a behavior by means of reinforcement or punishment. In this way, a discriminative stimulus acts as an indicator of when a behavior will persist and when it will not. Classical conditioning involves learning through association when two stimuli are paired together repeatedly. This conditioning demonstrates discrimination through specific micro-instances of reinforcement and non-reinforcement. This phenomenon is considered more advanced than learning styles such as generalization, yet it simultaneously acts as a basic unit of learning as a whole. The complex and fundamental nature of discrimination learning allows psychologists and researchers to perform more in-depth research that supports psychological advancements. Research on the basic principles underlying this learning style has its roots in neuropsychology sub-processes.

Purposive behaviorism is a branch of psychology that was introduced by Edward Tolman. It combines the study of behavior with consideration of the purpose or goal of behavior. Tolman thought that learning developed from knowledge about the environment and how the organism relates to its environment. Tolman's goal was to identify the complex cognitive mechanisms and purposes that guided behavior. His theories on learning went against the stimulus-response connections traditionally accepted in his time, which had been proposed by other psychologists such as Edward Thorndike. Tolman disagreed with John B. Watson's behaviorism, so he initiated his own behaviorism, which became known as purposive behaviorism.

External inhibition is the observed decrease in the response of a conditioned reaction when an external (distracting) stimulus that was not part of the original conditioned response set is introduced. This effect was first observed in Ivan Pavlov's classical conditioning studies, where the dogs would salivate less when presented with the sound of the tuning fork in the distracting context of a passing truck. External inhibition illustrates a main principle of classical conditioning: a conditioned response may decrease in magnitude after an external stimulus is introduced. This is especially advantageous when trying to dissociate conditioned stimuli and responses. A practical example is that students who become anxious when standing in front of the class to give a presentation may feel less anxiety if their friends are sitting in front of the presenting student. The positive association of speaking to friends may distract the student from associating speaking to the entire class with anxiety.

An antecedent is a stimulus that cues an organism to perform a learned behavior. When an organism perceives an antecedent stimulus, it behaves in a way that maximizes reinforcing consequences and minimizes punishing consequences. This might be part of complex, interpersonal communication.

Social learning refers to learning that is facilitated by observation of, or interaction with, another animal or its products. Social learning has been observed in a variety of animal taxa, such as insects, fish, birds, reptiles, amphibians and mammals.

Pavlovian-instrumental transfer (PIT) is a psychological phenomenon that occurs when a conditioned stimulus that has been associated with rewarding or aversive stimuli via classical conditioning alters motivational salience and operant behavior. Two distinct forms of Pavlovian-instrumental transfer have been identified in humans and other animals – specific PIT and general PIT – with unique neural substrates mediating each type. In relation to rewarding stimuli, specific PIT occurs when a CS is associated with a specific rewarding stimulus through classical conditioning and subsequent exposure to the CS enhances an operant response that is directed toward the same reward with which it was paired. General PIT occurs when a CS is paired with one reward and it enhances an operant response that is directed toward a different rewarding stimulus.

Association in psychology refers to a mental connection between concepts, events, or mental states that usually stems from specific experiences. Associations are seen throughout several schools of thought in psychology including behaviorism, associationism, psychoanalysis, social psychology, and structuralism. The idea stems from Plato and Aristotle, especially with regard to the succession of memories, and it was carried on by philosophers such as John Locke, David Hume, David Hartley, and James Mill. It finds its place in modern psychology in such areas as memory, learning, and the study of neural pathways.

References

  1. Wade, Carole; Tavris, Carol (1997). Psychology in Perspective (2nd ed.). New York: Longman. ISBN 978-0-673-98314-5.
  2. "Latent Learning | Introduction to Psychology". courses.lumenlearning.com. Retrieved 2019-02-05.
  3. Johnson, A.; Crowe, D. A. (2009). "Revisiting Tolman: theories and cognitive maps". Cognitive Critique. 1: 43–72. http://www.cogcrit.umn.edu/docs/Johnson_Crowe_10.pdf
  4. Tolman, E. C., & Honzik, C. H. (1930). Introduction and removal of reward, and maze performance in rats. University of California publications in psychology.
  5. Reynolds, B. (1 January 1945). "A repetition of the Blodgett experiment on 'latent learning'". Journal of Experimental Psychology. 35 (6): 504–516. doi:10.1037/h0060742. PMID   21007969.
  6. Karn, H. W.; Porter, J. M. Jr. (1 January 1946). "The effects of certain pre-training procedures upon maze performance and their significance for the concept of latent learning". Journal of Experimental Psychology. 36 (5): 461–469. doi:10.1037/h0061422. PMID   21000777.
  7. Seward, John P. (1 January 1949). "An experimental analysis of latent learning". Journal of Experimental Psychology. 39 (2): 177–186. doi:10.1037/h0063169. PMID   18118274.
  8. Bendig, A. W. (1 January 1952). "Latent learning in a water maze". Journal of Experimental Psychology. 43 (2): 134–137. doi:10.1037/h0059428. PMID   14927813.
  9. Stevenson, Harold W. (1 January 1954). "Latent learning in children". Journal of Experimental Psychology. 47 (1): 17–21. doi:10.1037/h0060086. PMID 13130805.
  10. Wirsig, Celeste R.; Grill, Harvey J. (1 January 1982). "Contribution of the rat's neocortex to ingestive control: I. Latent learning for the taste of sodium chloride". Journal of Comparative and Physiological Psychology. 96 (4): 615–627. doi:10.1037/h0077911. PMID   7119179.
  11. Campanella, Jennifer; Rovee-Collier, Carolyn (2005-05-01). "Latent Learning and Deferred Imitation at 3 Months". Infancy. 7 (3): 243–262. doi: 10.1207/s15327078in0703_2 . ISSN   1525-0008. PMID   33430559.
  12. Di Chiara, G.; Imperato, A. (1988-07-01). "Drugs abused by humans preferentially increase synaptic dopamine concentrations in the mesolimbic system of freely moving rats". Proceedings of the National Academy of Sciences. 85 (14): 5274–5278. Bibcode:1988PNAS...85.5274D. doi: 10.1073/pnas.85.14.5274 . ISSN   0027-8424. PMC   281732 . PMID   2899326.
  13. Berridge, Kent C. (2005). "Espresso Reward Learning, Hold the Dopamine: Theoretical Comment on Robinson et al. (2005)". Behavioral Neuroscience. 119 (1): 336–341. doi:10.1037/0735-7044.119.1.336. ISSN   1939-0084. PMID   15727539. S2CID   6507002.
  14. Luchiari, Ana C.; Salajan, Diana C.; Gerlai, Robert (2015). "Acute and chronic alcohol administration: Effects on performance of zebrafish in a latent learning task". Behavioural Brain Research. 282: 76–83. doi:10.1016/j.bbr.2014.12.013. ISSN   0166-4328. PMC   4339105 . PMID   25557800.
  15. Myers, Catherine E. (1 January 2000). "Latent learning in medial temporal amnesia: Evidence for disrupted representational but preserved attentional processes". Neuropsychology. 14 (1). McGlinchey-Berroth, Regina, Warren, Stacey, Monti, Laura, Brawn, Catherine M., Gluck, Mark A.: 3–15. doi:10.1037/0894-4105.14.1.3. PMID   10674794.
  16. Nishida, Noriyuki; Katamine, Shigeru; Shigematsu, Kazuto; Nakatani, Akira; Sakamoto, Nobuhiro; Hasegawa, Sumitaka; Nakaoke, Ryota; Atarashi, Ryuichiro; Kataoka, Yasufumi; Miyamoto, Tsutomu (1 January 1997). "Prion Protein Is Necessary for Latent Learning and Long-Term Memory Retention". Cellular and Molecular Neurobiology. 17 (5): 537–545. doi:10.1023/A:1026315006619. PMID   9353594. S2CID   11877905.
  17. Noda, A (2001). "Phencyclidine impairs latent learning in mice interaction between glutamatergic systems and sigma1 receptors". Neuropsychopharmacology. 24 (4): 451–460. doi: 10.1016/S0893-133X(00)00192-5 . PMID   11182540.