Prior to the founding of psychology as a scientific discipline, attention was studied in the field of philosophy, and many early discoveries about attention were therefore made by philosophers. Psychologist John Watson cites Juan Luis Vives as the father of modern psychology because, in his book De Anima et Vita ("The Soul and Life"), Vives was the first to recognize the importance of empirical investigation.[3] In his work on memory, Vives found that the more closely one attends to stimuli, the better they will be retained. By the 1990s, psychologists had begun using PET, and later fMRI, to image the brain while monitoring attention tasks. Because the equipment was highly expensive and generally available only in hospitals, psychologists sought cooperation with neurologists. Pioneers in brain-imaging studies of selective attention were psychologist Michael Posner (then already renowned for his seminal work on visual selective attention) and neurologist Marcus Raichle.[citation needed] Their results soon sparked interest from the entire neuroscience community, which until then had focused on monkey brains, in these psychological studies. With the development of these technological innovations, neuroscientists became interested in research combining sophisticated experimental paradigms from cognitive psychology with the new brain-imaging techniques. Although the older technique of EEG had long been used by cognitive psychophysiologists to study the brain activity underlying selective attention, the ability of the newer techniques to measure precisely localized activity inside the brain generated renewed interest from a wider community of researchers.
The results of these experiments have shown broad agreement with the psychological and psychophysiological findings and with the experiments performed on monkeys.[citation needed]

Selective attention and visual attention
See also: Selective auditory attention

The spotlight model of attention
In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered loosely as metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process.[4] In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated on a specific area of the visual scene (i.e., it is focused), and processing is performed in a serial fashion. The first of these models to appear in the literature is the spotlight model. The term "spotlight" was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe.[5] The focus is an area that extracts information from the visual scene at high resolution; its geometric center is where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., at low resolution). This fringe extends out to a specified area, and the cut-off is called the margin. The second model, called the zoom-lens model, was first introduced in 1986.[6] This model inherits all the properties of the spotlight model (i.e., the focus, the fringe, and the margin) but has the added property of changing in size.
This size-change mechanism was inspired by the zoom lens of a camera, and any change in size can be described by a trade-off in the efficiency of processing.[7] The zoom-lens of attention can be described in terms of an inverse trade-off between the size of the focus and the efficiency of processing: because attentional resources are assumed to be fixed, it follows that the larger the focus is, the slower the processing of that region of the visual scene will be, since the fixed resource is distributed over a larger area. It is thought that the focus of attention can subtend a minimum of 1° of visual angle;[5][8] however, the maximum size has not yet been determined. A significant debate emerged in the last decade of the 20th century, in which Treisman's 1993 Feature Integration Theory (FIT) was compared with Duncan and Humphreys' 1989 Attentional Engagement Theory (AET).[9] FIT posits that "objects are retrieved from scenes by means of selective spatial attention that picks out objects' feature maps, and integrates those features that are found at the same location into forming objects." Duncan and Humphreys' AET understanding of attention maintained that "there is an initial pre-attentive parallel phase of perceptual segmentation and analysis that encompasses all of the visual items present in a scene. At this phase, descriptions of the objects in a visual scene are generated into structural units; the outcome of this parallel phase is a multiple-spatial-scale structured representation. Selective attention intervenes after this stage to select information that will be entered into visual short-term memory."[9] The contrast between the two theories placed a new emphasis on separating visual attention tasks alone from those mediated by supplementary cognitive processes.
As Raftopoulos summarizes the debate: "Against Treisman's FIT, which posits spatial attention as a necessary condition for detection of objects, Humphreys argues that visual elements are encoded and bound together in an initial parallel phase without focal attention, and that attention serves to select among the objects that result from this initial grouping."[10]

Neuropsychological model
In the twentieth century, the pioneering research of Lev Vygotsky and Alexander Luria led to the three-part model of neuropsychology, which defines the working brain as three constantly co-active processes: attention, memory, and activation. A.R. Luria published his well-known book The Working Brain in 1973 as a concise adjunct volume to his previous 1962 book Higher Cortical Functions in Man. In this volume, Luria summarized his three-part global theory of the working brain as being composed of three constantly co-active processes, which he described as (1) the attention system, (2) the mnestic (memory) system, and (3) the cortical activation system. The two books together are considered by Homskaya's account as "among Luria's major works in neuropsychology, most fully reflecting all the aspects (theoretical, clinical, experimental) of this new discipline."[11] The combined research of Vygotsky and Luria has shaped a large part of the contemporary understanding and definition of attention as it stands at the start of the 21st century.

Multitasking and divided attention
See also: Human multitasking, distracted driving
Multitasking can be defined as the attempt to perform two or more tasks simultaneously; however, research shows that when multitasking, people make more mistakes or perform their tasks more slowly.[12] Attention must be divided among all of the component tasks to perform them.
Older research looked at the limits of people performing simultaneous tasks, such as reading stories while listening to and writing something else,[13] or listening to two separate messages through different ears (i.e., dichotic listening). Generally, classical research into attention investigated the ability of people to learn new information when there were multiple tasks to be performed, or probed the limits of our perception (cf. Donald Broadbent). There is also older literature on people's performance on multiple tasks performed simultaneously, such as driving a car while tuning a radio[14] or driving while telephoning.[15] The vast majority of current research on human multitasking is based on performance of two tasks done simultaneously,[12] usually involving driving while performing another task, such as texting, eating, speaking to passengers in the vehicle, or speaking with a friend over a cellphone. This research reveals that the human attentional system has limits on what it can process: driving performance is worse while engaged in other tasks; drivers make more mistakes, brake harder and later, get into more accidents, veer into other lanes, and/or are less aware of their surroundings when engaged in the previously discussed tasks.[16][17][18] Little difference has been found between speaking on a hands-free cell phone and a hand-held cell phone,[19][20] which suggests that it is the strain on the attentional system that causes problems, rather than what the driver is doing with his or her hands. While speaking with a passenger is as cognitively demanding as speaking with a friend over the phone,[21] passengers are able to change the conversation based upon the needs of the driver. For example, if traffic intensifies, a passenger may stop talking to allow the driver to navigate the increasingly difficult roadway; a conversation partner over a phone would not be aware of the change in environment.
There have been multiple theories regarding divided attention. One, conceived by Kahneman,[22] holds that there is a single pool of attentional resources that can be freely divided among multiple tasks. This model seems oversimplified, however, because of the different modalities (e.g., visual, auditory, verbal) in which stimuli are perceived.[23] When two simultaneous tasks use the same modality, such as listening to a radio station and writing a paper, it is much more difficult to concentrate on both because the tasks are likely to interfere with each other. The specific modality model was theorized by Navon and Gopher in 1979. Although this model is better at explaining divided attention among simple tasks, resource theory is a more accurate metaphor for explaining divided attention on complex tasks. Resource theory states that as each complex task is automatized, performing that task requires less of the individual's limited-capacity attentional resources.[23] Other variables also play a part in our ability to pay attention to and concentrate on many tasks at once. These include, but are not limited to, anxiety, arousal, task difficulty, and skills.[23]

Simultaneous attention
Simultaneous attention is a type of attention characterized by attending to multiple events at the same time. It is demonstrated by children in Indigenous communities, who learn through this type of attention to their surroundings.[24] Simultaneous attention is present in the ways in which children of Indigenous background interact both with their surroundings and with other individuals. It requires focus on multiple simultaneous activities or occurrences. This differs from multitasking, which is characterized by alternating attention and focus between multiple activities, that is, halting one activity before switching to the next. Simultaneous attention involves uninterrupted attention to several activities occurring at the same time.
Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. Indigenous-heritage toddlers and caregivers in San Pedro were observed to frequently coordinate their activities with other members of a group in ways parallel to a model of simultaneous attention, whereas middle-class European-descent families in the U.S. would move back and forth between events.[2][25] Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially wide, keen observers.[26] This points to a strong cultural difference in attention management.

Alternative topics and discussions

Overt and covert orienting
Attention may be differentiated into "overt" versus "covert" orienting.[27] Overt orienting is the act of selectively attending to an item or location over others by moving the eyes to point in that direction.[28] Overt orienting can be directly observed in the form of eye movements. Although overt eye movements are quite common, a distinction can be made between two types of eye movements: reflexive and controlled. Reflexive movements are commanded by the superior colliculus of the midbrain. These movements are fast and are activated by the sudden appearance of stimuli. In contrast, controlled eye movements are commanded by areas in the frontal lobe. These movements are slow and voluntary. Covert orienting is the act of mentally shifting one's focus without moving one's eyes.[5][28][29] Simply, it is a change in attention that is not attributable to overt eye movements. Covert orienting has the potential to affect the output of perceptual processes by directing attention to particular items or locations (for example, the activity of a V4 neuron whose receptive field lies on an attended stimulus will be enhanced by covert attention[30]) but does not influence the information that is processed by the senses.
Researchers often use "filtering" tasks to study the role of covert attention in selecting information. These tasks often require participants to observe a number of stimuli but attend to only one. The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location.[citation needed] There are studies suggesting that the mechanisms of overt and covert orienting may not be as separate as previously believed, because central mechanisms that may control covert orienting, such as the parietal lobe, also receive input from subcortical centres involved in overt orienting.[28] General theories of attention assume that bottom-up (covert) processes and top-down (overt) processes converge on a common neural architecture.[31] For example, if individuals attend to the right-hand corner of the field of view, movement of the eyes in that direction may have to be actively suppressed.

Exogenous and endogenous orienting
Orienting attention is vital and can be controlled through external (exogenous) or internal (endogenous) processes. However, comparing these two processes is challenging because external signals do not operate completely exogenously, but will only summon attention and eye movements if they are important to the subject.[28] Exogenous (from Greek exo, meaning "outside", and genein, meaning "to produce") orienting is frequently described as being under the control of a stimulus.[32] Exogenous orienting is considered reflexive and automatic and is caused by a sudden change in the periphery. This often results in a reflexive saccade. Since exogenous cues are typically presented in the periphery, they are referred to as peripheral cues.
Exogenous orienting can even be observed when individuals are aware that the cue will not relay reliable, accurate information about where a target is going to occur. This means that the mere presence of an exogenous cue will affect the response to other stimuli that are subsequently presented in the cue's previous location.[33] Several studies have investigated the influence of valid and invalid cues.[28][34][35][36] They concluded that valid peripheral cues benefit performance, for instance when the peripheral cues are brief flashes at the relevant location before the onset of a visual stimulus. Posner and Cohen (1984) noted that a reversal of this benefit takes place when the interval between the onset of the cue and the onset of the target is longer than about 300 ms.[37] This phenomenon, in which valid cues come to produce longer reaction times than invalid cues, is called inhibition of return. Endogenous (from Greek endo, meaning "within" or "internally") orienting is the intentional allocation of attentional resources to a predetermined location or space. Simply stated, endogenous orienting occurs when attention is oriented according to an observer's goals or desires, allowing the focus of attention to be manipulated by the demands of a task. In order to have an effect, endogenous cues must be processed by the observer and acted upon purposefully. These cues are frequently referred to as central cues, because they are typically presented at the center of a display, where an observer's eyes are likely to be fixated.
Central cues, such as an arrow or digit presented at fixation, tell observers to attend to a specific location.[38] When examining differences between exogenous and endogenous orienting, some researchers suggest that there are four differences between the two kinds of cues: exogenous orienting is less affected by cognitive load than endogenous orienting; observers are able to ignore endogenous cues but not exogenous cues; exogenous cues have bigger effects than endogenous cues; and expectancies about cue validity and predictive value affect endogenous orienting more than exogenous orienting.[39] There exist both overlaps and differences in the areas of the brain that are responsible for endogenous and exogenous orienting.[40] Another approach to this discussion has been covered under the topic heading of "bottom-up" versus "top-down" orientations to attention. Researchers of this school have described two different aspects of how the mind focuses attention on items present in the environment. The first aspect is called bottom-up processing, also known as stimulus-driven attention or exogenous attention. This describes attentional processing which is driven by the properties of the objects themselves. Some stimuli, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, or non-volitional, way. We attend to them whether we want to or not.[41] These aspects of attention are thought to involve parietal and temporal cortices, as well as the brainstem.[42] The second aspect is called top-down processing, also known as goal-driven, endogenous attention, attentional control, or executive attention. This aspect of our attentional orienting is under the control of the person who is attending.
It is mediated primarily by the frontal cortex and basal ganglia[42][43] as one of the executive functions.[28][42] Research has shown that it is related to other aspects of the executive functions, such as working memory,[44] and conflict resolution and inhibition.[45]

Influence of processing load
One theory regarding selective attention is the cognitive load theory, which states that there are two mechanisms that affect attention: cognitive and perceptual. The perceptual mechanism concerns the subject's ability to perceive or ignore stimuli, both task-related and non-task-related. Studies show that if there are many stimuli present (especially if they are task-related), it is much easier to ignore the non-task-related stimuli, but if there are few stimuli the mind will perceive the irrelevant stimuli as well as the relevant. The cognitive mechanism refers to the actual processing of the stimuli. Studies on this showed that the ability to process stimuli decreased with age: younger people were able to perceive more stimuli and fully process them, but were likely to process both relevant and irrelevant information, while older people could process fewer stimuli but usually processed only relevant information.[46] Some people can process multiple stimuli; for example, trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response due to "overlearning" the skill of Morse code reception/detection/transcription, so that it is an autonomous function requiring no specific attention to perform.[citation needed]

Clinical model
Attention is best described as the sustained focus of cognitive resources on information while filtering or ignoring extraneous information. Attention is a very basic function that is often a precursor to all other neurological/cognitive functions. As is frequently the case, clinical models of attention differ from investigation models.
One of the most used models for the evaluation of attention in patients with very different neurologic pathologies is the model of Sohlberg and Mateer.[47] This hierarchical model is based on the recovery of attention processes in brain-damaged patients after coma. The model describes five kinds of activities of increasing difficulty, corresponding to the activities patients could perform as their recovery progressed.
Focused attention: The ability to respond discretely to specific visual, auditory, or tactile stimuli.
Sustained attention (vigilance and concentration): The ability to maintain a consistent behavioral response during continuous and repetitive activity.
Selective attention: The ability to maintain a behavioral or cognitive set in the face of distracting or competing stimuli. It therefore incorporates the notion of "freedom from distractibility."
Alternating attention: The capacity for mental flexibility that allows individuals to shift their focus of attention and move between tasks having different cognitive requirements.
Divided attention: The highest level of attention; it refers to the ability to respond simultaneously to multiple tasks or multiple task demands.
This model has been shown to be very useful in evaluating attention in very different pathologies, correlates strongly with daily difficulties, and is especially helpful in designing stimulation programs such as attention process training, a rehabilitation program for neurological patients by the same authors.

Neural correlates
Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when the animal is not attending to the stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same.
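This enhanced-firing correlate is often described as a gain on the neuron's stimulus-driven response, which can be sketched in a toy model. The gain value, baseline, and firing rates below are illustrative assumptions, not measured quantities.

```python
# Toy illustration of attentional enhancement of firing: the same
# stimulus drives a higher response when attended, modeled here as a
# multiplicative gain on the stimulus-driven part of the firing rate.

def firing_rate(stimulus_drive, attended, gain=1.5, baseline=5.0):
    """Firing rate in spikes/s; attention scales the stimulus-driven part."""
    g = gain if attended else 1.0
    return baseline + g * stimulus_drive

unattended = firing_rate(20.0, attended=False)   # 25.0 spikes/s
attended = firing_rate(20.0, attended=True)      # 35.0 spikes/s
assert attended > unattended   # same stimulus, enhanced response
```

Note that with no stimulus drive, the two conditions coincide: in this sketch attention amplifies the response to the stimulus rather than adding activity on its own.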
In a 2007 review, Knudsen[48] describes a more general model which identifies four core processes of attention, with working memory at the center:
Working memory temporarily stores information for detailed analysis.
Competitive selection is the process that determines which information gains access to working memory.
Through top-down sensitivity control, higher cognitive processes can regulate signal intensity in the information channels that compete for access to working memory, and thus give them an advantage in the process of competitive selection. In this way, the momentary content of working memory can influence the selection of new information, and thus mediate voluntary control of attention in a recurrent loop (endogenous attention).[49]
Bottom-up saliency filters automatically enhance the response to infrequent stimuli, or to stimuli of instinctive or learned biological relevance (exogenous attention).[49]
Neurally, at different hierarchical levels spatial maps can enhance or inhibit activity in sensory areas and induce orienting behaviors like eye movement. At the top of the hierarchy, the frontal eye fields (FEF) on the dorsolateral frontal cortex contain a retinocentric spatial map. Microstimulation in the FEF induces monkeys to make a saccade to the relevant location. Stimulation at levels too low to induce a saccade will nonetheless enhance cortical responses to stimuli located in the relevant area. At the next lower level, a variety of spatial maps are found in the parietal cortex. In particular, the lateral intraparietal area (LIP) contains a saliency map and is interconnected both with the FEF and with sensory areas. Certain automatic responses that influence attention, like orienting to a highly salient stimulus, are mediated subcortically by the superior colliculi. At the neural network level, it is thought that processes like lateral inhibition mediate the process of competitive selection.
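The interplay between bottom-up salience and top-down sensitivity control in this model can be sketched as a simple competition: each channel's signal is scaled by a top-down gain, and the strongest scaled signal wins access to working memory. The channel names and all numbers below are illustrative assumptions, not part of Knudsen's model itself.

```python
# Minimal sketch of competitive selection with top-down sensitivity
# control: gains set by current working-memory content bias which
# bottom-up signal wins access to working memory.

def competitive_selection(signals, gains):
    """Return the channel whose gain-modulated signal is strongest."""
    return max(signals, key=lambda ch: signals[ch] * gains.get(ch, 1.0))

signals = {"face": 0.4, "motion": 0.6, "sound": 0.3}  # bottom-up salience
gains = {"face": 2.0}  # top-down bias: a face-related goal in working memory

assert competitive_selection(signals, {}) == "motion"    # no bias: most salient wins
assert competitive_selection(signals, gains) == "face"   # 0.4 * 2.0 beats 0.6
```

The recurrent loop in the model corresponds to the winner entering working memory and, in turn, updating the gains for the next round of selection.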
In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.[50][51][52][53] Another commonly used model of the attention system, put forth by researchers such as Michael Posner, divides attention into three functional components: alerting, orienting, and executive attention.[42][54] Alerting is the process involved in becoming and staying attentive toward the surroundings. It appears to involve the frontal and parietal lobes of the right hemisphere, and is modulated by norepinephrine.[55][56] Orienting is the directing of attention to a specific stimulus. Executive attention is used when there is a conflict between multiple attention cues. It is essentially the same as the central executive in Baddeley's model of working memory. The Eriksen flanker task has shown that the executive control of attention may take place in the anterior cingulate cortex.[57]

Cultural variation
Children appear to develop patterns of attention related to the cultural practices of their families, communities, and the institutions in which they participate.[58] In 1955, Jules Henry suggested that there are societal differences in sensitivity to signals from many ongoing sources that call for the awareness of several levels of attention simultaneously. He tied his speculation to ethnographic observations of communities in which children are involved in a complex social community with multiple relationships.[59] Many Indigenous children in the Americas predominantly learn by observing and pitching in. Several studies support the claim that keen attention towards learning is much more common in Indigenous communities of North and Central America than in middle-class settings.[60] This is a direct result of the learning-by-observing-and-pitching-in model: keen attention is both a requirement of and a result of learning by observing and pitching in.
Incorporating children into the community gives them the opportunity to keenly observe and contribute to activities that are not directed towards them. It can be seen in different Indigenous communities and cultures, such as the Mayans of San Pedro, that children can simultaneously attend to multiple events.[59] Most Maya children have learned to pay attention to several events at once in order to make useful observations.[61] One example is simultaneous attention, which involves uninterrupted attention to several activities occurring at the same time. Another cultural practice that may relate to simultaneous attention strategies is coordination within a group. San Pedro toddlers and caregivers frequently coordinated their activities with other members of a group in multiway engagements rather than in a dyadic fashion.[2][25] Research concludes that children with close ties to Indigenous American roots have a high tendency to be especially keen observers.[26] This learning-by-observing-and-pitching-in model requires active levels of attention management. The child is present while caretakers engage in daily activities and responsibilities such as weaving, farming, and other skills necessary for survival. Being present allows the child to focus their attention on the actions being performed by their parents, elders, and/or older siblings.[60] To learn in this way, keen attention and focus are required. Eventually the child is expected to be able to perform these skills themselves.

Attention modelling
In the domain of computer vision, efforts have been made to model the mechanism of human attention, especially the bottom-up attentional mechanism.[62] Generally speaking, there are two kinds of models that mimic the bottom-up saliency mechanism. One is based on spatial contrast analysis.
For example, a center–surround mechanism has been used to define saliency across scales, inspired by the putative neural mechanism.[63] It has also been hypothesized that some visual inputs are intrinsically salient in certain background contexts and that these are actually task-independent. This model has established itself as the exemplar for saliency detection and is consistently used for comparison in the literature.[62] The other kind is based on frequency-domain analysis. The first such method, called the spectral residual (SR) method, was proposed by Hou et al.;[64] the PQFT method was introduced later. Both SR and PQFT use only the phase information.[62] In 2012, the HFT method was introduced, which makes use of both the amplitude and the phase information.[65]

Hemispatial neglect
Main article: Hemispatial neglect
Hemispatial neglect, also called unilateral neglect, often occurs when people have damage to their right hemisphere.[66] This damage often leads to a tendency to ignore the left side of one's body or even the left side of an object that can be seen. Damage to the left side of the brain (the left hemisphere) rarely yields significant neglect of the right side of the body or of objects in the person's local environment.[67] The effects of spatial neglect, however, may vary depending on what area of the brain was damaged. Damage to different neural substrates can result in different types of neglect.
Attention disorders (lateralized and nonlateralized) may also contribute to the symptoms and effects.[67] Much research has asserted that damage to gray matter within the brain results in spatial neglect.[68] New technology has yielded more information, showing that there is a large, distributed network of frontal, parietal, temporal, and subcortical brain areas that have been tied to neglect.[69] This network can be related to other research as well; the dorsal attention network is tied to spatial orienting.[70] The effect of damage to this network may result in patients neglecting their left side when distracted by their right side or by an object on their right side.[66]

History of the study of attention

Philosophical period
Psychologist Daniel E. Berlyne credits the first extended treatment of attention to philosopher Nicolas Malebranche in his work "The Search After Truth". "Malebranche held that we have access to ideas, or mental representations of the external world, but not direct access to the world itself."[3] Thus, in order to keep these ideas organized, attention is necessary; otherwise we will confuse them. Malebranche writes in "The Search After Truth", "because it often happens that the understanding has only confused and imperfect perceptions of things, it is truly a cause of our errors.... It is therefore necessary to look for means to keep our perceptions from being confused and imperfect. And, because, as everyone knows, there is nothing that makes them clearer and more distinct than attentiveness, we must try to find the means to become more attentive than we are".[71] According to Malebranche, attention is crucial to understanding and keeping thoughts organized. Philosopher Gottfried Wilhelm Leibniz introduced the concept of apperception to this philosophical approach to attention.
Apperception refers to "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individual to form a new whole."[72] Apperception is required for a perceived event to become a conscious event. Leibniz emphasized a reflexive, involuntary view of attention, now known as exogenous orienting; there is also endogenous orienting, which is voluntary, directed attention. Philosopher Johann Friedrich Herbart agreed with Leibniz's view of apperception; however, he expounded on it by saying that new experiences had to be tied to ones already existing in the mind. Herbart was also the first person to stress the importance of applying mathematical modeling to the study of psychology.[3] At the beginning of the 19th century, it was thought that people could not attend to more than one stimulus at a time. However, with research contributions by Sir William Hamilton, 9th Baronet, this view changed. Hamilton proposed a view of attention that likened its capacity to holding marbles: one can hold only a certain number of marbles at a time before they begin to spill over. His view states that we can attend to more than one stimulus at once. William Stanley Jevons later expanded this view and stated that we can attend to up to four items at a time.[citation needed] During this period, various philosophers made significant contributions to the field. They began the research on the extent of attention and how attention is directed.

1860–1909
This period of attention research shifted the focus from conceptual findings to experimental testing. It also involved psychophysical methods that allowed measurement of the relation between physical stimulus properties and the psychological perceptions of them. This period covers the development of attentional research from the founding of psychology to 1909. Wilhelm Wundt introduced the study of attention to the field of psychology.
Wundt measured mental processing speed by drawing an analogy to differences in stargazing measurements. Astronomers of the time measured how long it took stars to cross a line in a telescope's field of view, and individual observers recorded systematically different times for the same event, resulting in different reports from each astronomer. To correct for these differences, a personal equation was developed for each observer. Wundt applied this idea to mental processing speed: he realized that the time it takes to see the stimulus of the star and write down the time, which had been called an "observation error", was actually the time it takes to voluntarily switch one's attention from one stimulus to another. Wundt called his school of psychology voluntarism. It was his belief that psychological processes can only be understood in terms of goals and consequences. Franciscus Donders used mental chronometry to study attention, and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. Donders and his students conducted the first detailed investigations of the speed of mental processes. Donders measured the time required to identify a stimulus and to select a motor response: the time difference between stimulus discrimination and response initiation. Donders also formalized the subtractive method, which states that the time for a particular process can be estimated by adding that process to a task and taking the difference in reaction time between the two tasks. He also differentiated between three types of reactions: simple reaction, choice reaction, and go/no-go reaction. Hermann von Helmholtz also contributed to the field of attention, particularly regarding its extent. Von Helmholtz stated that it is possible to focus on one stimulus and still perceive or ignore others. An example of this is being able to focus on the letter u in the word house while still perceiving the letters h, o, s, and e.
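Donders' subtractive logic can be illustrated with a short sketch. All of the reaction-time values below are invented for the example (they are not Donders' data); the point is only the arithmetic: the duration of a mental stage is estimated as the difference in mean reaction time between a task that includes the stage and one that omits it.

```python
def mean_rt(trials):
    """Mean reaction time in milliseconds over a list of trials."""
    return sum(trials) / len(trials)

# Illustrative (invented) reaction times for Donders' three task types:
simple_rt = [190, 205, 198, 202]    # simple reaction: respond to any stimulus
go_no_go_rt = [310, 298, 305, 300]  # go/no-go: discriminate the stimulus, one response
choice_rt = [402, 395, 410, 388]    # choice reaction: discriminate, then select a response

# Subtractive method: each difference isolates the stage added between tasks.
discrimination_time = mean_rt(go_no_go_rt) - mean_rt(simple_rt)
response_selection_time = mean_rt(choice_rt) - mean_rt(go_no_go_rt)

print(f"stimulus discrimination: ~{discrimination_time:.1f} ms")
print(f"response selection: ~{response_selection_time:.1f} ms")
```

The method assumes that adding a stage leaves the other stages unchanged ("pure insertion"), an assumption later researchers would question.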
One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction". This disagreement could only be resolved through experimentation. In 1890, William James, in his textbook The Principles of Psychology, remarked:
"Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German."[73]
James differentiated between sensorial attention and intellectual attention. Sensorial attention is attention directed to objects of sense, stimuli that are physically present. Intellectual attention is attention directed to ideal or represented objects, stimuli that are not physically present. James also distinguished between immediate and derived attention: attention to the present versus attention to something not physically present. According to James, attention has five major effects: it works to make us perceive, conceive, distinguish, remember, and shorten reaction times.
1910–1949
During this period, research on attention waned and interest in behaviorism flourished, leading some, like Ulric Neisser, to believe that in this period "there was no research on attention". However, Jersild published very important work on "Mental Set and Shift" in 1927. He stated, "The fact of mental set is primary in all conscious activity.
The same stimulus may evoke any one of a large number of responses depending upon the contextual setting in which it is placed".[74] This research found that the time to complete a list was longer for mixed lists than for pure lists. For example, a pure list of animal names is processed faster than a mixed list containing names of animals, books, makes and models of cars, and types of fruit; the extra time reflects task switching. In 1931, Telford discovered the psychological refractory period: the stimulation of neurons is followed by a refractory phase during which neurons are less sensitive to stimulation. In 1935, John Ridley Stroop developed the Stroop Task, which elicited the Stroop Effect. Stroop's task showed that irrelevant stimulus information can have a major impact on performance. In this task, subjects looked at a list of color words in which each word was printed in an ink color different from the color it named; for example, the word Blue would be printed in orange ink, Pink in black, and so on (e.g., Blue Purple Red Green Purple Green). Subjects were then instructed to say the name of the ink color and ignore the text. It took 110 seconds to complete a list of this type, compared to 63 seconds to name the colors when presented as solid squares.[3] The naming time nearly doubled in the presence of conflicting color words, an effect known as the Stroop Effect.
1950–1974
In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution".[75] The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study. Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953: at a cocktail party, how do people select the conversation they are listening to and ignore the rest?
This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments which became known as dichotic listening and were extended by Donald Broadbent and others.[76] In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream. Broadbent's Filter Model of Attention states that information is held in a pre-attentive temporary store, and only sensory events that have some physical feature in common are selected to pass into the limited-capacity processing system. This implies that the meaning of unattended messages is not identified. Also, a significant amount of time is required to shift the filter from one channel to another. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch–Norman model in 1968. In this model, no signal is filtered out; rather, all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.[77] This debate became known as the early-selection vs. late-selection debate. In the early-selection models (first proposed by Donald Broadbent), attention shuts down (in Broadbent's model) or attenuates (in Treisman's refinement) processing in the unattended ear before the mind can analyze its semantic content. In the late-selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness.[78] This debate has still not been resolved.
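The contrast between the two model families can be caricatured in a toy sketch. This is only an illustration of where the selective step sits in each account, not an implementation of either theory; the channel contents, function names, and the use of uppercasing to stand in for "semantic analysis" are all invented for the example.

```python
# Toy contrast of early- vs late-selection accounts of dichotic listening.
# "Semantic analysis" is caricatured as uppercasing a word.

def early_selection(messages, attended_ear):
    """Broadbent-style filter: only the channel sharing the selected
    physical feature (here, the ear) ever reaches semantic analysis."""
    selected = {ear: words for ear, words in messages.items()
                if ear == attended_ear}
    return {ear: [w.upper() for w in words] for ear, words in selected.items()}

def late_selection(messages, attended_ear):
    """Deutsch-and-Deutsch-style: every channel is analyzed semantically,
    but only the attended channel's representation reaches awareness."""
    analyzed = {ear: [w.upper() for w in words]
                for ear, words in messages.items()}
    return analyzed[attended_ear]

streams = {"left": ["dear", "aunt", "jane"],
           "right": ["three", "five", "nine"]}

print(early_selection(streams, "left"))  # unattended ear never analyzed
print(late_selection(streams, "left"))   # both analyzed; one reaches awareness
```

In both sketches the observer reports only the attended ear; the models differ in whether the unattended stream was analyzed at all, which is exactly what the dichotic-listening follow-up questions probed.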
In the 1960s, Robert Wurtz at the National Institutes of Health began recording electrical signals from the brains of macaques trained to perform attentional tasks. These experiments showed for the first time that there was a direct neural correlate of a mental process (namely, enhanced firing in the superior colliculus).[79][not specific enough to verify]