Annotated Bibliographies for Module 2
#1: Driscoll, M. (2005). Psychology of Learning for Instruction (3rd ed.) (pp. 71-77). Boston, MA: Allyn and Bacon.
This chapter introduces Cognitive Information Processing. Driscoll states that in this view, learners are compared to computers: they take in information as input, process it in memory, and produce learning as output (p. 74). In the first stage of information processing, sensory memory, a learner uses the senses to begin the input of information. This form of memory is very short-lived (p. 74). In the second stage of information processing, working memory or short-term memory, the initial sensory information continues to be processed. This form of memory lasts longer than sensory memory but is still short-term. Rehearsal and chunking are believed to help the brain hold this information longer than simply reading the information does (p. 75). The third stage of cognitive information processing is long-term memory. Long-term memory is considered permanent learning and is not limited in the amount that the brain can store (p. 75).
Learning is not thought to occur in one direction only but to be determined both by the information being presented and by the prior knowledge of the learner (p. 77), making it imperative for educators to determine learners’ prior knowledge and to build upon it.
#2: Guenther, R.K. (1998). Introduction and historical overview. Human Cognition (pp. 1-27). Upper Saddle River, NJ: Prentice Hall.
In this chapter, Guenther examines how the world’s thinking has changed. The first section traces the shift from a supernatural to a more natural perspective. Where people once placed great emphasis on God or other deities, as science developed and produced proof that the universe does not revolve around Earth or humans, other views became more prominent (p. 2). Perspectives became more naturalistic with the study of the human body and its major organs, and theories of evolution led many to no longer accept the idea that humans were born of divine intervention (p. 4). As more was learned of the brain and its functioning, cognitive science emerged. Cognitive science can be defined “as the scientific study of mental process” (p. 8). This study does not consider neurophysiology alone but also looks at areas such as “how people perceive, remember, reason, solve problems, use language, and develop various cognitive skills” (p. 11). As cognitive science has evolved, the mind and its workings have been further examined as a machine, with the belief that “the mind is like the actions of a machine” (p. 11). This has led to two beliefs: one, that machines (computers) can be built to think for themselves, and two, that humans take in and process information in a way similar to computers (p. 11). In answering whether cognitive psychology is even necessary, research can be used to show its benefits. Research into how children react to their parents, and how those reactions help determine future responses, as well as research into brain function, has helped stroke victims and people with language processing disorders, seizures, and depression (p. 25).
#3: Smith & Ragan (1999). Introduction to Instructional Design. Instructional Design (pp. 1-12). New York: Wiley.
This chapter begins with three essential questions: What is Instructional Design? What is Instruction? What is Design? Instructional Design is the “systematic and reflective process of translating principles of learning and instruction into plans for instructional materials, activities, information resources, and evaluation” (p. 2). In other words, it is the planning of instructional units, with particular attention paid to the strategies and resources that will be used to teach the unit, and it is looked back on at completion to determine its effectiveness and areas for improvement. Instruction can be thought of as a planned facilitation of learning aimed at reaching a specific, preset goal (p. 2). Design is thought to be the planning or execution of a plan to reach a goal (p. 4). The Instructional Design process can be broken down into three sections: the analysis of what is going to be taught, the instructional strategy that will be used, and an evaluation of the effectiveness of the strategy used (p. 6). When conducting the analysis of what is to be taught, careful consideration must be given to the goal, the learners, and the context in which it will be taught (p. 7). Likewise, the strategy must be scrutinized in terms of the organization of the material, the delivery model, and management strategies for the learners (p. 7). When evaluating the effectiveness of the instruction, a formal evaluation should occur and the results used to determine necessary revisions to the instruction (p. 7).
#4: Smith & Ragan (1999). Foundations of Instructional Design. Instructional Design (pp. 13-29). New York: Wiley.
This chapter looks at philosophy and theory in instructional design. Constructivism is based on the belief that there “is not a single reality to be discovered, but that each individual has constructed a personal reality” (p. 15). Individual constructivism assumes that experience builds knowledge, that a personal interpretation is used in learning, and that learning is an active process based on experience (p. 15). Social constructivism looks outside the individual, believing that learning is a collaborative process drawing on many perspectives (p. 15). Empiricism, another philosophy examined, is based on the belief that knowledge is gained through experience (p. 17). Experimentation and hands-on manipulation of materials are key components of empiricism. Pragmatism, the other philosophy examined, is considered a combination of constructivism and empiricism (p. 17). Theories are examined in this chapter as well. Learning Theories “attempt to describe, explain, and predict learning” (p. 18). Cognitive Learning Theories place more emphasis on the learner being active in the learning than on the environment (p. 20); students engaged in the learning are thought to learn more than those apathetic to the instruction. The Developmental Theory is based on the belief that learners cannot be taught specific skills until they are developmentally ready; however, others believe that instruction can help learners achieve those skills (p. 23).
#5: Brady, S. (1986). Short-term memory, phonological processing and reading ability. Annals of Dyslexia, 36, 138-153.
This article looks at the connection between verbal short-term memory deficits and the ability to read fluently in young children. It is noted that “Deficits in STM for individuals with reading problems have been demonstrated with digit span measures, letter strings, sentence tasks, and recall for picture of familiar objects” (p. 145). This research paper investigates the link between short-term memory and low reading ability. Areas of decoding difficulty are examined, such as the effects of noise on phonetic encoding and the effects of phonological difficulty on phonetic encoding. The researcher determines that “the problems in phonetic processing, here observed in verbal short-term memory and in speech perception, have also been noted in other language tasks” (p. 153).
Annotated Bibliographies for Module 3
#1: Driscoll, M. (2005). Psychology of Learning for Instruction (3rd ed.) (pp. 77-91). Boston, MA: Allyn and Bacon.
This section of the chapter focuses on the stages of the human information processing system. The first stage examined is sensory memory: the taking in of information through the senses and its temporary storage. Studies have shown that for many learners, auditory information lasts up to four seconds longer than visual memory (p. 78). Attention is addressed in the discussion of sensory memory. Research has shown that attention is not all-or-nothing; most humans can tune in or tune out as bits of conversations are taken in, which is known as selective attention (p. 79). Several factors affect attention. The first is the importance of the information or task to the learner (p. 79). Information that a learner deems important or interesting will hold attention better than what is seen as boring. The second factor is the similarity of the information (p. 79): it is more difficult to listen to two different conversations when both speakers are discussing things that interest you. The third factor is how difficult the task or information is (p. 79). For students who have difficulty reading, long passages without pictures are more difficult than shorter passages that use pictures as clues. To help keep students’ attention, research has shown that signals or cues help focus a student’s attention before information delivery begins (p. 80). Automaticity occurs when a learner has done a task repeatedly to the point that it can be performed without thought (p. 80). Repeated practice of math facts, sight words, and decoding strategies leads to automaticity in math computation and in reading. Pattern recognition and perception occur when the brain is able to recognize examples of things already learned (p. 82). There are several models of pattern recognition, yet none explain why certain patterns/letters can be recognized even when parts are missing (p. 84).
Working memory holds the information that has been selected for further processing (p. 84). While working memory is limited, its capacity is thought to be expandable through strategies such as chunking, or grouping smaller bits of information into larger groups (p. 87). Rehearsal, repeating the information over and over (p. 88), and encoding, relating new information to information already learned (p. 89), are also strategies used to increase the time information is stored in working memory. Long-term memory is stored information that can be retrieved for a long time, or even forever. There are two types: episodic and semantic. Episodic memories are memories of specific events or occurrences (p. 91). Semantic memories are more of an educational type, as they are information learned, not experienced (p. 91).
#2: Baddeley, A.D. (1992). Working memory. Science, 255, 556-559.
Working memory consists of the “temporary storage and manipulation of the information necessary” to process what is stored (p. 556). There are three subcomponents: the central executive, the visuospatial sketch pad, and the phonological loop. The central executive is thought to be an attention-controlling component, which can be impeded by afflictions that affect memory, such as amnesia and Alzheimer’s disease (p. 556). It has been proposed that the central executive’s main purpose is to coordinate information from the slave systems (p. 557). The visual and spatial components can be separated, with different tasks requiring one or both (p. 558). The phonological loop is most commonly believed to serve “as a backup system for comprehension of speech under taxing conditions” (p. 558). “Working memory stands at the crossroads between memory, attention, and perception” (p. 559). Working memory allows memory, attention, and perception to work together to begin processing information so that it can eventually be moved into long-term memory.
#3: Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
This article looks at the capacity for processing information. Information measurement is similar to variance: as variance increases, so does the amount of information (p. 1). It is believed that there are two ways to increase the amount of information that is input: by increasing the amount of information per unit of time or by increasing the number of alternative stimuli (p. 2). In one experiment, listeners were given a number of tones to distinguish. When there were many tones to distinguish between, many mistakes were made, but when only a few tones were used, mistakes hardly occurred. When considering absolute judgments of multidimensional stimuli, experiments have shown that more variables increase the total capacity but decrease the accuracy for any individual variable (p. 7). Absolute judgment is seen to be limited by the amount of information, whereas immediate memory is limited by the number of items (p. 10). Memory span can also be increased by recoding, or regrouping chunks of information into larger chunks, since the memory span is fixed in how many chunks it can hold at a time (p. 10).
#4: Kalyuga, S. (2010). Schema acquisition and sources of cognitive load. In J.L. Plass, R. Moreno, & R. Brünken, Cognitive Load Theory (pp. 48-64). New York: Cambridge.
This chapter looks at three principles of Cognitive Load Theory: the direct initial instruction principle, the expertise principle, and the small step-size of change principle. According to the direct initial instruction principle, instruction that gives direct explanations and guidance through worked-out examples gives students a model to refer to until they are able to move the information into long-term memory; this is more effective than having students try to problem-solve on their own (p. 57). The expertise principle is similar in that worked-out examples help students, but it promotes determining the amount of assistance students may need and providing it while removing details that may be distracting and overload working memory (p. 58). The third principle, the small step-size of knowledge change, is based on the idea that providing too much too quickly can cause extraneous cognitive load (p. 59). This can be diminished by delivering small bits of information at a time, allowing the learner to process them before gradually adding more. Three different types of cognitive load were also discussed. Intrinsic cognitive load is experienced as learners perform activities that build connections between information being delivered and new knowledge in working memory (p. 52). Intrinsic cognitive load is often thought to be the same as germane load: “The sources of germane cognitive load are auxiliary cognitive activities designed to enhance learning…” (p. 53). Extraneous cognitive load is created by how tasks are arranged and presented rather than being needed for achieving the instructional goal (p. 54).
#5: Verenikina, I. (2008). Scaffolding and learning: its role in nurturing new learners. In P. Kell, W. Vialle, D. Konza, and G. Vogl, Learning and the learner: exploring learning for new times, (pp. 161-180). University of Wollongong.
This chapter looks at scaffolding and how it is used with young or new learners. It is noted early on that scaffolding does not give teachers clear and concise guidelines on how to use it effectively with various learners (p. 162). Most educators today are also using it as part of direct instruction, making it adult-driven rather than allowing students to experiment on their own. Scaffolding can be a strategy to support children in actively assisting in and self-regulating their own learning, enabling them to carry out a task that they could not do without help until they are able to carry it out on their own (p. 163). Scaffolding must occur in a way that allows the child to remain an active participant in his or her own learning without it all being adult-driven (p. 164). The teacher should be the facilitator of social interactions that allow students to learn on their own and from each other (p. 165).
Annotated Bibliographies for Module 4
#1: Driscoll, M. (2005). Psychology of Learning for Instruction (3rd ed.) (pp. 91-110). Boston, MA: Allyn and Bacon.
This section of the chapter looks more closely at long-term memory. Five different models of long-term memory are discussed: the network model, the feature comparison model, the propositional model, the parallel distributed processing model, and the dual-code model. The network model assumes that nodes exist in memory and are interconnected, forming a network (p. 92). Networks will differ based on the experiences of the learners (p. 92). The feature comparison model is based on the belief that memory is set up not as a network but as sets of defining features, with comparisons of overlapping features (p. 93). The propositional model holds that knowledge stored in memory is made up not of nodes but of propositions, or “a combination of concepts that has a subject and predicate” (p. 94). The parallel distributed processing model is based on the idea that memory processing occurs simultaneously across multiple cognitive operations rather than sequentially, so that a search task is distributed in a way that all pathways are searched at once (p. 95). The dual-code model proposes that two different memory systems are utilized, one for verbal information and the other for nonverbal information (p. 98). Once information is stored in long-term memory, it can be retrieved, or brought back to mind, for use in making a response or in understanding new information (p. 99). Retrieval can take two forms, recall or recognition (p. 99). There are also principles regarding how conditions at encoding and retrieval are related: the encoding specificity principle holds that “whatever cues are used by a learner to facilitate encoding will also serve as the best retrieval cues” (p. 101). We all forget things at times; common explanations include failure to encode, failure to retrieve, and interference (p. 102).
Failure to encode means the information was never learned; failure to retrieve means failing to access information previously learned; and interference occurs when something else gets in the way of effective retrieval (pp. 102-103). Implications for instruction include organizing instruction in a way that helps students properly encode it. Practice should be extensive and arranged with multiple cues so students are more likely to recall the material (p. 105). Additional instruction in study skills may also benefit students (p. 105).
#2: Clark, J.M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3, 149-210.
Dual coding theory is based on the concept that mental representations are connected to nonverbal and verbal symbolic nodes and retain properties of the events they are based on (p. 151). The verbal system includes visual, auditory, articulatory, and other verbal representations and is processed sequentially (p. 151). The nonverbal system includes images, sounds, actions, and sensory codes (related to emotions) and is processed simultaneously (p. 151). Two different kinds of connections are discussed: referential connections, which link the two systems, and associative connections, which join related representations within each system (p. 153). Processing assumptions maintain that verbal and nonverbal representations can be activated or depressed and will vary with conscious and nonconscious experiences (p. 154). Imagery processing is affected by instructions, the value of the material being studied, and the ability and tendency to use imagery (pp. 155-156). Education deals heavily with words: in books, in notes given by teachers and taken by students, and in multimedia presentations (p. 158). DCT is based on the premise that the “probability and ease of image arousal plays an important role” in text meaning (p. 158). Research has shown that when reading educational texts, imagery seems to occur in large quantities (p. 160). Research on text readability has shown that when text is written at a level at which students can accurately answer questions and uses words that are “picture forming,” students find the text more interesting and are better able to remember it (p. 161). The dual-coding explanation for the effect imagery has on learning states that using imagery and verbal codes combined is better than using verbal code alone (p. 165). Instructing students to create images of what they are to remember has been shown to help them recall more details and facts from the instruction (p. 166).
Research by Kulhavy and Kardash (1988) demonstrated that undergraduate education students reported that generating mental images and writing examples helped them remember instruction and texts they read (p. 168).
#3: Mayer, R.E., & Sims, V.K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86, 389-401.
This article outlines research identifying how a student’s ability to learn how a system works from words and pictures relates to the student’s spatial ability. Research supports the idea that presenting text and illustrations together positively affects student learning (p. 389). The dual-coding theory of multimedia learning refers to students using more than one sense modality to learn new concepts (p. 390). As students listen and take in verbal information, they begin to form verbal representational connections (p. 390). The same applies to pictures or video: as students watch and begin to process what they are seeing, they build visual representational connections (p. 390). As they make connections between the visual and the verbal, they begin to build referential connections (p. 390). Instructional methods that do not build all three types of connections are thought to be less successful than methods that utilize all three (p. 390). Research has also shown that students learn better when text and illustrations are presented next to each other rather than separately in textbooks, and, in computer-generated lessons, when animations and narrations are presented simultaneously rather than one after another (p. 391). Studies have shown that learning from animations and narrations that run together improved learning for students who had little prior knowledge of the concept but had no significant effect on the learning of students with some knowledge of, or expertise in, the concept (p. 391). The experiments presented in this research article support the belief that inexperienced students were better able to transfer the information presented when it was delivered visually and verbally at the same time rather than separately (p. 399). A new finding was that “the contiguity effect was strong for high-spatial ability students but not for low-spatial ability students” (p. 399).
#4: Pylyshyn, Z.W. (2003). Return of the mental image: Are there really pictures in the brain? Trends in Cognitive Science, 7, 113-118.
This article looks closely at the question of whether the brain really creates pictures. While it seems clear that we think in pictures or in sentences, more research attention has been spent on examining the brain’s ability to create mental pictures (p. 113). Pylyshyn believes that the main difference in reasoning lies primarily in what the thoughts are about rather than the form they take (p. 113). The picture theory of mental images claims that “mental images have a special picture-like (or depictive) format” (p. 113). One of the problems with research based on the picture theory is that when people are asked to imagine something, they tend to ask themselves what it would be like to see it (p. 113). When considering whether there are ‘functional’ pictures in the brain, picture theorists deny the claim (p. 114). Neuroscience research has led some to conclude that “images are displayed in visual cortex during mental imagery, much as visual information from the eye” (p. 115). However, simply finding that parts of the visual system are active when mental imagery occurs does not reveal the form of the representation (p. 115). The actual image that the eye takes in is limited to the field of view, whereas mental images are not (pp. 115-116). Cortical and mental images also differ in how they are accessed and interpreted (p. 116). Mental images seem to have spatial characteristics, bearing spatial relationships relative to the objects in the image (p. 117). Even when we close our eyes, we are still seeing images, owing to the spatial locations perceived through other senses (p. 117).
#5: Kelley, P., & Whatson, T. (2013). Making long-term memories in minutes: A spaced learning pattern from memory research in education. Frontiers in Human Neuroscience, 7, 589. doi:10.3389/fnhum.2013.00589
This article details a study in which a specific timed pattern is used to test whether encoding is possible in a very short time. Recent studies have shown that repeated stimuli spaced by periods without stimulation can lead to specific physical reactions that trigger long-term encoding (p. 1). Studies such as those by Ebbinghaus (1913) have shown that many short practice sessions, spaced appropriately, produce better learning outcomes than a single long session (p. 2). Applying the correct time scales and patterns for educational use has raised many questions. Most classes are 45-90 minutes in length for 18 to 36 weeks, yet research has demonstrated that short, repetitive teaching sessions spaced with short periods of no stimulus are more effective than lengthy sessions (p. 2). This study uses three periods of stimuli separated by two ten-minute periods of no stimuli (p. 3). It took place with secondary science teachers who had received specific instruction in how to deliver the lessons as well as help with planning them. Instruction differed from normal in that teachers repeated the same instructional content three times in the same session with minor variations. During the two ten-minute breaks, students participated in physical activities such as modeling with clay, juggling, and shooting hoops (p. 4). High-stakes testing scores indicated that learning was effective. Teachers had mixed reactions, with some liking the lessons and results but others fearing administrative observations, since the instruction did not resemble typical teaching methods (p. 4). Student response was very positive, with many feeling they were able to attend better to the lessons and learn more rapidly (p. 4).
Based on the results, it was surmised that one hour of instruction using Spaced Learning improved learning significantly more than many hours of typical teaching (p. 5).
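The timed pattern the study describes can be sketched as a simple schedule builder (only the three repetitions and the two ten-minute distractor breaks come from the article; the 20-minute instruction length is an assumed placeholder):

```python
# Sketch of the spaced learning pattern: three instruction blocks
# separated by two ten-minute distractor breaks (clay modeling,
# juggling, shooting hoops). Instruction length is illustrative.

def spaced_schedule(instruction_min=20, break_min=10, repetitions=3):
    """Return (activity, minutes) pairs for one spaced-learning session."""
    schedule = []
    for i in range(repetitions):
        schedule.append((f"instruction {i + 1}", instruction_min))
        if i < repetitions - 1:          # no break after the final block
            schedule.append(("distractor break", break_min))
    return schedule

for activity, minutes in spaced_schedule():
    print(f"{activity}: {minutes} min")
```

With the assumed 20-minute blocks, the whole session fits in 80 minutes, consistent with the article’s claim that roughly one hour of spaced instruction can substitute for many hours of conventional teaching.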
Annotated Bibliographies for Module 5
#1: Driscoll, M. (2005). Meaningful learning and schema theory. Psychology of Learning for Instruction (3rd ed.) (pp. 111-152). Boston, MA: Allyn and Bacon. (e.g. Chapter 4)
Ausubel recognized two types of learning in classrooms: reception learning, in which information is presented in its final form and students are told the information, and discovery learning, which requires the learner to pair the instruction with preexisting knowledge and to discover new knowledge (p. 115). Rote learning is believed to be simple memorization with no real connection to prior knowledge, while meaningful learning occurs when a student applies new knowledge to prior knowledge to make connections (p. 116). Three possible ways that information is likely to be attached to prior knowledge are in a subordinate way (organized under more general prior knowledge), in a superordinate way (organized so that prior knowledge is subordinate to the new knowledge), and in a coordinate way (on the same level as prior knowledge) (pp. 118-123). Students must be ready to learn developmentally and cognitively (p. 124). Students have demonstrated that they tend to remember the gist when given a list of sentences to remember, and material with a common theme is easier to remember as well (pp. 127-128). Having prior knowledge of the material also positively affects recall, as does the perspective taken on the material (p. 128). Accretion is “roughly equivalent to fact learning” (p. 135). Tuning is the evolution of existing knowledge to become consistent with experience, and restructuring is the creation of new structures to replace formerly learned information (p. 136). Prior knowledge should be specifically activated in conjunction with learning new information, using strategies such as advance organizers, which are similar to KWL charts without the ‘to be learned’ information (p. 139). Instructional materials must be meaningful and make sense to the students, or it will be impossible for the students to use prior knowledge to increase their learning (p. 143).
#2: Driscoll, M. (2005). Situated Cognition. Psychology of Learning for Instruction (3rd ed.) (pp. 153-184). Boston, MA: Allyn and Bacon. (e.g. Chapter 5)
Situated cognition is based on the concept that “what people perceive, think, and do develops in a fundamental social context” (p. 157). Knowledge as lived practices is based on the idea that knowledge is gained through the experiences of the learner (p. 158). In learning by participation, a student learns through interaction or participation with others in a common practice (p. 159). There are two processes of situated cognition. Legitimate peripheral participation defines the ways that learners come to belong to a community of practice (p. 165). Cognition as semiosis is based on the idea that knowledge is built upon signs, which are determined both by the world and by the individual learner (p. 170). Interaction between people and their surroundings produces signs, which can stand for various things such as language or math and become generally known and accepted within a particular group or culture, yet may be completely different from another group’s signs (pp. 172-173). Situated cognition can be implemented through apprenticeships, anchored instruction (similar to problem-based learning), and learning communities in which students and teachers work collaboratively during instruction (pp. 174-176).
#3: Mayer, R.E., & Pilegard, C. (2014). Principles for managing essential processing in multimedia learning: segmenting, pre-training, and modality principles. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 316-344). New York: Cambridge. (e.g. Chapter 13)
Essential overload occurs when a concise multimedia lesson is presented at a fast rate and contains complicated material (p. 316). Three multimedia design methods help to minimize essential overload: the segmenting, pre-training, and modality principles (p. 316). When information is presented too quickly and is too complicated, it can exceed a student’s cognitive capacity, or the total amount of information that can be processed by the auditory and visual channels of the student’s working memory (p. 317). The segmenting method is just what it sounds like: information is presented to students in smaller segments. The pre-training method helps students learn better from a multimedia message by teaching them the names and characteristics of the main concepts beforehand (p. 317). The modality method presents information auditorily rather than in print (p. 317).
Discovery learning is the active process of inquiry-based instruction in which learners build on prior knowledge through experience and search for new information and relationships based on their interests (Coffey, n.d.). The theory that discovery learning encourages students to actively participate in the learning process by exploring concepts and answering questions was also examined by John Dewey, Jean Piaget, and Lev Vygotsky. Coffey (n.d.) indicates that, according to her research, the three main characteristics of discovery learning are “exploration and problem-solving; student-centered activities based on student interest; and scaffolding new information into students’ funds of knowledge”. Discovery learning can be conducted in several different ways, including experiments, problem-based learning, simulation-based learning, and webquests. In discovery learning, students are active and the learning is hands-on; the learning process is more important than the final product; feedback is necessary for improvement; and collaboration and discussion are necessary to develop a deeper understanding of the information presented (Coffey, n.d.).
Annotated Bibliographies for Module 6
#1: Paas, F. & Sweller, J. (2014) Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 27-42). New York: Cambridge. (e.g. Chapter 2) Paas and Sweller state that evolutionary theory can be used to guide assessment of the effectiveness of instructional procedures (p. 28). There are many categories of knowledge, but only those requiring different types of instructional procedures are significant (p. 28). There are two types of knowledge: biologically primary (evolved over many generations and easily acquired) and biologically secondary (knowledge we need for cultural reasons, generally taught in schools) (p. 28). Five basic principles describe the processing characteristics of human cognitive architecture. The information store principle is based on the idea that, just as the genetic code stores the large amounts of information that govern biological life, long-term memory stores large amounts of information (p. 30). The borrowing and reorganizing principle states that we borrow or imitate what others do; information gained by imitating and listening does not need to be formally taught, but information gained from reading must be (p. 31). As we imitate or listen to others, we learn. Ultimately, the amount of knowledge that we hold in long-term memory could not be acquired quickly and efficiently without the borrowing and reorganizing principle (p. 31). The randomness as genesis principle concerns problem solving, which does not have to be taught: the learner must define the problem, determine the next steps to take, and test whether they work (p. 32). The narrow limits of change principle addresses the two limitations of working memory. “Miller (1956) indicated that working memory is able to hold only 7 elements of information” (p. 33). Working memory is also limited by the amount of time that it can hold information without rehearsal (p. 33).
The environmental organizing and linking principle centers on the ability to link stored memory with the external environment, and how that differs from when new information is presented (pp. 33-34). Working memory can pull information already learned from long-term memory, with no limits on duration or on amount (p. 34). The five principles help to confirm that large amounts of information need to be organized before they can be processed effectively, otherwise the learner can only handle small amounts of information at a time (p. 35).
Understanding information occurs when that information can be processed in working memory which can be limited when dealing with all new information (p. 36). Information is organized and begins to move to long-term memory as the learner studies the information which leads to knowledge acquisition (p. 36).
Three types of cognitive load are: intrinsic (the load caused by the complexity of the information, determined by its element interactivity), extraneous (load imposed by inappropriate instructional design), and germane (the working memory resources devoted to dealing with intrinsic load), which is the preferred, most effective cognitive load (pp. 37-38). It is believed that reducing extraneous cognitive load will free working memory and yield a greater germane cognitive load (p. 38). It is also suggested that if the intrinsic load is small, learning may still be effective even with a large extraneous load (p. 38).
#2: Ayres, P. & Sweller, J. (2014) The split-attention principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 206-226). New York: Cambridge. (e.g. Chapter 8) The split-attention principle applies to instances where materials require students to split their attention between multiple sources of information, increasing their extraneous cognitive load and negatively impacting their learning (p. 206). Instruction should eliminate the need for students to mentally combine multiple sources of information; this reduces extraneous load and frees up mental resources for learning (p. 206). Split-attention only occurs when both sources are necessary for the instruction; if not, the second source is redundant and does not cause split-attention (p. 208). One way to avoid split-attention with a diagram is to include the descriptions of the components within the diagram itself. Sweller, Chandler, Tierney, and Cooper (1990) found that when explanatory notes were embedded into diagrams, the split-attention effect was minimized (p. 210). Mayer (1989) found that when using illustrations, labelling them was more effective than leaving them unlabelled (p. 211). The spatial contiguity principle is based on Mayer’s (2001) finding that “students learn better when corresponding words and pictures are presented near rather than far from each other” (p. 212).
All instructional materials that use more than one source of information should be evaluated for the split-attention effect (p. 212). Element interactivity must be considered as well; this is the number of elements that are presented and must be processed at the same time in working memory (p. 213). Material with low element interactivity is easier to learn than material with high element interactivity, since it places less strain on working memory (p. 213). Temporal separation occurs when different sources of information that must be integrated before they can be understood are separated in time (p. 215). Effective temporal integration reduces the need for mental integration, which in turn reduces extraneous cognitive load (p. 215). The temporal contiguity principle applies split-attention theory to spoken text; presenting words and pictures simultaneously rather than successively helps students learn better (p. 215).
Methods to prevent split-attention include placing graphics with text in a vertical format, directing attention using techniques such as color coding, and embedding additional information via hypertext or ‘pop-ups’ (pp. 218-219).
#3: Kalyuga, S. & Sweller, J. (2014) The redundancy principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 247-262). New York: Cambridge. (e.g. Chapter 10) Chapter 10 deals with the redundancy principle, or effect, which states that redundant information – the same information presented concurrently in different forms – interferes with learning (p. 247). One of the two variations of the effect occurs when the exact same material is presented in two different forms, such as a diagram accompanied by a description of the diagram, or an auditory version of printed text (p. 248). The other variation compares a full text with a summarized version of it (p. 248). When a learner is presented with the exact same information twice in different forms, working memory must process the duplicated material, adding load and reducing the amount of learning that can take place. Miller (1937) examined the redundancy effect as she studied the use of pictures with young students learning to read, and found that pictures paired with both the printed word and an auditory example of the word were redundant (p. 250). Redundancy with actual equipment concerns how use of the equipment itself (mainly with training manuals) can detract from the instruction if the manual is arranged with diagrams alongside explanations of how to use each component (p. 253). This is most often present when the material is high in element interactivity (p. 253). Written/spoken text redundancy is present when information is presented in identical narrated and written text, similar to closed captioning (p. 254). Many students will attempt to read along with the text, which may overwhelm them, particularly if they are struggling readers devoting much effort to decoding the words. Written/spoken text redundancy is especially a concern for those learning a second language.
Two different studies show that listening rates may lag behind the reading rate of second language learners due to the amount of strain placed on working memory in decoding the text (p. 256). Instructional implications indicate that if a source of information is intelligible on its own, then no additional information sources should be added to it, unless there is low element interactivity in which there will be no significant redundancy effect (p. 258).
#4: Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4, 295-312. In this article, Sweller looks at factors that help to determine the difficulty of learning specific materials (p. 295). Schemas are examined for how they determine the way new information is handled, based on how newly presented information is manipulated to connect with previous knowledge (p. 296). When considering schemas, it is important to remember that they develop gradually over time rather than all at once (p. 297). Two functions of learning are discussed: the storage of schemas in long-term memory, which reduces the strain on working memory, and the automation of those schemas so they can be processed without conscious effort (pp. 297-298).
Goal-free strategies and worked examples have been shown to help reduce extraneous cognitive load and assist in schema acquisition (p. 301). In many experiments with worked examples, it has been demonstrated that integrating the example with its explanations is more beneficial than placing the explanations separately (p. 302). It has also been demonstrated that an explanation is not always necessary; if a diagram is self-explanatory, then an explanation is redundant (p. 303).
#5: Clark, R. (2002). Six Principles of Effective e-Learning: What Works and Why. Retrieved February 19, 2017, from https://www.learningsolutionsmag.com/articles/384/six-principles-of-effective-e-learning-what-works-and-why/pageall. This article looks at e-Learning, or instruction that is delivered digitally. Clark (2002) states that there are three important elements of any e-Lesson: the instructional methods (examples, exercises, simulations, and analogies), the instructional media (computers, workbooks, instructors, or any other delivery agent), and the media elements (text, graphics, and audio). Clark (2002) examined the multimedia principle and the research done on it to determine that adding graphics that support the instruction to words can improve student learning. Based on research surrounding the contiguity principle, Clark agrees that placing text near graphics improves learning by reducing the split-attention effect (Clark, 2002). When examining the modality principle, she surmises that explaining relatively complex or unfamiliar graphics using audio can improve learning; yet, examining the redundancy principle, she notes that explaining graphics with both audio and text can negatively impact learning (Clark, 2002). In looking at the coherence principle, Clark (2002) agrees that the overuse of graphics, text, and audio can hinder learning rather than aid it by creating a “Las Vegas approach” in which glitz and games distract the learner. Clark (2002) also examined the personalization principle, citing research showing that when learners feel a personal connection to the instruction, through the type of language chosen or a problem they can see themselves facing, the learning is more significant.
Annotated Bibliographies for Module 7
#1:Mayer, R. E. (2014) Introduction to multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 1-26). New York: Cambridge. (e.g. Chapter 1)
Why use multimedia learning? It’s generally believed that people learn more from words and pictures together than from words alone (p. 1). Multimedia can mean various things: experiences with handheld devices, live performances, online lessons, and even “chalk-and-talk presentations” (p. 2). Mayer (2014) defines multimedia as using words (spoken or printed) along with pictures (p. 2). There are different views of multimedia: the delivery media view requires two or more delivery devices; the presentation mode view requires printed or spoken text and pictures; and the sensory modalities view requires auditory and visual components (p. 3).
The multimedia principle gives us a rationale for studying multimedia learning: when presented the right way, people learn more from words and pictures combined than from words alone (p. 6). Multimedia learning should be designed with the learner in mind (p. 6). The quantitative rationale is based on the idea that information presented over two channels can exceed what can be presented over one, and that delivering the same information twice in two different ways gives the learner more exposure in the same amount of time (pp. 6-7). The qualitative rationale is based on the idea that words and pictures complement each other; even though they differ qualitatively, learners’ understanding is greater because of those differences (p. 7). The three metaphors of multimedia learning are: response strengthening (strengthening or weakening connections); information acquisition (adding information to memory); and knowledge construction (building a coherent mental structure) (pp. 17-19). The two goals of multimedia instruction are remembering (reproducing or recognizing material that has been presented) and understanding (the ability to use the material presented in everyday situations) (p. 20).
#2: Mayer, R. E. (2014) Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 43-71). New York: Cambridge. (e.g. Chapter 3)
While it is maintained that learners benefit from having words and pictures presented together rather than alone, designers must be aware that adding pictures to words doesn’t guarantee learning (p. 44). A multimedia instructional message contains words and pictures that are meant to improve learning and can be delivered through various mediums (p. 44). Retention and transfer are two ways to measure learning (p. 44). There are three assumptions of a cognitive theory of multimedia learning: dual channels (separate channels are used for processing visual and auditory material); limited capacity (there is a limited amount that we can process at a time); and active processing (we must actively take in, sort, and integrate incoming information) (pp. 47-51). There are three cognitive processes required for active learning: selecting, organizing, and integrating (p. 51). There are three memory stores in the cognitive theory of multimedia learning: sensory memory, which holds visual or auditory images for a short time; working memory, which temporarily holds information and manipulates it; and long-term memory, which permanently stores the information (pp. 52-53).
There are five processes to be considered: selecting relevant words, selecting relevant images, organizing selected words, organizing selected images, and integrating images and words with prior knowledge (p. 54). There are five forms of representation: words and pictures in the multimedia presentation, acoustic and verbal representations in sensory memory, sounds and images in working memory, verbal and pictorial models also in working memory, and knowledge in long-term memory (p. 59).
#3:Schnotz, W. (2014) Integrated model of text and picture comprehension. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 72-103). New York: Cambridge. (e.g. Chapter 4)
There are two forms of representation: descriptions (texts and symbols), which are better able to express abstract knowledge, and depictions (pictures, drawings, and paintings), which have the advantage of being informationally complete (pp. 76-77). Atkinson and Shiffrin (1971) distinguish three memory systems that each have different functions (p. 80). Sensory registers are channels that carry information to working memory (p. 80). Working memory processes the information from the senses but has a limited capacity (p. 80). Long-term memory is where information is sorted and stored as prior knowledge (p. 81). There are four assumptions in the integrated model of text and picture comprehension. First, text and picture comprehension begin in the sensory registers (p. 83). Second, pictorial and verbal information travels through visual and auditory channels to working memory (p. 83). Third, two subsystems further process semantic information in working memory: a descriptive subsystem and a depictive subsystem (p. 83). Fourth, picture and text comprehension are active processes of building coherent knowledge (p. 84).
#4: Mayer, R. E., & Anderson, R. B. (1991). Animations Need Narrations: An Experimental Test of a Dual-coding Hypothesis. Journal of Educational Psychology, 83, 484-490.
Initial studies of multimedia learning showed that words and pictures presented in a coordinated way yielded a more effective learning experience than words alone (p. 484). In this study, a new form of picture was examined – the animation. In the first experiment, students who were shown words with pictures scored 50% better than those shown words before pictures (p. 487). It was also shown that students who viewed an animation with simultaneous narration learned differently than those given verbal comments immediately followed by an animation (p. 488). In the second experiment, it was found that students shown animations with narration produced 50% more solutions to given problems than those given words before the animation, and that both groups showed equal recall of verbal statements (p. 488). The conclusion drawn from this research is that while animations may be powerful in presentations, an animation without narration can be the same as no instruction at all, as the learners will take nothing away from it (p. 490).
This article looks at the multimedia principle as studied by Richard Mayer. It is based on the belief that words and pictures presented together benefit learning more than words or pictures alone. Cobb (n.d.) goes on to describe the two main channels for processing information, the auditory channel and the visual channel, and how their combined use helps students to process more information and hold that information in memory longer.
Five types of graphics are examined as well. The decorative graphic is there for decoration only and can be a distraction for learners (Cobb, n.d.). Representational graphics are generally a single photograph with an explanation (Cobb, n.d.). Relational graphics are typically graphs or other charts that show a relationship between two or more examples (Cobb, n.d.). Organizational graphics show how things are related, and interpretive graphics help to make abstract or intangible objects more concrete (Cobb, n.d.). Cobb (n.d.) points out that studies have shown the multimedia principle to be valid, and he notes that it continues to evolve today with all of the innovations in technology.
Annotated Bibliographies for Module 8
#1:Kalyuga, S. (2014) The expertise reversal principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 576-597). New York: Cambridge. (e.g. Chapter 24)
In this chapter we learn that many multimedia design recommendations do not account for the prior knowledge of the intended audience; most principles and recommendations are tested only on inexperienced learners (p. 576). Research into how prior knowledge affects the required instruction suggests that designs should differ between students with no prior knowledge and those with prior knowledge (p. 577). The expertise reversal principle was initially treated as a type of redundancy principle, in that learners with prior knowledge found information that was beneficial to inexperienced learners to be redundant (p. 577). Research has demonstrated that in various situations, students with prior knowledge did not need both the graphic and the narration in order to learn; for many, the graphic alone was enough and the narration was redundant (p. 578). The expertise reversal effect occurs when instruction with a high level of support works best for novice learners while a low level works poorly, and the same high level of support works poorly for knowledgeable learners while the low level works best (p. 588). The expertise reversal principle states that most of the principles of multimedia learning are dependent on the prior knowledge of the student, and that those that help students with no prior knowledge may in fact hinder those with prior knowledge (p. 593).
#2:Wiley, J., Sanchez, C. A., & Jaeger, A. J. (2014). The individual differences in working memory capacity principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 598-619). New York: Cambridge. (e.g. Chapter 25)
In this chapter, we look at how working memory capacity affects multimedia learning. Multimedia comprehension is essentially the combination of words and images in instruction to aid learning (p. 598). The limited capacity of working memory matters because a learner must attend to many information sources at once, which may create a heavy load for some learners (p. 599). Research has examined several examples of individual differences in working memory; one study found that learners with high working memory capacity are less distracted by irrelevant information (p. 604). The implication for cognitive theory and instructional design is that individuals with higher working memory capacity are less distractible and better able to direct their attention to comprehension processing, which means that designers must be aware of the need to support learners with lower working memory capacity in the areas of both attention and processing (p. 610). Few research studies have been completed in this area, so the limitations are not yet fully known (p. 611).
#3:Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9(3), 103–119.
Learning styles refers to the concept that people learn information in different ways, such as auditorily, visually, and kinesthetically (p. 106). A discussion of learning styles must consider not only the written materials and research but also the booming commercial activity, particularly products geared toward education that help educators assess the learning styles of their students in order to direct their instruction (p. 106). The concept of learning styles is becoming more prominent in educational psychology textbooks and instruction, and students of psychology and education are being taught to identify the individual learning styles of their students and tailor instruction to those styles (p. 106).
The concept of learning styles became popular in the 1940s and assumed that people could be grouped into static categories based on how they self-report learning best or being most comfortable (p. 107). The authors were unable to substantiate the use of learning style assessments for instructional purposes (p. 117).
#4: Plass, J. L., Kalyuga, S., & Leutner, D. (2010). Individual differences and cognitive load theory. In J. L. Plass, R. Moreno, & R. Brünken (Eds.), Cognitive Load Theory (pp. 65-87). New York: Cambridge.
The characteristics of learners can differ in many ways, including preferences for format, preferred modalities, environmental conditions, cognitive abilities, and overall intelligence (p. 65). This chapter examines the hypothesis that for instruction to be effective, the learning environment must match the learner’s individual differences (p. 66). Individual differences among learners include intelligence and prior knowledge (p. 66). The expertise reversal effect concerns how prior knowledge affects learning. Research by Kalyuga (2005) and Mayer (2001) shows that one of the most important individual differences designers must consider is the prior knowledge of the learners (p. 67). The expertise reversal effect has been shown to occur when a method that works well for those with no prior knowledge adds to the cognitive load of, or becomes ineffective for, those with prior knowledge (p. 68). Research has shown that while learners with no prior knowledge may need an image plus a narration or text explanation, for those with prior knowledge a graphic alone may be more effective, sparing them from reading through what is already known (p. 68).
In this article, four different learning styles (visual, aural, verbal, and kinesthetic) are discussed. Learning styles are believed to describe how a student learns best. It is noted that while many use these four styles, other theories use different sets of descriptors for how students process and organize information (Chick, n.d.). It is also noted that according to Coffield (2004), there are well over 70 different learning-styles schemes in use today, being sold to educators around the world (Chick, n.d.).
While learning styles and inventories to assess them are very popular in education settings right now, there is little to no evidence supporting the practice of matching activities to the style of learning a student prefers (Chick, n.d.). The author also examines why learning styles continue to be popular despite the lack of supporting research, concluding that the concept’s popularity stems from people’s need to identify themselves as a specific type and the desire to be recognized as individuals with differing characteristics (Chick, n.d.). It is also thought to be popular because of its resemblance to metacognition, or thinking about one’s own thinking (Chick, n.d.).
Annotated Bibliographies for Module 10
#1: Johnson, C. & Priest, H. A. (2014). The feedback principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 449-463). New York: Cambridge. (e.g. Chapter 19) The feedback principle is based on the idea that students with no prior background learn better with feedback that explains than with feedback that addresses accuracy only (p. 449). Giving feedback helps students evaluate their answers, figure out where discrepancies lie, and correct faulty thinking (p. 449). Explanatory feedback tells students why their answer was correct or incorrect, whereas corrective feedback only lets them know whether they are correct or incorrect (p. 450). The feedback principle is grounded in the cognitive theory of multimedia learning, in which three types of processing are examined: extraneous processing (caused by poor instructional design that doesn’t serve the educational goal), essential processing (required for mentally representing the information taught in working memory), and generative processing (attempting to make sense of the important information from the lesson) (pp. 450-451). Three boundary conditions have been identified: the feedback should always prompt active processing, other design principles must be considered in the development of the lesson, and individual differences among students must be considered (pp. 455-457).
#2: Scheiter, K. (2014). The learner control principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning. (pp. 487-512). New York: Cambridge. (e.g. Chapter 21) The learner control principle is based on the idea that allowing students some control over the instruction (pace, sequence, display format) aids their learning, particularly when they have high levels of prior knowledge and additional support (p. 487). In this text, interactivity is described as having control over a single aspect of the lesson, whereas learner control is described as having control over multiple facets of the lesson (p. 488). It is commonly believed that learner control should aid student learning because it promotes active and constructive processing; however, multiple research projects have found no evidence in favor of allowing students to have control over their instruction (pp. 491-492). Another belief is that learner control supports the motivation to learn, which may increase learning, yet the evidence for this is weak (p. 494). The third belief is that learner control helps students develop self-regulatory learning skills, which has been debunked (p. 495). The fourth belief is that learner-controlled instruction allows students to adapt the lesson to their own goals and objectives (individualized instruction), which has also been shown to be incorrect (p. 496).
#3: Moreno, R., & Mayer, R. E. (2005). Role of Guidance, Reflection, and Interactivity in an Agent-Based Multimedia Game. Journal of Educational Psychology, 97(1), 117-128. This article details a research project investigating whether the guidance, reflection, and interactivity of games affect student learning. The authors note that adequate research has not been done on how games should be designed in order to promote deep learning in students (p. 117). For their research, interactivity was defined as having the learner give a solution to a given problem (p. 118). Reflection was defined as asking learners to explain their correct answers – to tell why they were correct (p. 118). Feedback was defined as letting the learner know whether their answer was correct (p. 118). Guidance was defined as having part of the game explain why an answer was correct or not (p. 118). In the discussion of instructional methods, it is noted that a designer must decide how much support, or guidance, will be given throughout the instruction (p. 118). In guided discovery, students are guided or given scaffolded support, whereas in pure discovery, students are provided with minimal information and support (p. 118).
#4: Kalyuga, S. (2007). Enhancing Instructional Efficiency of Interactive E-learning Environments: A Cognitive Load Perspective. Educational Psychology Review, 19, 387-399. This article examines ways to enhance the efficiency of interactive e-learning lessons within a cognitive load framework. It is noted that while highly motivated students can learn with any method of instruction, the point of investing in interactive lessons delivered through high-tech devices is to help students learn more efficiently and without undue mental strain (p. 388). Particular attention must be paid to ensuring that interactivity does not cause extraneous load; poorly designed feedback messages, redundant feedback information, and manipulation interactivity can all add extraneous load (p. 396). It was found that most of the approaches and techniques used in current e-learning lessons can both decrease and increase the cognitive load of students (p. 397). When designing interactive lessons, the cognitive characteristics of the learners must be taken into consideration in order to increase the level of learner control (p. 397). Students with low prior knowledge appear to perform better with direct guidance than with more general support, and lessons presented at varying rates can help ensure that working memory is not exceeded (p. 398).
This article looks at the importance of interactivity in eLearning and gives suggestions for ensuring that a lesson is successful. Lessons should be relevant and on-topic. When the material is of good quality and contains meaningful information, students are more apt to stay motivated and attentive. When designing, the information that students may deem relevant must be considered (Pappas, 2014). In order for a lesson to have true interactivity, the student must be allowed to explore and manipulate within the lesson. Pappas (2014) believes that there must be learner control within the lesson. The scenarios within the lesson must be real-life scenarios that the students are able to relate to (Pappas, 2014). Corrective feedback in the form of quizzes or assessments should be integrated at the end of each lesson (Pappas, 2014). Other suggestions made by Pappas include tapping into the students’ emotions, encouraging collaboration among groups, and making the lesson aesthetically pleasing.
Annotated Bibliographies for Module 11
#1: Keller, J. M. (1987). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2-10.
In this article, John M. Keller looks more closely at how student motivation can be influenced through the intentional design of instruction. The ARCS Model stands for attention, relevance, confidence, and satisfaction. It was developed in response to educators' desire for a more effective way to understand students' motivation to learn and to help identify and solve problems with student motivation (p. 2). The ARCS Model has three distinctive features: it contains four categories, it includes strategies to enhance the motivational appeal of instruction, and it uses a systematic motivational design process (p. 2). Before the development of the ARCS Model, no other theories or models investigated how to design instruction that stimulates students' motivation to learn (p. 2). Of the four conditions, attention is an element of motivation that must be gained and sustained in order for learning to occur; relevance is the students' ability to understand the importance of the instruction to themselves now or in the future; confidence is the students' belief that they can succeed; and satisfaction is how good students feel about the accomplishments they have made (pp. 3-6).
#2: Fredrickson, B.L. (2001). The Role of Emotion in Positive Psychology: The broaden-and-build theory of positive emotions. American Psychologist, 56, 218-226.
In this article, Barbara Fredrickson examines positive emotions. The broaden-and-build theory is concerned with how positive emotions such as joy, interest, love, and contentment present themselves and what function they perform in students' lives (p. 1367). These and other positive emotions are believed to open individuals up to further positive actions, such as play, the urge to savor and explore, and returning positive feelings to others (p. 1367). In previous research, negative emotions have been examined far more than positive emotions (p. 1368), and the distinctions among positive emotions, sensory pleasure, and positive moods have been overlapped and blurred (p. 1368). More current research suggests that positive emotions aid attention, cognition, and action, and that they help students build physical, social, and intellectual resources (p. 1369). Research has provided evidence that positive emotions help to undo the lingering effects of negative emotions (p. 1370) and help to encourage psychological resiliency (p. 1371). Positive emotions help students build up their own personal resources and enhance their overall well-being (pp. 1372-1373).
#3: Isen, A. M., Nowicki, G. P., & Daubman, K. A. (1987). Positive affect facilitates creative problem solving. Journal of Personality & Social Psychology, 52(6), 1122-1131.
In this article, the authors examine how positive affect influences creativity in students, introducing a series of studies and their results. In the first study, it was found that students who had received a small treat tended to categorize stimuli more inclusively than those in the control group on both a sorting task and a rating task (p. 1122). In a second study, positive affect was induced by a small snack, a small gift, or a short comedy clip, and the results indicate that students receiving the reward gave more word associations than the control group (p. 1122). Results from four experiments conducted by Isen, Daubman, and Nowicki indicate that positive affect (brought about by a short comedy clip or candy) can increase creative responses on creative tasks, whereas intentionally induced negative affect had no comparable effect (p. 1128). This implies that everyone should be regarded as capable of creativity and that an effort should be made to bring that creativity out (p. 1129).
#4: Um, E., Plass, J. L., Hayward, E. O., & Homer, B. D. (2012). Emotional design in multimedia learning. Journal of Educational Psychology, 104(2), 485-498.
The main question posed in this article is whether multimedia lessons can be designed to promote positive emotions and, if so, whether doing so improves student learning (p. 485). Positive academic emotions are created by students' judgments about their surroundings and/or situations and are initiated by their reaction to and interaction with a stimulus (p. 485). Research by Pekrun et al. considers two dimensions of emotion thought to affect performance: the valence of the emotion (positive or negative) and its activation (activating or deactivating) (p. 485). Emotions such as happiness, hope, anxiety, and anger are thought to be activating, while emotions such as satisfaction, calm, and hopelessness are considered deactivating (pp. 485-486). Design features such as colors, shapes, and sounds in multimedia materials can have an effect on a student's affect, but few theories consider how emotions affect learning (p. 486). The research reported in this article indicates that applying emotional design principles when creating learning materials can induce positive emotions, which in turn aid cognitive processing and learning (p. 495).
#5: Mayer, R. E. (2003). Social cues in multimedia learning: Role of speaker’s voice. Journal of Educational Psychology, 95(2), 419-425.
This article looks at whether there is a voice effect, that is, whether the type and style of voice narration affects the effectiveness of multimedia lessons. As with positive emotions, there is little research on how voice aids deep learning in multimedia lessons (p. 419). Social agency theory is introduced, based on the belief that social cues in multimedia messages can prime a student's social conversation schema, so that students act as if they are in a conversation with another person rather than simply receiving information from multimedia (p. 419). The cooperation principle is based on the idea that students will assume the speaker in a multimedia lesson is attempting to make sense by being informative, accurate, concise, and relevant (p. 419). The experiment conducted by Mayer, Sobko, and Mautone offers a new principle for the multimedia design of instruction: the voice principle, based on evidence that students learn better when narration is spoken in an unaccented human voice rather than in a machine-like voice or a human voice with a heavy accent (p. 424).
This article examines whether the authors were able to replicate previous research showing that the emotional design of multimedia lessons can promote positive emotions in students and thereby aid comprehension and transfer of information (p. 1). This research also focused in particular on the role that the colors and shapes used in a lesson play in learning (p. 1). The results of the first study indicate that several of the findings of Um et al. (2011) were replicated, but not all of them (p. 7). The authors were able to provide evidence that a positive emotional design using a combination of color and shape can create a positive emotional state that is maintained throughout the lesson (p. 7). In the second study, only the design factor of shape was shown to produce positive emotions (p. 10).
Annotated Bibliographies for Module 13
#1: Derry, S., Sherin, M. G., & Sherin, B. (2014). Multimedia learning with video. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 785-812). New York: Cambridge. (Chapter 32)
This chapter looks at how and what teachers learn from one another by interacting socially while watching and analyzing classroom videos (p. 785). Skepticism about video learning stems from the knowledge that people can attend to only a small amount of detail at a time: viewers of content-rich videos may overlook some areas, may be easily distracted from important details, and may form strong impressions too quickly (pp. 786-787). One point that gives optimism for using video is that, because it is not live, it can be watched repeatedly, giving learners an opportunity to attend to details that may have been missed during the first (or even second) viewing (p. 787). Standard paradigms used in video-based professional development include video clubs, problem-solving cycles, lesson studies, problem-based learning, and cognitive flexibility theory approaches (pp. 788-791). Several key dimensions that are essential but vary among video-based learning settings include the technological infrastructure, the video content, the task structure, and the social structure (pp. 792-793). As teachers view video lessons, changes in knowledge occur both in the video-viewing system and in the individuals themselves (pp. 796-797).
#2: Rouet, J., & Britt, A. (2014). Multimedia learning from multiple documents. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 813-841). New York: Cambridge. (Chapter 33)
This chapter looks at learning from more than one document or source of information.
Due to the widespread use of the internet and other networked information systems, it has become more common for students to have access to more than one source of information when conducting research or general knowledge searches (p. 813). As learners use multiple documents, they must keep in mind the interplay of source and content information, as well as cohesion and coherence across documents. Two principles must be considered when learning from multiple documents. The sourcing principle states that considering the source of a document contributes to a learner's understanding of it, and it consists of three main processes: locating and evaluating source features, making use of the source information when interpreting the contents of the document, and keeping in mind the connections between the source and the content (p. 823). The other principle, the multiple-text integration principle, states that learning from multiple documents can lead to a deeper understanding of the material (p. 827).
#3: Clark, R. C. (2014). Multimedia learning in e-courses. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 842-881). New York: Cambridge. (Chapter 34)
This chapter looks at learning in e-courses. The use of e-learning continues to grow yearly: the push for lower costs in education, improvements in computers and mobile devices, increased internet access within learning communities, better tools for generating e-courses, and demand from a wider geographical area have all increased the demand for e-courses (p. 843). The terms e-learning and online learning are used interchangeably to describe learning that takes place using digital technology, such as tutorials, readings, simulations, serious games, and applications that support performance (p. 843). Most online learning has been designed as self-paced/self-study, or asynchronous, learning (p. 844). Synchronous courses are teacher-led and are attended by multiple people from multiple locations (p. 844). Mobile e-learning uses mobile devices and is currently employed for reference and educational purposes (p. 845). There are four e-learning research streams: media comparison, value-added, interactional, and unique affordances (p. 847). Many educators are now beginning to use blended learning, a combination of asynchronous and synchronous learning (p. 849).
#4: Athey, J. (2010). Best Practices for Using Video in e-Learning. Retrieved April 15, 2017, from https://www.learningsolutionsmag.com/articles/596/best-practices-for-using-video-in-e-learning
This article looks at how video can be a useful tool when creating e-learning opportunities. Using video helps to blend asynchronous e-Learning with the direct interaction of an instructor as well as with visual demonstrations (Athey, 2010). When planning an e-Learning course, several things should be taken into account, one of which is how the video will support the planned learning objectives. Recording programs vary in the quality of video they produce, and professionally produced videos already exist that can be linked to a course. You must be prepared: if you are creating the video yourself, you need a script, quality recording equipment, and a recording area with limited noise and interruptions (Athey, 2010). The file size must be kept manageable and within the limits of the technology that will deliver the instruction, so the learning platform must be considered as well (Athey, 2010). Use of video is beneficial when the video is of appropriate content, access, and quality; otherwise, students will not fully benefit from its use.