Similar Documents
 20 similar documents found (search time: 46 ms)
1.
Teachers of deaf and hard of hearing students must serve as language models for their students. However, preservice deaf education teachers typically have at most only four semesters of American Sign Language (ASL) training. How can their limited ASL instructional time be used to increase their proficiency? Studies involving deaf and hard of hearing students have revealed that glosses (written equivalents of ASL sentences) can serve as "bridges" between ASL and English. This study investigated whether glossing instruction can facilitate hearing students' learning of ASL. A Web site was developed in which ASL glossing rules were explained and glossing exercises were provided. Posttest scores showed that the experimental group improved from 39% to 71% on ASL grammar knowledge. These findings indicate that online glossing lessons may provide the means to acquire ASL skills more readily, thus preparing deaf education teachers to serve as ASL language models.

2.
Deaf children who are native users of American Sign Language (ASL) and hearing children who are native English speakers performed three working memory tasks. Results indicate that language modality shapes the architecture of working memory. Digit span with forward and backward report, performed by each group in their native language, suggests that the language rehearsal mechanisms for spoken language and for sign language differ in their processing constraints. Unlike hearing children, deaf children who are native signers of ASL were as good at backward recall of digits as at forward recall, suggesting that serial order information for ASL is stored in a form that does not have a preferred directionality. Data from a group of deaf children who were not native signers of ASL rule out explanations in terms of a floor effect or a nonlinguistic visual strategy. Further, deaf children who were native signers outperformed hearing children on a nonlinguistic spatial memory task, suggesting that language expertise in a particular modality exerts an influence on nonlinguistic working memory within that modality. Thus, language modality has consequences for the structure of working memory, both within and outside the linguistic domain.

3.
On-line comprehension of American Sign Language (ASL) requires rapid discrimination of linguistic facial expressions. We hypothesized that ASL signers' experience discriminating linguistic facial expressions might lead to enhanced performance for discriminating among different faces. Five experiments are reported that investigate signers' and nonsigners' ability to discriminate human faces photographed under different conditions of orientation and lighting (the Benton Test of Facial Recognition). The results showed that deaf signers performed significantly better than hearing nonsigners. Hearing native signers (born to deaf parents) also performed better than hearing nonsigners, suggesting that the enhanced performance of deaf signers is linked to experience with ASL rather than to auditory deprivation. Deaf signers who acquired ASL in early adulthood did not differ from native signers, which suggests that there is no 'critical period' during which signers must be exposed to ASL in order to exhibit enhanced face discrimination abilities. When the faces were inverted, signing and nonsigning groups did not differ in performance. This pattern of results suggests that experience with sign language affects mechanisms specific to face processing and does not produce a general enhancement of visual discrimination. Finally, a similar pattern of results was found with signing and nonsigning children, 6-9 years old. Overall, the results suggest that the brain mechanisms responsible for face processing are somewhat plastic and can be affected by experience. We discuss implications of these results for the relation between language and cognition.

4.
Recent research into signed languages indicates that signs may share some properties with gesture, especially in the use of space in classifier constructions. A prediction of this proposal is that there will be similarities in the representation of motion events by sign-naive gesturers and by native signers of unrelated signed languages. This prediction is tested for deaf native signers of Australian Sign Language (Auslan), deaf signers of Taiwan Sign Language (TSL), and hearing nonsigners using the Verbs of Motion Production task from the Test Battery for American Sign Language (ASL) Morphology and Syntax. Results indicate that differences between the responses of nonsigners, Auslan signers, and TSL signers and the expected ASL responses are greatest with handshape units; movement and location units appear to be very similar. Although not definitive, these data are consistent with the claim that classifier constructions are blends of linguistic and gestural elements.

5.
Theory-of-mind (ToM) abilities were studied in 176 deaf children aged 3 years 11 months to 8 years 3 months who use either American Sign Language (ASL) or oral English, with hearing parents or deaf parents. A battery of tasks tapping understanding of false belief and knowledge state, along with language skills in ASL or English, was given to each child. There was a significant delay on ToM tasks in deaf children of hearing parents, who typically demonstrate language delays, regardless of whether they used spoken English or ASL. In contrast, deaf children from deaf families performed identically to same-aged hearing controls (N = 42). Both vocabulary and understanding of syntactic complements were significant independent predictors of success on verbal and low-verbal ToM tasks.

6.
It is unclear how children develop the ability to learn words incidentally (i.e., without direct instruction or numerous exposures). This investigation examined the early achievement of this skill by longitudinally tracking the expressive vocabulary and incidental word-learning capacities of a hearing child of Deaf adults who was natively learning American Sign Language (ASL) and spoken English. Despite receiving only 20% of language input in spoken English, the child's expressive vocabularies at 16 and 20 months of age, in each language, were similar to those of monolingual age-matched peers. At 16 months of age, the child showed signs of greater proficiency in the incidental learning of novel ASL signs than she did for spoken English words. At 20 months of age, the child was skilled at incidental word learning in both languages. These results support the methodology as it applies to examining theoretical models of incidental word learning. They also suggest that bilingual children can achieve typical vocabulary levels (even with minimal input in one of the languages) and that the development of incidental word learning follows a similar trajectory in ASL and spoken English.

7.
This article explores the journey of eight hearing families of bimodal-bilingual deaf children as they navigate a decision-making process that reflects their beliefs and values about American Sign Language (ASL) and English within their family language policy framework. The resources offered to families with deaf children often reflect a medical view, rather than a cultural perspective, of being deaf. Because medical professionals, educators, and specialists who work with deaf and hard-of-hearing children have a strong influence on family members' opinions, beliefs, and attitudes about being deaf, it is all the more crucial to correct misconceptions about ASL and empower families to develop a family language policy that is inclusive of their deaf and hard-of-hearing children. This article informs researchers, teachers, and other professionals about the potential benefits and challenges of supporting families' ASL and English language planning policies.

8.
A Midwest school district established a demonstration Total Communication Project. Its goal was for teachers to become consistent in their role modeling of English and American Sign Language (ASL). English was the primary language of the classroom, and ASL was used as an intervention tool. There has been little research on the effectiveness of ASL in the classroom. By implementing an ASL intervention program, this project is a first step in establishing an environment conducive to investigating the effectiveness of ASL intervention for teaching deaf students. This paper describes: (a) techniques used for identifying classroom situations that call for the use of ASL; (b) discourse situations that influence the use of different language codes in total communication classrooms; and (c) guidelines for code-switching between English and ASL.

9.
There are at least two languages (American Sign Language [ASL], English) and three modalities (sign, speech, print) in most deaf individuals' lives. Mixing of ASL and English in the utterances of deaf adults has been described in various ways (pidgins, diglossia, language contact, bilingualism), but children's mixing is usually treated as the 'fault' of poor input language. Alternatively, how might language mixing serve their communication goals? This article describes code variations and adaptations to particular situations. Deaf children were seen to exhibit a wide variety of linguistic structures mixing ASL, English, Spanish, signing, and speaking. Formal lessons supported a recoding of English print as sign and speech, but the children who communicated in spoken English were the two who could hear speech. The children who communicated in ASL were those who had deaf parents communicating in ASL or who identified with deaf houseparents communicating in ASL. Most language produced by the teacher and children in this study was mixed in code and mode. While some mixing was related to acquisition and proficiency, mixing is a strategy by which many deaf individuals adapt linguistic resources to communication needs. Investigating deaf children's language by comparing it to standard English or ASL overlooks the rich strategies of mixing that are central to their communication experience.

10.
Eighteen-month-olds' spatial categorization was tested when hearing a novel spatial word. Infants formed an abstract categorical representation of support (i.e., placing 1 object on another) when hearing a novel spatial particle during habituation but not when viewing the events in silence. Infants with a productive spatial vocabulary did not discriminate the support relation when hearing the same novel word as a count noun. However, infants who were not yet producing spatial words did attend to the support relation when presented with the novel count noun. The results indicate that 18-month-olds can use a novel particle (possibly assisted by a familiar verb) to facilitate their spatial categorization but that the specificity of this effect varies with infants' acquisition of spatial language.

11.
This article is a response to Blue Listerine, Parochialism, and ASL Literacy (Czubek, 2006). The author presents his views on the concepts of literacy and the new and multiple literacies. In addition, the merits of print literacy and other types of literacies are discussed. Although the author agrees that there is an American Sign Language (ASL) literacy, he maintains that there should be a distinction between conversational "literacy" forms (speech and sign) and secondary literacy forms (reading and writing). It might be that the cognitive skills associated with print literacy, and possibly other captured literacy forms, are necessary for a technological, science-driven society such as that which exists in the United States.

12.
Nineteen infants who were deaf (D/H) and 19 infants who were hearing (H/H) were observed during face-to-face interactions with their hearing mothers. Infant behaviors were coded for repetitive physical activity and gaze aversion during two episodes of normal play which were interrupted by a "still-face" episode. Mothers' assessments of their infants as "difficult" or "easy" were derived from the Parenting Stress Index (Abidin, 1986). "Difficult" deaf infants displayed significantly more repetitive activity during the initial normal interaction and significantly more gaze aversion during the still-face episode, compared to "easy" deaf babies and both "easy" and "difficult" hearing babies. Implications of these findings are discussed in the context of parental perceptions of infant behaviors, and the importance of visual attention and nonverbal signals for the optimal development of infants who are deaf.

13.
Sign Language Production: Processes and Influencing Factors   (total citations: 1; self-citations: 1; citations by others: 0)
Sign language production is the process by which deaf people convey ideas through changes in handshape, accompanied by corresponding facial expressions and body postures. The core process of sign language production is lexical access, which comprises two stages: lemma selection (semantic retrieval) and phonological encoding (phonological retrieval). Sign language production is influenced by age of sign language acquisition, the iconicity of signs, the phonological similarity of signs, handshape location, and the hemispheric lateralization of sign language processing.

14.
Spatial relations in American Sign Language (ASL) are often signed from the perspective of the signer and so involve a shift in perspective and mental rotation. This study examined developing knowledge of language used to refer to the spatial relations front, behind, left, right, towards, away, above, and below by children learning ASL and English. Because ASL is a classifier language in which noun referents are placed into groups, each spatial relation also appeared with person, animal, and vehicle classifiers. Twenty-three children and adults who learned ASL before the age of 5 years and 23 native English-speaking adults and children participated. Both language groups participated in a comprehension task in which they chose which of 2 pictures depicted a signed or spoken relation. Results showed that children learning ASL acquired the constructions for spatial relations that typically involve perspective shifts and mental rotation later than constructions that do not involve these abilities and later than English-speaking children. Children learning ASL did not differ from English-speaking children in learning constructions that did not involve these abilities. Results also suggest that users of ASL initially comprehend spatial relations more accurately with person and animal classifiers than with the classifier for vehicles. The results are relevant to understanding the acquisition of spatial relations in ASL.

15.
This article presents a study that examined the impact of visual communication on the quality of the early interaction between deaf and hearing mothers and fathers and their deaf children aged between 18 and 24 months. Three communication mode groups of parent-deaf child dyads that differed by the use of signing and visual-tactile communication strategies were involved: (a) hearing parents communicating with their deaf child in an auditory/oral way, (b) hearing parents using total communication, and (c) deaf parents using sign language. Based on Loots and colleagues' intersubjective developmental theory, parent-deaf child interaction was analyzed according to the occurrence of intersubjectivity during free play with a standard set of toys. The data analyses indicated that the use of sign language in a sequential visual way of communication enabled the deaf parents to involve their 18- to 24-month-old deaf infants in symbolic intersubjectivity, whereas hearing parents who held to oral-only communication were excluded from involvement in symbolic intersubjectivity with their deaf infants. Hearing parents using total communication were more similar to deaf parents, but they still differed from deaf parents in exchanging and sharing symbolic and linguistic meaning with their deaf child.

16.
Students who are deaf or hard of hearing (SDHH) often need accommodations to participate in large-scale standardized assessments. One way to bridge the gap between the language of the test (English) and a student's linguistic background (often including American Sign Language [ASL]) is to present test items in ASL. The specific aim of this project was to measure the effects of an ASL accommodation on standardized test scores for SDHH in reading and mathematics. A total of 64 fifth- to eighth-grade (ages 10-15) SDHH from schools for the deaf in the United States participated in this study. There were no overall differences in the mean percentage of items students answered correctly in the standard vs. ASL-accommodated conditions for reading or mathematics. We then conducted hierarchical linear regression analyses to examine whether measures of exposure to ASL (home and classroom) and student proficiency in the subject area predicted student performance on ASL-accommodated assessments. The models explained up to half of the variance in the scores, with subject area proficiency (mathematics or reading) as the strongest predictor. ASL exposure was not a significant predictor, with the exception of ASL classroom instruction as a predictor of mathematics scores.

17.
Four experiments investigated classroom learning by deaf college students receiving lectures from instructors signing for themselves or using interpreters. Deaf students' prior content knowledge, scores on postlecture assessments of content learning, and gain scores were compared to those of hearing classmates. Consistent with prior research, deaf students, on average, came into and left the classroom with less content knowledge than their hearing peers, and simultaneous communication (sign and speech together) and American Sign Language (ASL) were apparently equally effective for deaf students' learning of the material. Students' self-rated sign language skills were not significantly related to performance. Two new findings were of particular importance. First, direct and mediated instruction (via interpreting) were equally effective for deaf college students under the several conditions employed here. Second, despite coming into the classroom with the disadvantage of having less content knowledge, deaf students' gain scores generally did not differ from those of their hearing peers. Possible explanations for these findings are considered.

18.
As part of a longitudinal study, the conversational skills of 67 deaf adolescents were assessed in spoken English, simultaneous communication (SimCom) and American Sign Language (ASL). Two groups of students were identified on the basis of the communication used in their current educational program: a small group of 16 students in programs using spoken English (oral) and a larger group of 51 students in programs using sign communication (bimodal). Students in spoken English programs had good spoken English skills and limited ASL skills, whereas the reverse was true for students in bimodal programs. Most students demonstrated sufficient skill in one or more systems to meet basic interpersonal communications needs, but not those required for advanced academic discourse. In neither group was spoken English related to ASL skill. SimCom skills were strongly related to spoken English in the oral program group and to ASL in the bimodal program group. Spoken English in adolescence was highly predictable from spoken English in early childhood. Within the bimodal program group, students with deaf parents had better SimCom and ASL skills than those with hearing parents. Among bimodal program students with hearing parents, better SimCom skills (but not ASL skills) were associated with earlier introduction to sign communication in school and to mothers' use of sign communication.

19.
These studies investigated two hundred and forty-four 24- and 30-month-olds' sensitivity to generic versus nongeneric language when acquiring knowledge about novel kinds. Toddlers were administered an inductive inference task, during which they heard a generic noun phrase (e.g., "Blicks drink milk") or a nongeneric noun phrase (e.g., "This blick drinks milk") paired with an action (e.g., drinking) modeled on an object. They were then provided with the model and a nonmodel exemplar and asked to imitate the action. After hearing nongeneric phrases, 30-month-olds, but not 24-month-olds, imitated more often with the model than with the nonmodel exemplar. In contrast, after hearing generic phrases, 30-month-olds imitated equally often with both exemplars. These results suggest that 30-month-olds use the generic/nongeneric distinction to guide their inferences about novel kinds.

20.
Potential effects of auditory and other communicative experience on development of visual attention were investigated for four groups of infants at 9, 12, and 18 months of age. Participants included 20 deaf infants with deaf mothers, 19 deaf infants with hearing mothers, 21 hearing infants with hearing mothers, and 20 hearing infants with deaf mothers. Infants' hearing status alone did not associate with patterns of visual attention. Deaf infants with deaf mothers showed significantly longer times in the most advanced attention state (coordinated joint) than did deaf infants with hearing mothers. However, other aspects of experience were associated with group differences. Both deaf and hearing children with deaf mothers who signed spent more time onlooking (or watching) their mothers than did children (deaf or hearing) with hearing mothers. Hearing children with hearing mothers spent more time looking at objects than did children with deaf mothers. Despite these differences in time in various attention states, the general trajectory of development of each of the attention states was similar across groups. Results indicate that early visual attention is associated with and potentially influenced by a complex interaction of maturation, communicative experiences, and other developing skills.
