Similar Documents
20 similar documents found
1.
This study tested whether second graders use benchmark-based strategies when solving a number line estimation (NLE) task. Participants were assigned to one of three conditions based on the availability of benchmarks provided on the number line. In the bounded condition, number lines were bounded only at the endpoints 0 and 200, while the midpoint condition included an additional benchmark at the midpoint and children in the quartile condition were provided with a benchmark at every quartile. First, the inclusion of a midpoint resulted in more accurate estimates around the middle of the number line in the midpoint condition compared to the bounded and, surprisingly, also the quartile condition. Furthermore, the two additional benchmarks in the quartile condition did not yield better estimations around the first and third quartile, because children frequently relied on an erroneous representation of these benchmarks, leading to systematic estimation errors. Second, verbal strategy reports revealed that children in the midpoint condition relied more frequently on the benchmark at the midpoint of the number line compared to the bounded condition, confirming the accuracy data. Finally, the frequency of use of benchmark-based strategies correlated positively with mathematics achievement and tended to correlate positively with estimation accuracy as well. In sum, this study is one of the first to provide systematic evidence for children’s use of benchmark-based estimation strategies in NLE with natural numbers and its relationship with children’s NLE performance.

2.
Research Findings: In this research, 487 Chinese children age 3 to 5 years took part in a number line estimation task. This task was used to assess children’s estimation accuracy and their estimation patterns along a number line in two different estimation circumstances. Situation A had the same line lengths for different numeric ranges, whereas situation B held the ratio of line lengths to numeric ranges constant. There were also three different number ranges (1–5/10/20). A new mathematical modeling method was proposed, where the two-dimensional estimation patterns of one child would be modeled as points in a higher dimensional space. Then the Dirichlet process Gaussian mixture model was applied to dynamically estimate the number of classes and assemble the different points into discrete classes based on distance. Three conclusions were drawn as follows: (a) significant differences were found within children for different ages and number ranges; (b) Chinese preschoolers had more estimation patterns than just linear pattern and logarithmic pattern, especially in small number ranges; and (c) mental number distance played an important role in their estimation patterns. Practice or Policy: Implications for early mathematics instruction on children’s understanding of the mental number line are discussed.
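The clustering step described above can be sketched with scikit-learn's truncated Dirichlet process mixture (`BayesianGaussianMixture` with a stick-breaking prior). The data here are synthetic stand-ins for children's embedded estimation patterns, not the study's data, and the 0.05 weight cutoff is an illustrative choice:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in data: each row is one child's estimation pattern
# embedded as a 2-D point, drawn from three synthetic clusters.
X = np.vstack([
    rng.normal([0, 0], 0.3, size=(50, 2)),
    rng.normal([3, 3], 0.3, size=(50, 2)),
    rng.normal([0, 3], 0.3, size=(50, 2)),
])

# A truncated Dirichlet process mixture: n_components is only an upper
# bound; the stick-breaking prior lets unneeded components shrink toward 0,
# so the number of classes is estimated rather than fixed in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
# Count the components that actually receive appreciable weight.
effective = int(np.sum(dpgmm.weights_ > 0.05))
print(effective)
```

On well-separated synthetic clusters like these, the effective component count recovers the generating structure even though ten components were allowed.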

3.
Spontaneous transfer of learning is often difficult to elicit. This finding may be widespread partly because pretests proactively interfere with transfer. To test this hypothesis, 7-year-olds' transfer was examined across 2 numerical tasks (number line estimation and categorization) in which similar representational changes have been observed. As predicted, children given feedback on numerical estimates learned to use a linear representation of numerical quantity instead of a logarithmic one, but providing practice on a categorization pretest led children to continue using a logarithmic representation on the same task, which they otherwise abandoned with surprising frequency. These findings imply that unsupervised practice of inappropriate representations impedes transfer, and that studies of learning can greatly underestimate children's potential for transfer if pretest effects are uncontrolled.

4.
This study investigated the extent to which class-specific parameter estimates are biased by the within-class normality assumption in nonnormal growth mixture modeling (GMM). Monte Carlo simulations for nonnormal GMM were conducted to analyze and compare two strategies for obtaining unbiased parameter estimates: relaxing the within-class normality assumption and using data transformation on repeated measures. Based on unconditional GMM with two latent trajectories, data were generated under different sample sizes (300, 800, and 1500), skewness (0.7, 1.2, and 1.6) and kurtosis (2 and 4) of outcomes, numbers of time points (4 and 8), and class proportions (0.5:0.5 and 0.25:0.75). Of the four distributions, it was found that skew-t GMM had the highest accuracy in terms of parameter estimation. In GMM based on data transformations, the adjusted logarithmic method was more effective in obtaining unbiased parameter estimates than the use of van der Waerden quantile normal scores. Even though adjusted logarithmic transformation in nonnormal GMM reduced computation time, skew-t GMM produced much more accurate estimation and was more robust over a range of simulation conditions. This study is significant in that it considers different levels of kurtosis and class proportions, which has not been investigated in depth in previous studies. The present study is also meaningful in that it investigated the applicability of data transformation to nonnormal GMM.

5.
The relation between short-term and long-term change (also known as learning and development) has been of great interest throughout the history of developmental psychology. Werner and Vygotsky believed that the two involved basically similar progressions of qualitatively distinct knowledge states; behaviorists such as Kendler and Kendler believed that the two involved similar patterns of continuous growth; Piaget believed that the two were basically dissimilar, with only development involving qualitative reorganization of existing knowledge and acquisition of new cognitive structures. This article examines the viability of these three accounts of the development of numerical representations. A review of this literature indicated that Werner's and Vygotsky's position (and that of modern dynamic systems and information processing theorists) provided the most accurate account of the data. In particular, both changes over periods of years and changes within a single experimental session indicated that children progress from logarithmic to linear representations of numerical magnitudes, at times showing abrupt changes across a large range of numbers. The pattern occurs with representations of whole number magnitudes at different ages for different numerical ranges; thus, children progress from logarithmic to linear representations of the 0–100 range between kindergarten and second grade, whereas they make the same transition in the 0–1,000 range between second and fourth grade. Similar changes are seen on tasks involving fractions; these changes yield the paradoxical finding that young children at times estimate fractional magnitudes more accurately than adults do. Several different educational interventions based on this analysis of changes in numerical representations have yielded promising results.
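The logarithmic-to-linear transition can be made concrete by fitting both candidate forms to a set of number line estimates and comparing fit. The data below are synthetic and perfectly logarithmic by construction, mimicking a young estimator on a 0 to 100 line; real analyses compare the same two regressions on children's actual estimates:

```python
import numpy as np

# Synthetic estimates on a 0-100 number line, compressed logarithmically.
targets = np.array([2, 5, 18, 34, 56, 78, 100], dtype=float)
estimates = 100 * np.log(targets) / np.log(100)

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Linear model: estimate ~ a + b * target
b_lin, a_lin = np.polyfit(targets, estimates, 1)
r2_lin = r_squared(estimates, a_lin + b_lin * targets)

# Logarithmic model: estimate ~ a + b * ln(target)
b_log, a_log = np.polyfit(np.log(targets), estimates, 1)
r2_log = r_squared(estimates, a_log + b_log * np.log(targets))

print(r2_log > r2_lin)  # the log model should fit these data better
```

A child is classified as holding a logarithmic representation when the log model's R² clearly exceeds the linear model's, and as linear when the reverse holds; the transition shows up as that comparison flipping with age or feedback.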

6.
In some professions, speed and accuracy are as important as acquired requisite knowledge and skills. The availability of computer-based testing now facilitates examination of these two important aspects of student performance. We found that student response times in a conventional non-speeded multiple-choice test, at both the global and individual question levels, closely approximated lognormal distributions. We propose a new measure, pace, which is derived from the survival function of these distributions for analysis of individual person response times. These pace estimates could be used both to rank and compare students; pace also performed best among the parameterizations compared in generalizability and dependability studies. While pace was very weakly related to person ability, there was no detectable relationship to question parameters of shift, natural logarithmic mean, or natural logarithmic standard deviation. That is, pace was a person-dependent, question-independent measure. Pace measurements were also successfully used as covariates in models for estimating person response time to specified questions and person accuracy in response to specified questions. Thus, the analysis of pace can contribute significantly to comprehensive evaluation of student performance in both the speed and ability domains and is a requisite to best practice in testing and assessment.
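One plausible reading of a survival-based pace index can be sketched as follows, under the simplifying assumption of a zero-shift lognormal fit; the article's exact formula, including its shift parameter, may differ. The response times here are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical response times (seconds) to one question, assumed lognormal.
times = rng.lognormal(mean=3.0, sigma=0.5, size=500)

# Fit a lognormal with the location (shift) fixed at 0 for simplicity.
shape, loc, scale = stats.lognorm.fit(times, floc=0)

# An illustrative pace-like index: the survival function S(t) gives the
# proportion of the fitted population slower than time t, so larger
# values mean a faster pace.
def pace(t):
    return stats.lognorm.sf(t, shape, loc=loc, scale=scale)

fast, slow = pace(10.0), pace(40.0)
print(fast > slow)  # a quicker response yields a higher survival value
```

Because the survival function is strictly decreasing, such an index orders examinees by speed on a common 0-to-1 scale regardless of the question's own time distribution.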

7.
This study examined the generality of the logarithmic to linear transition in children's representations of numerical magnitudes and the role of subjective categorization of numbers in the acquisition of more advanced understanding. Experiment 1 (49 girls and 41 boys, ages 5-8 years) suggested parallel transitions from kindergarten to second grade in the representations used to perform number line estimation, numerical categorization, and numerical magnitude comparison tasks. Individual differences within each grade in proficiency for the three tasks were strongly related. Experiment 2 (27 girls and 13 boys, ages 5-6 years) replicated results from Experiment 1 and demonstrated a causal role of changes in categorization in eliciting changes in number line estimation. Reasons were proposed for the parallel developmental changes across tasks, the consistent individual differences, and the relation between improved categorization of numbers and increasingly linear representations.

8.
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact traditional marginal maximum likelihood (ML) estimation of IRT model parameters, including sample size, with smaller samples generally being associated with lower parameter estimation accuracy and inflated standard errors for the estimates. Given this deleterious impact of small samples on IRT model performance, estimation becomes difficult when these techniques are applied to low-incidence populations, where they might prove particularly useful, especially with more complex models. Recently, a Pairwise estimation method for Rasch model parameters has been suggested for use with missing data, and it may also hold promise for parameter estimation with small samples. This simulation study compared the item difficulty parameter estimation accuracy of ML with the Pairwise approach to ascertain the benefits of the latter method. The results support the use of the Pairwise method with small samples, particularly for obtaining item location estimates.
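A minimal sketch of a pairwise difficulty estimator in the Choppin style, which may differ in detail from the Pairwise method the study evaluates. For the Rasch model, conditioning on exactly one of two items being answered correctly eliminates the person parameter, so difficulty differences can be read off count ratios; the data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate Rasch responses: P(correct) = 1 / (1 + exp(-(theta - b))).
true_b = np.array([-1.0, 0.0, 1.0])          # item difficulties
theta = rng.normal(0, 1, size=2000)          # person abilities
p = 1 / (1 + np.exp(-(theta[:, None] - true_b[None, :])))
resp = (rng.random(p.shape) < p).astype(int)

# Pairwise (Choppin-style) estimate: b_j - b_i ~ ln(n_ij / n_ji), where
# n_ij counts persons with item i correct and item j wrong. The ability
# parameter cancels in this conditional comparison.
def pairwise_diff(resp, i, j):
    n_ij = np.sum((resp[:, i] == 1) & (resp[:, j] == 0))
    n_ji = np.sum((resp[:, i] == 0) & (resp[:, j] == 1))
    return np.log(n_ij / n_ji)

d01 = pairwise_diff(resp, 0, 1)   # should be near b_1 - b_0 = 1.0
d02 = pairwise_diff(resp, 0, 2)   # should be near b_2 - b_0 = 2.0
print(round(d01, 2), round(d02, 2))
```

Because only joint counts over item pairs are needed, persons who skipped one of the items simply drop out of that pair's counts, which is why pairwise schemes handle missing data gracefully.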

9.
We taught 8 pigeons to discriminate 16-icon arrays that differed in their visual variability or “entropy” to see whether the relationship between entropy and discriminative behavior is linear (in which equivalent differences in entropy should produce equivalent changes in behavior) or logarithmic (in which higher entropy values should be less discriminable from one another than lower entropy values). Pigeons received a go/no-go task in which the lower entropy arrays were reinforced for one group and the higher entropy arrays were reinforced for a second group. The superior discrimination of the second group was predicted by a theoretical analysis in which excitatory and inhibitory stimulus generalization gradients fall along a logarithmic, but not a linear scale. Reanalysis of previously published data also yielded results consistent with a logarithmic relationship between entropy and discriminative behavior.
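Entropy of an icon array in this sense is Shannon entropy over the frequencies of distinct icon types: a uniform 16-icon array of all-different icons has maximal entropy, while an array of identical icons has zero. A minimal sketch:

```python
import math
from collections import Counter

# Shannon entropy (bits) of an icon array, from icon-type frequencies.
def array_entropy(icons):
    n = len(icons)
    return -sum((c / n) * math.log2(c / n) for c in Counter(icons).values())

uniform  = array_entropy(list("ABCDEFGHIJKLMNOP"))  # 16 distinct icons
mixed    = array_entropy(list("AAAABBBBCCCCDDDD"))  # 4 types x 4 copies
constant = array_entropy(list("AAAAAAAAAAAAAAAA"))  # a single icon type

print(uniform, mixed, abs(constant))  # 4 bits, 2 bits, 0 bits
```

The linear-vs-logarithmic question is then whether behavior tracks these entropy values directly or their logarithm, i.e., whether the 4-to-2 difference is as discriminable as the 2-to-0 difference.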

10.
Using the posterior log-likelihood function, this paper proposes several posterior Fisher information ratio and posterior likelihood distance statistics for biased estimators. These statistics address the problem of measuring the influence of model perturbations on biased estimation.

11.
How does understanding the decimal system change with age and experience? Second, third, and sixth graders and adults (Experiment 1: N = 96, mean ages = 7.9, 9.23, 12.06, and 19.96 years, respectively) made number line estimates across 3 scales (0–1,000, 0–10,000, and 0–100,000). Generation of linear estimates increased with age but decreased with numerical scale. Therefore, the authors hypothesized that highlighting commonalities between small and large scales (15:100::1500:10000) might prompt children to generalize their linear representations to ever-larger scales. Experiment 2 assigned second graders (N = 46, mean age = 7.78 years) to experimental groups differing in how commonalities of small and large numerical scales were highlighted. Only children experiencing progressive alignment of small and large scales successfully produced linear estimates on increasingly larger scales, suggesting that analogies between numeric scales elicit broad generalization of linear representations.

12.
This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.

13.
The process employed to produce the conversions that take scores from the original SAT scales to recentered scales, in which reference group scores are centered near the midpoint of the score-reporting range, is laid out. For the purposes of this article, SAT Verbal and SAT Mathematical scores were placed on recentered scales, which have reporting ranges of 920 to 980, means of 950, and standard deviations of 11. (The 920-to-980 scale is used in this article to highlight the distinction between it and the old 200-to-800 scale. In actuality, recentered scores were reported on a 200-to-800 scale.) Recentering was accomplished via a linear transformation of normally distributed scores that were obtained from a continuized, smoothed frequency distribution of original SAT scores that were originally on augmented two-digit scales (i.e., discrete scores rounded to either 0 or 5 in the third decimal place). These discrete scores were obtained for all students in the 1990 Reference Group using 35 different editions of the SAT spanning October 1988 to June 1990. The performance of this 1990 Reference Group on the original and recentered scales is described. The effects of recentering on scores of individuals and the 1990 Reference Group are also examined. Finally, recentering did not occur solely on the basis of its technical merit. Issues associated with converting recentering from a possibility into a reality are discussed.
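The core linear step of recentering can be sketched as follows, using the article's illustrative scale (mean 950, SD 11, reporting range 920 to 980). The continuizing and smoothing of the original frequency distribution is elided, and the input scores are synthetic, so this only illustrates the shape of the transformation, not the actual SAT procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for a reference group's original-scale scores.
original = rng.normal(424, 110, size=10_000).round(-1).clip(200, 800)

# Linear transformation of normalized reference-group scores onto the
# target scale, then clipping to the reporting range.
z = (original - original.mean()) / original.std()
recentered = np.clip(950 + 11 * z, 920, 980)

print(round(float(recentered.mean())), round(float(recentered.std())))
```

By construction the reference group lands with its mean at the midpoint of the reporting range, which is exactly the "centered" property recentering was meant to restore.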

14.
This article compares maximum likelihood and Bayesian estimation of the correlated trait–correlated method (CT–CM) confirmatory factor model for multitrait–multimethod (MTMM) data. In particular, Bayesian estimation with minimally informative prior distributions—that is, prior distributions that prescribe equal probability across the known mathematical range of a parameter—is investigated as a source of information to aid convergence. Results from a simulation study indicate that Bayesian estimation with minimally informative priors produces admissible solutions more often than maximum likelihood estimation (100.00% for Bayesian estimation, 49.82% for maximum likelihood). The extra convergence does not come at the cost of parameter accuracy; Bayesian parameter estimates showed comparable bias and better efficiency compared to maximum likelihood estimates. The results are echoed in 2 empirical examples. Hence, Bayesian estimation with minimally informative priors enables admissible solutions of the CT–CM model for MTMM data.

15.
This study demonstrated the equivalence between the Rasch testlet model and the three‐level one‐parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE) with the expectation‐maximization algorithm in ConQuest and the sixth‐order Laplace approximation estimation in HLM6. The results indicated that the estimation methods had significant effects on the bias of the testlet variance and ability variance estimation, the random error in the ability parameter estimation, and the bias in the item difficulty parameter estimation. The Laplace method best recovered the testlet variance while the MMLE best recovered the ability variance. The Laplace method resulted in the smallest random error in the ability parameter estimation while the MCMC method produced the smallest bias in item parameter estimates. Analyses of three real tests generally supported the findings from the simulation and indicated that the estimates for item difficulty and ability parameters were highly correlated across estimation methods.

16.
To better understand the statistical properties of the deterministic inputs, noisy “and” gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the prior distribution matches the latent class structure. However, when the latent classes are of indefinite structure, the empirical Bayes method in conjunction with an unstructured prior distribution provides much better estimates and classification accuracy. Moreover, using empirical Bayes with an unstructured prior does not lead to extremely poor results as other prior-estimation method combinations do. The simulation results also show that increasing the sample size reduces the variability, and to some extent the bias, of item parameter estimates, whereas lower levels of the guessing and slip parameters are associated with higher quality item parameter estimation and classification accuracy.
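The DINA response function itself is compact: a conjunctive ("and") gate over the attributes an item's Q-matrix row requires, combined with slip and guess parameters. A minimal sketch with hypothetical attribute patterns and parameter values:

```python
import numpy as np

# DINA item response function: eta = 1 only if the examinee masters every
# attribute the Q-matrix requires for the item; then
# P(correct) = (1 - slip) if eta == 1, else guess.
def dina_prob(alpha, q, slip, guess):
    eta = bool(np.all(alpha >= q))      # "and" gate over required skills
    return (1 - slip) if eta else guess

q_item = np.array([1, 1, 0])            # item requires attributes 1 and 2
master = np.array([1, 1, 1])            # masters all attributes
nonmaster = np.array([1, 0, 1])         # missing required attribute 2

p_master = dina_prob(master, q_item, slip=0.1, guess=0.2)
p_nonmaster = dina_prob(nonmaster, q_item, slip=0.1, guess=0.2)
print(p_master, p_nonmaster)  # 0.9 0.2
```

Estimation then amounts to inferring each examinee's attribute pattern and each item's slip and guess values from the response matrix, which is where the prior choices compared in the study come in.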

17.
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six item response theory (IRT) based domain score estimation methods were evaluated under conditions of few items per content area on the form taken, small domains, and small group sizes. The methods used item responses to the single form taken to estimate examinee or group ability; domain scores were then computed using the ability estimates and domain item characteristics. The IRT-based domain score estimates typically showed greater accuracy and greater consistency across forms taken than observed performance on the form taken. For the smallest group size and least number of items taken, the accuracy of most IRT-based estimates was questionable; however, a procedure that operates on an estimated distribution of group ability showed promise under most conditions.
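The common core of such IRT-based domain score estimates: given an ability estimate obtained from the form taken, average the response probabilities of all items in the domain. This sketch uses Rasch (1PL) curves and a hypothetical 50-item domain for simplicity; the study's methods also cover group-level ability distributions:

```python
import numpy as np

# Expected percent correct over the whole domain for a given ability:
# the mean of the domain items' Rasch response curves at theta.
def expected_domain_score(theta, difficulties):
    p = 1 / (1 + np.exp(-(theta - difficulties)))
    return p.mean()

domain_b = np.linspace(-2, 2, 50)   # hypothetical 50-item domain
low = expected_domain_score(-1.0, domain_b)
high = expected_domain_score(1.0, domain_b)
print(low < high)  # higher ability implies a higher expected domain score
```

The appeal of this construction is that the domain score depends only on the ability estimate and the domain items' parameters, so examinees who took different forms can still be reported on the common domain metric.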

18.
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a block-Toeplitz covariance matrix in the structural equation modeling framework. The third method is built in the Bayesian framework and implemented using Gibbs sampling. The fourth is the least squares method, which also employs the block-Toeplitz matrix. All 4 methods are implemented in currently available software. The simulation study shows that all 4 methods reach appropriate parameter estimates with comparable precision. Differences among the 4 estimation methods and related software are discussed.
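The block-Toeplitz matrix used by the SEM and least squares estimators can be sketched as below: lagged covariance blocks C(0), C(1), ... of a multivariate series fill the band structure, with block (i, j) equal to C(|i - j|), transposed below the diagonal. The transpose convention varies across treatments, and the lag-covariance estimator here is a simple plug-in, so this is illustrative only:

```python
import numpy as np

# Assemble a block-Toeplitz matrix from lagged covariance blocks.
def block_toeplitz(blocks):
    L, p = len(blocks), blocks[0].shape[0]
    out = np.zeros((L * p, L * p))
    for i in range(L):
        for j in range(L):
            C = blocks[abs(i - j)]
            out[i*p:(i+1)*p, j*p:(j+1)*p] = C if j >= i else C.T
    return out

rng = np.random.default_rng(3)
x = rng.normal(size=(500, 2))             # toy 2-variable time series
xc = x - x.mean(axis=0)
C0 = xc.T @ xc / len(x)                   # lag-0 covariance block
C1 = xc[:-1].T @ xc[1:] / (len(x) - 1)    # lag-1 cross-covariance block
T = block_toeplitz([C0, C1])
print(T.shape)  # (4, 4)
```

Fitting the DAFS model in an SEM program then amounts to treating this stacked matrix as the observed covariance matrix of the time-lagged variables.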

19.
Measurement ability is the ability to compare a quantity to be measured against a standard quantity of the same kind; it underpins the formation of a new series of connections between numbers and quantities. After instructional intervention, junior high school students' estimation ability improved significantly: gains were consistent across task format, item format, gender, and academic level, though the degree of improvement varied. With respect to gender, item format, and academic level, instructional interventions should take multiple factors into account, apply different methods appropriately, and teach students according to their aptitude. Instructional intervention training in estimation for junior high school students is thus feasible and effective, and education should attend to valuing and cultivating this ability.

20.
In this second part of the article we discuss how simple growth models based on Fibonacci numbers, the golden section, logarithmic spirals, etc. can explain frequently occurring numbers and curves in living objects. Such mathematical modelling techniques are becoming quite popular in the study of pattern formation in nature.
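The Fibonacci-to-golden-section link such growth models build on is easy to verify numerically: ratios of consecutive Fibonacci numbers converge to (1 + √5)/2 ≈ 1.618:

```python
import math

# n-th Fibonacci number by simple iteration (fib(0) = 0, fib(1) = 1).
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2          # the golden section
ratio = fib(20) / fib(19)             # 6765 / 4181
print(abs(ratio - phi) < 1e-7)        # True: already very close by n = 20
```

This convergence is why spiral counts in sunflower heads and pinecones, which arise from repeated golden-angle growth steps, so often land on consecutive Fibonacci numbers.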


Copyright©北京勤云科技发展有限公司  京ICP备09084417号