Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Behavioral momentum theory provides a framework for understanding how conditions of reinforcement influence instrumental response strength under conditions of disruption (i.e., resistance to change). The present experiment examined resistance to change of divided-attention performance when different overall probabilities of reinforcement were arranged across two components of a multiple schedule. Pigeons responded in a delayed-matching-to-sample procedure with compound samples (color + line orientation) and element comparisons (two colors or two line orientations). Reinforcement ratios of 1:9, 1:1, and 9:1 for accurate matches on the two types of comparison trials were examined across conditions using reinforcement probabilities (color/lines) of .9/.1, .5/.5, and .1/.9 in the rich component and .18/.02, .1/.1, and .02/.18 in the lean component. Relative accuracy with color and line comparisons was an orderly function of relative reinforcement, but this relation did not depend on the overall rate of reinforcement between components. The resistance to change of divided-attention performance was greater for both trial types in the rich component with presession feeding and extinction, but not with decreases in sample duration. These findings suggest promise for the applicability of quantitative models of operant behavior to divided-attention performance, but they highlight the need to further explore conditions impacting the resistance to change of attending.
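For reference, the standard behavioral momentum model (not stated in the abstract; the conventional Nevin–Grace formulation is assumed here) writes proportional resistance to change as

$$\log\frac{B_x}{B_o} = \frac{-x}{r^{b}},$$

where $B_x$ is response rate (or accuracy) under a disruptor of magnitude $x$, $B_o$ is its baseline value, $r$ is the baseline reinforcement rate in a component, and $b$ is a sensitivity parameter. Because the rich component has the larger $r$, the model predicts the smaller proportional decline there, which is the pattern obtained with presession feeding and extinction but not with shortened sample durations.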

2.
Task difficulty in delayed matching-to-sample tasks (DMTS) is increased by increasing the length of a retention interval. When tasks become more difficult, choice behavior becomes more susceptible to bias produced by unequal reinforcer ratios. Delaying reinforcement from choice behavior also increases both task difficulty and the biasing effect of unequal reinforcer probability. Six pigeons completed nine DMTS conditions with retention intervals of 0, 2, 4, 6, and 8 sec, in which reinforcer delays of 0, 2, and 4 sec were combined with ratios of reinforcer probabilities of .5/.5, .2/.8, and .8/.2 for correct red and green responses. Discriminability (log d) decreased with both increasing retention interval duration and increasing reinforcer delay. Sensitivity to reinforcement, the tendency for ratios of choice responses to follow unequal reinforcer probabilities, also increased as a function of both increasing retention interval and increasing reinforcer delay. The result is consistent with the view that remembering in DMTS tasks is a discriminated operant in which increasing task difficulty increases sensitivity to reinforcement.
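As a point of reference, discriminability and sensitivity in this kind of analysis are conventionally computed in the Davison–Tustin framework (the abstract does not give the formulas; these are the standard definitions):

$$\log d = 0.5 \log\frac{B_{rr}\,B_{gg}}{B_{rg}\,B_{gr}},$$

where $B_{rr}$ and $B_{gg}$ are correct red and green choices and $B_{rg}$ and $B_{gr}$ are the corresponding errors, and sensitivity to reinforcement is the slope $a$ in the generalized-matching relation $\log(B_r/B_g) = a \log(R_r/R_g) + \log c$, with $R_r/R_g$ the obtained reinforcer ratio. In these terms, the finding is that log d falls, and $a$ rises, as retention intervals and reinforcer delays lengthen.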

3.
Each of four pigeons was exposed to a single random-ratio schedule of reinforcement in which the probability of reinforcement for a peck on either of two keys was 1/25. Reinforcer amounts were determined by an iterated prisoner’s dilemma (IPD) matrix in which the “other player” (a computer) played tit-for-tat. One key served as the cooperation (C) key; the other served as the defection (D) key. If a peck was scheduled to be reinforced and the D-key was pecked, the immediate reinforcer of that peck was always higher than it would have been had the C-key been pecked. However, if the C-key was pecked and the following peck was scheduled to be reinforced, the reinforcement amount for pecks on either key was higher than it would have been if the previous peck had been on the D-key. Although immediate reinforcement was always higher for D-pecks, the overall reinforcement rate increased linearly with the proportion of C-pecks. C-pecks thus constituted a form of self-control. All the pigeons initially defected with this procedure. However, when feedback signals were introduced that indicated which key had last been pecked, cooperation (the relative rate of C-pecks), and hence self-control, increased for all the pigeons.

4.
Pigeons’ preference between fixed-interval and variable-interval schedules was examined using a concurrent-chains procedure. Responses to two concurrently available keys in the initial links of the concurrent chains occasionally produced terminal links where further responses were reinforced under either a fixed- or variable-interval schedule. In previous studies, preferences for the variable schedule with such a procedure have been interpreted as reflecting a temporal scaling process that heavily weights the shorter intervals in the variable schedule. The present experiment examined whether predictability, i.e., the presence of external stimuli correlated with the reinforcement interval, might also influence preference in such situations. When the two intervals in a variable schedule were made predictable by being associated with different key colors, preference for that schedule increased. This increase was reliable but small in magnitude and transient when initial-link responses only occasionally produced terminal links; it was large in magnitude when only one response in the initial link was required to produce the appropriate terminal-link schedule. The results suggest that preference between fixed and variable schedules may be influenced both by temporal scaling and to a lesser extent by predictability of the reinforcement intervals.

5.
In Experiment 1, three food-deprived pigeons received trials that began with red or green illumination of the center pecking key. Two or four pecks on this sample key turned it off and initiated a 0- to 10-sec delay. Following the delay, the two outer comparison keys were illuminated, one with red and one with green light. In one condition, a single peck on either of these keys turned the other key off and produced either grain reinforcement (if the comparison that was pecked matched the preceding sample) or the intertrial interval (if it did not match). In other conditions, 3 or 15 additional pecks were required to produce reinforcement or the intertrial interval. The frequency of pecking the matching comparison stimulus (matching accuracy) decreased as the delay increased, increased as the sample ratio was increased, and decreased as the comparison ratio was increased. The results of Experiment 2 suggested that higher comparison ratios adversely affect matching accuracy primarily by delaying reinforcement for choosing the correct comparison. The results of Experiment 3, in which delay of reinforcement for choosing the matching comparison was manipulated, confirmed that delayed reinforcement decreases matching accuracy.

6.
Experiment 1 compared the acquisition of initial- and terminal-link responding in concurrent chains. The terminal-link schedules were fixed interval (FI) 10 sec and FI 20 sec, but some presentations were analogous to no-food trials in the peak procedure, lasting 60 sec with no reinforcement delivery. Pigeons completed a series of reversals in which the schedules signaled by the terminal-link stimuli (red and green on the center key) were changed. Acquisition of temporal control of terminal-link responding (as measured by peak location on no-food trials) was more rapid than acquisition of preference in the initial links. Experiment 2 compared acquisition in concurrent chains under the typical procedure, in which the terminal-link schedules are changed, with a novel arrangement in which the initial-link key assignments were changed while the terminal-link schedules remained the same. Acquisition of preference was faster in the latter condition, in which the terminal-link stimulus-reinforcer relations were preserved. These experiments provide the first acquisition data that support the view that initial-link preference is determined by the values of the terminal-link stimuli.

7.
This study presents a theory by which to understand how pigeons learn response patterns in simple choice situations. The theory assumes that, in a choice situation, patterns of responses compete for the final common path; that the competition is governed by two variables, the overall reinforcement probability obtained by emitting the patterns, T, and the differences in reinforcement probabilities among the patterns, D; and that the ratio D/T determines the final strength of specific response patterns. To test these predictions, three experiments were run in which pigeons were more likely to receive food when they pecked the momentarily least-preferred of three response keys. On the basis of previous research, it was predicted that the birds would be indifferent among the keys (molar aspect) and would also acquire a response pattern that consisted of pecking each key once during three consecutive trials (molecular aspect). The present theory went further and predicted that the strength of that pattern would increase with the ratio D/T. In the first two experiments, D was manipulated while T remained constant, and in the third, T was manipulated while D remained constant. The results agreed with the theory, for the strength of the response pattern increased with D and decreased with T, whereas overall choice proportions were always close to the matching equilibrium.
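A hypothetical numerical case (the values are illustrative only; the abstract does not define D and T beyond the description above) shows how the two manipulations work: if the to-be-learned pattern is reinforced with probability .6 and competing patterns with probability .2, then D = .4; with an overall obtained probability of T = .3, D/T ≈ 1.3. Halving the difference to D = .2 at the same T halves the ratio, as in Experiments 1 and 2, whereas doubling T to .6 at the same D does the same, as in Experiment 3, so in both cases the theory predicts a weaker response pattern.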

8.
Pigeons pecked on three keys, responses to one of which could be reinforced after 3 flashes of the houselight, to a second key after 6, and to a third key after 12. The flashes were arranged according to variable-interval schedules. Response allocation among the keys was a function of the number of flashes. When flashes were omitted, transitions occurred very late. Increasing flash duration produced a leftward shift in the transitions along a number axis. Increasing reinforcement probability produced a leftward shift, and decreasing reinforcement probability produced a rightward shift. Intermixing different flash rates within sessions separated allocations: Faster flash rates shifted the functions sooner in real time, but later in terms of flash count, and conversely for slower flash rates. A model of control by fading memories of number and time was proposed.

9.
Pigeons pecked keys on concurrent-chains schedules that provided a variable interval 30-sec schedule in the initial link. One terminal link provided reinforcers in a fixed manner; the other provided reinforcers in a variable manner with the same arithmetic mean as the fixed alternative. In Experiment 1, the terminal links provided fixed and variable interval schedules. In Experiment 2, the terminal links provided reinforcers after a fixed or a variable delay following the response that produced them. In Experiment 3, the terminal links provided reinforcers that were fixed or variable in size. Rate of reinforcement was varied by changing the scheduled interreinforcer interval in the terminal link from 5 to 225 sec. The subjects usually preferred the variable option in Experiments 1 and 2 but differed in preference in Experiment 3. The preference for variability was usually stronger for lower (longer terminal links) than for higher (shorter terminal links) rates of reinforcement. Preference did not change systematically with time in the session. Some aspects of these results are inconsistent with explanations for the preference for variability in terms of scaling factors, scalar expectancy theory, risk-sensitive models of optimal foraging theory, and habituation to the reinforcer. Initial-link response rates also changed within sessions when the schedules provided high, but not low, rates of reinforcement. Within-session changes in responding were similar for the two initial links. These similarities imply that habituation to the reinforcer is represented differently in theories of choice than are other variables related to reinforcement.

10.
The effects of schedule of reinforcement (partial vs. consistent) and delay of reward (0 to 20 sec) on running in rats were examined in two investigations. The effects of delay depended upon schedule of reinforcement; acquisition speed decreased as delay increased under consistent reinforcement, a common finding, while acquisition speed was independent of delay under partial reinforcement, a new finding. The partial-reinforcement acquisition effect or PRAE is defined as faster acquisition speed under partial than under consistent reinforcement. Because running speed was independent of delay under partial reinforcement, but decreased as delay increased under consistent reinforcement, the PRAE increased as delay of reinforcement increased.

11.
In Experiment I, eight groups of rats (n = 20) were given shuttlebox-avoidance training. Two levels of shock (.3 and 1.6 mA) were combined factorially with two levels of reward (large and small) under both continuous and discontinuous (.75 sec on and 2.00 sec off) shock. Visual situational cues were absent after a shuttle response for the large-reward condition and present for the small-reward condition. Superior performance was obtained with weak rather than strong shock under both reward conditions and with large rather than small reward only under the weak-shock condition. Continuity of shock had no differential effect on performance. Experiment II allowed the conclusion that the reward effect was attributable to a reinforcement mechanism. The data were taken as support for the effective reinforcement theory, which emphasizes the importance in avoidance learning of fear conditioned to situational cues.

12.
Although an arbitrarily specified instrumental response may persist when free reinforcers are concurrently available, the interpretation that earned reinforcers are preferred is tenuous. The present advance-response procedure used both time allocation and advance response rates as indices of preference between free and earned water in rats. When multiple schedule components were two response-dependent schedules with different overall reinforcement rates, higher rates of reinforcement were preferred. However, when the multiple schedule consisted of response-dependent and response-independent components equated for overall rates of reinforcement, no consistent preference for free or earned reinforcers was evident. That a preference for free reinforcers was not obtained is difficult to reconcile with concepts of least effort.

13.
Behavioral contrast was produced in two target components of a four-component multiple schedule by having two target stimuli followed either by a higher rate of reinforcement or by extinction. Response rate was higher in the target followed by extinction. Periodic probe trials were then presented in which the two target stimuli were presented together. Choice on these probe trials was in favor of the stimulus followed by the higher rate of reinforcement during regular training. Experiment 2 replicated this finding but with probe trials presented throughout training. Here, preference for the stimulus followed by the higher rate of reinforcement was evident early in training, substantially before the contrast effects developed. The results challenge interpretations of contrast based on the concept of relative value.

14.
Pigeons responded in a successive-encounters procedure that consisted of a search period, a choice period, and a handling period. The search period was either a fixed-interval or a mixed-interval schedule presented on the center key of a three-key chamber. Upon completion of the search period, the center key was turned off and the two side keys were lit. A pigeon could either accept a delay followed by food (by pecking the right key) or reject this option and return to the search period (by pecking the left key). During the choice period, a red right key represented the long alternative (a long handling delay followed by food), and a green right key represented the short alternative (a short handling delay followed by food). The experiment consisted of a series of comparisons for which optimal diet theory predicted no changes in preference for the long alternative (because the overall rates of reinforcement were unchanged), whereas the hyperbolic-decay model predicted changes in preference (because the delays to the next possible reinforcer were varied). In all comparisons, the results supported the predictions of the hyperbolic-decay model, which states that the value of a reinforcer is inversely related to the delay between a choice response and reinforcer delivery.
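The hyperbolic-decay model referred to here is conventionally written as

$$V = \frac{A}{1 + KD},$$

where $V$ is the present value of a reinforcer of amount $A$ delivered after delay $D$, and $K$ is a discounting parameter. Because value depends on the delay from the choice response to the reinforcer rather than on the overall rate of reinforcement, the model can predict the preference shifts observed here even though optimal diet theory, which tracks overall rates, predicts none.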

15.
Three rats responding on fixed-interval schedules received either 1 or 4 pellets at the end of 2-min intervals. Five experimental conditions manipulated the relative probabilities of these two reinforcers. Response rates following the 1-pellet reinforcer were always higher than the rates following the 4-pellet reinforcer. The rates after the 1-pellet reinforcer were also highest in those experimental conditions where it was delivered with low probability. Contrast effects were observed when two sequential fixed intervals differed in reinforcer magnitudes. It was concluded that the context of reinforcement as well as the specific reinforcer magnitude affects responding under fixed-interval schedules.

16.
Pigeons pecked on two response keys that delivered reinforcers on a variable-interval schedule. The proportion of reinforcers delivered by one key was constant for a few sessions and then changed, and subjects’ choice responses were recorded during these periods of transition. In Experiment 1, response proportions approached a new asymptote slightly more slowly when the switch in reinforcement proportions was more extreme. In Experiment 2, slightly faster transitions were found with higher overall rates of reinforcement. The results from the first session after a switch in the reinforcement proportions were generally consistent with a mathematical model that assumes that the strength of each response is increased by reinforcement and decreased by nonreinforcement. However, neither this model nor other similar models predicted the “spontaneous recovery” observed in later sessions: At the start of these sessions, response proportions reverted toward their preswitch levels. Computer simulations could mimic the spontaneous recovery by assuming that subjects store separate representations of response strength for each session, which are averaged at the start of each new session.
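A minimal sketch of the class of model described in the last two sentences, in Python (the update rule, parameter values, and choice rule are assumptions chosen for illustration; the abstract specifies only that response strength grows with reinforcement, shrinks with nonreinforcement, and that per-session strengths are averaged at the start of each new session):

```python
# Sketch of a linear-operator choice model with per-session memory.
# All numeric parameters and the proportional choice rule are assumptions
# for illustration; only the qualitative structure follows the abstract.

import random

ALPHA = 0.10   # increment applied to a response's strength when it is reinforced
BETA = 0.02    # decrement applied when it is not reinforced


def run_session(strengths, reinf_probs, n_trials=500):
    """Simulate one session of two-key choice, updating response strengths."""
    for _ in range(n_trials):
        # Choose a key with probability proportional to its current strength.
        p_left = strengths[0] / (strengths[0] + strengths[1])
        key = 0 if random.random() < p_left else 1
        if random.random() < reinf_probs[key]:
            strengths[key] += ALPHA * (1.0 - strengths[key])   # reinforced
        else:
            strengths[key] -= BETA * strengths[key]            # not reinforced
    return strengths


def session_start(history):
    """Average the stored end-of-session strengths to set the next session's
    starting strengths (the averaging step said to mimic spontaneous recovery)."""
    return [sum(s[k] for s in history) / len(history) for k in range(2)]


# Example: five sessions at 80:20 reinforcement proportions, then a switch to 20:80.
history = [[0.5, 0.5]]
for n, probs in enumerate([(0.8, 0.2)] * 5 + [(0.2, 0.8)] * 3, start=1):
    start = session_start(history)
    end = run_session(list(start), probs)
    history.append(end)
    print(f"session {n}: start {[round(v, 3) for v in start]} -> end {[round(v, 3) for v in end]}")
```

In this sketch, the averaged starting strengths carry the preswitch bias forward, so simulated response proportions at the start of each post-switch session revert toward their preswitch levels, which is the spontaneous-recovery pattern the abstract describes.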

17.
Three experiments investigated the effects of magnitude and schedule of reinforcement and level of training in instrumental escape learning at a 24-h intertrial interval. In Experiment I, two magnitudes of reinforcement were factorially combined with two schedules of reinforcement (CRF and PRF). Under PRF, large reward produced greater resistance to extinction than did small reward, while the reverse was true under CRF. In Experiment II, two levels of acquisition training were factorially combined with three schedules of reinforcement (CRF, single-alternation, and nonalternated PRF). Patterned running was observed late in acquisition in the single-alternation extended-training condition. Resistance to extinction was greater for the nonalternated PRF condition than for the single-alternation condition following extended acquisition, and the reverse was true following limited acquisition. Experiment III confirmed the extinction findings of Experiment II. The results of all three experiments supported an analysis of escape learning at spaced trials in terms of Capaldi’s (1967) sequential theory.

18.
Pigeons’ responses on two keys were recorded before and after the percentage of reinforcers delivered by each key was changed. In each condition of Experiment 1, the reinforcement percentage for one key was 50% for several sessions, then either 70% or 90% for one, two, or three sessions, and then 50% for another few sessions. At the start of the second and third sessions after a change in reinforcement percentages, choice percentages often exhibited spontaneous recovery—a reversion to the response percentages of earlier sessions. The spontaneous recovery consisted of a shift toward a more extreme response percentage in some cases and toward a less extreme response percentage in other cases, depending on what reinforcement percentages were previously in effect. In Experiment 2, some conditions included a 3-day rest period before a change in reinforcement percentages, and other conditions included no such rest days. Slightly less spontaneous recovery was observed in conditions with the rest periods, suggesting that the influence of prior sessions diminished with the passage of time. The results are consistent with the view that choice behavior at the start of a new session is based on a weighted average of the events of the past several sessions.
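One way to write down the weighted-average view in the final sentence (the functional form is an assumption; the abstract does not give one) is

$$C_n(\text{start}) = \sum_{i=1}^{k} w_i\, C_{n-i}(\text{end}), \qquad \sum_{i=1}^{k} w_i = 1,$$

where $C_n(\text{start})$ is the choice percentage at the start of session $n$, $C_{n-i}(\text{end})$ is the percentage reached at the end of session $n-i$, and the weights $w_i$ decline for older sessions. Letting the weights decay with elapsed time rather than with session count would also accommodate the reduced spontaneous recovery seen after the 3-day rest periods.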

19.
The acquisition, maintenance, and extinction of autoshaped responding in pigeons were studied under partial and continuous reinforcement. Five values of probability of reinforcement, ranging from .1 to 1.0, were combined factorially with five values of intertrial interval ranging from 15 to 250 sec for different groups. The number of trials required before autoshaped responding emerged varied inversely with the duration of the intertrial interval and probability of reinforcement, but partial reinforcement did not increase the number of reinforcers before acquisition. During maintained training, partial reinforcement increased the overall rate of responding. A temporal gradient of accelerated responding over the trial duration emerged during maintenance training for partial reinforcement groups, and was evident for all groups in extinction. Partial reinforcement groups responded more than continuous reinforcement groups over an equivalent number of trials in extinction. However, this partial-reinforcement extinction effect disappeared when examined in terms of the omission of “expected” reinforcers.

20.
In each of the 9th (N = 75) and 11th (N = 84) grades and on each subtest of the ITED battery, overachieving and underachieving groups were identified by using the predicted scores plus and minus one SB as the cutting points. When these two groups were compared on tests of creative thinking, the mean scores did not show any consistent trend across the different achievement areas to favor either one of these groups. Because of the low correlations between IQ and creativity (.12 in 9th, -.01 in 11th) and between achievement and creativity (-.02 to .21 in 9th, -.16 to .07 in 11th), analyses of covariance controlling for IQ did not alter the findings.

