In the experiments of Jozefowiez, Cerutti & Staddon (2005), pigeons could choose between fixed-interval and variable-interval schedules, and in both cases “the pigeons timed reinforcement on each schedule from trial onset” (Jozefowiez, Cerutti & Staddon, 2005). Jozefowiez et al. (2005) identified two distinct processes in the pigeons’ behavior: the first governs the emission of responses and determines the timing of the whole activity, while the second determines the type of response and is responsible for allocating behavior between the two response keys. A second experiment by this research group, which relied on two different fixed-interval schedules, also supported these conclusions.
The findings of Lee, Sturmey & Fields (2007) show that response variability is affected by schedules of reinforcement and that schedule-induced variability differs across schedules: continuous reinforcement produces weaker response variability than intermittent reinforcement. Although studies of the effects of periodicity, intermittency, and schedule type on response variability have yielded mixed results, it has been established that contingencies have a direct impact on the variability of operant responding. In particular, factors such as differential reinforcement of novel or less frequent behavior, lag reinforcement schedules, and percentile reinforcement schedules directly influence operant response variability (Lee, Sturmey & Fields, 2007). Lee et al. (2007) also explored indirect factors affecting response variability and identified that characteristics such as response covariation, response generalization, and response allocation account for differences among treatment packages and contingencies affecting response variability. Lee et al. (2007) further examined the effects of external variables, such as drug administration and different levels of food deprivation, on response variability.
There are also numerous studies devoted to extinction in operant conditioning and its interaction with fixed-ratio and variable-ratio schedules. The work of Nevin & Grace (2005) analyzes resistance to extinction both during the transition and in the steady state. Rate estimation theory was used to derive the hypothesis that resistance to extinction depends on the training reinforcer rate. Three experiments with pigeons on variable-interval schedules with different numbers of reinforcers showed that resistance to extinction was not statistically related to the number of reinforcers but was related to the rate of reinforcement. A comparison of continuous and partial reinforcement trials showed that more reinforcers were omitted in the continuous reinforcement trials than in the partial reinforcement trials.
Moody, Sunsay & Bouton (2006) explored the effects of trial spacing and priming on extinction. This research group had previously found that priming of the conditioned stimulus in short-term memory can influence trial-spacing effects: a non-reinforced presentation of a conditioned stimulus to rats shortly before a reinforced trial (e.g., 60 s earlier) led to slower acquisition than a presentation long before the reinforced trial (e.g., 4 min earlier). Moody, Sunsay & Bouton (2006) further explored the effects of trial spacing on extinction; they determined that short inter-trial intervals strongly affected extinction performance but affected extinction learning to a lesser degree.
Miguez, Witnauer & Miller (2012) studied the effect of contextual associations on the partial reinforcement acquisition deficit. They examined how associations involving the conditioned and unconditioned stimuli contribute to degraded stimulus control under partial reinforcement and continuous reinforcement training. It was identified that the training context “must be an effective competitor to produce the partial reinforcement acquisition deficit” (Miguez, Witnauer & Miller, 2012); it was also determined that partial reinforcement increases extinction to a level comparable with that observed after continuous reinforcement training.