Modeling Procedures

In this supplement, we describe in detail the diffusion model procedures and results from "Response biases in simple decision making: Faster decision making, faster response execution, or both?" by Starns and Ma. As noted in the main paper, our goals for modeling were to make sure that our joystick response procedure produced results similar to past studies and to identify which model parameter best explains our mapping effect. We did not attempt to estimate separate nondecision parameters across responses or to compare the fit of models with and without separate nondecision time parameters. We made this choice because giving each response a unique nondecision time makes the model extremely flexible, promoting unreliable parameter estimation and model selection. Specifically, Voss, Voss, and Klauer (2010) explored parameter estimation and model recovery with separate nondecision time parameters and found that both parameter estimation and model selection were unreliable with standard trial counts for a single-session experiment. The results for empirical data can only be worse, because empirical data will not necessarily conform to the assumptions of the model in every way (unlike data generated from the model itself), and we would have to consider the possibility that both response biases and nondecision time change across conditions (whereas Voss et al. only considered discriminating a pure-bias model from a pure-nondecision model).

We fit data from the constant-mapping conditions in Experiments 2A and 2B because these conditions are similar to previous asterisk-task experiments except that they use a joystick instead of keypresses for responding. We estimated parameter values using the χ2 method (see Ratcliff & Tuerlinckx, 2002, for details). The data set for Experiment 2A had six conditions produced by crossing the high and low stimulus classes with the three hint conditions (no hint, high hint, low hint). Each condition contributes 11 degrees of freedom (df) with the standard χ2 method (Ratcliff & Tuerlinckx, 2002), so the data had a total of 66 df. The model that we fit to these data had 12 free parameters: boundary width (a); starting point (z) for no hint, high hint, and low hint; range in starting point variation across trials (sZ); drift rate discriminability (vD; the difference in drift rates for high and low stimuli); drift criterion bias (vC) for no hint, high hint, and low hint (this parameter is subtracted from all drift rates and shifts them equally toward the lower or upper boundary for positive and negative values, respectively; Starns, Ratcliff, & White, 2012); standard deviation in drift rate variability across trials (η); average nondecision time (TER); and range of nondecision time variation across trials (sT).
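To give a sense of where the 11 df per condition come from: the χ2 method splits each response's RT distribution at the .1, .3, .5, .7, and .9 quantiles, giving six bins per response and 12 bins per condition; because the 12 bin probabilities sum to 1, the condition contributes 11 df. The sketch below is our own minimal illustration of this bookkeeping, not the fitting code used in the study; the `cdf_top`/`cdf_bottom` callables stand in for a hypothetical interface to the model's defective CDFs.

```python
import numpy as np

# Standard quantile cut points used by the chi-square method
QUANTILES = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
# The six inter-quantile bins hold these shares of a response's trials
BIN_SHARES = np.array([0.1, 0.2, 0.2, 0.2, 0.2, 0.1])

def condition_chi_square(rts_top, rts_bottom, cdf_top, cdf_bottom, n):
    """Pearson chi-square for one condition: 12 bins (6 per response), 11 df.

    rts_top / rts_bottom: observed RTs for top- and bottom-boundary responses.
    cdf_top / cdf_bottom: hypothetical callables giving the model's defective
    CDF, i.e., the probability of that response having occurred by time t.
    n: total trial count for the condition.
    """
    chi2 = 0.0
    for rts, cdf in ((rts_top, cdf_top), (rts_bottom, cdf_bottom)):
        p_resp = len(rts) / n                # observed response proportion
        cuts = np.quantile(rts, QUANTILES)   # empirical quantile cut points
        obs = BIN_SHARES * p_resp            # observed bin probabilities
        # expected bin probabilities: model mass between successive cut points
        cum = np.concatenate(([0.0], [cdf(t) for t in cuts], [cdf(np.inf)]))
        exp = np.diff(cum)
        chi2 += n * np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-10))
    return chi2
```

Summing this statistic over the six conditions of Experiment 2A gives the 66-df total, and a minimizer searches the 12-parameter space for the values that make the total as small as possible.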

The data set for Experiment 2B had four conditions produced by crossing the high and low stimulus classes with the two hint conditions (high hint and low hint), so the data had a total of 44 df. The model used to fit these data had 10 free parameters: boundary width (a); starting point (z) for high hint and low hint; range in starting point variation across trials (sZ); drift rate discriminability (vD); drift criterion bias (vC) for high hint and low hint; the standard deviation in drift rate variability across trials (η); average nondecision time (TER); and range of nondecision time variation across trials (sT).

The mean χ2 value was 102 for Experiment 2A and 68 for Experiment 2B. If the model fit perfectly, then the mean χ2 value would be near the number of df associated with the fit: 54 for Experiment 2A (66 df in data – 12 free parameters) and 34 for Experiment 2B (44 df in data – 10 free parameters). Our χ2 values are about double the expected value, which is not unusual for diffusion model fits. Table S1 reports the average best-fitting parameter values across participants.

Results were consistent with past applications of the model. We chose data from Experiment 1 in Starns (2014) as a specific comparison data set because these data were collected in the same lab with the same subject population as the current experiments (but with standard keypress responding). The average boundary width was .13 and .11 for Experiments 2A and 2B, similar to the .11 value for the comparison data set. As expected, relative starting point varied with the hint condition, with values around .5 (unbiased) for the no-hint condition and below/above .5 for the low-hint/high-hint conditions. The range of between-trial variation in starting point (.03) was similar to the comparison data set (.02). The drift discriminability parameters indicated a high level of performance in discriminating high and low stimuli (these cannot be directly compared to the other data set because the distributions of asterisks were different). The drift criterion bias parameters also followed the hint conditions, with no bias (0) for no hint, very slight biases toward the bottom boundary (.01) with "low" hints, and larger biases toward the top boundary (-.05 and -.04) for "high" hints. Given that we used a different response modality, we were especially interested in whether the nondecision time parameters would be comparable to previous fits. They were: average nondecision times were 0.46 and 0.49 s versus 0.47 s in the comparison data set, and the ranges were 0.19 and 0.24 s versus 0.21 s in the comparison data set.

Mapping Effect Mechanisms

In the paper we noted that bias parameters in sequential sampling models (including starting-point and drift-rate biases in the diffusion model) affect response proportion in addition to RT. Thus, if bias mechanisms were driving the mapping effect, then we should see a mapping effect in accuracy. That is, the bias effect should be larger with constant than with variable mapping for both RT and accuracy, whereas only RT should show a larger bias effect if nondecision time drives the mapping effect. Our results showed a mapping effect on RT and a null mapping effect on accuracy, supporting a nondecision-time locus of the effect. One important consideration for evaluating this conclusion is how large the mapping effect on accuracy should be if the effect were driven by bias parameters. If only a small accuracy effect is predicted, then failing to observe an effect is not strong evidence against a role for bias parameters. To explore this further, we used the best-fitting parameters to define how large the mapping effect on accuracy should be if the mapping effect on RT is produced by bias parameters instead of nondecision time. We used data from Experiment 2B because its within-subjects design allowed us to define a mapping effect for each participant.
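The logic can be made concrete with the standard first-passage result for the Wiener process (a simplification that ignores across-trial variability; the parameter values below are illustrative, not fitted values). The probability of terminating at the top boundary depends on the starting point z and the drift v, but nondecision time never appears in it, because nondecision time only adds a constant to RT.

```python
import math

def p_top(v, a, z, s=0.1):
    """Probability that the diffusion process is absorbed at the top boundary,
    for drift v (nonzero), boundary width a, starting point z, and diffusion
    coefficient s = 0.1 (the conventional scaling). Standard Wiener
    first-passage result; no across-trial variability."""
    return (1 - math.exp(-2 * v * z / s**2)) / (1 - math.exp(-2 * v * a / s**2))

a, v = 0.12, 0.2                    # illustrative values only
biased_up = p_top(v, a, 0.6 * a)    # starting point shifted toward "high"
biased_down = p_top(v, a, 0.4 * a)  # starting point shifted toward "low"
# Moving z changes the response proportion, so a starting-point account
# predicts a mapping effect on accuracy; nondecision time does not enter
# p_top at all, so it can shift RTs without touching accuracy.
```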

We adjusted the target model parameter in the variable-mapping condition to match the size of the mapping effect on the median correct RT for each participant. The values of all other parameters were held constant across the mapping conditions. For starting point and drift-rate bias, we moved the values from the high-hint and low-hint conditions equally in opposite directions. For example, if a participant's bias effect on RT was smaller with variable than with constant mapping, we would move the variable-mapping starting point in the high-hint condition lower (farther from the top, "high" boundary) and move the starting point in the low-hint condition higher (farther from the bottom, "low" boundary) until the bias effect on RT was small enough to match the empirical data. Once we found how much the parameter had to change to match the mapping effect on RT, we recorded the corresponding mapping effect on accuracy. As reported in the main paper, the results showed that trying to explain the mapping effect in terms of either starting-point or drift-rate biases produces a mapping effect on accuracy that is too large for the empirical data. These results show that our mapping effect can be confidently attributed to changes in nondecision time.
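The starting-point version of this adjustment can be sketched by simulation (our own illustration, not the fitting code from the study; the parameter values and the grid of shifts are made up for demonstration, and across-trial variability is omitted for brevity): shrink the starting-point bias in symmetric steps, track the resulting bias effect on median RT, and read off the accompanying accuracy effect at each step.

```python
import numpy as np

def simulate(a, z, v, ter, n, rng, s=0.1, dt=0.001, t_max=5.0):
    """Euler simulation of a simple diffusion process.
    Returns (choice, rt); choice 1 = top ("high") boundary.
    Trials unabsorbed by t_max keep choice 0 and rt = t_max + ter."""
    x = np.full(n, float(z))
    rt = np.full(n, t_max + ter)
    choice = np.zeros(n, dtype=int)
    alive = np.ones(n, dtype=bool)
    t = 0.0
    while alive.any() and t < t_max:
        t += dt
        x[alive] += v * dt + s * np.sqrt(dt) * rng.standard_normal(alive.sum())
        top = alive & (x >= a)
        bottom = alive & (x <= 0.0)
        choice[top] = 1
        rt[top | bottom] = t + ter
        alive &= ~(top | bottom)
    return choice, rt

def bias_effects(a, z_hint_high, z_hint_low, v, ter, n, rng):
    """Bias effects for 'high' stimuli (positive drift) across hint conditions:
    RT effect = how much faster 'high' responses are with a 'high' hint;
    accuracy effect = how much more often 'high' is chosen with a 'high' hint."""
    c_hi, rt_hi = simulate(a, z_hint_high, v, ter, n, rng)
    c_lo, rt_lo = simulate(a, z_hint_low, v, ter, n, rng)
    rt_effect = np.median(rt_lo[c_lo == 1]) - np.median(rt_hi[c_hi == 1])
    acc_effect = c_hi.mean() - c_lo.mean()
    return rt_effect, acc_effect

rng = np.random.default_rng(1)
a, v, ter, n = 0.12, 0.25, 0.46, 1000        # illustrative values only
z_high, z_low = 0.62 * a, 0.41 * a           # constant-mapping starting points
for step in np.linspace(0.0, 0.08 * a, 3):   # symmetric shrinkage of the bias
    rt_eff, acc_eff = bias_effects(a, z_high - step, z_low + step, v, ter, n, rng)
    print(f"shift {step:.4f}: RT effect {rt_eff:.3f} s, accuracy effect {acc_eff:.3f}")
# Pick the shift whose RT effect matches the observed variable-mapping effect;
# the accuracy effect at that shift is the model's prediction.
```

The printed grid makes the coupling visible: any symmetric starting-point change large enough to shrink the RT effect also shrinks the accuracy effect, which is why a bias account predicts a mapping effect on accuracy while a nondecision-time account does not.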

Table S1

Average best-fitting parameter values across participants

Experiment and Condition / a / z/a / sZ / vD / vC / η / TER / sT
2A
"Low" Hint / .13 / .40 / .03 / .44 / .01 / .21 / 0.46 / 0.19
No Hint /  / .52 /  /  / .00 /  /  /
"High" Hint /  / .63 /  /  / -.05 /  /  /
2B
"Low" Hint / .11 / .41 / .03 / .42 / .01 / .19 / 0.49 / 0.24
"High" Hint /  / .61 /  /  / -.04 /  /  /

Note: Empty cells indicate parameters that were fixed to the same value as the cell above. Parameter labels are identified in the main text.

References

Ratcliff, R., & Tuerlinckx, F. (2002). Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9, 438-481.

Starns, J. J. (2014). Using response time modeling to distinguish memory and decision processes in recognition and source tasks. Memory & Cognition, 42, 1357-1372.

Starns, J. J., Ratcliff, R., & White, C. N. (2012). Diffusion model drift rates can be influenced by decision processes: An analysis of the strength-based mirror effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 1137-1151.

Voss, A., Voss, J., & Klauer, K. C. (2010). Separating response-execution bias from decision bias: Arguments for an additional parameter in Ratcliff’s diffusion model. British Journal of Mathematical and Statistical Psychology, 63, 539-555.