Machine learning paper on the Stroop effect resubmitted

We were able to revise and resubmit our supervised machine learning paper on the Stroop effect. We applied supervised machine learning to the ManyLabs 3 data and got pretty surprising results. To find out exactly what they are, check out the paper. The really nice feature of the revision is that we added extra thresholds to determine how real data compare against random noise and, in doing so, updated the well-known guidelines set forth by Strobl et al. (2009, see for example here). We suspect that our thresholds will be more accurate, but that will definitely require further research.
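To give a flavor of the general idea (this is a minimal sketch, not the authors' exact procedure or thresholds): one common way to benchmark predictor importance against random noise is to add a pure-noise column to the data and treat its importance as the floor that real predictors have to clear. The feature names and data below are hypothetical placeholders, not the ManyLabs 3 variables.

```python
# Minimal sketch (hypothetical data): compare random-forest variable
# importance against a pure-noise predictor used as a threshold.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Fake illustrative predictors and an outcome that only depends on "age".
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "lab_site": rng.integers(0, 20, n),
})
y = 0.5 * X["age"] + rng.normal(0, 10, n)

# Add a pure-noise column; its importance serves as the comparison threshold.
X["noise"] = rng.normal(size=n)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=30, random_state=0)

threshold = imp.importances_mean[list(X.columns).index("noise")]
for name, score in zip(X.columns, imp.importances_mean):
    flag = "above noise" if score > threshold else "at/below noise"
    print(f"{name:10s} {score:8.4f}  ({flag})")
```

Again, this is only meant to illustrate the noise-benchmark logic; the thresholds developed in the paper are more involved than a single shadow variable.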

For the abstract, see below. The paper can be downloaded for free here.

Abstract:

An experimental science relies on solid and replicable results. The last few years have seen a rich discussion on the reliability and validity of psychological science and whether our experimental findings can falsify our existing theoretical models. Yet, concerns have also arisen that this movement may impede new theoretical developments. In this article, we re-analyze the data from a crowdsourced replication project published in this journal that concluded that lab site did not matter as a predictor of Stroop performance and, therefore, that context was likely to matter little in predicting the outcome of the Stroop task. The authors challenge this conclusion via a new analytical method, supervised machine learning, that "allows the data to speak." The authors apply this approach to the results from a Stroop task to illustrate the utility of machine learning and to propose moderators for future (confirmatory) testing. The authors discuss differences from some conclusions of the original article, which variables need to be controlled for in future inhibitory control tasks, and why psychologists can use machine learning to find surprising, yet solid, results in their own data.