In this post, I share simple techniques to screen participants before they take part in your online experiment. These techniques filter out bots and participants using automated scripts, as well as participants who fake their geolocation using VPN/VPS services, proxies, and server farms.
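As a minimal sketch of one such screening technique, a server-side check could flag IP addresses that fall inside known datacenter ranges, since VPNs, VPS hosts, and server farms route traffic through such blocks. The CIDR ranges below are illustrative placeholders only; in practice you would use a maintained list from an IP-intelligence provider.

```python
import ipaddress

# Illustrative datacenter CIDR ranges (placeholders, not a real blocklist;
# real lists are published by cloud providers and IP-intelligence services).
DATACENTER_RANGES = [
    ipaddress.ip_network("3.0.0.0/8"),      # example: a cloud-provider block
    ipaddress.ip_network("104.16.0.0/13"),  # example: a CDN/proxy block
]

def looks_like_server_ip(ip_str: str) -> bool:
    """Return True if the IP falls inside a known datacenter range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in DATACENTER_RANGES)

# Screen an address before letting the participant into the experiment.
print(looks_like_server_ip("3.14.15.92"))   # inside the example cloud block
print(looks_like_server_ip("192.0.2.10"))   # outside both example ranges
```

A check like this catches only the crudest cases; the post's other techniques (bot traps, attention checks) complement it.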
The COVID pandemic has increased the popularity of online experiments in the field of accounting, but, at the same time, criticism of online participants has been mounting rapidly. Does this mean online experiments are rubbish, or are we perhaps a little too harsh?
Both replications and practical relevance are awkward discussion topics for most experimental accounting researchers. Yet, replications offer a concrete way to address concerns we may have about the 'practical relevance' of experimental findings.
People in workplace settings can typically communicate freely with each other, but many experiments scale communication down to a restricted form. Should we maintain this status quo or is there room for free-form communication? Read this post by Farah Arshad and Cardin Masselink to find out more.
We are happy to announce our updated tutorial series on oTree, which is an open-source platform for behavioral research. The tutorial is a three-part series created for doctoral students and researchers who are interested in using oTree for their survey and experimental research.
Developing theory after collecting data is problematic because the theoretical predictions are post hoc. However, does that imply that all exploratory analyses are pointless? In this post, Jeremy Bentley explains that exploratory analyses can still add value even when researchers prefer to pre-commit to ex-ante theoretical predictions.
Experiments that recruit from online participant pools such as MTurk and Prolific have become increasingly popular over the past two decades. However, scholars have referred to such experiments as both laboratory and field experiments, so which classification should we use?