The COVID pandemic has increased the popularity of online experiments in the field of accounting, but, at the same time, criticism of online participants has been mounting rapidly. Does this mean online experiments are rubbish, or are we perhaps being a little too harsh?
Both replications and practical relevance are awkward discussion topics for most experimental accounting researchers. Yet, replications offer a concrete way to address concerns we may have about the 'practical relevance' of experimental findings.
Experiments that recruit from online participant pools such as MTurk and Prolific have become increasingly popular over the past two decades. However, scholars have referred to such experiments as both laboratory experiments and field experiments. Which classification should we use?
Choosing the right participant pool for your experiment is challenging. Which experiments require professional participants? Does it matter whether you recruit students or online participants? In this post, Jeremy Bentley explains his approach to participant pool selection.
Bots are powerful yet often overlooked tools that help experimental researchers test their applications more effectively and efficiently. In this post, Victor van Pelt explains their use and argues that their usefulness may even extend beyond testing.
Which design features of accounting experiments contribute the most to participant motivation, participant engagement, and perceived similarity to practice? Bart Dierynck and Victor van Pelt are in the process of providing an empirical answer.
Many experiments generate random numbers for participants. Yet, the code used to generate those numbers sometimes does not do what we think it does, which could amount to deception when the number-generation process is described to participants.
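A classic instance of this pitfall, sketched below in Python purely as a hypothetical illustration (it is not drawn from any specific study): `random.randrange` excludes its upper bound, so code that calls `randrange(1, 100)` can never produce 100, even though participants may have been told the number lies between 1 and 100.

```python
import random

# Instructions to participants might say:
# "The computer will draw a random number between 1 and 100."

# Pitfall: randrange excludes the upper bound, so 100 never occurs.
draws = [random.randrange(1, 100) for _ in range(10_000)]
assert max(draws) <= 99  # 100 is unreachable with this call

# randint includes both endpoints, matching the stated instructions.
draws = [random.randint(1, 100) for _ in range(10_000)]
assert min(draws) >= 1 and max(draws) <= 100
```

Checking the generated values against the exact wording shown to participants is a cheap way to rule out this kind of unintended deception.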