Are online participants still a treasure? Or are they rubbish?

Since the COVID pandemic, the number of online experiments in accounting has increased rapidly. Lockdown measures at universities and business schools pushed experimental researchers out of the lab and toward online participant pools. Field researchers found themselves without partnering firms and institutions and started fishing from online participant pools to keep their research pipelines going. You would think this significant shift toward online experimental research would boost its popularity in accounting. While the number of online experiments has indeed increased over the past few years, this growth has also attracted harsh and often one-sided criticism. In this post, I discuss this criticism and argue that, despite the challenges of recruiting online participants, they remain a valuable pool to fish from, depending on the type and focus of your experiment.
I have experienced the popularity of online experiments as a rollercoaster. I still remember a seminar speaker presenting an online experiment during my doctoral studies a little over five years ago. The speaker claimed that the online participants they recruited through Amazon’s Mechanical Turk (MTurk) were, to a large degree, real executives participating in MTurk studies during their lunch break. This claim was (rightfully) met with much skepticism by the seminar participants, but it illustrates the tone at the time, with papers like Farrell, Grenier, and Leiby (2017) defending the honor of online experiments by suggesting that online participants were "just as sound" as student samples. During those years, it was challenging to formally critique the use of online participant pools in the experimental accounting community.
Today, little remains of the euphoria from that time, and, as things stand right now, our rollercoaster car seems to be accelerating fast on its way down. Recent research has revealed that online participants can hold multiple accounts, enabling them to participate in the same study multiple times, and that there are issues with participant farms, VPNs/VPSs, and bots (Ahler et al. 2019; Dennis et al. 2020; Kennedy et al. 2020). Since this evidence came to light, criticism of the use of online participants has grown within the experimental accounting community, to the extent that choosing online participants can, in and of itself, be raised as a major concern during workshop discussions and in reviews.
In response to these concerns, many have proposed fixes. Some, for instance, argue that certain online platforms and labor markets are better than others. They propose that the only way to get high-quality online participants nowadays is to ditch MTurk entirely and recruit exclusively on Prolific. To these folks, MTurk is a lost cause, and Prolific miraculously solves any problem we, experimental accounting researchers, might have with recruiting online participants. Others are more open-minded and believe MTurk can still have value. This stance seems more reasonable given that many online workers operate on multiple platforms anyway and each platform has its own strengths and weaknesses. However, they offer fixes such as exclusively running studies through parties like CloudResearch or adding extensive pre-screening questions to filter out "poor quality" types (see Bentley 2021). Personally, I am a proponent of using programming to objectively filter out participants who violate the license agreement of the online platform and the consent form they accepted before taking part in the study. In my recent experiments, I use JavaScript to filter participants relatively effortlessly and without storing personally identifiable data server-side. For more information, please see this post.
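To give a flavor of what such client-side filtering can look like, here is a minimal sketch, not the exact code from my studies; the salt, the expected timezone, and the function names are illustrative assumptions. It hashes the worker ID in the participant's browser, so no raw identifier ever needs to be stored server-side, and it flags two common red flags for automation and VPN/VPS use.

```javascript
// Minimal sketch of client-side participant screening (illustrative, not my exact code).
// Runs in the participant's browser; only a salted hash of the worker ID leaves it.

async function hashWorkerId(workerId, studySalt) {
  // SHA-256 of salt + ID lets you detect duplicate sign-ups without storing the raw ID.
  const bytes = new TextEncoder().encode(studySalt + workerId);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

function basicBotAndLocationChecks(expectedTimeZonePrefix) {
  const timeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
  return {
    // navigator.webdriver is true in many headless/automated browsers.
    likelyAutomated: navigator.webdriver === true,
    // A browser timezone far from the platform's stated country is a common VPN/VPS signal.
    timeZone,
    timeZoneMatches: timeZone.startsWith(expectedTimeZonePrefix),
  };
}

async function screenParticipant(workerId) {
  const idHash = await hashWorkerId(workerId, 'study-2024-salt'); // hypothetical salt
  const checks = basicBotAndLocationChecks('America/');           // hypothetical expectation
  // Flag rather than silently drop, so exclusion rules stay transparent and pre-registerable.
  return { idHash, ...checks, flagged: checks.likelyAutomated || !checks.timeZoneMatches };
}
```

Flagging rather than silently excluding keeps the exclusion criteria transparent and easy to report or pre-register.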
Regardless of which fix we prefer, there is little consensus within the community about which fix, or combination of fixes, addresses some or all of the aforementioned challenges, and why. Thus, the fixes are rather ineffective at slowing our rollercoaster down, making me wonder whether the end of the ride is near. However, this rollercoaster ride does little justice to the broader trade-offs behind deciding which participant pool to use as an experimental accounting researcher. While I will not go so far as to claim that online participant pools are bulletproof, as some of my colleagues did about five years ago, I will use these trade-offs to offer some counterweight to the problems we so conveniently invoke to sell our snake oil fixes.
The most important reason favoring online participant recruitment is that it is cheap and fast. However, whether online participants are suitable also depends on the type and focus of your experiment. Accounting is an applied academic discipline interested in a wide array of research questions, and even the types of experiments we use vary: we have econ-based experiments and more scenario-heavy experimental designs. Thus, the type of participant pool suited to the research question targeted by the experiment will also vary. For instance, online participant pools might not be the BEST choice for managerial scenarios and complex audit tasks. Such a heterogeneous and noisy participant pool likely proxies significantly worse for your target population of managers and auditors than, for instance, a student or practitioner pool. All the problems associated with recruiting participants online make matters worse, and no proposed fix or combination of fixes will really cut it: your trimmed-down, highly selective sample will never miraculously reflect the manager or auditor population better than a student or practitioner sample.
However, recruiting online participants may be acceptable for other types of research questions and target populations. For instance, if your experiment features simple effort tasks or everyday decisions that almost anyone can relate to, then your research question is far less restrictive in its target population, and recruiting participants online may be perfectly fine. Another example is experiments focusing on retail/unsophisticated investors or the average consumer. These experiments may also benefit from recruiting online participants rather than students or experts. Some of the fixes may still help improve the quality of your sample, but many of the remaining control problems and related fixes are just not as critical for this type of research; the quality of your research will never hinge entirely on whether you apply them. There is an important difference between recruiting Joe the Plumber for a straightforward real effort task and recruiting an auditor for a detailed scenario featuring an audit task. The type of research question, experiment, and target population can put an entirely different spin on what a “high quality” participant is supposed to be.
Economists are often interested in broader and more fundamental research questions about human behavior. Thus, they tend to view online experiments as a means to generalize results beyond experimental labs and (rather restrictive) student samples (Chen et al. 2016; Snowberg and Yariv 2021). Yet, like us, they also want to ensure that online participants are not just a convenience sample of particularly low quality. One of their solutions is to throw quantity at the problem of low-quality online participants. Empirical evidence reveals that online samples are mainly noisier than student samples (Snowberg and Yariv 2021). Noisier dependent variables are not necessarily detrimental to your experiment’s hypothesis tests; they just mean you may require a larger sample size to cut through the noise and detect your hypothesized effects.
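To make that trade-off concrete, here is a back-of-the-envelope calculation of my own (not taken from the papers cited above): for a simple two-group mean comparison, the required sample size per cell grows with the square of the outcome's standard deviation, so a noisier pool mostly costs you participants rather than validity.

```javascript
// Back-of-the-envelope sample size for a two-group mean comparison (my own illustration).
// n per group ≈ 2 * ((z_alpha/2 + z_beta) * sigma / delta)^2
function nPerGroup(sigma, delta, zAlpha = 1.96, zBeta = 0.84) { // 5% two-sided, 80% power
  return Math.ceil(2 * Math.pow((zAlpha + zBeta) * sigma / delta, 2));
}

// If the online sample is 50% noisier than the student sample, the required n
// more than doubles, because it scales with sigma squared (1.5^2 = 2.25):
console.log(nPerGroup(1.0, 0.5)); // ~63 per group at the "student" noise level
console.log(nPerGroup(1.5, 0.5)); // ~142 per group at the noisier online level
```

With online participants being comparatively cheap, buying back that lost precision with a larger sample is often a realistic option.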
Conclusion
We should exit the rollercoaster ride. Online participant pools are (still) a good pond for accounting experimenters to fish from because they offer a broad range of participants quickly and relatively cheaply. Depending on your research question and target population, these participants may even be right up your experiment’s alley. Suppose you target a heterogeneous population or people without particular prerequisite knowledge or specialization (e.g., consumers, retail investors, etc.). In that case, online participant pools may be preferable to students or professionals burdened by demanding careers.