Choosing the right participants for your experiment

A common question when designing an experiment is whom to use as participants. I often see this question raised at the end of the planning phase (i.e., after the experiment has been designed), and the answer often has more to do with convenience than anything else. Many people quote Libby, Bloomfield, and Nelson (2002, p. 802) as the basis for this convenience approach: “When should experiments use professional subjects? Our advice is to match subjects to the goals of the experiment, but to avoid using more sophisticated subjects than is necessary to achieve those goals.”

That sentence captures only part of what Libby et al. (2002) were trying to convey. Immediately after that quote, they discuss the need for different participant groups for studies that “peer into the minds” of practitioners versus those that “focus on general cognitive abilities, or responses to economic institutions or financial market forces.” More broadly, they would probably say that the participants need to be matched to the research question.

I believe the argument can be taken one step further: the research question, the participants, and the experimental design need to be carefully paired, like a chef pairing courses in a meal. Proper planning can make the difference between mediocre and gourmet. This post offers a few thoughts on pairing research questions, participants, and experimental designs to maximize the likelihood of running a successful study.

Order matters

First, I must emphasize that the research question and the participant pool must be chosen before the experiment is designed. Sometimes the question comes first. Other times the participant pool comes first. However, the experimental design should always come last. Let me illustrate with a few examples:

1. Starting with the research question

This approach seems to be the natural direction implied by Libby et al. (2002): decide on your research question, and then choose the participant pool and experiment to match.

For example, many auditing studies use a scenario-based experiment with audit professionals as participants because the goal is to peer into the minds of auditors. The research question and participants were chosen first, and then the experiment was designed around a setting familiar to the auditors. Hammersley’s (2006) experiment is an excellent example of a good pairing: she had a question about auditor industry specialization, found her participant pool (banking and government audit specialists), and designed an experiment to match, including banking and government elements in the scenario.

In other situations, a research question may imply a particular participant pool that is difficult to obtain. For example, Bowlin, Hales, and Kachelmeier (2009) wanted to examine how audit experience affects reporting decisions when auditors become managers of audited firms. That is a challenging population to find and an even harder one to manipulate! Many researchers would either give up on the research question or run a scenario study with a less-than-ideal population. Bowlin et al. (2009) had a better idea: they designed an experiment that replicated the key features of the natural setting in a form that the available participants (undergraduates) could understand. I suspect that if they had tried a scenario-based study with their undergraduates, the results would not have worked.

This thought process extends to online studies. One option is to screen participants to find a very select group with the required knowledge to do a scenario study (a toy sketch of this logic appears below). Sometimes that works, but you will have a hard time convincing readers that you found MTurkers who are also audit partners or CEOs! Another option is to design a more abstract experiment that preserves the same institutional features without requiring years of experience from your participants. Sometimes that is not possible. If not, do not try to force a square peg into a round hole. See my Accounting Horizons paper for a lengthier discussion of MTurk worker qualifications and screening.
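To make the screening option concrete, here is a toy sketch of the logic. Everything in it is hypothetical: the questions, field names, and passing rule are invented for illustration, not taken from any platform’s API.

```python
# Hypothetical pre-screen: admit only respondents who pass short
# knowledge checks before seeing the main instrument.
EXPECTED = {
    "debits_increase_assets": True,        # basic accounting knowledge
    "unqualified_opinion_means": "clean",  # audit familiarity
}

def passes_screen(answers: dict) -> bool:
    """True only if every knowledge check matches the expected answer."""
    return all(answers.get(q) == a for q, a in EXPECTED.items())

respondents = [
    {"id": "r1", "debits_increase_assets": True,
     "unqualified_opinion_means": "clean"},
    {"id": "r2", "debits_increase_assets": False,
     "unqualified_opinion_means": "adverse"},
]
print([r["id"] for r in respondents if passes_screen(r)])  # -> ['r1']
```

The hard part, of course, is not the filter but writing checks that a motivated but unqualified respondent cannot simply look up mid-survey.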

2. Starting with the sample population

Sometimes researchers get access to a group of participants. In this case, the population should dictate both the research question and the experimental design.

For example, Rob Bloomfield, Matt Bloomfield, Tamara Lambert, and I had the opportunity to include a few questions on a nationally representative survey. We got together and brainstormed research questions that would benefit from that diverse population, and we decided to investigate how demographic factors affect people’s beliefs about different types of measure management. We then designed a concise scenario that could be read aloud during a phone survey. There was no room for a full scenario study or a market with incentives: we had a limited number of words and a requirement that we use an elementary-school vocabulary. We designed the experiment to fit the research question and the participant pool, and we learned something from it!

Unfortunately, I have sometimes made the mistake of trying to fit a professional participant pool to a pre-existing research question and experimental design. For example, I once had access to a group of finance and accounting professionals through an IMA training session I was conducting, and I made the mistake of trying to use an experiment I was already working on. The result was a scenario that was too complex, with too many conditions for the number of participants. Twenty-some people cannot support a complex experimental design, as the rough power sketch below illustrates.
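Here is a minimal sketch of that arithmetic; the 2 × 3 design and all the numbers are hypothetical, not the actual study. Two dozen participants spread over six between-subjects cells leaves about four people per cell, and the power to detect even a conventional medium effect collapses.

```python
# Hypothetical design: ~24 participants in a 2 x 3 between-subjects study.
from statsmodels.stats.power import FTestAnovaPower

n_total, n_cells = 24, 6
print(f"participants per cell: ~{n_total // n_cells}")

# Power of a one-way ANOVA across the six cells to detect a 'medium'
# effect (Cohen's f = 0.25) at alpha = 0.05.
power = FTestAnovaPower().power(effect_size=0.25, nobs=n_total,
                                alpha=0.05, k_groups=n_cells)
print(f"approximate power: {power:.2f}")  # far below the 0.80 convention
```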

The point of these examples is that the proper experimental design depends just as much on the participant pool as it does on the research question. Designing the experiment before identifying your participant pool is like preparing a marinara sauce only to discover that your only other ingredients are apples, oranges, and bananas. Individually, all of the ingredients are edible, but the combination probably will not win you any cooking awards.

Choosing a participant pool

Recognizing that the experiment must be designed for a particular participant pool, let’s chat about the pros and cons of the usual options. Take all of these suggestions with a grain of salt: some are based on my anecdotal experience, others on research.

Professionals

Pros:

  1. Have the required expertise.
  2. Easier to generalize findings.
  3. Generally viewed favorably by reviewers.

Cons:

  1. Difficult and/or expensive to obtain.
  2. Often have time constraints and can only complete a short study.
  3. Can be unresponsive to monetary incentives. Most professionals are not doing the study for the money.
  4. Limited population.

Students

Pros:

  1. Inexpensive. Extra credit works wonders for recruiting students!
  2. Relatively easy to access.
  3. Greater experimenter control than online settings.
  4. Less heterogeneity than online settings or professional pools, which can mean fewer uncontrolled sources of variance.

Cons:

  1. May be unresponsive to incentives. If the student is doing the task for extra credit, they’re going to be less interested in a monetary bonus. They would certainly work harder for more extra credit, but I am not sure that would pass the IRB (I have never tried).
  2. May lack required knowledge/expertise.
  3. Limited population size.
  4. Less heterogeneity than online settings or professional pools, which can make it harder to generalize results.

Tips:

  1. Avoid running experiments the last week of classes.
  2. If you want powerful monetary incentives, do not offer extra credit for participation. Make sure the students are there for the money, not the extra credit.
  3. Design the experiment to fit the population.

Online participants (e.g., MTurk)

Pros:

  1. Huge sample pool.
  2. High convenience (can get thousands of participants in a matter of hours).
  3. Relatively inexpensive.
  4. A heterogeneous population makes findings easier to generalize to a diverse population.

Cons:

  1. May lack required knowledge/expertise.
  2. May be inattentive.
  3. Lack of experimental control.
  4. A heterogeneous population brings many uncontrolled sources of variance (see the power sketch after the tips below).
  5. May have seen the paradigm or manipulation before and therefore be unresponsive to it.

Tips:

  1. My Accounting Horizons paper discusses ways to improve statistical power and reliability, including ways to minimize the effects of each of the cons above.
  2. Online participants often skip over text, so use more pages with less text on each page.
  3. Different platforms have different specializations, and some may be better than others at finding qualified participants. Realize, however, that they are generally black boxes: the actual levels of verification are minimal and rely on participant self-reports. In one study, a participant claimed to the recruiting company that he was a senior executive, and the company believed him. When we asked what his job title was, he told us that he was king. Given that there are only about 15 actual kings globally, I have a hard time believing the participant’s claim, even if the recruiting company claims to have verified his credentials.
  4. Design the experiment to fit the population. Novel, abstract tasks that are intrinsically interesting often work well with online populations (think games).
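To see why the heterogeneity con bites, here is a minimal sketch under made-up numbers: hold the raw treatment effect fixed, assume the online pool doubles the residual noise, and solve for the per-cell sample needed to reach 80% power.

```python
# Made-up numbers: the same raw treatment effect under two noise levels.
from statsmodels.stats.power import TTestIndPower

raw_effect = 0.5                            # treatment shift, in raw response units
pools = {"homogeneous student pool": 1.0,   # residual standard deviation
         "heterogeneous online pool": 2.0}  # assumed twice as noisy

for pool, sd in pools.items():
    d = raw_effect / sd  # standardized effect shrinks as noise grows
    n = TTestIndPower().solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"{pool}: ~{n:.0f} participants per cell for 80% power")
```

Required n scales with 1/d², so doubling the noise roughly quadruples the sample; that is why screening, attention checks, and other variance-reduction tactics pay for themselves.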

References

Bentley, J. W. 2021. Improving the statistical power and reliability of research using Amazon Mechanical Turk. Accounting Horizons (forthcoming).
Bentley, J. W., M. J. Bloomfield, R. J. Bloomfield, and T. A. Lambert. 2021. Measure management and the masses: How acceptability of operating and reporting distortion varies with deservingness, demography, and damage. Available at: https://dx.doi.org/10.2139/ssrn.2823705.
Bowlin, K. O., J. Hales, and S. J. Kachelmeier. 2009. Experimental evidence of how prior experience as an auditor influences managers’ strategic reporting decisions. Review of Accounting Studies 14 (1): 63–87.
Hammersley, J. S. 2006. Pattern identification and industry-specialist auditors. The Accounting Review 81 (2): 309–336.
Libby, R., R. Bloomfield, and M. W. Nelson. 2002. Experimental research in financial accounting. Accounting, Organizations and Society 27 (8): 775–810.

How to reference this online article

Bentley, J. W. (2021, July 17). Choosing the right participants for your experiment. Accounting Experiments. Available at: https://www.accountingexperiments.com/post/choosing-the-right-participants/
Jeremy Bentley
Associate Professor of Accounting
