The danger of practical anecdotes
The use of practical anecdotes (e.g., "Google did this" and "Microsoft said that") has become increasingly popular in experimental accounting research. It now even emerges as one of the key comments in the review process and paper discussions at conferences: "I know a CFO who said this," "why not add an example of a company that used the intervention?," and "your experiment fails my sanity check!" In this post, I caution against making anecdotal evidence an integral part of the discourse of experimental research. My proposition is that anecdotal evidence is mere commentary, and that treating it as more than that can restrict and bias the discourse in experimental accounting research. As such, it should never form a criterion for evaluating experimental research.
Two interrelated problems
Let us assume that I conducted an experiment testing whether a new presentation format of performance information increases employee performance. I finalize the paper and stumble upon a media source suggesting that Google has implemented this particular presentation format among its workforce and even reports that it has recently helped improve performance. It does not take me long to find another example of an organization that has been considering a seemingly similar presentation format for a related purpose. Not long after, I find even more examples that I can relate my presentation format to, which makes me fairly convinced that my experiment matters! Quite an appealing way of thinking, isn't it? It may be appealing, but there are two notable issues with using anecdotal evidence in this way to argue for the relevance of experimental research.
The anecdotal fallacy: the use of anecdotal evidence, or isolated examples that rely on personal testimonies, to support or refute more general claims.
Anecdotal evidence can bias the discourse of experimental research because people (wrongly) use it to make general claims. Even if you find practical anecdotes that closely resemble your experiment's intervention, the anecdotal fallacy suggests they carry little to no weight. Google might be a big and important company, but the media source I found is subjective and its claims have not been verified. If you believe that a relation between your experimental study and what Google supposedly reported via its PR people and journalists makes your study relevant or important, then you are relying on a practical anecdote to make an overly strong claim. This is precisely what the anecdotal fallacy entails: we mistakenly use our personal experience, which is subjective, to make a claim about what practice is generally like and thus how it may link up to the relevance of experiments.
A related problem is that anecdotal evidence can restrict the discourse of experimental research to real-world idiosyncrasies. Traditionally, an experiment is considered useful when it offers a strong connection with theory that is plausibly generalizable and that can be used in the broader empirical literature. Practical anecdotes that lay bare understudied and idiosyncratic interventions do give experimental research the potential to advance and develop pre-existing theory. However, a similar argument can be made for interventions that have never existed in practice or in naturally occurring settings. This last point highlights the broader issue surrounding the motivation and contribution of experimental research in accounting. Specifically, anecdotal evidence of interventions or phenomena in practice should never be a necessary condition for studying their effects, merely a sufficient one.
Commentary is part of writing and of the discourse of experimental research. It may include practical anecdotes to spice things up or motivate the research. However, this form of commentary should not be a necessary criterion for experimental research because it can bias and restrict its discourse. The alternative is straightforward and has been the status quo in the empirical social sciences for decades. Our only task is evaluating the extent to which an experimental study offers an incremental contribution to a literature. In other words, to what extent does an experiment push our shared, commonly accepted, and well-documented knowledge about accounting phenomena forward? One sentence explaining an experiment's incremental contribution to a literature should carry a thousand times more weight than a six-page introduction full of anecdotes.