<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Farah Arshad | Accounting Experiments</title><link>https://www.accountingexperiments.com/author/farah-arshad/</link><atom:link href="https://www.accountingexperiments.com/author/farah-arshad/index.xml" rel="self" type="application/rss+xml"/><description>Farah Arshad</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><image><url>https://www.accountingexperiments.com/author/farah-arshad/avatar_hu471138de0dcd9d8c67b1e27fa8901c27_24226_270x270_fill_q85_lanczos_center.jpg</url><title>Farah Arshad</title><link>https://www.accountingexperiments.com/author/farah-arshad/</link></image><item><title>Why We Ignore Free-form Communication and Why We Shouldn't</title><link>https://www.accountingexperiments.com/post/freeform_communication/</link><pubDate>Mon, 07 Feb 2022 00:00:00 +0000</pubDate><guid>https://www.accountingexperiments.com/post/freeform_communication/</guid><description>&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="" srcset="
/media/freeform_hu68d7d303cec56dcda07aa43228da379b_48980_c345a3e6a78f7e696d917579b5d38356.webp 400w,
/media/freeform_hu68d7d303cec56dcda07aa43228da379b_48980_cdb6d3e9e2d18775f45fc74518e922a4.webp 760w,
/media/freeform_hu68d7d303cec56dcda07aa43228da379b_48980_1200x1200_fit_q85_h2_lanczos.webp 1200w"
src="https://www.accountingexperiments.com/media/freeform_hu68d7d303cec56dcda07aa43228da379b_48980_c345a3e6a78f7e696d917579b5d38356.webp"
width="760"
height="408"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>When someone says: “I heard it through the grapevine”, what comes to your mind first? While you may not immediately think of a typical conversation with a coworker at the coffee machine, almost 70% of all communication within organizations spreads “through the grapevine”, a.k.a. informal communication (Wroblewski, 2018). Yes, this includes the gossip train your secretary is running in your department. But it also includes giving informal advice or feedback to a coworker, reinforcing social norms, building trust, and sharing knowledge.&lt;/p>
&lt;p>Communication does not stop here. The remaining 30% of organizational communication happens through more official or formal channels, such as during departmental meetings, client visits, feedback systems, and so on. Such formal settings might be more task-oriented and hierarchical, but there is at least one thing they have in common with informal communication: they both entail free-form communication.&lt;/p>
&lt;p>In experimental settings, free-form communication can be defined as communication that is unstructured, occurs in real time, and on which the experimenter places no or only limited restrictions. This could be an interaction via e-mail or chat, but it also includes audio and face-to-face interactions. Free-form communication can also involve numerous participants simultaneously.&lt;/p>
&lt;p>In many accounting experiments, regardless of whether they accommodate formal or informal communication, free-form interaction is often scaled down to a restricted communication setting. The overarching question is whether research questions in accounting necessitate more complex forms of communication. In this blog post, we try to offer more guidance on why and when free-form communication can be helpful in accounting experiments.&lt;/p>
&lt;h2 id="restricted-communication">Restricted communication&lt;/h2>
&lt;p>There is no doubt that free-form communication is insightful and, to some degree, reflective of what happens in practice. Still, most experiments in accounting stick to a more restricted form of communication. We believe that a large set of honesty and budgeting experiments offer a fair example of this because they allow participants to send a cost report in the form of a number (e.g., Evans, Hannan, Krishnan, &amp; Moser, 2001; Hannan, Rankin, &amp; Towry, 2010; Maas &amp; Van Rinsum, 2013). Similar to listening to your favorite podcast on Spotify, these experiments accommodate one-sided communication and do not allow the receiver of the report to respond to it.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="freeformmeme" srcset="
/media/freeformmeme_hub35efa311bd11962640b7e6b102772c5_123184_08ed7ec294ce2bdaa5c8b7f8cbf38a32.webp 400w,
/media/freeformmeme_hub35efa311bd11962640b7e6b102772c5_123184_27bed72abcc3bd90f5de6188949cf5e8.webp 760w,
/media/freeformmeme_hub35efa311bd11962640b7e6b102772c5_123184_1200x1200_fit_q85_h2_lanczos_3.webp 1200w"
src="https://www.accountingexperiments.com/media/freeformmeme_hub35efa311bd11962640b7e6b102772c5_123184_08ed7ec294ce2bdaa5c8b7f8cbf38a32.webp"
width="560"
height="456"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>Other experiments do incorporate a bilateral feature that allows the other party to respond (e.g., Chow, Hwang, Liao, &amp; Wu, 1998; Fisher, Maines, Peffer, &amp; Sprinkle, 2002; Majors, 2016), but participants remain restricted in the responses they can provide. Participants can respond either with a numeric value or with a pre-determined written message designed by the experimenter.&lt;/p>
&lt;p>While the experimental designs mentioned above might not offer a full-blown representation of reality, the restricted communication context carries several benefits.&lt;/p>
&lt;ul>
&lt;li>First, there are three important principles in an experimenter’s life: safeguard the random assignment of treatments, ensure that the experiment replicates, and maintain experimental control at all costs. These principles are so well-ingrained in us experimental researchers that it is tempting to abide by them in daily life too. It goes without saying that we do not disregard these principles. As Steven Kachelmeier (2018) rightly said in his commentary: “Experiments are well-suited to test theories, not to simulate reality.” With this in mind, restricted communication ties in with the third principle and generally allows us to test the pre-selected theory while exercising tight control over the experiment and subsequent analysis.&lt;/li>
&lt;li>Another closely related reason is that pre-defined signals (e.g., a number or pre-formulated message) are often better suited to disentangle mechanisms than free-form communication. Consider the example of budget reporting settings again (e.g., Evans et al., 2001), where the main interest commonly lies in honesty as a psychological mechanism. In such settings, honesty is measured by computing the difference between the budgeted cost and the actual cost (i.e., slack), which shows that restricting the communication content to numerical values is sufficiently reflective of the underlying mechanism. Allowing free-form communication instead could introduce a myriad of social preferences other than honesty.&lt;/li>
&lt;/ul>
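&lt;p>To make the slack computation concrete, here is a minimal sketch in Python. The function names and the normalization by the maximum reportable cost are our own illustration of the common approach, not code taken from any of the cited papers:&lt;/p>

```python
# Illustrative sketch (our own): honesty measures in a budget-reporting game.
# The participant privately observes the actual cost and reports a budgeted
# cost; slack is the difference between the two.

def budget_slack(reported_cost, actual_cost):
    """Slack = reported (budgeted) cost minus actual cost; 0 means fully honest."""
    return reported_cost - actual_cost

def honesty_share(reported_cost, actual_cost, max_cost):
    """Fraction of the available misreporting range NOT claimed as slack.

    1.0 = fully honest report, 0.0 = maximal slack. `max_cost` is the highest
    cost the participant could report (an assumption of this sketch).
    """
    available = max_cost - actual_cost
    if available == 0:
        return 1.0  # no room to misreport, so the report is trivially honest
    return 1 - budget_slack(reported_cost, actual_cost) / available
```

&lt;p>For example, a participant who observes a cost of 4.00 but reports 5.50 builds 1.50 of slack; reporting the maximum of 6.00 would yield an honesty share of zero.&lt;/p>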
&lt;p>Garofalo and Rott (2018) provide another insightful study where restricted communication facilitates the disentangling of mechanisms. They investigate whether individuals react differently depending on (1) &lt;i>who&lt;/i> communicates bad news or unfair decisions (source) and (2) &lt;i>how&lt;/i> that unfair decision is communicated (style). If the researchers had not opted for pre-formulated messages in their experimental design, it would have been unclear whether the communication source or style was driving their results.&lt;/p>
&lt;h2 id="free-form-communication">Free-form communication&lt;/h2>
&lt;p>In an earlier post, Jeremy Bentley (2021) correctly explained that a proper experimental design depends on both the participant pool and the research question. Surprise! It is no different this time. Juggling the benefits of a restricted communication setting with those of a free-form communication setting also depends, by and large, on the research question. In fact, a variety of research questions, such as those described below, accommodate the use of free-form communication.&lt;/p>
&lt;p>The first type of research question that could benefit from the richness of free-form communication relates to the transfer of (private) information, such as sharing knowledge or convincing someone to make a certain decision. Cheap talk offers a good example (e.g., Zhang, 2008; Lundquist et al., 2009; Lunawat, Waymire, &amp; Xin, 2021), where communication between players does not &lt;i>directly&lt;/i> affect payoffs. Restricted cheap-talk experiments often present a message space that is too limited because it only includes scripted intentions (Dugar &amp; Shahriar, 2018). Dugar and Shahriar show that cheap talk is a lot more effective when people can explain &lt;i>why&lt;/i> they choose a specific action. The same holds in daily life. For instance, imagine you and your partner are deciding whether to go to McDonald’s (MD) or Burger King. You suggest MD because you are lazy and MD is closer to your house. Assuming both restaurants are of equal quality (up for debate), merely stating that you &lt;i>intend&lt;/i> to choose MD, without explaining why, makes it less likely that you will convince your partner that MD is the better option.&lt;/p>
&lt;p>The second type of research question that allows the integration of free-form communication pertains to the holistic nature of certain types of interactions, such as those between auditors and their clients or between feedback seekers and givers. If you were present at the 2017 AOS conference, you likely remember Steven Kachelmeier’s memorable presentation, to say the least. We would like to pat him on the back (quite literally) for having the courage to present his slides with his back turned to the audience and for offering us such a fitting illustration of the value of free-form communication. As Kachelmeier argues: “The &lt;i>ceteris paribus&lt;/i> nature of their [ref. Bennett &amp; Hatfield, 2018] experiment limits the extent to which their study can address something as rich and complex as human communication.” At the risk of some self-promotion, our own research shows that settings where employees seek real-time feedback from a supervisor (Dierynck &amp; Masselink, 2020) or where calibration committees discuss employee performance (Arshad, Cardinaels, &amp; Dierynck, 2020) are no different.&lt;/p>
&lt;h2 id="why-is-free-form-communication-worthwhile">Why is free-form communication worthwhile?&lt;/h2>
&lt;p>Regardless of the nature of the research question, there are also more generic reasons why free-form communication might be useful, and we try to outline some of these reasons below.&lt;/p>
&lt;ul>
&lt;li>We may obtain different experimental results when restricting communication to numerical values or prefabricated messages. Just a brief reminder: we are not (yet) robots. We behave differently when we can communicate whatever is on our mind than when our communication is limited. Relative to restricted communication, free-form communication not only enhances trustworthiness and trusting (Ben-Ner, Putterman, &amp; Ren, 2011), cooperation in repeated games (Cooper &amp; Kuhn, 2014), and the efficiency of promises (Charness &amp; Dufwenberg, 2006, 2010), but it can also reduce lying (Lundquist et al., 2009).&lt;/li>
&lt;li>We gain mundane realism (i.e., perceived similarity to practice). To elicit the right beliefs, more realistic scenario settings can be especially valuable in experiments that use more sophisticated participants, such as auditors or corporate directors (Bloomfield, Nelson, &amp; Soltes, 2016). For instance, Bowlin, Hobson, &amp; Piercey (2015) manipulate auditors’ access to an &lt;i>unstructured chat&lt;/i> with a manager to investigate whether engaging in such interpersonal chat increases the rate of low-effort audits. It is important to note, however, that there is a tradeoff: more mundane realism may come at the cost of internal validity. Theory-testing experimentalists generally seem to favor internal validity, but there could be ways to overcome this issue (more on this in another blog post).&lt;/li>
&lt;li>We can collect richer data in a controlled environment. Especially if the experimenter’s main interest does not lie in a specific aspect of the communication process, collecting data on the communication content, social preferences or other social cues can be valuable. For instance, by quantifying the dialogues using textual analysis software or independent coding, you can shed more light on the mechanism driving the results (e.g., Zhang, 2008; Bowlin et al., 2015; Lunawat et al., 2021). &lt;/li>
&lt;/ul>
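&lt;p>As a concrete illustration of the last point, the sketch below quantifies free-form chat messages with a simple dictionary-based count. The word list and function names are invented for illustration; published studies rely on validated dictionaries, textual analysis software, or independent human coders:&lt;/p>

```python
# Illustrative sketch (our own): dictionary-based coding of chat transcripts.
# The word list below is invented for illustration, not a validated dictionary.

PROMISE_WORDS = {"promise", "will", "guarantee", "commit"}

def code_message(message, dictionary=PROMISE_WORDS):
    """Return the number of dictionary hits in one free-form message."""
    # Crude tokenization: lowercase and strip basic punctuation.
    tokens = message.lower().replace(",", " ").replace(".", " ").split()
    return sum(1 for t in tokens if t in dictionary)

# A toy two-message transcript from a hypothetical reporting game.
transcript = [
    "I promise I will report the true cost.",
    "Let's just split the surplus.",
]
hits = [code_message(m) for m in transcript]  # promise-language count per message
```

&lt;p>Counts like these can then be used as a mediating variable, for instance to test whether promise language explains higher honesty in a free-form treatment.&lt;/p>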
&lt;p>Given the above reasons, would you feel comfortable with incorporating free-form communication in an accounting experiment? In our experience, the receptiveness of the (management) accounting community to free-form communication is maturing. However, as we discuss in this blog post, a lot of times it all boils down to your research question. No method will be superior in all aspects.&lt;/p>
&lt;p>So, whenever you are wearily waiting for your Monday morning coffee at the coffee machine, try to engage in a free-form interaction with your coworker, because real-life communication really is a lot richer than the average prefabricated “Hi, my reported cost is equal to 5.00 Lira per unit” message. As George Bernard Shaw rightly said: “The single biggest problem in communication is the illusion that it has taken place.”&lt;/p>
&lt;h2 id="references">References&lt;/h2>
&lt;font size="4">
&lt;div style="margin-left: 20px; text-indent: -20px;">Arshad, F., Cardinaels, E., and Dierynck, B. 2020. Facing a calibration committee: The impact on costly information collection and subjective performance evaluation. Available at: https://dx.doi.org/10.2139/ssrn.3021683.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Ben-Ner, A., Putterman, L., &amp; Ren, T. 2011. Lavish returns on cheap talk: Two-way communication in trust games. &lt;i>The Journal of Socio-Economics&lt;/i>, 40(1): 1-13.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bennett, G. B., &amp; Hatfield, R. C. 2018. Staff auditors' proclivity for computer-mediated communication with clients and its effect on skeptical behavior. &lt;i>Accounting, Organizations and Society&lt;/i>, 68: 42-57.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bentley, J. W. 2021, July 18. Choosing the right participants for your experiment. &lt;i>Accounting Experiments&lt;/i>. Available at: https://www.accountingexperiments.com/post/choosing-the-right-participants/.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bloomfield, R., Nelson, M. W., &amp; Soltes, E. 2016. Gathering data for archival, field, survey, and experimental accounting research. &lt;i>Journal of Accounting Research&lt;/i>, 54(2): 341-395.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bowlin, K. O., Hobson, J. L., &amp; Piercey, M. D. 2015. The effects of auditor rotation, professional skepticism, and interactions with managers on audit quality. &lt;i>The Accounting Review&lt;/i>, 90(4): 1363-1393.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Charness, G., &amp; Dufwenberg, M. 2006. Promises and partnership. &lt;i>Econometrica&lt;/i>, 74(6): 1579-1601.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Charness, G., &amp; Dufwenberg, M. 2010. Bare promises: An experiment. &lt;i>Economics Letters&lt;/i>, 107(2): 281-283.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Chow, C. W., Hwang, R. N. C., Liao, W., &amp; Wu, A. 1998. National culture and subordinates' upward communication of private information. &lt;i>The International Journal of Accounting&lt;/i>, 33(3): 293-311.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Cooper, D. J., &amp; Kühn, K. U. 2014. Communication, renegotiation, and the scope for collusion. &lt;i>American Economic Journal: Microeconomics&lt;/i>, 6(2): 247-78.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Dierynck, B., &amp; Masselink, C. 2020. Demand-driven feedback systems and employee creativity. Available at: https://ssrn.com/abstract=3738177.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Dugar, S., &amp; Shahriar, Q. 2018. Restricted and free-form cheap-talk and the scope for efficient coordination. &lt;i>Games and Economic Behavior&lt;/i>, 109: 294-310.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Evans III, J. H., Hannan, R. L., Krishnan, R., &amp; Moser, D. V. 2001. Honesty in managerial reporting. &lt;i>The Accounting Review&lt;/i>, 76(4): 537-559.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Fisher, J. G., Maines, L. A., Peffer, S. A., &amp; Sprinkle, G. B. 2002. Using budgets for performance evaluation: Effects of resource allocation and horizontal information asymmetry on budget proposals, budget slack, and performance. &lt;i>The Accounting Review&lt;/i>, 77(4): 847-865.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Garofalo, O., &amp; Rott, C. 2018. Shifting blame? Experimental evidence of delegating communication. &lt;i>Management Science&lt;/i>, 64(8): 3911-3925.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Hannan, R. L., Rankin, F. W., &amp; Towry, K. L. 2010. Flattening the organization: The effect of organizational reporting structure on budgeting effectiveness. &lt;i>Review of Accounting Studies&lt;/i>, 15(3): 503-536.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Kachelmeier, S. J. 2018. Testing auditor-client interactions without letting auditors and clients fully interact: Comments on Bennett and Hatfield (2018). &lt;i>Accounting, Organizations and Society&lt;/i>, 68: 58-62.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Lunawat, R., Waymire, G., &amp; Xin, B. 2021. Do verified earnings reports increase investment? &lt;i>Contemporary Accounting Research&lt;/i>, 38(2): 1368-1394.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Lundquist, T., Ellingsen, T., Gribbe, E., &amp; Johannesson, M. 2009. The aversion to lying. &lt;i>Journal of Economic Behavior &amp; Organization&lt;/i>, 70(1-2): 81-92.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Maas, V. S., &amp; Van Rinsum, M. 2013. How control system design influences performance misreporting. &lt;i>Journal of Accounting Research&lt;/i>, 51(5): 1159-1186.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Majors, T. M. 2016. The interaction of communicating measurement uncertainty and the dark triad on managers' reporting decisions. &lt;i>The Accounting Review&lt;/i>, 91(3): 973-992.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Wroblewski, M. T. 2018. The importance of the grapevine in internal business communications. Available at: https://smallbusiness.chron.com/importance-grapevine-internal-business-communications-429.html.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Zhang, Y. 2008. The effects of perceived fairness and communication on honesty and collusion in a multi-agent setting. &lt;i>The Accounting Review&lt;/i>, 83(4): 1125-1146.&lt;/div>
&lt;/font></description></item><item><title>Why PEQs do not provide the best process evidence</title><link>https://www.accountingexperiments.com/post/peqs/</link><pubDate>Fri, 23 Apr 2021 00:00:00 +0000</pubDate><guid>https://www.accountingexperiments.com/post/peqs/</guid><description>&lt;p>The questionnaire (or self-administered survey) was invented in the late 1800s by Sir Francis Galton, a British anthropologist, explorer, and statistician, who used it as a cheap and standardized way to elicit people’s opinions. After an experiment, researchers often also distribute a questionnaire. Traditionally, the primary role of this post-experimental questionnaire (PEQ) has been an ancillary one: eliciting a handful of stable variables that help safeguard the quality of the data and verify randomization across treatments. The variables elicited by PEQs include manipulation checks, which are statements that help evaluate the degree to which the experimental stimuli worked, and demographic variables, which help test the extent to which treatments have been randomly allocated to participants.&lt;/p>
&lt;p>However, in the experimental accounting literature, PEQs serve a much more critical role because many also use them to obtain process evidence. Process evidence is empirical evidence supporting, as narrowly and tightly as possible, the theoretical mechanism central to a hypothesis. Accounting researchers often require such evidence before accepting that the results generated by an experiment support a hypothesis. Although there are many ways that controlled experiments can obtain process evidence (see, for instance, Asay, Guggenmos, Kadous, Koonce, and Libby (2021) for an insightful discussion), PEQs, by far, seem the most popular way for experimental accounting researchers to try and obtain it.&lt;/p>
&lt;p>There are two broad ways accounting researchers use PEQs to obtain process evidence for controlled experiments. First, PEQs may ask participants to reflect on their behavior and decisions during the experiment they just took part in. If participants’ reflections match the maintained theoretical explanation and explain part of the observed relationship predicted by the hypothesis, experimental researchers have produced process evidence. Second, PEQs may also ask participants to reflect on their personality and general attitudes in daily life. If their personality or general attitudes change (part of) the observed relationship predicted by the hypothesis in a conceptually consistent way, experimental researchers have also produced process evidence.&lt;/p>
&lt;p>Although PEQs are a popular method for obtaining process evidence in the experimental accounting literature, using them has some significant drawbacks that are rarely discussed in the context of controlled experiments. Also, once you carefully consider the experimental method’s strengths, it should become clear that PEQs, at best, offer process evidence that is ancillary to the process evidence that controlled experiments can directly produce.&lt;/p>
&lt;h2 id="what-are-the-problems">What are the problems?&lt;/h2>
&lt;h3 id="peqs-invite-socially-desirable-reporting">PEQs invite socially desirable reporting&lt;/h3>
&lt;p>Social psychologists, who make great use of questionnaires, have long known about and fought their drawbacks. One of the most pronounced and discussed problems with questionnaires is respondents’ tendency to report socially desirable answers (Nederhof 1985; Podsakoff, MacKenzie, Lee, and Podsakoff 2003; De Jong, Pieters, and Fox 2010; Steenkamp, De Jong, and Baumgartner 2010). In the context of controlled experiments, socially desirable reporting means that once participants arrive at the PEQ, they will say what they think is socially desirable, which is a critical problem if the theory involves a socially sensitive issue (i.e., think about misreporting, honesty, and effort distortion).&lt;/p>
&lt;h3 id="participants-may-experience-fatigue">Participants may experience fatigue&lt;/h3>
&lt;p>Accounting researchers often include an array of statements that jointly capture some stable personality trait or mindset among participants. Accordingly, PEQs tend to prolong the duration of controlled experiments substantially. However, participants’ attention span is limited and wears down as the experiment progresses, which is why many researchers do not prolong the experiment more than they have to. When we use PEQs, the chances are relatively high that participants experience fatigue by the time they reach it. Accordingly, they may not fill out the PEQ seriously, leading to more uniform answers (Krosnick, 1999; Galesic and Bosnjak, 2009).&lt;/p>
&lt;h3 id="peqs-may-implement-false-memories">PEQs may implement false memories&lt;/h3>
&lt;p>Even if participants are still fresh and pay attention by the time they reach the PEQ, they may not perfectly recall what happened or how they thought and felt during the experiment. Ross (1989) and Loftus (1999) provide numerous examples of people reconstructing beliefs under such circumstances using implicit theories rather than relying on their imperfect memory of what actually occurred and what they actually believed at the time. There is also a trail of evidence that false memories and beliefs can arise, sometimes unintentionally, in people’s minds (Gilbert, Krull, and Malone 1990).&lt;/p>
&lt;h3 id="participants-may-rationalize-what-happened">Participants may rationalize what happened&lt;/h3>
&lt;p>PEQs are, by definition, distributed after the experiment has been conducted. Therefore, everything that came before the PEQ influences how participants answer questions and evaluate statements in the PEQ. Suppose participants still have a detailed account in their mind of what happened and are sufficiently motivated to finish the PEQ. In that case, they may fall prey to rationalizing their recent behavior and decisions. In other words, when answering statements in PEQs, participants may resolve any cognitive dissonance by changing their minds about why they behaved the way they did. Suppose we ask participants whether some theoretical explanation and intervention drove their behavior and decisions during the experiment. In that case, they are likely to agree, even if another explanation is more representative of the truth.&lt;/p>
&lt;h3 id="peqs-fail-to-detect-changes-in-thinking">PEQs fail to detect changes in thinking&lt;/h3>
&lt;p>People often believe that what they are thinking at this moment in time represents how they have thought all along. Therefore, they tend to be unaware of changes in their thinking (Bem and McConnell, 1970). If a participant has changed their opinion during the experiment, they may not report this change in the PEQ. Therefore, using the PEQ to detect changes in thinking may lead to wrong conclusions. For instance, PEQs may reveal no evidence that changes in thinking drive the changes in participant behavior caused by an intervention, when in fact they did.&lt;/p>
&lt;h2 id="what-about-stable-personality-traits">What about stable personality traits?&lt;/h2>
&lt;p>Proponents of using PEQs to obtain process evidence may point out that verified instruments measuring stable personality traits are less prone to the abovementioned issues. Specifically, these instruments try to measure participants’ personality type in daily life, which should be independent of what happened during the experiment. If the instrument has been verified and is sufficiently concise and internally consistent, it could provide an excellent way to verify the process underlying an observed relationship.&lt;/p>
&lt;p>However, social psychologists have become increasingly concerned about using these instruments specifically and the concept of stable personality traits more generally. Many personality traits, as measured by these instruments, turn out not to be very stable, particularly within people over time (Bardi and Zentner 2017; Golsteyn and Schildberg-Hörisch 2017). Some social psychologists have devoted entire careers to proving that the situation is a much more stable driver of people’s behavior than some individual typology (Ross and Nisbett 2011). Thus, even if we use verified personality instruments to showcase process evidence, the relative instability of participants’ responses makes it challenging to draw reliable conclusions.&lt;/p>
&lt;h2 id="the-experiment-itself-can-provide-the-process-evidence">The experiment itself can provide the process evidence&lt;/h2>
&lt;p>Although questionnaires, outside the context of controlled experiments, give us a cheap way to gain insight into people’s thinking, they rarely provide robust causal evidence of theoretical relationships (Bloomfield, Nelson, and Soltes 2016). It turns out this particular weakness of questionnaires is precisely the comparative advantage of controlled experiments! Controlled experiments introduce interventions and manipulate independent variables with meticulous precision to see how they change participants’ behavior. Depending on how tightly we control the experimental setting, we can find relatively convincing evidence of the process underlying an observed relationship.&lt;/p>
&lt;p>Controlled experiments can vary in their ability to provide process evidence because some grant more insight into the process underlying observed relationships than others. One convincing way that controlled experiments can directly obtain process evidence is by featuring interaction effects between different manipulated independent variables. Suppose one manipulation attenuates or strengthens the influence of another manipulation in a conceptually consistent way. In that case, we can be more confident that our theoretical explanation drives the variation observed in the dependent variable(s). This type of design is often called a moderation-of-process design.&lt;/p>
&lt;p>Another way that controlled experiments can directly obtain convincing process evidence is by unobtrusively observing participant behavior (Webb, Campbell, Schwartz, and Sechrest 1999). For instance, consider an experiment in which we measure how much time participants spend on a page. The time spent on a particular page could indicate how much attention participants give to a particular piece of information or stimuli. Depending on your hypothesis, it can be used to verify the underlying mechanism. To learn more about this approach to obtaining process evidence, please see &lt;a href="https://www.accountingexperiments.com/post/process-variables/" target="_blank">this excellent post&lt;/a> by Christian Peters.&lt;/p>
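&lt;p>As an illustration of the time-on-page idea, the following sketch derives per-page dwell times from page-entry timestamps. The variable and function names are our own; experiment platforms such as oTree or Qualtrics record comparable per-page timing data:&lt;/p>

```python
# Illustrative sketch (our own): unobtrusive attention measure from page
# timestamps. Each entry is the time (in seconds since the session start)
# at which a participant opened a page.

def time_on_page(entry_times):
    """Return dwell time per page from ordered page-entry timestamps.

    The dwell time of page i is the gap until page i+1 was opened, so the
    final page's dwell time is unknown and omitted.
    """
    return [t2 - t1 for t1, t2 in zip(entry_times, entry_times[1:])]

# A participant opens four pages at 0s, 12s, 45s, and 50s:
dwell = time_on_page([0, 12, 45, 50])  # dwell times for the first three pages
```

&lt;p>A long dwell time on the page presenting the manipulated information, relative to other pages, would be consistent with participants actually attending to the stimulus.&lt;/p>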
&lt;h2 id="references">References&lt;/h2>
&lt;font size="4">
&lt;div style="margin-left: 20px; text-indent: -20px;">Asay, H. S., R. Guggenmos, K. Kadous, L. Koonce, and R. Libby. 2021. &lt;i>Theory testing and process evidence in accounting experiments&lt;/i>. Available at: https://ssrn.com/abstract=3485844.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bardi, A., and M. Zentner. 2017. Grand challenges for personality and social psychology: Moving beyond the replication crisis. &lt;i>Frontiers in Psychology&lt;/i> 8: 2068.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Bloomfield, R. J., M. W. Nelson, and E. F. Soltes. 2016. Gathering data for archival, field, survey and experimental accounting research. &lt;i>Journal of Accounting Research&lt;/i> 54 (2): 341-395.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">De Jong, M. G., R. Pieters, and J.-P. Fox. 2010. Reducing Social Desirability Bias Through Item Randomized Response: An Application to Measure Underreported Desires. &lt;i>Journal of Marketing Research&lt;/i> 47 (1): 14-27.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Gilbert, D. T., D. S. Krull, and P. S. Malone. 1990. Unbelieving the unbelievable: Some problems in the rejection of false information. &lt;i>Journal of Personality and Social Psychology&lt;/i> 59 (4): 601-613.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Golsteyn, B., and H. Schildberg-Hörisch. 2017. Challenges in research on preferences and personality traits: Measurement, stability, and inference. &lt;i>Journal of Economic Psychology&lt;/i> 60: 1-6.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Loftus, E. F. 1999. &lt;i>Suggestion, Imagination, and The Transformation of Reality&lt;/i>, edited by A. A. Stone, C. A. Bachrach, J. B. Jobe, H. S. Kurtzman and V. S. Cain: Psychology Press, 201-210.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Nederhof, A. J. 1985. Methods of coping with social desirability bias: A review. &lt;i>European Journal of Social Psychology&lt;/i> 15 (3): 263-280.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Podsakoff, P. M., S. B. MacKenzie, J.-Y. Lee, and N. P. Podsakoff. 2003. Common method biases in behavioral research: a critical review of the literature and recommended remedies. &lt;i>Journal of Applied Psychology&lt;/i> 88 (5): 879.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Ross, L., and R. E. Nisbett. 2011. &lt;i>The Person and the Situation: Perspectives of Social Psychology&lt;/i>: Pinter &amp; Martin Publishers.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Ross, M. 1989. Relation of implicit theories to the construction of personal histories. &lt;i>Psychological Review&lt;/i> 96 (2): 341-357.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Steenkamp, J.-B. E., M. G. De Jong, and H. Baumgartner. 2010. Socially desirable response tendencies in survey research. &lt;i>Journal of Marketing Research&lt;/i> 47 (2): 199-214.&lt;/div>
&lt;div style="margin-left: 20px; text-indent: -20px;">Webb, E. J., D. T. Campbell, R. D. Schwartz, and L. Sechrest. 1999. &lt;i>Unobtrusive Measures&lt;/i>. Vol. 2: Sage Publications.&lt;/div>
&lt;/font></description></item></channel></rss>