Workplace Dishonesty and Deception as Socially Situated Organizational Behavior

Summary and Keywords

Truthfulness and accuracy are critical for effective organizational functioning, but dishonesty (in the form of lying, misrepresentation, and fraud) continues to be pervasive in organizational life. Workplace dishonesty is a distinctive behavior that should be distinguished from broader categories of unethical workplace behavior and organizational deviance, in that dishonesty is an overt social behavior—that is, it requires an audience in order to exist as a behavior. Compared to stealing or cheating, dishonest acts require the knowing fabrication of false information, intended to deceive an anticipated audience. Thus, considering the overt social aspect of dishonesty (compared to the relatively clandestine behaviors of cheating and stealing) may add conceptual clarity to the construct of workplace dishonesty, clarity which is surprisingly absent from the extant literature. The potential audience for dishonest acts in the workplace is especially critical, in that dishonest organizational actors generally anticipate characteristics of the audience (in terms of relationship closeness, as well as expertise and motivation to evaluate the claim) and likely adapt and tailor their dishonesty accordingly. Historically, two underlying paradigms have been used to study workplace dishonesty: the rational actor (economic) paradigm and the behavioral ethics (psychological) paradigm. An emerging third paradigm (the social actor paradigm) may offer new opportunities for understanding antecedents of workplace dishonesty that do not occur exclusively for self-interested reasons. This novel paradigm also suggests important areas of inquiry related to the aftermath of workplace dishonesty: when workplace dishonesty will be detected in social interactions, what the social and relational consequences of discovering dishonesty are, and how dishonest actors are likely to behave in the aftermath of their dishonest actions. Finally, two discrepancies relevant to workplace dishonesty should be considered when predicting the subsequent behavior of the dishonest actor: the magnitude of the discrepancy between the truth and the fabrication, and the temporal discrepancy between the trigger event and the dishonest act.

Keywords: workplace dishonesty, deception in organizations, lying, fraud, white lies, social motives for deviance, unethical behavior

Workplace Dishonesty

Dishonesty can be devastating for organizations, with high-profile cases bringing both public embarrassment and severe disruption to organizations and their members. For example, in 1999 Lotus chief executive officer (CEO) Jeff Papows resigned amid controversy surrounding lies about his military record (inflating his attained rank), his educational attainment (augmenting both the degrees earned and the prestige of the institutions attended), and his personal history (claiming to be an orphan, although his parents were both living). Similarly, the string of lies told by professional cyclist Lance Armstrong about his use of banned substances caused untold reputational harm to his former teammates (many of whom he wrongfully sued for libel), his sponsors, and the nonprofit organization that bears his name. Thus, the impact of dishonesty in organizations can be profound. Accordingly, great time and resources are spent attempting to assess the veracity of people’s claims in everyday work life. Indeed, controversial techniques (such as the use of polygraphs for “lie detection”), extensive research attention, and entire industries have emerged to detect and combat lying (Murphy, 1993; see Frank & Feeley, 2003 for a meta-analysis and narrative review). However, dishonest acts continue to be pervasive in organizational life: ordinary people report dishonesty in 14% of their emails, 27% of face-to-face conversations, and 37% of phone calls (Hancock, 2007; in Mulder & Aquino, 2013).

Unethical workplace behavior has captured an ever-increasing amount of attention from scholars (see Kish-Gephart, Harrison, & Treviño, 2010 for a meta-analysis), with workplace dishonesty accounting for much of this attention. Unfortunately, workplace dishonesty has received little, if any, conceptual clarification distinguishing it from other forms of workplace malfeasance. Critically, the current breadth of unethical workplace behavior, as a construct, may greatly limit theoretical and predictive precision, whereas the lack of conceptual clarity surrounding the (narrower) construct of workplace dishonesty eliminates the possibility of examining subsets of dishonest behaviors as unique outcomes with distinct ontologies.

In this article, the construct of workplace dishonesty is explored and conceptually bounded. Dishonest workplace behaviors are separated from nomologically related but distinct constructs that have emerged within the behavioral ethics literature (i.e., stealing and cheating; cf. Green, 2006; Grover, 1993a). Specifically, fraud, misrepresentation, and lying are conceptually separated from covert forms of unethical workplace behavior, as they are (to varying degrees) inherently overt social behaviors. Next, this article identifies distinct features of dishonest behavior and reviews two common paradigms that have been employed for studying dishonesty—(1) the rational actor paradigm rooted in economics and (2) the behavioral ethics paradigm rooted in psychology—a review that may help to identify social antecedents of workplace dishonesty that are distinct from those that predict unethical behavior more broadly. In doing so, this article identifies lying in the workplace (a particularly social form of workplace dishonesty) as an area of inquiry especially open to further exploration and novel predictions. Additionally, trends in the existing literature and opportunities for future research related to the likely downstream consequences of workplace dishonesty are presented: (1) implications for the detection of workplace dishonesty within social interactions, (2) consequences for workplace relationships when deception is detected, and (3) the likely subsequent behavior of the dishonest individual upon creating fabrications. Two “discrepancies” especially relevant to workplace dishonesty are identified: the magnitude of the gap between the truth and the fabrication, and the time elapsed between consideration of the fabrication and the act of misreporting. Finally, directions for future research are provided that are intended to broaden our understanding of dishonesty in the workplace.

Distinguishing Closely Related Concepts

Research focusing on the darker side of organizational life has produced several overlapping and semiorthogonal categories of illicit behavior that are differentially defined from a normative perspective: they describe acts that violate the standards of different moral communities (society at large, or the organization). However, these definitions generally lack the motivational or behavioral specificity that would be useful for better discerning their antecedents and outcomes. Because theoretical concepts are generally abstract in nature, they must be distinguished by the features, attributes, and characteristics that separate them from other related phenomena (Podsakoff, MacKenzie, & Podsakoff, 2016), as well as by their unique relationships to antecedents and outcomes (Goertz, 2006). Accordingly, workplace dishonesty can be specifically defined by several characteristics. In addition, subcategories of dishonest behavior (i.e., fraud, misreporting, and lying) should be further distinguished from each other.

Unethical workplace behavior has been defined broadly in terms of acts by organizational members that violate societal normative standards for moral appropriateness (Kish-Gephart, Harrison, & Treviño, 2010; Rest, 1986; Treviño, Weaver, & Reynolds, 2006), and has generally been operationalized in organizational research as lying, cheating, and stealing (Green, 2006; Treviño et al., 2006). This category of behaviors is necessarily broad, in that it includes not only behaviors that harm the organization (e.g., theft, sabotage) but also behaviors undertaken for the specific benefit of the organization at the expense of society (i.e., unethical pro-organizational behaviors, such as withholding information about risks associated with a new product; see Umphress, Bingham, & Mitchell, 2010). Relatedly, the construct of workplace deviance has been defined as “voluntary behavior that violates significant organizational norms and, in so doing, threatens the well-being of the organization or its members, or both” (Bennett & Robinson, 2000, p. 349). Critically, these two constructs (unethical workplace behavior and organizational deviance) only partially overlap, as a function of the differences between society’s (unethical behavior) and the organization’s (organizational deviance) normative expectations: workplace deviance may include behaviors that violate workplace standards but would not be viewed as deeply morally troubling to society at large (e.g., taking an additional break, fantasizing and daydreaming instead of working, gossiping about one’s supervisor; see Bennett & Robinson, 2000).

Problematically, defining a broad swath of behaviors based on how they are normatively evaluated (including all “things that are viewed as bad”) may reduce precision in describing and predicting such behavior. To wit, scholars have effectively completed the “top-down” work of finding commonalities in behavior (“things that are viewed as normatively problematic by an audience”) to create taxonomies of unethical and deviant behavior. However, such taxonomies should be further examined using a “bottom-up” approach, to determine whether the behaviors within each category are sufficiently similar in their features to share that category. Accordingly, precision in conceptualizing and describing ontological networks for concepts (including workplace behavior) should begin by identifying shared attributes, particularly those that are necessary and sufficient to define the concept and subsets of cases (Podsakoff et al., 2016). While normative values offer a good starting point for describing unethical and deviant workplace behaviors, a bottom-up examination of the differences between specific behaviors (i.e., how lying is fundamentally different from stealing) reveals an opportunity for examining workplace dishonesty as a categorically distinct set of organizational behaviors.

The total set of behaviors that comprise workplace dishonesty may share characteristics with unethical behavior, workplace deviance, or both, while some cases of workplace dishonesty (e.g., benevolent lies) may not fit either. In fact, while the construct of dishonesty has appeared frequently within the organizational literature (e.g., Gino & Pierce, 2009; Lewicki, 1983; Mazar, Amir, & Ariely, 2008; Scott & Jehn, 2003; Scott, 2003), surprisingly little definitional clarity has been offered to distinguish dishonesty from these broader classes of behavior. Accordingly, it is useful to unpack the attributes of dishonest behavior, such that workplace dishonesty can be conceptualized as a family of behaviors sharing several key behavioral features.

The Merriam-Webster dictionary defines dishonesty as a lack of honesty or integrity: disposition to defraud or deceive, and secondarily as a dishonest act. Accordingly, workplace dishonesty can be defined as intentionally presenting misinformation or false claims in the workplace, for the purpose of deceiving others (Gino & Shea, 2012; Murnighan, 1991). Dishonesty requires not merely that an unacceptable behavior occurs, but that the behavior is characterized by deception—tacit in this definition is the deliberate presentation of a claim that is known to be untrue.1 Whereas theft, sneaking out of work early, or dumping toxic substances are clandestine behaviors intended to go unnoticed, dishonesty is defined by the individual knowingly making a false claim (verbally or in writing) that they intend to be believed by others (cf. Leavitt & Sluss, 2015). This definition suggests two further important distinctions that separate dishonesty from other forms of unethical workplace behavior and deviance. First, dishonesty is inherently social in that it requires an audience for the false claim—that is, the dishonest person has some expectation that others will see or hear the claim, and the nature of the dishonest behavior itself will be shaped or adjusted with the anticipated characteristics of the audience in mind. For example, a consultant falsifying billable hours that will receive little scrutiny from a central accounts payable office may be particularly vague in her deception (i.e., simply writing down an incorrect number of hours). By contrast, a financial services advisor lying about significant losses to a client with whom she has a long-standing personal relationship may fabricate a great deal of detailed information, both to hide some of the loss and to mitigate culpability while maintaining the friendship. Indeed, while many unethical acts such as vandalism, theft, or academic cheating occur when no one is looking, dishonesty can only occur in the presence of some sort of audience, even if it is a diffused one (e.g., the claims office of an insurance company). Thus, dishonesty must be described and understood as socially situated behavior, with the deceptive actor considering the expectations, motives, expertise, and likely reactions of the intended audience at the time they engage in deception. Second, while many unethical or deviant workplace behaviors may occur with limited intentionality or awareness (Kish-Gephart, Harrison, & Treviño, 2010; Reynolds, Leavitt, & DeCelles, 2010), workplace dishonesty requires some level of intentionality, in that the individual must knowingly formulate a fabrication to present to the anticipated audience in the form of a written or spoken statement. Moreover, these fabrications are generally constructed and tailored with the goal of credibility, suggesting that the actor will invest some amount of consideration and effort in anticipating how they will be perceived. Whereas some dishonesty may occur relatively spontaneously (e.g., telling one’s supervisor a lie about a network failure when put on the spot about a missed deadline), dishonesty is characterized by a specific and salient awareness that one has deviated from the truth, even if that awareness arises only after the dishonesty occurs. Whereas an individual may mindlessly toss litter from their car in the company parking lot or take property from their organization without recognizing the moral import of their actions, deviating from a known truth in an effective manner requires or creates recognition of the discrepancy between the truth and the manufactured falsehood.2

Furthermore, workplace dishonesty can be differentiated from the broader categories of unethical workplace behavior and deviance, in that some instances of dishonesty may not cause genuine harm, or may actually affirm normative social standards to the extent that misleading others may somehow benefit them. For example, Levine and Schweitzer (2015) found that individuals who are discovered to have engaged in prosocial deception actually garner more benevolence-based trust from both those who witness such deception and those who are its targets. Similarly, white lies (which benefit an interaction partner) are viewed as significantly less aversive than other types of deception (Erat & Gneezy, 2012). Relatedly, some acts of dishonesty may best be understood as tactics of impression management, which are deliberate attempts to control the impressions that others form of the actor (Leary & Kowalski, 1990). While dishonesty for the sake of impression management may indeed have meaningful negative consequences in an organization (such as fabricating details on one’s resume such that an individual is hired into a job for which they are ultimately not qualified), other instances of dishonest impression management may be relatively innocuous and largely serve to facilitate social interactions. For example, the common response of “yes—I did read that book” (when the actor has not) is likely a social-functional response to maintain a relationship by avoiding conversational disruption or validating the other party’s enthusiasm (cf. Leavitt & Sluss, 2015). Thus, while many acts of dishonesty may in fact be unethical (violating a normative standard of society) or deviant (violating a normative standard of an organization), the category of workplace dishonesty also includes behaviors that are not particularly unethical or deviant, as well as behaviors that are actually intended to be prosocial in nature. For example, telling a coworker facing dire circumstances that everything will be alright reflects the use of deception to provide interpersonal support (Brown & Levinson, 1987), and showing appreciation for an unwanted or undesirable gift reflects the use of dishonesty to maintain politeness (Broomfield, Robinson, & Robinson, 2002; Levine & Schweitzer, 2015). At the extreme, doctors may give falsely favorable prognoses to reduce the anxiety of terminally ill patients (Iezzoni, Rao, DesRoches, Vogeli, & Campbell, 2012). Accordingly, it has been argued that certain acts of dishonesty which protect friends from unnecessary emotional harm may actually be viewed as moral or virtuous choices, and some acts of job-related dishonesty (e.g., lying by undercover police officers) may be necessary to serve the public good and save lives.

Thus, the set of behaviors that comprise workplace dishonesty is not readily subsumed within the traditional categories of unethical workplace behavior or organizational deviance. Some instances, such as lying about one’s own accomplishments unrelated to work, may have more in common with impression management, whereas dishonest behaviors such as white lies may have more in common with prosocial behavior. Accordingly, to garner a deeper understanding of workplace dishonesty, researchers should focus on the social considerations relevant to deception, rather than the extent to which deception is considered normatively appropriate. With this set of distinctions in mind, the two most common paradigms for studying dishonesty (the rational actor paradigm rooted in economics, and the behavioral ethics paradigm rooted in psychology) are considered here. The formalization of a third, emerging paradigm focusing on social factors is proposed to help explain many instances of workplace dishonesty.

Dominant Paradigms in the Study of Dishonest Behavior

Dominant models of dishonest behavior in the workplace draw primarily on two underlying approaches: economic explanations that describe dishonesty as rational, self-interested behavior, and psychological explanations that typically emphasize shortcomings in information processing or deficiencies in self-control resources.

Rational Actor (Economic) Factors

Briefly, an economic perspective on dishonesty relies on a traditional cost-benefit approach, suggesting that individuals will generally weigh the potential payoff of a dishonest act against the probability of being caught and the magnitude of punishment (Lewicki, 1983; Mazar, Amir, & Ariely, 2008). This approach has been fruitful, and notably, it explains dishonest behavior across a variety of contexts. For example, research on negotiation has uncovered the prevalence (and effectiveness!) of lying to gain advantage in ethical dilemmas (Tenbrunsel, 1998). Negotiators often feign opposing interests in a common-value issue to gain an advantage on other issues by faking a concession (O’Connor & Carnevale, 1997), and the likelihood of dishonesty increases when clear ethical rules for negotiation are absent (Aquino, 1998) or when repeated interactions with the negotiation partner are unlikely (Lewicki, 1983). The rational-economic approach is also commonly used to explain instances of fraud—theorizing that financial misreporting generally occurs at the confluence of pressure (the perceived need to commit fraud), opportunity (weak controls or poor oversight), and rationalization (a mindset that enables self-justification) (Albrecht, Wernz, & Williams, 1995). This “fraud triangle” has emerged as the dominant approach to fraud deterrence among members of the American Institute of Certified Public Accountants (AICPA), demonstrating that the economic approach is relevant to predicting and preventing many large-scale acts of workplace dishonesty. Accordingly, intervention strategies based upon the economic model frequently focus on increased employee monitoring and accountability (Pierce, Snow, & McAfee, 2015).
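As a stylized formalization (the notation here is supplied for exposition and is not drawn from the works cited above), this calculus can be expressed as a simple expected-utility inequality: the rational actor chooses dishonesty only when E[U(lie)] = (1 − p)·B − p·C > 0, where B is the payoff if the deception goes undetected, p is the perceived probability of being caught, and C is the magnitude of punishment if caught. Framed this way, monitoring and accountability interventions deter dishonesty by raising p or C until the inequality no longer holds.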

Behavioral Ethics (Psychological) Factors

By contrast, common psychological explanations tend to describe individuals as engaging in unethical behavior somewhat inadvertently while pursuing their workplace goals. While the economic model generally argues that individuals will engage in dishonesty to the extent that it is self-benefitting and they can get away with it, common psychological approaches tend to describe actors as simultaneously economically and morally motivated. Within this broad psychological paradigm, individuals typically pursue work goals with the intention of maintaining ethical standards, but engage in dishonesty as a function of compromised self-regulation (Barnes, Schaubroeck, Huth, & Ghumman, 2011; Cohen et al., 2014) or a lack of in-the-moment awareness of moral standards and their relevance to the issue at hand (Jones, 1991; Reynolds, Dang, Yam, & Leavitt, 2014). Accordingly, this approach assumes that activation of moral content and bolstering the ability to resist temptation are critical for reducing workplace dishonesty. For example, Gino and Margolis (2011) found that individuals primed to pursue goals from a promotion focus (associated with maximizing gains and obtaining ideal outcomes) engaged in significantly more overclaiming of unearned money than individuals primed with a prevention focus (associated with minimizing errors and meeting expectations), suggesting that subtle messaging about goal pursuit diverts the individual’s attention from monitoring for moral hazards. Similarly, missing goals by a small margin can lead individuals to behave more unethically than missing goals by a larger margin (Schweitzer, Ordonez, & Douma, 2004), suggesting that as goals become proximal, moral concerns become less salient and temptations related to goal outcomes become greater. According to the general assumptions of the psychological paradigm, individuals will engage in less self-benefitting unethical behavior when cues that remind the individual of moral rules and values (Desai & Kouchaki, 2016), or social sanctions that inhibit unethical behavior (including one’s own cognitions or valued identities; Aquino, Reed, Thau, & Freeman, 2007; Moore, 2008), are present in the workplace. Accordingly, much of the psychologically grounded work on reducing unethical behavior in the workplace focuses on improving ethical decision making (Rest, 1986), modeling and rewarding appropriate behavior (Brown, Treviño, & Harrison, 2005), and presenting overt (Gino & Desai, 2012) or non-conscious cues (Leavitt, Reynolds, Barnes, Schilpzand, & Hannah, 2012; Leavitt, Zhu, & Aquino, 2016) that activate appropriate moral content.

Social Actor Factors

While much unethical workplace behavior is undertaken in the context of meeting instrumental or economic goals (e.g., cutting corners around employee safety to save money or stealing from a cash register), both the economically and psychologically grounded literatures have largely overlooked the fact that many forms of dishonesty may be undertaken for explicitly social (rather than self-interested) reasons. An individual lying to cover for the absences of a coworker, or a lending agent falsifying records to postpone repossession of a customer’s home, represents an instance that falls outside the scope of the two most common paradigms for understanding workplace dishonesty—both approaches are generally insufficient to describe situations under which an individual engages in dishonesty primarily to benefit another person or to restore perceived injustices. Thus, scholars have begun to approach workplace dishonesty by considering explicitly social (rather than instrumental) motives.

Indeed, social self-concept has been central to understanding workplace dishonesty, with scholars generally finding that placing importance on being moral as central to one’s identity decreases unethical behavior (Aquino & Reed, 2002; Aquino, Freeman, Reed, Lim, & Felps, 2009). Dishonesty is often kept in check by the fundamental desire to maintain a self-concept that includes honesty (Aronson, 1969), and recent work focusing on dishonest behavior has generally supported the notion that self-concept maintenance places boundaries around the extent to which most people are willing to behave dishonestly (Mazar, Amir, & Ariely, 2008). In other words, people engage in dishonesty to the extent that they can find an equilibrium between gaining incentives from their dishonesty and maintaining their self-image as honest people. For example, a recent study found that having individuals sign important documents (such as tax or insurance statements) before self-reporting claims, rather than signing at the end, significantly decreases dishonest reporting by heightening self-awareness and forcing individuals to declare their own honesty (Shu, Mazar, Gino, Ariely, & Bazerman, 2012).

More recently, however, scholars have also begun to argue that social motives not only inhibit or bound dishonest behavior carried out for the purposes of self-interest, but that social motives themselves may serve as principal drivers of dishonesty in the absence of self-serving opportunities. That is, upholding one’s valued identities, maintaining the well-being of relationship partners, and protecting the principles or status of a collective may all drive dishonesty. Indeed, this research has generally found support for the notion that financial incentives/personal gain are not sufficient to explain many instances of workplace dishonesty, and that dishonesty is sometimes undertaken even at personal expense or risk to one’s own well-being. For example, Gino and Pierce (2009) found that moral emotions stemming from inequity led to dishonest behavior when grading and rewarding another participant’s performance on an anagram solving task. Specifically, envy inspired by the target’s relative wealth (compared to the grader) led to dishonesty intended to harm the target; by contrast, guilt from a positive inequity (i.e., the rater’s own relative wealth) and empathy from witnessing the target’s negative inequity relative to others both led to dishonest helping behavior. Extending these findings and using archival field data from the Department of Motor Vehicles, Gino and Pierce (2010) found evidence for wealth-based motives in illicit customer help: emissions inspectors showed a pattern of being lenient (fraudulently passing cars that should have failed emissions testing) that favored standard over luxury vehicles. A follow-up experiment using vignettes showed that envy stemming from relative wealth likely explains the greater leniency toward standard versus luxury vehicles (Gino & Pierce, 2010).

A related stream of research has focused on the concept of benevolent dishonesty, including lies intended to benefit the target at the expense of the person telling the lie (termed altruistic lies) and lies that benefit both the target and the liar (termed Pareto lies) (Erat & Gneezy, 2012). Indeed, diary studies have found that most individuals lie more than once per day, with many lies being told for the sake of another party (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996), and lab studies have demonstrated that individuals are significantly less averse to lying when it is done for purely benevolent reasons (Erat & Gneezy, 2012). Similarly, dishonesty is viewed as more ethical when it is carried out for altruistic reasons (Levine & Schweitzer, 2014), and observers of such dishonesty actually respond with increased benevolence-based trust (Levine & Schweitzer, 2015). Complicating matters, people are more willing to engage in dishonesty (misreporting performance) when there is a secondary beneficiary to their dishonest behavior, even when the benefits to the other party are quite negligible (Wiltermuth, 2011). Thus, the potential benefit to others can both motivate dishonest behavior (Erat & Gneezy, 2012) and serve as a convenient justification for engaging in self-serving dishonesty (Wiltermuth, 2011).

Finally, role- and identity-based demands may also motivate episodes of dishonesty (Grover, 1993a; Leavitt & Sluss, 2015). Grover (1993b) found that conflicts between roles led to falsified reporting among healthcare professionals, such that they generally resolved the role-based conflict by directing dishonesty toward the role with which they were less identified. Recent theoretical work has similarly suggested that workplace dishonesty may often be motivated by a desire to uphold identity-based needs specific to the personal, relational, and collective levels of identity (Leavitt & Sluss, 2015). Identity threats may trigger a broad swath of dishonesty in organizations, such that unmanaged or intractable threats to highly valued identities are especially likely to lead to lying. Moreover, identity-based lies become increasingly likely during interactions with an audience for whom the threatened identity is especially relevant, and with whom the actor shares few other common identities for meaningful social interaction (Leavitt & Sluss, 2015). Accordingly, the authors conclude that much workplace dishonesty may stem from individuals simply attempting to live up to the prototypes and internalized expectations of a valued identity, and that the likelihood of dishonesty increases when the individual lacks alternative opportunities for self-affirmation in the workplace.

With these potential social antecedents of dishonest behavior in mind, the known and yet-unexplored consequences of dishonest behavior can be considered, focusing on the detection of dishonesty, the effects on interpersonal relationships once dishonesty has been detected, and the subsequent behavior of the actor who has engaged in a dishonest act.

Detection of Dishonesty in Interpersonal Encounters

While technology for confirming dishonesty in the workplace (including the use of polygraphs and functional magnetic resonance imaging) has generated much debate and controversy (cf. Murphy, 1993), an emerging body of research has focused on factors that increase the likelihood of detecting deception within interpersonal encounters. In general, human judgments of dishonesty are little better than chance alone (Bond & DePaulo, 2006), suggesting that much of the time interpersonal deception in organizations may go unnoticed. However, recent work has uncovered new insights about the discovery of deception. Many of these findings have relevance for organizational contexts, and may generate additional areas of inquiry for scholars interested in dishonesty within the workplace. First, increased cognitive load on the part of the deceiver (in this case, having the liar retell events in reverse chronological order) led to more cues of deceit and consequently better observer judgment (Vrij, Mann, Fisher, Leal, Milne, & Bull, 2008). This finding may suggest that deception produced under organizational conditions where the cognitive load of employees is likely to be relatively high (e.g., situations that create cognitive depletion, fatigue, or stress) may be more readily detectable than deception that is produced under conditions where cognitive load is lower. Future research may similarly examine whether deception that occurs “reflexively” in response to high-stress or busy situations (e.g., end-of-quarter reporting, periods of potential downsizing) is easier to detect than deception that occurs during times of relative tranquility.

Other promising research has found evidence that humans may have an unconscious ability to detect lies that outperforms our more deliberative attempts to discern whether someone is lying or telling the truth (ten Brinke, Stimson, & Carney, 2014). Specifically, these authors found that implicit cognitive activation of concepts related to “truth” or “dishonesty” following interactions with another party was significantly more accurate (average r = .28) than direct (self-reported) judgments of the other party’s honesty (average r = –.11; z for difference = –3.32, p < .001). Thus, relatively automatic processes may be effective for detecting deception, but conscious deliberation about deception may lead to the inclusion of biases and incorrect decision rules that reduce accuracy. In organizational settings, this finding may suggest that organizational members are less likely to detect interpersonal deception from people they trust or have lengthy experience with; conversely, they may also be likely to wrongly perceive deception (false positives) from individuals they view negatively for reasons unrelated to honesty (e.g., low performers). Future research may thus examine how workplace relationship history and common biases of social judgment (e.g., those related to race, age, and gender) influence the likelihood of correctly discerning truth from deception in workplace interactions.

Finally, recent unpublished research has demonstrated that power differentially affects the ability both to successfully deceive and to detect deception on the part of others (Carney, Dubois, Nichiporuk, ten Brinke, Rucker, & Galinsky, 2013). Specifically, across two studies, the authors found that experimentally endowed power led to an overall greater ability to deceive others. The authors argue that these effects occur because high power both mitigates the stress associated with dishonesty (reducing the transmission of detectable bodily cues of deception) and reduces sensitivity to social norms (which are typically condemnatory of deception). This supports previous research demonstrating that bodily configurations and postures that prime power generally lead to more dishonest behavior (Yap, Wazlawek, Lucas, Cuddy, & Carney, 2013). Moreover, these authors discovered an interesting equilibrium regarding power and lying—namely, while high power leads to increased believability, low power actually increases the ability to successfully detect lies (Carney et al., 2013). While this stream of research is nascent, the potential extension to organizational settings is intriguing: those organizational members who most need to detect dishonesty—those high in power—are less able to do so than those with lower power. Accordingly, future research may examine the use of lower-status organizational members or subordinates as advice networks on issues of potential dishonesty, and organizations may even consider simple interventions that temporarily lower self-perceived power (e.g., low chairs) when asking members to honestly report troubling information.

Future research may also consider additional factors that increase the likelihood of detecting, believing, or even propagating deceptive messages in organizations. While it has been proposed that threats to valued identities may greatly increase the likelihood of workplace dishonesty (Leavitt & Sluss, 2015), it is also possible that collectively experiencing identity threats with the dishonest individual may increase the likelihood that organizational members will believe the dishonest message, to the extent that it assuages discomfort caused by the threat. For example, a district manager in a poorly performing region may be more likely to believe falsified sales reports from his or her historically low-performing team, to the extent that he or she attributes the “improvements” to his or her own competence as a manager. Similarly, employees at Enron were noted to have believed and spread optimistic (yet untrue) rumors about the health and future of the company immediately preceding its fall (McLean & Elkind, 2013; in Leavitt & Sluss, 2015). Researchers may also consider the extent to which dishonest acts on behalf of another person may be less detectable in interpersonal interactions than those which benefit only the self. Research on relational job design (Grant, 2007, 2008) has found that acting on behalf of another’s interests can increase felt conviction and confidence (Amanatullah & Morris, 2010). Thus, it is reasonable to suggest that dishonesty which benefits another person may reduce the anxiety or doubt that makes lies more detectable by others. In summary, just as many (non-work) social interactions may reward some degree of dishonesty (Bok, 2011; Nyberg, 1993), future organizational research may focus on conditions under which dishonesty is not just more likely to occur or be detected, but also when it may be favored over unvarnished truth by organizational members.

Responses to Dishonesty

Researchers have also devoted some attention to how organizational members respond once dishonesty has been uncovered. In general, deception has been argued (Bok, 2011) and empirically demonstrated (Schweitzer, Hershey, & Bradlow, 2006) to erode trust, while also causing harm to relationships (Tyler, Feldman, & Reichert, 2006). Moreover, deception increases the likelihood of retaliation for transgressions (Boles, Croson, & Murnighan, 2000). Thus, it is reasonable to conclude that the discovery of deception is generally corrosive to workplace relationships.

However, recent work examining motives and norms for dishonesty has uncovered points of nuance. For example, dishonesty about a negotiator’s emotional states and interpersonal reactions is generally viewed as more ethical or appropriate than other forms of dishonesty in a negotiation context, such as misrepresenting facts about one’s preferences or resources (Fulmer, Barry, & Long, 2009). This finding implies that in contexts where some degree of dishonesty or bluffing is to be expected, responses to dishonesty may be tempered. Relatedly, researchers may consider investigating the responses and corrective strategies of audiences in contexts where dishonesty, bluffing, or exaggeration is likely. For example, entrepreneurs sometimes lie to investors to improve the likelihood of acquiring resources required for firm growth and survival (Pollack & Bosse, 2014). While many funding pitches are likely to include exaggerations or projections made from unrealistically optimistic assessments (Pollack & Bosse, 2014), the process by which experienced investors assess the veracity of such claims and forgive versus punish dishonesty is worthy of future inquiry.

Relatedly, new research on benevolent lies (those told with the interest of protecting another party in mind) has shown that both observers and targets of benevolent dishonesty actually respond with increased benevolence-based (but not integrity-based) trust. This work overturns long-held assumptions that dishonesty itself necessarily breaches trust, suggesting that the effects on trust following dishonesty “may really tell us more about the consequences of selfish behavior than deception per se” (Levine & Schweitzer, 2015, p. 89). However, it is extremely likely that other important organizational factors moderate the dishonesty–trust relationship. One plausible missing component may be the extent to which dishonesty restricts the target’s agency. Specifically, individuals may still favorably view coworkers who have used dishonesty to protect them from harm, but this may greatly depend on the extent to which the dishonesty impinged upon their ability or choice to exercise alternative options. For example, an assistant professor submitting her materials for tenure and promotion may react quite differently to a senior colleague who told her a benevolent lie about her chances for tenure if she also had an outside opportunity that she turned down based upon this false information. Thus, researchers may explore the extent to which dishonest information interacts with agency and influences subsequent trust or retaliation.

Downstream Behaviors Following Dishonesty

Many clandestine instances of unethical or deviant behavior, such as taking company property, represent discrete episodes in time. Dishonesty, by contrast, can often linger into the future: a fraudulent earnings statement becomes a permanent record whose incorrect numbers will influence future earnings statements; a lie to one’s supervisor about a lacking skill set may emerge again any time that skill set is needed. Thus, the behavior of dishonest people following acts of dishonesty is a potentially fruitful area for further inquiry. A brief review of themes in the extant literature on downstream behaviors stemming from dishonesty follows, and key discrepancies that may steer future research are highlighted.

Living with Dishonesty

One especially promising avenue for future inquiry is the subsequent behavior of individuals who have engaged in dishonest behavior. Previous research has generally found that individuals who engage in dishonest or deviant acts are likely to engage in subsequent discretionary ethical behavior as a way of restoring moral credentials (Gino & Margolis, 2011; Merritt, Effron, & Monin, 2010). This effect appears particularly pronounced for individuals who internalize morality as especially self-defining and who thus act to restore their moral identity, whereas individuals who do not value morality as a self-defining trait are more likely to behave consistently and engage in subsequent dishonesty (Mulder & Aquino, 2013). While this literature on licensing may be especially insightful for considering when “good employees” may engage in bad acts (Klotz & Bolino, 2013) or for understanding how people maintain positive self-concepts in light of unethical behavior (Gino & Margolis, 2011), it may fall short of explaining subsequent behavior when dishonesty is public and the fabrication is enduring. Because dishonesty involves making a statement (verbal or written) to an audience, many cases of dishonesty involve commitment to a fabrication for which there is a permanent record, and which the individual may need to restate again in the future. In short, compared to many unethical acts, many cases of dishonesty cannot be thought of as discrete episodes, because future decisions and social encounters may again make the fabrication salient. Thus, while an individual may recredential themselves for sneaking out of the office early on a Tuesday (deviance) by rendering help to a coworker via email that night, an individual who has lied about his credentials on a resume may not be able to readily rectify the act by engaging in benevolent behavior—instead, dishonesty may beget further dishonesty to support the original fabrication (Bok, 2011).

When dishonest acts re-emerge in the workplace, individuals may have four likely response options: they may retract the fabrication (“sorry—I misspoke earlier!” or “let me resubmit that report—I was careless!”); they may hedge the fabrication by adding a qualifying truth (“yes, I was a college athlete, but it was only a club sport” or “that computer crash did slow me down, but I may not have made the deadline anyway”); they may commit to the fabrication by adding supplemental fabrications (e.g., supplementing fraudulent earnings statements supplied to an auditor with fraudulent cash-flow statements); or, in some cases, they may work to make the fabrication true. For example, an employee who has overrepresented his knowledge of a topic to his supervisor may spend his weekend studying up on it to make his claim accurate. In a more dramatic example, in the nonfiction book The Soul of a New Machine, author Tracy Kidder (2011) describes the leadership style on a critical skunkworks project at Data General as “mushroom management—keeping them in the dark, feeding them sh_t, and watching them grow”—the rationalization being that if the engineers had an honest perspective on the state of the company and the true risk of the project, it would have undermined their naïve passion and rendered them unlikely to succeed. In this case, dishonesty about current conditions may have created useful goals within the organization, which led to the project’s eventual success. Thus, a promising area of inquiry with regard to workplace dishonesty focuses on what dishonest organizational members do once their fabrications become public.

Discrepancies Emerging From Dishonesty

Accordingly, some insight into subsequent behavior following dishonesty in organizations may be gleaned by considering factors relevant to dishonest acts. Two critical discrepancies are created by dishonest behavior, and the magnitude of these discrepancies may have meaningful implications for subsequent behavior: first, the magnitude of the discrepancy between the truth and the claim; second, the time elapsed between the critical event and the reporting of the fabrication. Both of these discrepancies may have significant influence on whether dishonest statements are retracted, hedged, or followed by further acts of dishonesty in an effort to commit to the original act. Some dishonest statements show a greater discrepancy from the truth than others, potentially making them less amenable to retraction or hedging. For example, in 2012, then-vice presidential candidate Paul Ryan famously misstated his marathon completion time as less than three hours (a claim that showcased him as an especially elite athlete). When media sources tracked down his actual marathon time, however, it was revealed that his single marathon completion took slightly over four hours (a time more in line with a reasonably fit running enthusiast). Ryan responded by commenting that his brother had a time around the claimed three-hour mark (a hedge), while also commenting that he likely misremembered (a retraction—in fairness to now Speaker Ryan, it is unknown whether the falsehood was deliberate or not). Smaller fabrications such as this (which show only a moderate discrepancy from the truth) are more likely to be addressed through retraction or hedging, as it is possible that the actor’s dishonesty will be overlooked or readily forgiven in light of corrective or clarifying information. By contrast, a fabrication representing a larger discrepancy from the truth (e.g., a similar statement by someone who had never run a marathon or had failed to complete one) cannot be readily explained to the audience in a credible way. Thus, when fabrications with larger discrepancies from the truth re-emerge, the actor would more likely manage them by adding supplemental fabrications (e.g., “I didn’t run the actual official marathon; I just ran the distance myself informally for time” or “the race officials must have carelessly lost my results!”). While this example illustrates a small embellishment with minimal consequences for work life, it is likely that acts of resume fraud, falsified earnings statements, and other dishonest workplace behaviors are more likely to be resolved through retraction or hedging when their distance from the truth is smaller rather than greater. Accordingly, organizational scholars may consider exploring the notion that small acts of dishonesty beget eventual truth, and large acts of dishonesty beget more dishonesty.

Similarly, dishonest claims may be made immediately following a trigger event (e.g., a subordinate lies to their supervisor who has caught an error immediately after it is made), or some length of time after the trigger event (e.g., a CFO realizes a month before end-of-year reporting that his firm’s financial performance will fall woefully short of projections, and spends weeks considering whether he should engage in fraud to preserve the stock value). Temporal unfolding of events has been argued to be underspecified and undertheorized within the organizational and behavioral sciences (Mitchell & James, 2001), and workplace dishonesty is an area where this is especially true. This temporal discrepancy between the trigger event and the fabrication may have considerable impact on how the dishonest organizational member subsequently behaves. While it is likely that lies told relatively reflexively may be good candidates for retraction or hedging, dishonesty that occurs after lengthy deliberation may be more likely to inspire additional commitment, resulting in additional acts of dishonesty to cover the original act, or attempts to make the fabrication true. To wit, lies may sometimes be seen as borrowing from the future or true of an alternate and idealized conception of one’s self (Leavitt & Sluss, 2015), and psychological processes such as guilt, cognitive dissonance, and self-deception are likely factors in explaining how time spent developing and “living with” a fabrication will affect subsequent behavior in the workplace.

Finally, because social motives and their relative impact on behavior may vary between national or regional cultures, scholars may find meaningful boundary conditions in considering how social motives may differentially affect dishonesty across national contexts. For example, employees within collectivist cultures may be more motivated to engage in fraud that protects the well-being of their workgroup, and cultures that emphasize honor may unwittingly encourage dishonesty in response to affronts to status. Thus, cultural differences with regard to identity construal, felt obligations to close others, or need for social approval may all prove to be meaningful avenues for inquiry within the study of workplace dishonesty.

Conclusion

While unethical workplace behavior and organizational deviance will likely continue to generate interest in the organizational and behavioral sciences, scholars should further consider studying illicit and questionable behavior with social motives in mind. To wit, workplace dishonesty, which is socially situated and audience-directed, offers great opportunity for researchers interested in organizationally relevant and socially directed behavior. By considering the social conditions under which dishonesty will be detected, overlooked, or even propagated, organizations may be able to reduce the harm caused by dishonesty. By considering how and why trust will be harmed by the use of “prosocial” deception, organizational members may better judge when painful truths are more suitable than false platitudes. By examining how people behave following dishonesty, organizational leaders may be able to uncover dishonest acts and build cultures of honesty following deception.

References

Albrecht, W. S., Wernz, G. W., & Williams, T. L. (1995). Fraud: Bringing light to the dark side of business. Burr Ridge, IL: Irwin Professional.

Amanatullah, E. T., & Morris, M. W. (2010). Negotiating gender roles: Gender differences in assertive negotiating are mediated by women’s fear of backlash and attenuated when negotiating on behalf of others. Journal of Personality and Social Psychology, 98(2), 256–267.

Aquino, K. (1998). The effects of ethical climate and the availability of alternatives on the use of deception during negotiation. International Journal of Conflict Management, 9(3), 195–217.

Aquino, K., Freeman, D., Reed, A., II, Lim, V. K., & Felps, W. (2009). Testing a social-cognitive model of moral behavior: The interactive influence of situations and moral identity centrality. Journal of Personality and Social Psychology, 97(1), 123–141.

Aquino, K., & Reed, A., II. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83(6), 1423–1440.

Aquino, K., Reed, A., Thau, S., & Freeman, D. (2007). A grotesque and dark beauty: How moral identity and mechanisms of moral disengagement influence cognitive and emotional reactions to war. Journal of Experimental Social Psychology, 43(3), 385–392.

Ariely, D. (2013). The honest truth about dishonesty: How we lie to everyone—especially ourselves. New York: HarperCollins.

Aronson, E. (1969). The theory of cognitive dissonance: A current perspective. Advances in Experimental Social Psychology, 4, 1–34.

Barnes, C. M., Schaubroeck, J., Huth, M., & Ghumman, S. (2011). Lack of sleep and unethical conduct. Organizational Behavior and Human Decision Processes, 115(2), 169–180.

Bennett, R. J., & Robinson, S. L. (2000). Development of a measure of workplace deviance. Journal of Applied Psychology, 85(3), 349–360.

Bok, S. (2011). Lying: Moral choice in public and private life. New York: Vintage.

Boles, T. L., Croson, R. T., & Murnighan, J. K. (2000). Deception and retribution in repeated ultimatum bargaining. Organizational Behavior and Human Decision Processes, 83(2), 235–259.

Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234.

Broomfield, K. A., Robinson, E. J., & Robinson, W. P. (2002). Children’s understanding about white lies. British Journal of Developmental Psychology, 20(1), 47–65.

Brown, M. E., Treviño, L. K., & Harrison, D. A. (2005). Ethical leadership: A social learning perspective for construct development and testing. Organizational Behavior and Human Decision Processes, 97(2), 117–134.

Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage (Vol. 4). Cambridge, U.K.: Cambridge University Press.

Carney, D. R., Dubois, D., Nichiporuk, N., ten Brinke, L., Rucker, D. D., & Galinsky, A. D. (2013). The deception equilibrium: The powerful are better liars but the powerless are better lie-detectors. Manuscript submitted for publication.

Cohen, T. R., Panter, A. T., Turan, N., Morse, L., & Kim, Y. (2014). Moral character in the workplace. Journal of Personality and Social Psychology, 107(5), 943–963.

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979–995.

Desai, S., & Kouchaki, M. (2016). Moral symbols: A necklace of garlic against unethical requests. Academy of Management Journal, 60(1), 7–28.

Erat, S., & Gneezy, U. (2012). White lies. Management Science, 58(4), 723–733.

Frank, M. G., & Feeley, T. H. (2003). To catch a liar: Challenges for research in lie detection training. Journal of Applied Communication Research, 31, 58–75.

Fulmer, I. S., Barry, B., & Long, D. A. (2009). Lying and smiling: Informational and emotional deception in negotiation. Journal of Business Ethics, 88, 691–709.

Gino, F., & Desai, S. D. (2012). Memory lane and morality: How childhood memories promote prosocial behavior. Journal of Personality and Social Psychology, 102(4), 743–758.

Gino, F., & Margolis, J. D. (2011). Bringing ethics into focus: How regulatory focus and risk preferences influence (un)ethical behavior. Organizational Behavior and Human Decision Processes, 115(2), 145–156.

Gino, F., & Pierce, L. (2009). Dishonesty in the name of equity. Psychological Science, 20(9), 1153–1160.

Gino, F., & Pierce, L. (2010). Robin Hood under the hood: Wealth-based discrimination in illicit customer help. Organization Science, 21(6), 1176–1194.

Gino, F., & Shea, C. (2012). Deception in negotiation: The influence of emotion. In G. Bolton & R. Croson (Eds.), The Oxford handbook of economic conflict resolution (pp. 47–60). Oxford: Oxford University Press.

Goertz, G. (2006). Social science concepts: A user’s guide. Princeton, NJ: Princeton University Press.

Grant, A. M. (2007). Relational job design and the motivation to make a prosocial difference. Academy of Management Review, 32(2), 393–417.

Grant, A. M. (2008). Does intrinsic motivation fuel the prosocial fire? Motivational synergy in predicting persistence, performance, and productivity. Journal of Applied Psychology, 93, 48–58.

Green, S. P. (2006). Lying, cheating, and stealing: A moral theory of white-collar crime. Oxford: Oxford University Press.

Grover, S. L. (1993a). Lying, deceit, and subterfuge: A model of dishonesty in the workplace. Organization Science, 4(3), 478–495.

Grover, S. L. (1993b). Why professionals lie: The impact of professional role conflict on reporting accuracy. Organizational Behavior and Human Decision Processes, 55(2), 251–272.

Hancock, J. T. (2007). Digital deception: Why, when and how people lie online. In A. Joinson, K. McKenna, T. Postmes, & U. D. Reips (Eds.), The Oxford handbook of Internet psychology (pp. 289–301). Oxford: Oxford University Press.

Iezzoni, L. I., Rao, S. R., DesRoches, C. M., Vogeli, C., & Campbell, E. G. (2012). Survey shows that at least some physicians are not always open or honest with patients. Health Affairs, 31(2), 383–391.

Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16(2), 366–395.

Kidder, T. (2011). The soul of a new machine. New York: Back Bay Books.

Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95, 1–31.

Klotz, A. C., & Bolino, M. C. (2013). Citizenship and counterproductive work behavior: A moral licensing view. Academy of Management Review, 38(2), 292–306.

Leary, M. R., & Kowalski, R. M. (1990). Impression management: A literature review and two-component model. Psychological Bulletin, 107(1), 34–47.

Leavitt, K., Reynolds, S. J., Barnes, C. M., Schilpzand, P., & Hannah, S. T. (2012). Different hats, different obligations: Plural occupational identities and situated moral judgments. Academy of Management Journal, 55(6), 1316–1333.

Leavitt, K., & Sluss, D. M. (2015). Lying for who we are: An identity-based model of workplace dishonesty. Academy of Management Review, 40(4), 587–610.

Leavitt, K., Zhu, L., & Aquino, K. (2016). Good without knowing it: Subtle contextual cues can activate moral identity and reshape moral intuition. Journal of Business Ethics, 137(4), 785–800.

Levine, E. E., & Schweitzer, M. E. (2014). Are liars ethical? On the tension between benevolence and honesty. Journal of Experimental Social Psychology, 53, 107–117.

Levine, E. E., & Schweitzer, M. E. (2015). Prosocial lies: When deception breeds trust. Organizational Behavior and Human Decision Processes, 126, 88–106.

Lewicki, R. J. (1983). Lying and deception: A behavioral model. In M. H. Bazerman & R. J. Lewicki (Eds.), Negotiating in organizations (pp. 68–90). Beverly Hills, CA: SAGE.

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633–644.

McLean, B., & Elkind, P. (2013). The smartest guys in the room: The amazing rise and scandalous fall of Enron. New York: Penguin.

Merritt, A. C., Effron, D. A., & Monin, B. (2010). Moral self‐licensing: When being good frees us to be bad. Social and Personality Psychology Compass, 4(5), 344–357.

Mitchell, T. R., & James, L. R. (2001). Building better theory: Time and the specification of when things happen. Academy of Management Review, 26(4), 530–547.

Moore, C. (2008). Moral disengagement in processes of organizational corruption. Journal of Business Ethics, 80(1), 129–139.

Mulder, L. B., & Aquino, K. (2013). The role of moral identity in the aftermath of dishonesty. Organizational Behavior and Human Decision Processes, 121(2), 219–230.

Murnighan, J. K. (1991). The dynamics of bargaining games. Upper Saddle River, NJ: Prentice Hall.

Murphy, K. R. (1993). Honesty in the workplace. Belmont, CA: Brooks/Cole.

Nyberg, D. (1993). The varnished truth. Chicago: University of Chicago Press.

O’Connor, K. M., & Carnevale, P. J. (1997). A nasty but effective negotiation strategy: Misrepresentation of a common-value issue. Personality and Social Psychology Bulletin, 23(5), 504–515.

Pierce, L., Snow, D. C., & McAfee, A. (2015). Cleaning house: The impact of information technology monitoring on employee theft and productivity. Management Science, 61(10), 2299–2319.

Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2016). Recommendations for creating better concept definitions in the organizational, behavioral, and social sciences. Organizational Research Methods, 19(2), 159–203.

Pollack, J. M., & Bosse, D. A. (2014). When do investors forgive entrepreneurs for lying? Journal of Business Venturing, 29(6), 741–754.

Rest, J. R. (1986). Moral development: Advances in research and theory. New York: Praeger.

Reynolds, S. J., Dang, C. T., Yam, K. C., & Leavitt, K. (2014). The role of moral knowledge in everyday immorality: What does it matter if I know what is right? Organizational Behavior and Human Decision Processes, 123(2), 124–137.

Reynolds, S. J., Leavitt, K., & DeCelles, K. A. (2010). Automatic ethics: The effects of implicit assumptions and contextual cues on moral behavior. Journal of Applied Psychology, 95(4), 752–760.

Schweitzer, M. E., Hershey, J. C., & Bradlow, E. T. (2006). Promises and lies: Restoring violated trust. Organizational Behavior and Human Decision Processes, 101(1), 1–19.

Schweitzer, M. E., Ordóñez, L., & Douma, B. (2004). Goal setting as a motivator of unethical behavior. Academy of Management Journal, 47(3), 422–432.

Scott, E. D. (2003). Plane truth: A qualitative study of employee dishonesty in the airline industry. Journal of Business Ethics, 42(4), 321–337.

Scott, E. D., & Jehn, K. A. (2003). About face: How employee dishonesty influences a stakeholder’s image of an organization. Business & Society, 42(2), 234–266.

Shu, L. L., Mazar, N., Gino, F., Ariely, D., & Bazerman, M. H. (2012). Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proceedings of the National Academy of Sciences, 109(38), 15197–15200.

ten Brinke, L., Stimson, D., & Carney, D. R. (2014). Some evidence for unconscious lie detection. Psychological Science, 25(5), 1098–1105.

Tenbrunsel, A. E. (1998). Misrepresentation and expectations of misrepresentation in an ethical dilemma: The role of incentives and temptation. Academy of Management Journal, 41(3), 330–339.

Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32(6), 951–990.

Tyler, J. M., Feldman, R. S., & Reichert, A. (2006). The price of deceptive behavior: Disliking and lying to people who lie to us. Journal of Experimental Social Psychology, 42(1), 69–77.

Umphress, E. E., Bingham, J. B., & Mitchell, M. S. (2010). Unethical behavior in the name of the company: The moderating effect of organizational identification and positive reciprocity beliefs on unethical pro-organizational behavior. Journal of Applied Psychology, 95(4), 769–780.

Vrij, A., Mann, S. A., Fisher, R. P., Leal, S., Milne, R., & Bull, R. (2008). Increasing cognitive load to facilitate lie detection: The benefit of recalling an event in reverse order. Law and Human Behavior, 32(3), 253–265.

Wiltermuth, S. S. (2011). Cheating more when the spoils are split. Organizational Behavior and Human Decision Processes, 115(2), 157–168.

Yap, A. J., Wazlawek, A. S., Lucas, B. J., Cuddy, A. J., & Carney, D. R. (2013). The ergonomics of dishonesty: The effect of incidental posture on stealing, cheating, and traffic violations. Psychological Science, 24(11), 2281–2289.

Notes:

(1.) Some philosophers would consider acts of omission (leaving out the truth) tantamount to dishonesty, whereas others would not. Some (such as Kant or Aquinas) considered stating mistruth in any form (including telling fables to children, or jokes that involve hypothetical scenarios) to be an egregious act of dishonesty. Augustine of Hippo, by contrast, did not consider fallacious jocose statements (e.g., saying that an elephant, a Texan, and a rabbi walked into a bar together, with no knowledge of this event ever actually happening, for the purpose of creating amusement) to be dishonest. To avoid confounding dishonesty with constructs such as organizational silence, the definition here is limited to deliberate commissions of fabrication that are expected to be believed.

(2.) Recognizing that there are instances in which individuals may pass along false information without knowing it (e.g., an automobile salesperson passes along information about vehicle emission rates from a spec sheet that they believed was accurate), the focus here is on deliberate acts of dishonesty in which the actor is aware of the falsehood.