Biases in bioethics: a narrative review

Abstract

Although biases can distort bioethics work, they have received surprisingly little and fragmented attention in bioethics compared with other fields of research. This article provides an overview of potentially relevant biases in bioethics, such as cognitive biases, affective biases, imperatives, and moral biases. Special attention is given to moral biases, which are discussed in terms of (1) Framings, (2) Moral theory bias, (3) Analysis bias, (4) Argumentation bias, and (5) Decision bias. While the overview is not exhaustive and the taxonomy is by no means absolute, it provides initial guidance for assessing the relevance of various biases for specific kinds of bioethics work. One reason why we should identify and address biases in bioethics is that doing so can help us assess and improve the quality of bioethics work.

Background

Biases can broadly be defined as “pervasive simplifications or distortions in judgment and reasoning that systematically affect human decision making” [1]. There is an extensive literature on biases in general, and a wide range of biases have been identified for specific professions, including health professionals and health researchers [2,3,4,5,6,7,8].

In psychology and behavioral economics, more than 180 biases have been identified, and in empirical research there are many measures to estimate, avoid, and reduce biases [9,10,11]. Compared to other fields, biases have gained less attention in bioethics.

Certainly, biases have been mentioned in bioethics. For example, some biases have been identified in clinical ethics committee work [12], and specific biases have been recognized, such as “whiteness in bioethics” [13, 14], gender bias (“maleness”) [15, 16], western bias [17], cultural bias [18], and geographical bias (in terms of which topics are discussed in bioethics) [19]. Specific biases have also been shown to undermine patient autonomy [20, 21] and health professionals’ autonomy [22]. Bias amongst journal editors [23] may be relevant for bioethics, and biases have been identified in specific practices, such as in nursing ethics and health care research ethics [24,25,26]. Moreover, outcome bias has been demonstrated in post hoc ethical judgment, as “the same behaviors produce more ethical condemnation when they happen to produce bad rather than good outcomes, even if the outcomes are determined by chance” [27].

Nonetheless, compared to other areas, biases have attracted surprisingly little and fragmented attention in bioethics. Does this mean that there are no biases in bioethics and its literature? Or does it mean that there are biases in bioethics, but we have no need to identify [28] or avoid them? Or is the reason that we have no means to avoid them, making it unnecessary to address them? Or that we are dealing with them under other names? These are relevant questions to be addressed in this article.

Accordingly, the key question for this article is: what kinds of biases can be relevant in bioethics, and what can we do about them? While it is impossible to provide an exhaustive list and in-depth analysis of all biases in bioethics, this article tries to provide an overview and to suggest a classification of biases in bioethics. This will be done by asking three specific questions:

  1. What is the relevance for bioethics of various biases identified in other fields of research?

  2. For which type of bioethics work are these biases (more or less) relevant?

  3. Are there biases specific to bioethics work?

Given the vast number of biases in the literature, it is impossible to provide a detailed description and deep analysis of all of them. For that purpose, I refer the reader to the specialized literature (found in the references). The goal of this article is more modest: (a) to provide a compilation of a wide range of research that facilitates newcomers’ access to a fascinating and important field (i.e., an educational purpose), (b) to demonstrate an approach for investigating biases in bioethics in a systematic manner (typology), and (c) to draw attention to a topic that deserves more explicit attention than it has obtained (i.e., to stimulate debate). The overall aim, however, is to contribute to the improvement of bioethics: biases can distort or reduce the quality of our work, and the first step is to identify and acknowledge them.

Some initial distinctions and clarifications

The provided definition of bias is very broad: “pervasive simplifications or distortions in judgment and reasoning that systematically affect human decision making” [1]. It can be argued that it is directed towards rapid decision-making rather than more elaborate reflection [9, 11, 29]. This is true, but the task in this article is to investigate its relevance for bioethics. Hence, “decision making” must reflect the decisions that we make in bioethics.

Moreover, bias has been defined in the field of ethics as “faulty beliefs, attitudes, or behavioral tendencies that constrain cognition and thereby inhibit an individual's ability to make ethical decisions” [30]. While the latter definition works well, this article will not be restricted to “cognition” in a narrow sense, but includes affection, judgment, and moral deliberation.

Additionally, it can be relevant to differentiate between bias in the process, in the content (of the outcome of ethics work), and in the characteristics of (specific) bioethicists. Clearly, as bioethicists we are subject to a wide range of biases and prejudices in all parts of our activities. Due to limited space, this article will focus more specifically on the biases that can appear in our activities, i.e., in doing bioethics.

As indicated, biases may pose ethical problems, such as stigmatization, discrimination, and injustice. As with psychological biases, biases in bioethics may result in unsafe, ineffective, or unwarranted care [31]. While highly important and ethically relevant, the consequences of biases in actual care are beyond the topic of this study and will be left for other and more specific investigations.

Accordingly, it can be helpful to differentiate between various types of biases depending on the type of bioethics work. Bioethics is a type of applied ethics [32] that has been defined as “a field spanning a range of different philosophical approaches, normative standpoints, methods and styles of analysis, metaphysics, and ontologies” [33]. While there are many ways to classify the activities within bioethics, the following appears relevant for this study:

  (a) Philosophical, ethical, and conceptual analyses (PEC): theoretical or explorative analysis without an explicit normative conclusion, decision, favoritism, or advice.

  (b) Ethical analyses (EA), in which bioethicists explore the evidence and relevant argumentation, deliberate carefully, and provide a well-founded conclusion (working as judges in the terminology of Haidt [34]).

  (c) Clinical ethics consultation (CEC): communication and facilitation of decision-making [35].

  (d) Agitation (A), where bioethicists argue (one-sidedly, rhetorically, or defensively) for a given view (functioning as lawyers in the wording of Jonathan Haidt [34]).

  (e) Empirical research (ER), in which bioethicists contribute to producing empirical evidence on normative issues.

  (f) Ethics literature synthesis (ELS): summaries of normative literature, such as in reports etc. [36].

Moreover, there may be differences between biases in meta-ethics, normative ethics, applied ethics, and descriptive/empirical ethics [32]. Although the biases discussed in this article are identified in bioethics, they may be relevant for other types of ethics as well.

In the following, I will provide an overview of and address biases in bioethics work and introduce a taxonomy that can hopefully be useful for (a) educational and (b) classificatory purposes, as well as (c) stimulating debate and further research. Please note that the taxonomy is neither exhaustive nor absolute.

Methods

In order to identify biases in bioethics, a narrative (overview) review [37] was conducted. An initial search was performed in PubMed (February 5, 2022) with the search string (bias[MeSH Terms]) AND ((bioethics[MeSH Terms]) OR (medical ethics[MeSH Terms])).
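For transparency, such a search can in principle be reproduced programmatically. The following is a minimal sketch assuming Biopython's Entrez interface (an assumption; any PubMed client would do). The contact email is a placeholder, and hit counts will differ from the February 2022 snapshot as PubMed is continuously updated:

```python
# Minimal sketch of reproducing the PubMed search above (assumes Biopython:
# pip install biopython). Not part of the original method; for illustration only.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

# The search string used in this review (parentheses balanced)
query = "(bias[MeSH Terms]) AND ((bioethics[MeSH Terms]) OR (medical ethics[MeSH Terms]))"

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)  # fetch matching PMIDs
record = Entrez.read(handle)
handle.close()

print("Hits:", record["Count"])   # the review reports 156 references at search time
print(record["IdList"][:10])      # first ten PMIDs, e.g., for title/abstract screening
```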

156 references were identified and screened by title and abstract. References that did not thematically address ethical issues or were not related to biases relevant for bioethics were excluded. Of the 43 papers examined in full text, 32 were included. Based on snowballing techniques, 74 references were added from the included papers, and a further 61 relevant articles referred to in these were included.

Additionally, based on previous research [7, 38], biases in psychology and behavioral economics were reviewed and those relevant for bioethics were included. References for and examples of these were identified by specific (targeted) searches. As the goal was to find relevant examples (and not to be exhaustive), only one or two examples or references were included.

Main text

Mental biases

In accordance with the extensive studies of biases in psychology and behavioral economics, it is reasonable to differentiate between cognitive and affective biases, as both can distort bioethics work. Additionally, I have identified imperatives (as a type of bias) and moral biases (see below). Space does not allow mentioning, explaining, and exemplifying all the biases that are relevant in bioethics; while several will be listed, defined, and explained in tables, only some will be discussed. Moreover, biases appear in the bioethics literature in three ways: (1) descriptions of biases in the field of interest in articles on bioethical issues, e.g., how health professionals may have specific cognitive biases in clinical decision-making; (2) biases as explanations of positions or arguments in bioethics, for example that the withholding-withdrawing distinction is a result of loss aversion; (3) biases in doing bioethics (e.g., in arguments or reasoning). As the latter two are the most important for the quality of bioethics work, the emphasis will be on those.

Cognitive biases

The psychology and behavioral economics literature has identified a wide range of cognitive biases [9, 11, 29]. Many of these are relevant for bioethics, as they influence the cognitive aspects of ethical judgments and decision making [30, 39]. Table 1 provides a selection of cognitive biases that can be relevant in bioethics and worth considering, a definition and/or short description of each, as well as an indication of which type of bioethics work each bias mainly relates to.

Table 1 Cognitive biases relevant for bioethics listed in alphabetical order with explanations and an indication for what type of bioethics this bias may be most relevant to assess

One bias that can be observed in the bioethics literature is the extension bias. For example, it is frequently thought that more blood tests and radiological examinations are better than fewer [40]. Correspondingly, in the enhancement debate it has been argued that more intelligence is better than little or normal intelligence [41]. Such cognitive biases are also relevant in the ethics of priority setting, where providing many low-value services is erroneously considered to be of great value [6]. Moreover, we sometimes tend to think that the more arguments we can find for a decision, the better (neglecting quality). Hence, the general tendency to think that more is better than less appears to have some relevance in bioethics as well.

According to the so-called focusing illusion, we tend to focus too much on certain details, ignoring other factors [42]. This bias is particularly relevant in complex cases, where we may come to base ethical analyses on specific aspects and premises (such as facts, values, or principles). Moreover, it may be relevant for ethical arguments, where we can come to focus solely on specific principles, e.g., on the principle of personal autonomy in assessing prenatal screening [43]. The focusing illusion is related to the prominence effect (see below) and the anchoring effect, where we tend to rely too much on initial information and ignore high-quality evidence (or contextual information) that may be more difficult to obtain [8].

Confirmation bias is the tendency to focus on information in a way that confirms one's preconceptions or expectations and is related to what has been called the “self-serving bias,” i.e., the tendency unwittingly to assimilate evidence in a way that favors a conclusion that we stand to gain from [44]. Confirmation bias may not be restricted to evidence, but may also include intuitions, arguments, and judgments that support a specific bioethical perspective or conclusion.

Another bias worth paying careful attention to is the endowment effect, according to which we can come to overvalue what we already have (or have obtained) compared to alternatives. While bioethicists do not obtain or depend on things (as in experiments on the endowment effect), the same psychological mechanism may be relevant for our relationship with arguments, perspectives, lines of reasoning, theoretical positions, etc. In the same way that we tend to demand much more to give up an object than we would be willing to pay to acquire it [45], we may cling to a specific perspective or position in bioethics. Once we have an insight or a view, we may not be willing to give it away or replace it with another, even if the alternative is better. As such, an “endowment effect” in bioethics can spur conservatism [46].

The tendency to overestimate the accuracy of one's own judgments, i.e., the illusion of validity, appears to be as relevant in bioethics work as elsewhere [47]. The same goes for the tendency to rely on familiar methods, ignoring or devaluing alternative approaches [48]. Bioethicists narrowly following one approach, be it (rule-)utilitarianism or deontology, are subject to the law of the instrument.

Other general biases may be relevant in bioethics work as well, such as the implicit bias which is described as the tendency to let underlying attitudes and stereotypes attributed to persons or groups of people affect how we understand, judge, and engage with them (without being aware of it) [49]. This has also been labelled “unconscious bias” and is related to the synecdoche effect, where one specific characteristic comes to signify the whole person [50], e.g., where persons with certain disabilities are addressed in terms of their disability, and not as a person.

Also in bioethics we may be subject to present bias, e.g., when we show a stronger preference for addressing immediate issues, outcomes, or solutions over more long-term problems, outcomes, or solutions. When we are faced with topical cases in the clinic or in the media and are expected to suggest solutions, more long-term and principled issues may be overshadowed [51, 52].

Probability neglect is the tendency to neglect probabilities when making decisions under uncertainty. This seems to be a general psychological bias that may be relevant when we assess potential outcomes of decisions or actions [53, 54]. Empirical premises are crucial to many types of bioethics work, and we may come to neglect small risks or to overrate them entirely. For example, bioethicists arguing for germline gene editing (GGE) may downplay off-target effects: “it is plausible that as GGE develops the rate of off-target mutations will become negligible. The rates of off-targets mutations in animal models have been declining rapidly, and such mutations are now considered ‘undetectable’ in some applications” [55]. Others may overrate such effects.

The tendency towards excessive optimism about innovations (pro-innovation bias) is also known in healthcare [56, 57]: some bioethicists are known to be very positive towards specific innovations, such as CRISPR-Cas9, and others are optimistic about technological innovations in general [58]. The problem is that they may ignore limitations and weaknesses. The opposite is also true, of course; see status quo bias below.

The relevance of the rhyme-as-reason effect can be illustrated with John Harris’ elegant argument: “I have a rational preference to remain nondisabled, and I have that preference for any children I may have. To have a rational preference not to be disabled is not the same as having a rational preference for the nondisabled as persons” [59]. While catchy, it is not clear that the claim holds [60].

Implicit biases “involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender” and are prevalent amongst health professionals [2] and bioethicists [61, 62]. Because they can operate to the disadvantage of those who are already vulnerable, such biases are relevant in bioethics. Examples include minority ethnic populations, immigrants, poor people, low health-literacy individuals, sexual minorities, children, women, the elderly, the mentally ill, and persons with overweight or disabilities. Even more, anyone may be rendered vulnerable in a given context [63].

Common to all the cognitive biases is that they may distort our reasoning in bioethics. Moreover, while all are relevant in bioethics, they may be more or less relevant in different types of bioethics work as indicated in the table.

Affective biases

While the distinction between cognitive and affective biases is debatable, several scholars prefer to differentiate between them. Readers who dispute this distinction can add the following to the cognitive biases above. Table 2 provides a brief overview of the affective biases with definitions/descriptions and indications of which type of bioethics work each bias mainly relates to. Then follows a brief discussion of some of the biases.

Table 2 Affective biases relevant for bioethics listed in alphabetical order with explanations and an indication for what type of bioethics this bias may be most relevant to assess

In bioethics, individual cases can be paradigmatic, such as those of Karen Ann Quinlan, Terri Schiavo, and Charlie Gard. However, identified individuals, conditions, or groups of persons can induce special sympathy and empathy (or the opposite). This can engender unwarranted attention and priority towards specific groups in bioethics due to what has been called the identifiability bias, “the singularity effect” [64], and “bias towards identified victims” [65], but also “compassion fade” [66, 67]. Hence, identification can confer importance in biased ways.

Affective forecasting is another type of bias, where one’s emotional state and conceptions are projected onto future events [68]. Examples from bioethics are cases where hopes and desires flavor the ethical assessment of emerging technologies [7]. Related to affective forecasting is the impact bias, the tendency to overestimate the impact of a future event [69]. It can be observed in bioethics debates on novel technologies where future benefits are taken for granted, e.g., on gene editing, personalized/precision medicine, big data, and artificial intelligence, and it relates to projection bias (see Table 2).

On the other hand, as bioethicists we may let aversions to dangers or uncertainties influence our work and be subject to biases such as aversion to risk [70, 71] and aversion to ambiguity [8]. These biases may make bioethicists promote excessive diagnostics (overdiagnosis) and therapeutics (overtreatment) [6].

Loss aversion means that the perceived disadvantage of giving up an item is greater than the utility associated with acquiring it [72]. As with the endowment effect (and others), bioethicists do not obtain things or items. Nonetheless, the same psychological mechanism may be relevant for our relationship with arguments, perspectives, lines of reasoning, and theoretical positions. We have invested many years of study, research, and work experience in specific approaches or positions, and leaving them could trigger the same affective responses. On the other hand, loss aversion has been applied to explain (or undermine) bioethical arguments, such as the distinction between withholding and withdrawing treatment [73]. Hence, the same bias may figure in explanations of bioethical arguments and be relevant for us as bioethicists.

While bioethics addresses complicated and complex issues, we may come to simplify and let a few, or even one, dominant factor determine our final analyses. This resembles what is defined as the prominence effect and relates to what has been called “scope neglect,” “scope insensitivity,” and “opportunity cost neglect.” The point is that we lose important aspects by narrowing our scope.

The yuck factor has been extensively discussed in the ethics literature [74,75,76,77] and there are different conceptions of whether it directs or distorts moral reasoning. The point here is not to provide the final answer to this question, but more modestly to point out that it can influence reasoning in bioethics in covert manners.

Hence, affective biases may distort our reasoning in bioethics just as cognitive biases do. Accordingly, being able to identify them is the first step towards addressing and handling them, and thereby improving bioethics.

Imperatives

Another type of distortion of judgments is imperatives, which are also often called biases. Imperatives are actions that are felt to be necessary despite dim outcomes. They are immediate reflections of long-established doctrine or belief and can be rooted in deontology [78]. Status quo bias [46] and progress bias [58] are but two examples presented in Table 3.

Table 3 Imperatives relevant for bioethics listed in alphabetical order with explanations and an indication for what type of bioethics this bias may be most relevant to assess

Status quo bias is described as an irrational preference for an option only because it preserves the current state of affairs [79]. This may result from people’s aversion to change (conservatism), making them avoid changing practice [8], or from system justification, i.e., the need for stability in bioethical theory and practice, even when these are dysfunctional or hamper improvements [80]. Status quo bias is also associated with the endowment effect (see above), according to which we tend to overvalue what we already have compared to alternatives.

In contrast to the status quo bias, there is also a progress bias, according to which persons experience a strong propensity to promote what is considered progressive [58]. It is related to what has been called pro-innovation bias and optimism bias (see above). Additionally, progress bias is related to what has been called adoption addiction, according to which we appear to be more interested in assessing and investing in new and shiny technologies than in reassessing and disinvesting in old and inefficient ones [81]. In bioethics, status quo bias and progress bias are particularly relevant to the assessment of biotechnologies [58].

In ethical debates on genomic analysis, incidental findings, return of results, newborn screening, and prenatal screening we often encounter the argument that people have the right to know [82] or that not providing a test (or its results) is denying them crucial information [83]. Certainly, holding back information can undermine respect for autonomy, but this is not always the case in the mentioned examples [84]. This indicates that the imperative of knowledge is relevant in bioethics debates as it is in healthcare and society in general [7].

Correspondingly, there may be a competency effect in bioethics: a tendency to think that ethicists with better formal competency will produce better bioethics work, which in turn results in better decision-making. Again, this may be the case, but it is certainly not always so. Prominent bioethicists may be extremely busy and lack the time to apply the full capacity of their competency to all purposes.

Again, these (and other) imperatives [7] may influence, undermine the quality of, and even distort our work in bioethics.

So far, this review illustrates that a wide range of cognitive and affective biases as well as imperatives are relevant in bioethics work. Clearly, as general mental mechanisms they influence bioethicists in the same manner as the general population. What I have tried to investigate in addition is whether the same psychological mechanisms may have any particular relevance to bioethics work. In addition to the “mental mechanisms” identified above, the literature reveals that there are “moral mechanisms” that can (negatively) influence bioethics work.

Moral bias

In the same manner as our thoughts, affections, and imperatives may influence or even distort our moral judgments, so may various moral mechanisms. Our moral judgments may be influenced by our ethical positions, religious beliefs, methodological preferences, and moral inclinations. Accordingly, moral bias can be defined as moral beliefs, attitudes, perspectives, or behavioral tendencies that unwittingly tend to influence our moral judgment in specific directions.

Again, space does not allow for an exhaustive review of all kinds of moral bias. Only biases that are important to acknowledge and address, and whose handling can contribute to improving the quality of bioethics work, are included. To facilitate reading, the biases are grouped into five categories: (1) Framings, (2) Moral theory bias, (3) Analysis bias, (4) Argumentation bias, and (5) Decision bias. The table at the end provides a summary of the biases and indicates for what type of bioethics work they may be most relevant. Please note that the groupings are not absolute.

Framings

A moral framing effect can be defined as a bias where people’s moral judgments are influenced by how the options or arguments are framed or by the (ethical) framings of moral situations or challenges.

One type of moral bias that can be understood as a framing is standpoint adherence, according to which we are unwilling to change standpoint despite solid evidence to the contrary. Empirical research shows that strong positions are difficult to change, even with good evidence [85]. This relates to cognitive biases such as the ostrich effect and the overconfidence effect. Bioethicists who change their standpoint, method, or perspective are rarely heard of. However, it is worth noting that some experiments have shown that some of our preferences are easy to influence (choice blindness and preference change) [86].

Moreover, there may be framing effects in the terminology we apply. Although bioethicists have increasingly become aware of the normatively relevant difference between “epileptics” and “persons with epilepsy,” we have used terms such as “hypochondriacs,” “diabetics,” and “Down’s children,” etc. [87]. The moral relevance of this has been discussed in relation to the synecdoche effect [50] (discussed above).

One reason for the terminology problem may be that bioethicists are personally, socially, and culturally embedded: “bioethics is an embedded socio-cultural practice, shaped by the everchanging intuitions of individual philosophers, and cannot be viewed as an intellectual endeavour detached from the particular issues that give rise to, and motivate, that analysis” [88].

Corresponding to what in business ethics research has been called the social desirability response bias [89], and to what the Science and Technology Studies (STS) literature has coined tacit commitments and narrative bias [90], there can be an expectation bias in bioethics whereby social expectations influence bioethics work. In clinical ethics, several such biases have been identified, in terms of “bias towards the interests of hospital management,” “bias towards laws and regulations,” “bias towards individuals’ perspectives and interests,” and “bias towards the perspectives and interests of health-care professions” [12]. Such biases stem from conflicts of interest and seem especially relevant when bioethicists work in or for expert groups. Thus, expectation bias is related to social mechanisms and motivations.

As illustrated by the research of Don A. Moore and colleagues, self-interest tends to operate via automatic processes in conflicts of interest [91,92,93], and Mahdi Kafaee and colleagues have experimentally demonstrated how conflicts of interest might shape the perception of a situation in a subconscious manner [94]. Consequently, conflicts of interest can (unconsciously) bias bioethics work [95]. Clearly, ethicists are hired by stakeholders and can have conflicts of interest like other researchers [96, 97], especially in settings where bioethics has become a business [98]. Moreover, there may be professional conflicts of interest, e.g., between ethicists and jurists or policy makers [99, 100]. Ethicists may also have strong opinions on controversial issues that bias their judgments [101], or be subject to political attention (“political bias”) [102]. As acknowledged, conflicts of interest have been identified as biases in clinical ethics committee work [103].

According to the impartiality illusion, we may think that we are impartial while closer analysis (by independent and blinded reviewers) may reveal specific tendencies, inclinations, or partiality. Everett provides one interesting example about endorsing consequentialism [104].

Another well-known way to frame a bioethical debate is by defining what is (not) the issue (“that’s not the question”) and identifying what is an ethical problem [105]. Such delimiting claims seem to be common [106,107,108,109,110,111] and can easily result in biased bioethics.

Hence, there may be many types of unconscious framings that direct bioethics work. Being aware of and addressing these framings may contribute to improving bioethics work.

Moral theory bias

There are also biases with respect to moral theories, i.e., where moral theories may direct how specific moral challenges are perceived, defined, deliberated, and solved. One such bias is theory dominance, according to which one theoretical perspective dominates the analysis, ignoring other relevant perspectives, adequate objections, or the context in which the problems arise. Accordingly, not being “practical in approach, philosophically well grounded, cross disciplinary,” or not being performed by “good people” or skilled professionals [112], may bias bioethics. The same goes for using ideal theories to tackle problems in non-ideal contexts [113] or failing to specify principles [114]. This does not, for example, make an explicitly stated virtue-ethical analysis of euthanasia biased, because the moral theory is explicitly declared. However, if the author uses the outcome of such an analysis to draw general conclusions, one could argue that the work is biased.

Yet another type of bias inherent in moral theories is what may be called conceptual bias. For example, it has been argued that there is a basic asymmetry in ethics, making some concepts, such as bad, easier to define and grasp than others, like good [115]. The same goes for disease versus health [116]. If there are structural asymmetries in moral concepts and ethical theories, this can bias our judgments in bioethics.

Furthermore, it has been pointed out that certain biases are more likely in specific moral theories. For example, it has been argued that there is a potential bias in casuistry, e.g., in describing, framing, selecting and comparing cases and paradigms [117]. The reason is that in order to assess relevance (of a case), we rely on general views, which may be biased. Correspondingly, it has been argued that the use of (constructed) case studies may mislead moral reasoning [118]. According to these lines of thought, it may be possible to assess various kinds of moral theories for their “characteristic biases.”

On the other hand, various types of cognitive biases may distort bioethical reasoning (in many theories). Dupras and colleagues identify three such cognitive biases that may impede ethical reasoning: exceptionalism, reductionism, and essentialism [119] where genetic exceptionalism, genetic reductionism, and genetic essentialism serve as examples.

Another theory-related type of bias is bias towards inadequate moral perspectives, i.e., the tendency to rely on arguments from an erroneous or inadequate moral theory or perspective, or to rationalize a preferred conclusion by appealing to arguments that underpin it, which has been identified in clinical ethics, for example [12]. On the other hand, it is argued that a lack of theoretical foundation (in moral philosophy) [120], the lack of specific theoretical foundations (such as utilitarianism plus decision theory) [121], or not being principle-based [122, 123] may also hamper and bias bioethical analysis. Others have pointed out that the lack of “sensitivity to the problem of the multiplicity of moral traditions” [99] could bias bioethics work. While interesting, there may be very many views on what an “inadequate moral perspective” is, and it may be difficult to decide what counts as adequate. Nonetheless, there may be some agreement on adequacy, e.g., that the ethics of proximity may be less relevant for the assessment of cost-effectiveness than utilitarian calculus.

The point here is that there may be implicit theoretical assumptions that may bias bioethics work. The same seems to be the case for our analyses.

Analysis bias

There are also potential biases related to ethical analysis in a broader sense. Myside bias is one example: the tendency to evaluate or generate evidence, test hypotheses, or analyze or address moral issues in a manner biased toward our own prior perspectives, opinions, attitudes, or positions [124]. At the same time, it has been shown that we may consider one-sided arguments to be better than balanced arguments (even when they oppose our own opinion) [125]. The way we assess arguments, weigh the various factors, and synthesize a topic may certainly be biased by unconscious mechanisms.

Moreover, the processes of specifying [126], interpreting [114], or balancing [127, 128] moral norms, values, and/or principles may be biased. Ethical work can also contain “moral fictions” biasing the analysis [129]. Moral fictions have been defined as “false statements endorsed to uphold cherished or entrenched moral positions in the face of conduct that is in tension with these established moral positions” [130]. However, labelling something as a “moral fiction” can itself introduce bias (see terminology bias above).

It has also been suggested that we can make “moral errors” or “moral fallacies” [131] due to various biases, such as psychic numbing, the identifiable victim effect, and victim-blaming [132].

Again, implicit assumptions or tendencies in our analyses may bias bioethics work.

Argumentation bias

Flawed arguments and fallacies in argumentation can also bias bioethics work. (Most) bioethicists are trained in detecting and avoiding flawed arguments, such as fallacies of vagueness, ambiguity, relevance, and vacuity [133]. However, the reviewed literature identifies flawed moral reasoning [134] and bad arguments that do not fall under the categories of illogical or flawed arguments [135]. Some of these can be characterized as rhetoric, deception, or argumentative techniques. The list of logical fallacies and bad arguments is long [133] and beyond the scope of this article. Here, only some examples are included to illustrate how profoundly flawed arguments can bias bioethics work.

False analogies can bias arguments if there are morally relevant differences between the case and the analog. One example in bioethics is the revelation of a false analogy in the argument for coercive measures against alcohol consumption during pregnancy, where it has been argued that using court orders to medically treat women (for alcohol dependency) during pregnancy is analogous to coercion by “physically abusive partners” [136].

Moreover, reasoning from is to ought can bias bioethics work. This is related to what has been called “Hume’s law,” “Hume’s guillotine,” or “the is-ought fallacy,” and to “the naturalistic fallacy” attributed to George Edward Moore (1873–1958) [137]. It is also related to reasoning from quantity to quality, e.g., in the enhancement debate, where it is argued that more intelligence is better [41].

Accordingly, inference from description to prescription is a well-known challenge, where ethical conclusions are based on opinion polls [138]. The amount of empirical work in bioethics has increased substantially over the last decade [139], improving the empirical premises for ethical analyses, but also posing challenges [140]. As knowledge about people’s attitudes towards biotechnologies, such as genetically modified human germlines, is used to inform policy making [141], it may also come to influence ethical analyses and argumentation.

Relatedly, the experience paradox, an appeal to experience, represents a “wide-ranging and under-acknowledged challenge for the field of bioethics,” according to which personal experience is a liability in bioethics debates when it expresses vested interests or is not representative of those involved [142]. This relates to epistemic (testimonial) injustice [143].

Related to some of the framing effects discussed above, we may in bioethics use vague, unclear, or ambiguous concepts, which can confuse, obfuscate, or frame the argument in unwarranted ways. One example is the concept of “naturalness” (for example, in the enhancement debate), which has been shown to be used in a number of ways that confuse rather than clarify arguments [144]. Admittedly, vagueness can be beneficial in bioethics [145,146,147]. However, it can also confuse arguments or stop them, e.g., in statements such as “that is not natural” or “that breaches personal autonomy.”

Related to the yuck-factor (see above) bioethicists can also appeal to effects of revulsion, repugnance, abhorrence, or repulsion [148] in their work. While moral disgust may play a role in bioethics, it may also be used in a manipulative way and bias an analysis or argument.

"Begging the question," or petitio principii, is the tendency to assume the conclusion of an argument. This form of argument can result in bias in bioethics, for example in debates on the beneficial outcomes of proton therapy for the treatment of cancer, it has been argued that it is unethical to waste time in assessing its outcome by high-quality trials as its outcomes obviously must be beneficial [149]. This relates to progress bias, discussed above.

Bias can also result from assuming controversial premises (without justification), drawing conclusions beyond premises, using obscure or controversial examples, analogies, or thought experiments [150], or concluding without assessing the truthfulness or plausibility of crucial premises [16]. The same goes for straw man arguments (refuting something else), argument selection, and not addressing relevant counterarguments.

While clearly not exhaustive, the examples illustrate that there are many ways in which flawed arguments can bias bioethics work.

Decision bias

Many biases also appear in moral decision-making, and many of them have been mentioned under cognitive and affective biases. While biases in decision-making merit a separate study [30], three main types of decision biases should be mentioned [151].

First, simplification biases can be observed when decisions are made based on selected and limited empirical evidence, e.g., when they are insensitive to base rates or are based on illusory correlations—or when only some of the empirical premises are taken into account.

Second, verification biases occur when decisions are made to stick to the status quo or to maintain consensus, e.g., when decisions are made to preserve consistency in a group and the experience of control.

Third, regulation biases are tendencies towards avoidance in ethical debates, for example in “rationalizing or downplaying the seriousness of ethical dilemmas and avoiding taking personal responsibility due to feelings of discomfort” [30].

The moral biases are summarized in Table 4.

Table 4 Moral biases relevant for bioethics with type of bias, short description, and an indication for what type of bioethics this bias may be most relevant to assess

Measures to address or avoid bias

As there are many biases, there are also many ways to address them. For example, some call for a ‘critical bioethics’ in which ethical analyses are empirically rooted [152], others argue for providing reflexive autoethnographic accounts of arguments in bioethics applying “confessional tales” [88], and some urge us to acknowledge the importance of (the framing of) stories [153].

Special (reverse) tests have been suggested to avoid specific biases, such as the status quo bias [46] and progress bias [58]. Adhering to criteria for good ethical argumentation, such as the Rapoport rules (after the Russian-born American game theorist Anatol Rapoport) [154] or the many sets of criteria for “good bioethics” [112, 122, 123, 130, 150, 155,156,157,158,159,160,161,162,163], may help avoid the negative effects of biases in bioethics. Correspondingly, declarations of biases, together with (or as part of) declarations of conflicts of interest, may also reduce (the effects of) biases.

Moreover, the general literature on biases offers much advice on debiasing [164, 165] and compensatory measures (such as nudging). Such suggestions also exist for health decisions in the clinical setting [31, 166,167,168]. “Moral debiasing” has also been suggested [169]. Clearly, several of these approaches may be relevant for bioethics as well. However, deciding which biases should be addressed, in what manner, and how to address mistaken moral judgments (or moral heuristics) [170] is another big issue beyond the scope of this study. Here, the point has been to provide an overview of biases that are relevant to bioethics work, to suggest a way to classify them, and to stimulate reflection, debate, and further research.

Discussion

This review provides an overview of biases that can be relevant to bioethics work. While most of the biases are well known from other fields, such as psychology and behavioral economics, their relevance has been investigated in the context of bioethics work and with examples from bioethics. Moreover, the article has suggested a way to classify the biases that can affect bioethics work, i.e., in terms of cognitive biases, affective biases, imperatives, and moral biases. The aim of the overview has been to increase awareness, reflection, and debate on biases in bioethics. The overall and more distal goal is to promote actions towards avoiding or reducing biases, and thereby hopefully to contribute to improving the quality of bioethics.

Biases by other names

At the outset of this article, I noted that biases have not gained much explicit attention in bioethics. There may certainly be many reasons for this. One reason why we are not as preoccupied with explicitly addressing biases as other fields are can of course be that we do not apply a unified or clearly defined method (that is subject to specific biases) in the first place [171], or that we think that biases are “productive in human understanding” [172] or are shortcuts to effective decisions [173]. Yet another reason may be that bioethics deals with norms and values, which can be thought to introduce “evaluative bias” [174]. On this view, bioethics is inherently biased, and hunting biases is counterproductive or even self-destructive. Be that as it may, as I have indicated, there may still be biases that we can and should address and avoid in bioethics.

Another reason for the lack of attention to biases in the bioethics literature can of course be that biases are classified and discussed differently in the bioethics literature. For example, the distinction between withholding versus withdrawing treatment is extensively debated as a philosophical question [175], and not as a bias, such as the omission bias (the tendency to judge harmful commissions as worse than equally harmful omissions) [176].

Moreover, bioethicists may be subject to phenomena that are characterized by other words, e.g., “partial moral blindness” [177], “cognitive islands” [178] or to “expert shopping” effects [179]. Moral sensitivity [180], moral judgments, and moral courage [181] may also be interpreted as biases [182]. For example, moral judgments are claimed to be based either on intuitive, default automatic emotional process (deontological) or on conscious, controlled reasoning processes (utilitarian) [183]. Additionally, bioethicists may be subject to misinformation and the spread of false belief [184]. How these phenomena relate to biases is the topic for a separate study.

Yet another reason could be that bioethicists are working with biases in the reasoning and argumentation of important issues, but call them by other names, such as “fallacies” or “flawed argumentation.” Accordingly, we would not need to discuss biases as they are addressed by our methods as such. However, as indicated in this study, it is far from obvious that all biases can be discussed or addressed as philosophical issues. Hence, it still seems important to investigate to what extent biases (as understood and studied in other fields) are relevant to bioethics.

Other areas for reporting, synthesizing, and assessing research results and providing input for decision-making have elaborate systems for assessing bias, such as the ‘Preferred Reporting Items for Systematic Reviews and Meta-Analyses’ (PRISMA) statement [185] and ‘Enhancing transparency in reporting the synthesis of qualitative research’ (ENTREQ). This is much less pronounced in bioethics. While there is some quality assessment in some approaches to literature synthesis in ethics [36], and the upcoming PRISMA Ethics reporting guideline for systematic reviews on ethics literature mentions the problem of bias (https://osf.io/g5kfb/), there seems to be very limited systematic assessment of bias in ethics. As long as biases can undermine the quality of research, distort decision-making, and have unjust implications, they are ethically relevant. Biases have also been demonstrated in ethics in general [30, 39], and it appears reasonable to assume that ethicists are subject to biases, as are other researchers and persons. In addition to general biases, such as cognitive and affective biases, bioethicists may be subject to specific “moral biases,” e.g., related to various types of professional characteristics in our field.

Psychological versus moral mechanisms

One important objection to this study is that not all biases can be transferred from the fields where they were studied (psychology and behavioral economics) to bioethics. Psychological mechanisms may not be relevant for moral phenomena. As already acknowledged, many biases are strongly related to things, while bioethics deals with ideas, norms, values, intuitions, judgments, and arguments. Hence, they may not be of any particular relevance. However, the main line of reasoning in this article has been that: (1) general psychological mechanisms influence bioethicists in the same manner as the general population and may thus be relevant in bioethics work because they can undermine good judgment; (2) the psychological mechanisms described as biases for one type of entity (e.g., decisions on giving away or selling items) may be relevant for the entities or phenomena of another area (e.g., the reluctance to give up ideas, positions, or perspectives in bioethics); and (3) examples can illustrate the relevance of the specific biases for bioethics.

Clearly, I have not provided empirical evidence for these “moral mechanisms” and their relationship to the “psychological mechanisms of bias” [186]. That is beyond the scope of this study and an important topic for further research. However, there clearly exist experiments and studies that measure bias in bioethics [30, 186].

Biases and bad bioethics

I have taken as a point of departure that biases may distort bioethics work and that identifying and addressing them may improve the quality of our work. I have also briefly said something about potential measures to address or avoid biases. I have left it open for debate and future research to decide which biases should be addressed or avoided in which types of bioethics work, as doing so would have been beyond the scope of this article. Nonetheless, I hope that the overview and taxonomy provided in this study can stimulate future debates and studies.

One important issue to consider, and which makes this particularly interesting, is that some types of bioethics work may be very biased, and still be of high quality. For example, a study on abortion by a Catholic academic or by a feminist may be biased in many of the ways described in this study, but still be of high quality. Accordingly, Jansen and Sulmasy argue that “good arguments can come from any quarter and that no argument should be dismissed or discounted simply because of its source”[28].

Nonetheless, just as revealing conflicts of interest can undermine the value of evidence, uncovering biases can undermine the power of an argument. A bias is an indicator of, but not a sufficient condition for, poor-quality bioethics.

Certainly, many things other than biases can undermine good bioethics [150]. Hence, avoiding biases will not guarantee the quality of bioethics, but it can contribute to improving quality together with many other measures. While other quality criteria have received quite some attention [112, 122, 123, 130, 155,156,157,158,159,160,161,162,163], many of the biases identified in this study have not.

Accordingly, we need to qualify the relationship between biases and quality in bioethics. I hope that this study can provide a good starting point for such work.

Limitations

The topic of this study is extensive, and there are certainly many limitations. First and foremost, this is a narrative (overview) review, with the limitations that entails [37]. It is not exhaustive. The selection of biases included in the study is limited and can itself be criticized for being biased. As stated openly at the outset, very many biases have been identified in the literature. Not all can be included, and admittedly I have made a selection. Thereby, many types of biases may have been excluded. For example, bioethicists are subject to pressure to publish and may be motivated to pose controversial claims that make them cited and renowned [88]. Moreover, many other specific biases, such as the bandwagon effect, can also be relevant in bioethics [20].

Yet another bias that is not included is the so-called moral credential effect, which occurs when someone who does something good gives themselves permission to be less good in the future. Although morally interesting, it may not be that relevant to bioethicists. Another topic that is not covered properly is biased decision-making processes [187]. Moreover, many of the common mechanisms and drivers that affect health professionals, researchers, and policy- and decision-makers in general will also apply to bioethicists.

There are also various kinds of biases that can occur due to ethical requirements. One such example is “consent bias,” which arises because the consent process results in participants who are not representative of the general population [188]. Such biases are also beyond the scope of this article, although they are ethics-related biases.

Although there are relevant arguments for including moral intuitions into biases [189], this would render this study too comprehensive. Moreover, the relationship between moral intuitions and biases is complex [190, 191] and warrants a separate study.

While admittedly having excluded relevant biases, such as “practitioner bias” [182], it may also be claimed that I have been too inclusive, as I have included the is-ought fallacy, which is a form of invalid deductive argument. Including invalid arguments (in propositional logic) would arguably stretch the term bias too far. I fully agree that invalid arguments in bioethics should be addressed with the terminology and methodology of logic and argumentation theory. I have rather included some examples of how flawed arguments can bias bioethics. Formal, informal, inductive, and other fallacies, as well as flaws in moral reasoning, are dealt with at greater length and in more detail elsewhere [133, 134].

There are also crucial philosophical questions following from the various types of biases that have been presented and (partly) discussed here, for example, what our responsibility for these biases is [192]. While such issues are important, they could not be included in this study.

Another important caveat is that the various biases do not apply all the time or in all contexts. While I have tried to indicate in which type of bioethics work they may be most relevant, their relevance may vary from country to country and from topic to topic. For example, conflicts of interest due to stakeholder expectations may vary from place to place.

One profound objection, which has already been mentioned, is that biases are basic to human beings and that it is impossible to eliminate them [20, 193]. Accordingly, unbiased bioethics is a utopian aim that should be abandoned [172]. As already admitted, deciding whether (and how) biases can be eliminated, reduced, or mitigated is beyond the scope of this study. However, implicit in the study lies a belief (a bias) that identifying biases and increasing awareness of them can be helpful for bioethics work. For example, one area where it can be helpful is the appraisal of the quality of normative literature [194]. Even if not all biases may be eliminable, some may very well be, while others may be reducible. Hence, there are good reasons to acknowledge and address biases in order to promote transparent and disciplined ethical work.

There are several challenges with revealing biases. First, the baring of biases may itself be biased. Second, revealing biases may require knowledge about the bioethicist, e.g., with respect to perspective, position, inherent inclinations, etc., which we may not have access to. Third, there may be disputes about biases, as about all other things in bioethics. Hence, while revealing, measuring, and assessing biases may be difficult, awareness and open discussion of biases may still be valuable and improve the quality of bioethics work.

As the identification, assessment, explanation, and classification of biases may itself be subject to biases, we may have higher-order biases. As this study presents an overview of biases, it is open for others to analyze, assess, and criticize them. The study does not provide evidence of the prevalence and impact of the biases, which invites further research.

Correspondingly, the various types of biases (cognitive, affective, imperatives, and moral biases) are not natural kinds, and the borders between them are blurred. Admittedly, the same goes for the grouping of biases. As shown, the various types of biases are also interconnected. Hence, the classification is open for discussion. The same is the indication of how relevant the biases are for the various kinds of bioethics work, as already pointed out.

Moreover, there are many relationships between psychological biases, imperatives, and moral biases. For example, biases here classified as moral biases, such as framing effects, analysis bias, and decision bias, are rooted in or related to corresponding cognitive biases. Correspondingly, there are relationships between various cognitive biases, such as between the status quo bias and the endowment effect, and between progress bias and pro-innovation bias, as already mentioned.

Additionally, the term “moral bias” is by no means new. It has been mentioned in the literature before [191] and relates to terms like “moral sentiment,” “moral distortion,” “moral blindness,” “moral dogmatism” [195], “moral prejudice,” and “moral intuition” [189], as already mentioned. Moreover, moral biases may be interpreted in many ways, e.g., in terms of moral heuristics. According to Gerd Gigerenzer, our moral decisions are fast and frugal and do not depend on our preferences or deliberate reasoning [189]. The issue of moral heuristics is a topic in its own right. The point here has not been to enter the general debates on moral bias, but only to use the term to refer to a class of biases that can distort bioethics work.

Another interesting topic for further research is the difference between being biased and simply doing wrong. For example, it could be argued that applying moral theory incorrectly is not a bias; it is just being a poor bioethicist. Both such errors and biases can be unconscious, and both may be recognized by the person committing them. However, errors appear to be more easily acknowledged and corrected than biases. In any case, the issue deserves more attention than it is given here.

Yet another issue that is beyond the scope of this article, but highly relevant for further research, is whether biases in bioethics can be measured. In the field of evidence production there are many approaches to classifying and measuring biases, and one could envision such approaches being established in bioethics as well, enabling proper quality assessment of bioethics work. While measuring biases is not well developed in bioethics, software applying syntactic and semantic tests for detecting bias has been suggested and demonstrated [196]. Moreover, a self-report measure that assesses an individual's propensity to express specific types of decision biases in ethics, the Biased Attitudes Scale (BiAS), has been validated [30]. This work is still in its infancy and needs further development and research. A simple, hypothetical illustration of what a syntactic test might look like is sketched below.
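To make the idea of syntactic bias detection concrete, the following minimal sketch flags passages whose ratio of loaded intensifiers to hedging terms is high, which might indicate one-sided argumentation. It is an illustration under stated assumptions only: the word lists and the ratio are my own choices and do not reproduce the method of [196] or the BiAS instrument [30].

```python
import re

# Purely illustrative word lists (assumptions, not taken from [196] or BiAS [30]):
# intensifiers that may signal one-sided argumentation, hedges that signal
# acknowledged uncertainty.
INTENSIFIERS = {"obviously", "clearly", "undeniably", "certainly", "indisputably"}
HEDGES = {"may", "might", "perhaps", "arguably", "possibly", "seems", "appears"}

def bias_signal(text: str) -> float:
    """Return a crude intensifier-to-hedge ratio for a passage of text.

    A ratio well above 1.0 might flag the passage for closer human review;
    the word lists and any threshold are illustrative assumptions only.
    """
    words = re.findall(r"[a-z']+", text.lower())
    intensifiers = sum(word in INTENSIFIERS for word in words)
    hedges = sum(word in HEDGES for word in words)
    return intensifiers / max(hedges, 1)  # avoid division by zero

if __name__ == "__main__":
    sample = ("Obviously, the new technology is clearly beneficial and "
              "certainly ought to be adopted without delay.")
    print(f"Bias signal for sample: {bias_signal(sample):.2f}")  # prints 3.00
```

Any such automated signal could at most support, not replace, the kind of reflective human appraisal discussed above.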

Conclusion

A wide range of biases have been identified as relevant for bioethics work: cognitive biases, affective biases, imperatives, and moral biases. In particular, moral biases have been discussed in terms of (1) Framings, (2) Moral theory bias, (3) Analysis bias, (4) Argumentation bias, and (5) Decision bias. Based on the general literature on bias and the bioethics literature, specific biases with relevance to various types of bioethics work have been recognized, classified, and explained. Selected examples are provided to help identify and address them.

There are two important reasons to identify and address biases in bioethics. First, analyzing biases in bioethics can help us assess the quality of bioethics work. Second, and consequently, identifying and addressing biases can help improve the quality of our work.

Accordingly, the overview provided here can help draw attention to biases, as awareness is the first step toward addressing and handling biases in bioethics. The overview can also be developed into more formal bias checklists and risk-of-bias assessments. However, this requires careful deliberation, as there is no simple relationship between bias and quality. Nonetheless, I hope that this overview can be a helpful starting point for further research that will be valuable for bioethics work.

Availability of data and materials

All data are available in the publication.

References

1. Toet A, Brouwer A-M, van den Bosch K, Korteling J. Effects of personal characteristics on susceptibility to decision bias: a literature study. Int J Humanit Soc Sci. 2016;5:1–17.
2. FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(1):19.
3. Hall WJ, Chapman MV, Lee KM, Merino YM, Thomas TW, Payne BK, Eng E, Day SH, Coyne-Beasley T. Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. Am J Public Health. 2015;105(12):e60–76.
4. Banuri S, Dercon S, Gauri V. Biased policy professionals. World Bank Econ Rev. 2019;33(2):310–27.
5. Chapman GB. Cognitive processes and biases in medical decision making. In: Chapman G, Sonnenberg F, editors. Decision making in health care: theory, psychology, and applications. New York: Cambridge University Press; 2003. p. 183–210.
6. Hofmann BM. Biases distorting priority setting. Health Policy. 2020;124(1):52–60.
7. Hofmann BM. Biases and imperatives in handling medical technology. Health Policy Technol. 2019;8:377–85.
8. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16(1):138.
9. Kahneman D. Maps of bounded rationality: psychology for behavioral economics. Am Econ Rev. 2003;93(5):1449–75.
10. Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619.
11. Baron J. Thinking and deciding. 3rd ed. New York: Cambridge University Press; 2000.
12. Magelssen M, Pedersen R, Førde R. Sources of bias in clinical ethics case deliberation. J Med Ethics. 2014;40:678–82.
13. Arekapudi S, Wynia MK. The unbearable whiteness of the mainstream: should we eliminate, or celebrate, bias in bioethics? Am J Bioeth. 2003;3:18.
14. Anderson W. The whiteness of bioethics. J Bioethical Inq. 2021;18(1):93–7.
15. Petersen A. From bioethics to a sociology of bio-knowledge. Soc Sci Med. 2013;98:264–70.
16. Wolf SM. Introduction: gender and feminism in bioethics. In: Feminism and bioethics: beyond reproduction. New York: Oxford University Press; 1996. p. 3–43.
17. Ten Have H, Gordijn B. The diversity of bioethics. Med Health Care Philos. 2013;16(4):635.
18. Jecker NS. Uncovering cultural bias in ethics consultation. Am J Bioeth. 2001;1(4):49–50.
19. Borry P, Schotsmans P, Dierickx K. How international is bioethics? A quantitative retrospective study. BMC Med Ethics. 2006;7(1):1–6.
20. Blumenthal-Barby J. Biases and heuristics in decision making and their impact on autonomy. Am J Bioeth. 2016;16(5):5–15.
21. Blumenthal-Barby JS. Good ethics and bad choices: the relevance of behavioral economics for medical ethics. MIT Press; 2021.
22. Schwab AP. Applying heuristics and biases more broadly and cautiously. Am J Bioeth. 2016;16(5):25–7.
23. Chattopadhyay S, Myser C, De Vries R. Bioethics and its gatekeepers: does institutional racism exist in leading bioethics journals? J Bioethical Inq. 2013;10(1):7–9.
24. Smith J, Noble H. Bias in research. Evid Based Nurs. 2014;17(4):100–1.
25. Foss C. Gender bias in nursing care? Gender-related differences in patient satisfaction with the quality of nursing care. Scand J Caring Sci. 2002;16(1):19–26.
26. Narayan MC. CE: addressing implicit bias in nursing: a review. AJN Am J Nurs. 2019;119(7):36–43.
27. Gino F, Moore D, Bazerman M. No harm, no foul: the outcome bias in ethical judgments. Harvard Business School NOM Working Paper (08-080); 2020.
28. Jansen LA, Sulmasy DP. Bioethics, conflicts of interest, the limits of transparency. Hastings Cent Rep. 2003;33(4):40–3.
29. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–31.
30. Watts LL, Medeiros KE, McIntosh TJ, Mulhearn TJ. Decision biases in the context of ethics: initial scale development and validation. Pers Individ Differ. 2020;153:109609.
31. Scott IA, Soon J, Elshaug AG, Lindner R. Countering cognitive biases in minimising low value care. Med J Aust. 2017;206(9):407–11.
32. Caplan AL, Arp R. Contemporary debates in bioethics. Wiley; 2013.
33. Holm S. Classification and normativity: some thoughts on different ways of carving up the field of bioethics. Camb Q Healthc Ethics. 2011;20(2):165–73.
34. Haidt J. The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol Rev. 2001;108(4):814.
35. Fletcher JC, Siegler M. What are the goals of ethics consultation? A consensus statement. J Clin Ethics. 1996;7(2):122–6.
36. Mertz M, Kahrass H, Strech D. Current state of ethics literature synthesis: a systematic review of reviews. BMC Med. 2016;14(1):1–12.
37. Green BN, Johnson CD, Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. J Chiropr Med. 2006;5(3):101–17.
38. Blawatt KR. Appendix A: list of cognitive biases. In: Marconomics. Emerald Group Publishing Limited; 2016.
39. Jones TM. Ethical decision making by individuals in organizations: an issue-contingent model. Acad Manag Rev. 1991;16(2):366–95.
40. Hofmann B. Biases and imperatives in handling medical technology. Health Policy Technol. 2019;8(4):377–85.
41. Moen OM. Bright new world. Camb Q Healthc Ethics. 2016;25(2):282–7.
42. Schkade DA, Kahneman D. Does living in California make people happy? A focusing illusion in judgments of life satisfaction. Psychol Sci. 1998;9(5):340–6.
43. Milligan E. The ethics of consent and choice in prenatal screening. Cambridge Scholars Publishing; 2011.
44. Rosenbaum L. Understanding bias: the case for careful study. N Engl J Med. 2015;372(20):1959–63.
45. Kahneman D, Knetsch JL, Thaler RH. Anomalies: the endowment effect, loss aversion, and status quo bias. J Econ Perspect. 1991;5(1):193–206.
46. Bostrom N, Ord T. The reversal test: eliminating status quo bias in applied ethics. Ethics. 2006;116(4):656–79.
47. Duckett S. Challenges of economic evaluation in rare diseases. J Med Ethics. 2022;48(2):93–4.
48. Sonnenberg A, Pohl H. ‘Do no harm’: an intuitive decision tool to assess the need for gastrointestinal endoscopy. Endosc Int Open. 2019;7(03):E384–8.
49. FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(1):1–18.
50. Asch A. Why I haven’t changed my mind about prenatal diagnosis: reflections and refinements. In: Parens E, Asch A, editors. Prenatal testing and disability rights. Washington: Georgetown University Press; 2000. p. 234–58.
51. Halpern SD, Truog RD, Miller FG. Cognitive bias and public health policy during the COVID-19 pandemic. JAMA. 2020;324(4):337–8.
52. Marteau TM, Ashcroft RE, Oliver A. Using financial incentives to achieve healthy behaviour. BMJ. 2009;338:b1415.
53. Siegal G, Siegal N, Bonnie RJ. An account of collective actions in public health. Am J Public Health. 2009;99(9):1583–7.
54. Daly M, Hevey D, Regan C. The role of perceived risk in general practitioners’ decisions to inform partners of HIV-infected patients. Br J Health Psychol. 2011;16(2):273–87.
55. Gyngell C, Douglas T, Savulescu J. The ethics of germline gene editing. J Appl Philos. 2017;34(4):498–513.
56. Greenhalgh T. Five biases of new technologies. Br J Gen Pract. 2013;63(613):425.
57. Chalmers I. What is the prior probability of a proposed new treatment being superior to established treatments? BMJ. 1997;314(7073):74.
58. Hofmann B. Progress bias versus status quo bias in the ethics of emerging science and technology. Bioethics. 2020;34(3):252–63.
59. Harris J. One principle and three fallacies of disability studies. J Med Ethics. 2001;27(6):383–7.
60. Hofmann B. You are inferior! Revisiting the expressivist argument. Bioethics. 2017;31(7):505–14.
61. Estime S, Williams B. Systemic racism in America and the call to action. Am J Bioeth. 2021;21(2):41–3.
62. Ho A. Racism and bioethics: are we part of the problem? Am J Bioeth. 2016;16(4):23–5.
63. Martin AK, Tavaglione N, Hurst S. Resolving the conflict: clarifying ‘vulnerability’ in health care ethics. Kennedy Inst Ethics J. 2014;24:51–72.
64. Wiss J, Andersson D, Slovic P, Västfjäll D, Tinghög G. The influence of identifiability and singularity in moral decision making. Judgm Decis Mak. 2015. https://doi.org/10.1017/S1930297500005623.
65. Daniels N. Reasonable disagreement about identified vs. statistical victims. Hastings Cent Rep. 2012;42(1):35–45.
66. Västfjäll D, Slovic P, Mayorga M, Peters E. Compassion fade: affect and charity are greatest for a single child in need. PLoS ONE. 2014;9(6):e100115.
67. Slovic P. The prominence effect: confronting the collapse of humanitarian values in foreign policy decisions. In: Numbers and nerves: information, emotion, and meaning in a world of data. 2015. p. 53–61.
68. Wilson TD, Gilbert DT. Affective forecasting. Adv Exp Soc Psychol. 2003;35:345–411.
69. Morewedge CK, Buechel EC. Motivated underpinnings of the impact bias in affective forecasts. Emotion. 2013;13(6):1023.
70. Shrader-Frechette KS. Risk and rationality: philosophical foundations for populist reforms. University of California Press; 1991.
71. Hofmann B. Ethical considerations in the use of technology in the cardiac intensive care unit. In: Romanò M, editor. Palliative care in cardiac intensive care units. Cham: Springer; 2021. p. 173–82.
72. Gross JA. Trying the case against bioethics. Am J Bioeth. 2006;6(3):71–3.
73. Wilkinson D, Butcherine E, Savulescu J. Withdrawal aversion and the equivalence test. Am J Bioeth. 2019;19(3):21–8.
74. Kelly D. Yuck! The nature and moral significance of disgust. Cambridge: MIT Press; 2011.
75. Niemelä J. What puts the ‘yuck’ in the yuck factor? Bioethics. 2011;25(5):267–79.
76. George A. The yuck factor: disgust’s surprising power. New Scientist. 2012;215(2873):34–7.
77. Salles A, de Melo-Martin I. Disgust in bioethics. Camb Q Healthc Ethics. 2012;21(2):267–80.
78. Mazarr MJ. Rethinking risk in national security: lessons of the financial crisis for risk management. New York: Springer; 2016.
79. Kahneman D, Tversky A. Choices, values, and frames. In: MacLean LC, Ziemba WT, editors. Handbook of the fundamentals of financial decision making: in 2 parts. World Scientific; 2013. p. 269–78.
80. Jost J, Hunyady O. The psychology of system justification and the palliative function of ideology. Eur Rev Soc Psychol. 2003;13(1):111–53.
81. Bryan S, Mitton C, Donaldson C. Breaking the addiction to technology adoption. Health Econ. 2014;23:379–83.
82. de Jong A, Dondorp WJ, de Die-Smulders CE, Frints SG, de Wert GM. Non-invasive prenatal testing: ethical issues explored. Eur J Hum Genet. 2010;18(3):272–7.
83. Chervenak FA, McCullough LB. Ethical dimensions of ultrasound screening for fetal anomalies. Ann N Y Acad Sci. 1998;847(1):185–90.
84. Hofmann B. Incidental findings of uncertain significance: to know or not to know - that is not the question. BMC Med Ethics. 2016;17(13):1–9.
85. Nyhan B, Reifler J. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine. 2015;33(3):459–64.
86. Johansson P, Hall L, Tärning B, Sikström S, Chater N. Choice blindness and preference change: you will like this paper better if you (believe you) chose to read it! J Behav Decis Mak. 2014;27(3):281–9.
87. Martindale AM. Cosmetic procedures: ethical issues. 2016.
88. Ives J, Dunn M. Who’s arguing? A call for reflexivity in bioethics. Bioethics. 2010;24(5):256–65.
89. Randall DM, Fernandes MF. The social desirability response bias in ethics research. J Bus Ethics. 1991;10(11):805–17.
90. Williams R. Compressed foresight and narrative bias: pitfalls in assessing high technology futures. Science as Culture. 2006;15(4):327–48.
91. Moore DA, Loewenstein G. Self-interest, automaticity, and the psychology of conflict of interest. Soc Justice Res. 2004;17(2):189–202.
92. Moore DA, Tanlu L, Bazerman MH. Conflict of interest and the intrusion of bias. Judgm Decis Mak. 2010;5(1):37.
93. Moore DA, Tetlock PE, Tanlu L, Bazerman MH. Conflicts of interest and the case of auditor independence: moral seduction and strategic issue cycling. Acad Manag Rev. 2006;31(1):10–29.
94. Kafaee M, Kheirkhah MT, Balali R, Gharibzadeh S. Conflict of interest as a cognitive bias. Account Res. 2021. https://doi.org/10.1080/08989621.2021.1938556.
95. Macklin R. Conflict of interest and bias in publication. Indian J Med Ethics. 2016;1(4):219–22.
96. Sharp RR, Scott AL, Landy DC, Kicklighter LA. Who is buying bioethics research? Am J Bioeth. 2008;8(8):54–8.
97. Epstein M. ‘Tell us what you want to do, and we’ll tell you how to do it ethically’—academic bioethics: routinely ideological and occasionally corrupt. Am J Bioeth. 2008;8(8):63–5.
98. Biller-Andorno N. The bioethics biz. J Med Ethics. 2009;35(8):462.
99. Brody BA. Quality of scholarship in bioethics. J Med Philos. 1990;15(2):161–78.
100. Jackson E. The relationship between medical law and good medical ethics. J Med Ethics. 2015;41(1):95–8.
101. Caplan AL. No method, thus madness? Hastings Cent Rep. 2006;36(2):12–3.
102. Kahn JP. What happens when politics discovers bioethics? Hastings Cent Rep. 2006;36(3):10.
103. Magelssen M, Pedersen R, Førde R. Sources of bias in clinical ethics case deliberation. J Med Ethics. 2014;40(10):678–82.
104. Everett JA, et al. The costs of being consequentialist: social inference from instrumental harm and impartial beneficence. J Exp Soc Psychol. 2018;29:200–16.
105. Salloch S, Ritter P, Wäscher S, Vollmann J, Schildmann J. Was ist ein ethisches Problem und wie finde ich es? Theoretische, methodologische und forschungspraktische Fragen der Identifikation ethischer Probleme am Beispiel einer empirisch-ethischen Interventionsstudie [What is an ethical problem and how do I find it? Theoretical, methodological, and practical questions of identifying ethical problems, exemplified by an empirical-ethical intervention study]. Ethik in der Medizin. 2016;28(4):267–81.
106. Lewis S. Full surrogacy now: feminism against family. Verso Books; 2019.
107. McKinnon K. Negotiating with babies. In: Birthing work. Springer; 2020. p. 19–35.
108. Cassell EJ. Doctoring: the nature of primary care medicine. USA: Oxford University Press; 2002.
109. Ohno-Machado L. To share or not to share: that is not the question. Sci Transl Med. 2012;4(165):165cm115.
110. Fan R, Chan H. Opt-in or opt-out: that is not the question. Hong Kong Med J. 2017;23(6):658–60.
111. Levy N. Rethinking neuroethics in the light of the extended mind thesis. Am J Bioeth. 2007;7(9):3–11.
112. Farsides B. What is good medical ethics? A very personal response to a difficult question. J Med Ethics. 2015;41(1):52–5.
113. Luna F. Medical ethics and more: ideal theories, non-ideal theories and conscientious objection. J Med Ethics. 2015;41(1):129–33.
114. Richardson HS. Specifying, balancing, and interpreting bioethical principles. J Med Philos. 2000;25(3):285–307.
115. Tranøy KE. Asymmetries in ethics: on the structure of a general theory of ethics. Inquiry. 1967;10(1–4):351–72.
116. Hofmann B. Simplified models of the relationship between health and disease. Theor Med Bioeth. 2005;26(5):355–77.
117. Kopelman LM. Case method and casuistry: the problem of bias. Theor Med. 1994;15(1):21–37.
118. Pattison S, Dickenson D, Parker M, Heller T. Do case studies mislead about the nature of reality? J Med Ethics. 1999;25(1):42–6.
119. Dupras C, Hagan J, Joly Y. Overcoming biases together: normative stakes of interdisciplinarity in bioethics. AJOB Empir Bioethics. 2020;11(1):20–3.
120. Savulescu J. Bioethics: why philosophy is essential for progress. J Med Ethics. 2015;41(1):28–33.
121. Baron J. Against bioethics. Cambridge: MIT Press; 2006.
122. Macklin R. Can one do good medical ethics without principles? J Med Ethics. 2015;41(1):75–8.
123. Finlay IG. What is it to do good medical ethics? From the perspective of a practising doctor who is in Parliament. J Med Ethics. 2015;41(1):83–6.
124. Stanovich KE, West RF, Toplak ME. Myside bias, rational thinking, and intelligence. Curr Dir Psychol Sci. 2013;22(4):259–64.
125. Baron J. Myside bias in thinking about abortion. Think Reason. 1995;1(3):221–35.
126. Richardson HS. Specifying norms as a way to resolve concrete ethical problems. Philos Public Aff. 1990;19:279–310.
127. Beauchamp TL, Childress JF. Principles of biomedical ethics. 8th ed. New York: Oxford University Press; 2019.
128. Gert B, Culver CM, Clouser KD. Bioethics: a return to fundamentals. Oxford University Press; 2006.
129. Miller FG, Truog RD, Brock DW. Moral fictions and medical ethics. Bioethics. 2010;24(9):453–60.
130. Brock DW. Good medical ethics. J Med Ethics. 2015;41(1):34–6.
131. Herman MH. Subjective moral biases & fallacies: developing scientifically & practically adequate moral analogues of cognitive heuristics & biases. 2019.
132. Herman M. Moral heuristics and biases. J Cogn Neuroethics. 2014;73(1):127–42.
133. Sinnott-Armstrong W, Fogelin RJ. Understanding arguments: an introduction to informal logic. Stamford: Cengage Learning; 2014.
134. Harman G. Moral reasoning. In: Timmons M, editor. Oxford studies in normative ethics. Oxford: Oxford University Press; 2011.
135. Tindale CW. Fallacies and argument appraisal. New York: Cambridge University Press; 2007.
136. Chervenak FA, McCullough LB. A case study in junk bioethics run amok. Am J Bioeth. 2011;11(12):59–61.
137. De Vries R, Gordijn B. Empirical ethics and its alleged meta-ethical fallacies. Bioethics. 2009;23(4):193–201.
138. Salloch S, Vollmann J, Schildmann J. Ethics by opinion poll? The functions of attitudes research for normative deliberations in medical ethics. J Med Ethics. 2014;40(9):597–602.
139. Wangmo T, Hauri S, Gennet E, Anane-Sarpong E, Provoost V, Elger BS. An update on the “empirical turn” in bioethics: analysis of empirical research in nine bioethics journals. BMC Med Ethics. 2018;19(1):1–9.
140. Garrard E, Wilkinson S. Mind the gap: the use of empirical evidence in bioethics. In: Bioethics and social reality. Brill; 2005. p. 77–91.
141. Weisberg SM, Badgio D, Chatterjee A. A CRISPR new world: attitudes in the public toward innovations in human genetic modification. Front Public Health. 2017;5:117.
142. Nelson RH, Moore B, Lynch HF, Waggoner MR, Blumenthal-Barby J. Bioethics and the moral authority of experience. Am J Bioeth. 2023;23(1):12–24.
143. Fricker M. Epistemic injustice: power and the ethics of knowing. Oxford University Press; 2007.
144. Nuffield Council on Bioethics. Ideas about naturalness in public and political debates about science, technology and medicine. London: Nuffield Council on Bioethics; 2015.
145. Kwiatkowska M. The beauty of vagueness. In: Seising R, Trillas E, Moraga C, Termini S, editors. On fuzziness: a homage to Lotfi A Zadeh. Berlin, Heidelberg: Springer; 2013. p. 349–52.
146. Van Deemter K. Not exactly: in praise of vagueness. Oxford: Oxford University Press; 2010.
147. Hofmann BM. Vagueness in medicine: on disciplinary indistinctness, fuzzy phenomena, vague concepts, uncertain knowledge, and fact-value-interaction. Axiomathes. 2021. Online ahead of print. p. 1–18.
148. Hauskeller M. Moral disgust. Ethical Perspect. 2006;13(4):571–602.
149. Hofmann B. Fallacies in the arguments for new technology: the case of proton therapy. J Med Ethics. 2009;35(11):684–7.
150. Hofmann B, Magelssen M. In pursuit of goodness in bioethics: analysis of an exemplary article. BMC Med Ethics. 2018;19(1):60.
151. Oreg S, Bayazit M. Prone to bias: development of a bias taxonomy from an individual differences perspective. Rev Gen Psychol. 2009;13(3):175–93.
152. Hedgecoe AM. Critical bioethics: beyond the social science critique of applied ethics. Bioethics. 2004;18(2):120–43.
153. Chambers T. The fiction of bioethics. Routledge; 2015.
154. Dennett DC. Intuition pumps and other tools for thinking. WW Norton & Company; 2013.
155. Bowman D. What is it to do good medical ethics? Minding the gap(s). J Med Ethics. 2015;41(1):60–3.
156. Callahan D. What is it to do good ethics? J Med Ethics. 2015;41(1):68–70.
157. Caplan A. Done good. J Med Ethics. 2015;41(1):25–7.
158. Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics. 2015;41(1):111–6.
159. Harris J. What is it to do good medical ethics? J Med Ethics. 2015;41(1):37–9.
160. Kong WM. What is good medical ethics? A clinician’s perspective. J Med Ethics. 2015;41(1):79–82.
161. Oakley J. Good medical ethics, from the inside out—and back again. J Med Ethics. 2015;41(1):48–51.
162. Rhodes R. Good and not so good medical ethics. J Med Ethics. 2015;41(1):71–4.
163. Serour GI. What is it to practise good medical ethics? A Muslim’s perspective. J Med Ethics. 2015;41(1):121–4.
164. Cantarelli P, Belle N, Belardinelli P. Behavioral public HR: experimental evidence on cognitive biases and debiasing interventions. Rev Public Pers Adm. 2018;40:56.
165. Arkes HR. Costs and benefits of judgment errors: implications for debiasing. Psychol Bull. 1991;110(3):486.
166. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf. 2013. https://doi.org/10.1136/bmjqs-2012-001712.
167. Wilson TD, Brekke N. Mental contamination and mental correction: unwanted influences on judgments and evaluations. Psychol Bull. 1994;116(1):117.
168. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual Saf. 2013;22(Suppl 2):ii65–72.
169. Herman M. Towards enhancing moral agency through subjective moral debiasing. APA Eastern; 2020.
170. Nadurak V. Why moral heuristics can lead to mistaken moral judgments. Kriter J Philos. 2020;34(1):99–113.
171. Turner L. Does bioethics exist? J Med Ethics. 2009;35(12):778–80.
172. Metselaar S, Meynen G, Widdershoven G. Reconsidering bias: a hermeneutic perspective. Am J Bioeth. 2016;16(5):33–5.
173. Gigerenzer G. Why heuristics work. Perspect Psychol Sci. 2008;3(1):20–9.
174. House E, Howe KR. Values in evaluation and social research. Sage Publications; 1999.
175. Ursin L. Withholding and withdrawing life-sustaining treatment: ethically equivalent? Am J Bioeth. 2019;19(3):10–20.
176. Baron J, Ritov I. Omission bias, individual differences, and normality. Organ Behav Hum Decis Process. 2004;94(2):74–85.
177. Gillett G. Reasoning in bioethics. Bioethics. 2003;17(3):243–60.
178. Nguyen CT. Cognitive islands and runaway echo chambers: problems for epistemic dependence on experts. Synthese. 2020;197(7):2803–21.
179. Contessa G. Shopping for experts. Synthese. 2022;200(3):1–21.
180. Kekes J. Moral sensitivity. Philosophy. 1984;59(227):3–19.
181. Martinez W, Bell SK, Etchegaray JM, Lehmann LS. Measuring moral courage for interns and residents: scale development and initial psychometrics. Acad Med. 2016;91(10):1431–8.
182. Erel M, Marcus E-L, Dekeyser-Ganz F. Practitioner bias as an explanation for low rates of palliative care among patients with advanced dementia. Health Care Anal. 2022;30(1):57–72.
183. Singer P. Ethics and intuitions. J Ethics. 2005;9(3):331–52.
184. O’Connor C, Weatherall JO. The misinformation age: how false beliefs spread. New Haven: Yale University Press; 2019.
185. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
186. de Quintana MJ. Acceptability of nudges as public policy tools: a theoretical and empirical analysis. Universitat Autònoma de Barcelona; 2021.
187. Albisser Schleger H, Oehninger NR, Reiter-Theil S. Avoiding bias in medical ethical decision-making: lessons to be learnt from psychology research. Med Health Care Philos. 2011;14(2):155–62.
188. Rothstein MA, Shoben AB. Does consent bias research? Am J Bioeth. 2013;13(4):27–37.
189. Gigerenzer G. Moral intuition = fast and frugal heuristics? In: Moral psychology. MIT Press; 2008. p. 1–26.
190. Woodward J, Allman J. Moral intuition: its neural substrates and normative significance. J Physiol Paris. 2007;101(4–6):179–202.
191. Kirby JD. Moral bias and social change. Am J Econ Sociol. 1957;16(2):195–207.
192. Brownstein M, Saul JM. Implicit bias and philosophy: moral responsibility, structural injustice, and ethics. Oxford University Press; 2016.
193. Wieringa S, Engebretsen E, Heggen K, Greenhalgh T. Rethinking bias and truth in evidence-based health care. J Eval Clin Pract. 2018;24:930.
194. Mertz M. How to tackle the conundrum of quality appraisal in systematic reviews of normative literature/information? Analysing the problems of three possible strategies (translation of a German paper). BMC Med Ethics. 2019;20(1):81.
195. Eriksen C. Moral distortion. In: Moral change. Springer; 2020. p. 109–21.
196. Toth-Fejel T, Dodsworth C, Lahl J. Syntactic measures of bias (and a perspective on the essential issue of bioethics). Am J Bioeth. 2007;7(10):40–2.

Acknowledgements

I am most thankful for wise comments and constructive suggestions from three excellent reviewers and the Editor. The manuscript was submitted on 4 August 2021.

Funding

Open access funding provided by Norwegian University of Science and Technology. No funding bodies had any role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Contributions

I am the sole author of this article. The author read and approved the final manuscript.

Corresponding author

Correspondence to Bjørn Hofmann.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

I certify that there is no actual or potential conflict of interest in relation to this manuscript, and that there are no financial arrangements with any companies or organizations with respect to its content.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Hofmann, B. Biases in bioethics: a narrative review. BMC Med Ethics 24, 17 (2023). https://doi.org/10.1186/s12910-023-00894-0

Keywords