A fallacy is an argument that uses poor reasoning. An argument can be fallacious whether or not its conclusion is true. A fallacy can be either formal or informal. An error that stems from a poor logical form is sometimes called a formal fallacy or simply an invalid argument. An informal fallacy is an error in reasoning that does not originate in improper logical form. Arguments committing informal fallacies may be formally valid, but still fallacious.
Fallacies of presumption fail to prove the conclusion by assuming the conclusion in the proof. Fallacies of weak inference fail to prove the conclusion due to insufficient evidence. Fallacies of distraction fail to prove the conclusion due to irrelevant evidence, like emotion. Fallacies of ambiguity fail to prove the conclusion due to vagueness in words, phrases, or grammar.
Some fallacies are committed intentionally (to manipulate or persuade by deception), others unintentionally due to carelessness or ignorance.
A formal fallacy is a pattern of reasoning that is always wrong. This is due to a flaw in the logical structure of the argument which renders the argument invalid.
The presence of a formal fallacy in a deductive argument does not imply anything about the argument's premises or its conclusion. Both may actually be true, or may even be more probable as a result of the argument, but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described. By extension, an argument can contain a formal fallacy even if the argument is not a deductive one: for instance an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy.
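Because a formal fallacy is a defect of structure rather than content, its invalidity can be checked mechanically. As a minimal illustrative sketch (not drawn from any particular text), the following Python snippet enumerates the truth table for "affirming the consequent" (premises P → Q and Q, conclusion P) and searches for a row where both premises are true but the conclusion is false:

```python
from itertools import product

def implies(a, b):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not a) or b

# Affirming the consequent: premises (P -> Q) and Q; conclusion P.
# The form is invalid if some assignment makes all premises true
# while the conclusion is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p  # premises true, conclusion false
]

print(counterexamples)  # [(False, True)] -> the form is invalid
```

The single counterexample row (P false, Q true) shows why the pattern fails regardless of subject matter: the premises can both hold while the conclusion does not.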
Aristotle was the first to systematize logical errors into a list. Aristotle's "Sophistical Refutations" (De Sophisticis Elenchis) identifies thirteen fallacies. He divided them into two major types: those depending on language and those not depending on language. These are called verbal fallacies and material fallacies, respectively. A material fallacy is an error in what the arguer is talking about, while a verbal fallacy is an error in how the arguer is talking. Verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words.
Whately's grouping of fallacies
Richard Whately divided fallacies into two groups: logical and material. According to Whately, logical fallacies are arguments where the conclusion does not follow from the premises. Material fallacies are not logical errors because the conclusion does follow from the premises. He then divided the logical group into two subgroups: purely logical and semi-logical. The semi-logical group included all of Aristotle's sophisms except ignoratio elenchi, petitio principii, and non causa pro causa, which he placed in the material group.
Sometimes a speaker or writer uses a fallacy intentionally. In any context, including academic debate, a conversation among friends, political discourse, or advertising, the arguer may use fallacious reasoning to try to persuade the listener or reader, by means other than offering relevant evidence, that the conclusion is true.
In humor, errors of reasoning are used for comical purposes. Groucho Marx used fallacies of amphiboly, for instance, to make ironic statements; Gary Larson employs fallacious reasoning in many of his cartoons. Wes Boyer and Samuel Stoddard have written a humorous essay teaching students how to be persuasive by means of a whole host of informal and formal fallacies.
In philosophy, the term logical fallacy properly refers to a formal fallacy: a flaw in the structure of a deductive argument which renders the argument invalid. Strictly speaking, the phrase is almost a contradiction in terms, since logic is the use of valid reasoning while a fallacy is an argument that uses poor reasoning. However, the same term is used in informal discourse to mean an argument that is problematic for any reason. A logical form such as "A and B" is independent of any particular conjunction of meaningful propositions. Logical form alone can guarantee that, given true premises, a true conclusion must follow. However, formal logic makes no such guarantee if any premise is false; the conclusion can then be either true or false. Any formal error or logical fallacy similarly invalidates the deductive guarantee. In short, all bets are off unless the argument is formally flawless and all premises are true.
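The guarantee that valid logical form provides can be illustrated the same way as invalidity: a valid form has no truth-table row where all premises hold but the conclusion fails. As an illustrative sketch, this checks modus ponens (premises P → Q and P, conclusion Q):

```python
from itertools import product

def implies(a, b):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not a) or b

# Modus ponens: premises (P -> Q) and P; conclusion Q.
# A valid form admits no assignment where the premises are true
# and the conclusion is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q  # premises true, conclusion false
]

print(counterexamples)  # [] -> the form is valid
```

An empty result confirms the deductive guarantee: whenever both premises are true, the conclusion must be true as well. Note that validity says nothing about rows where a premise is false, which is the "no such guarantee" point above.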
Paul Meehl's fallacies
In "Why I Do Not Attend Case Conferences" (1973), psychologist Paul Meehl discusses several fallacies that can arise in medical case conferences that are primarily held to diagnose patients. These fallacies can also be considered more general errors of thinking that all individuals (not just psychologists) are prone to making.
Barnum effect: Making a statement that is trivial, and true of everyone, or, in the case of medical conferences, of all patients, and is thus useless for discussion. Everyone will agree on it, but it will not provide any incremental help in diagnosis.
Sick-sick fallacy ("pathological set"): The tendency to have our own stereotypes of what is "healthy", based on our own experiences and ways of being, and to identify others who are different from ourselves as "sick". Meehl emphasizes that though psychologists claim to know about this tendency, most are not very good at correcting it in their own thinking.
"me too" fallacy: The opposite of sick-sick, thinking that "anybody would do this". Minimizing a symptom without considering the objective probability that a mentally healthy person would experience it. Is this really a "normal" characteristic?
Uncle George's pancake fallacy: A variation of "me too", this refers to minimizing a symptom by calling to mind a friend/relative who exhibited a similar symptom, thereby implying that it is normal and common. Meehl points out that the proper conclusion in this comparison is not that the patient is healthy by comparison, but that your friend/relative is unhealthy by comparison.
Multiple Napoleons fallacy: "It's not real to us, but it's 'real' to him." A theoretical turn that Meehl sees as a waste of time. There is a distinction between reality and delusion, and it is important to make this distinction when assessing the patient. Pondering the patient's reality can be misleading and can distract from the importance of their delusion in making a diagnostic decision.
Hidden decisions: Meehl identifies the decisions we make about patients that we do not explicitly own up to and rarely challenge: for example, placing middle- and upper-class patients in long-term therapy while lower-class patients are more likely to be medicated. This is related to the implicit ideal patient—young, attractive, verbal, intelligent, and successful (termed YAVIS)—whom we would much rather have in psychotherapy, in part because they can pay for it long term and in part because they are potentially more enjoyable to interact with.
The spun-glass theory of the mind: The belief that the human organism is so fragile that minor negative events, such as criticism, rejection, or failure, are bound to cause major trauma to the system. In essence, it fails to credit humans, and sometimes patients, with resilience and the ability to recover.
Fallacies of measurement
Increasing availability and circulation of big data are driving a proliferation of new metrics for scholarly authority, and there is lively discussion regarding the relative usefulness of such metrics for measuring the value of knowledge production in the context of an "information tsunami." Where mathematical fallacies pinpoint subtle mistakes in reasoning that lead to invalid mathematical proofs, measurement fallacies isolate unwarranted inferential leaps involved in extrapolating from raw data to a measurement-based value claim. The ancient Greek Sophist Protagoras, with his "human-measure" principle and the practice of dissoi logoi (arguing multiple sides of an issue), was among the first thinkers to propose that humans can generate reliable measurements. This history helps explain why measurement fallacies are informed by informal logic and argumentation theory.
Anchoring fallacy: Anchoring is a cognitive bias, first theorized by Amos Tversky and Daniel Kahneman, that "describes the common human tendency to rely too heavily on the first piece of information offered (the 'anchor') when making decisions." In measurement arguments, anchoring fallacies can occur when unwarranted weight is given to data generated by metrics that the arguers themselves acknowledge are flawed. For example, limitations of the Journal Impact Factor (JIF) are well documented, and even JIF pioneer Eugene Garfield notes, "while citation data create new tools for analyses of research performance, it should be stressed that they supplement rather than replace other quantitative and qualitative indicators." To the extent that arguers jettison acknowledged limitations of JIF-generated data in evaluative judgments, or leave behind Garfield's "supplement rather than replace" caveat, they court commission of anchoring fallacies.
Naturalistic fallacy: In the context of measurement, a naturalistic fallacy can occur in a reasoning chain that makes an unwarranted extrapolation from "is" to "ought," as in the case of sheer quantity metrics based on the premise "more is better" or, in the case of developmental assessment in the field of psychology, "higher is better."
False analogy: In the context of measurement, this error in reasoning occurs when claims are supported by unsound comparisons between data points, hence the false analogy's informal nickname of the "apples and oranges" fallacy. For example, the Scopus and Web of Science bibliographic databases have difficulty distinguishing between citations of scholarly work that are arms-length endorsements, ceremonial citations, or negative citations (indicating the citing author withholds endorsement of the cited work). Hence, measurement-based value claims premised on the uniform quality of all citations may be questioned on false analogy grounds.
Argumentum ex silentio: An argument from silence features an unwarranted conclusion advanced on the basis of absent data. For example, Academic Analytics' Faculty Scholarly Productivity Index purports to measure overall faculty productivity, yet the tool does not capture data based on citations in books. This creates a possibility that low productivity measurements using the tool may constitute argumentum ex silentio fallacies, to the extent that such measurements are supported by the absence of book citation data.
Ecological fallacy: An ecological fallacy is committed when one draws an inference from data based on the premise that qualities observed for groups necessarily hold for individuals; for example, "if countries with more Protestants tend to have higher suicide rates, then Protestants must be more likely to commit suicide." In metrical argumentation, ecological fallacies can be committed when one measures the scholarly productivity of a sub-group of individuals (e.g. "Puerto Rican" faculty) via reference to aggregate data about a larger and different group (e.g. "Hispanic" faculty).
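The arithmetic behind the ecological fallacy is easy to see with invented numbers. In the sketch below, all figures are hypothetical (publications per faculty member in two made-up sub-groups of one aggregate); the point is only that the aggregate mean can badly misrepresent a sub-group:

```python
# Hypothetical, illustrative data only: publications per faculty member
# in two sub-groups that together form one aggregate group.
subgroup_a = [1, 1, 2]        # a small sub-group with low counts
subgroup_b = [8, 9, 9, 10]    # a larger sub-group with high counts

aggregate = subgroup_a + subgroup_b

agg_mean = sum(aggregate) / len(aggregate)   # mean over the whole aggregate
a_mean = sum(subgroup_a) / len(subgroup_a)   # mean over sub-group A alone

print(round(agg_mean, 2))  # 5.71 -> what aggregate data report
print(round(a_mean, 2))    # 1.33 -> sub-group A's actual average
```

Inferring sub-group A's productivity from the aggregate figure would overstate it more than fourfold, which is exactly the inferential leap the ecological fallacy names.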
Other systems of classification
Of other classifications of fallacies in general, the most famous are those of Francis Bacon and J. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i. J. S. Mill discussed the subject in book v. of his Logic, and Jeremy Bentham's Book of Fallacies (1824) contains valuable remarks. See Richard Whately's Logic, bk. v.; A. de Morgan, Formal Logic (1847); A. Sidgwick, Fallacies (1883); and other textbooks.