  1. #1
    Registered User

    Not so fallacious theory?

    This thread argues that some common logical fallacies can actually be seen as valid arguments if certain hidden premises (enthymemes) are clearly stated. I will be using the Nizkor Project to define the fallacies and give examples of them.

    Ad Hominem

    This is a category of fallacies in which an attack on the qualities of a proposer is used to support a counter-argument against the proposer's claim(s). This is the form of the argument:

    1. Person A makes claim X.
    2. Person B makes an attack on person A.
    3. Therefore A's claim is false.

    The nugget of truth

    This fallacy can be made into a valid argument. First I'll write out the argument in quantifier logic, then I'll explain it.

    Domain: Unrestricted.
    Let Cxyz = x makes claim y in circumstances z.
    Let Px = x has property P.
    Let Wx = x is likely to be wrong.

    1. Cabc P
    2. Pa P
    3. ∀xy[(Px & Cxyc) → Wy] P
    4. ∀y[(Pa & Cayc) → Wy] 3 UI
    5. (Pa & Cabc) → Wb 4 UI
    6. Pa & Cabc 2,1 Conj
    7. Wb 5,6 MP

    I started with 3 premises:
    1. A (the proposer) makes claim B in circumstances C.
    This is clear.
    2. A (the proposer) has property P.
    This property could be stupidity, a negative characteristic such as being an impulsive liar, or having a certain profession (e.g. being a priest).
    3. If anyone with property P makes any claim in circumstances C, then it is likely that their claim is wrong.
    Time for an example. If P is the property of being a Scientologist, and the circumstances are a debate over whether Scientology is correct, then we might all agree that the claims the Scientologist makes are likely to be wrong. I could refine this premise further by specifying a type of claim (e.g. one specifically to do with Scientology), but for now I think this should suffice.

    From these premises, it logically follows that the claim B is likely to be wrong. Note that this does not mean that B is definitely wrong. However, most people agree that if you can show that it is likely (or maybe very likely) that a claim is wrong, then you are justified in assuming that it is wrong. So I have shown how the ad hom can in fact be turned into a valid argument.
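
    For anyone who wants to check this mechanically, here is a minimal sketch of the same derivation in the Lean 4 proof assistant. The type names (Person, Claim, Circ) and the hypothesis names are my own labels, not part of the original argument; Lean's ∧ and → play the roles of & and the conditional.

    -- A minimal Lean 4 sketch of the derivation above (steps 4-7 collapse into one term).
    -- Person, Claim, Circ and the hypothesis names h1-h3 are illustrative labels only.
    example {Person Claim Circ : Type}
        (C : Person → Claim → Circ → Prop)      -- Cxyz: x makes claim y in circumstances z
        (P : Person → Prop)                     -- Px:   x has property P
        (W : Claim → Prop)                      -- Wx:   x is likely to be wrong
        (a : Person) (b : Claim) (c : Circ)
        (h1 : C a b c)                          -- premise 1: Cabc
        (h2 : P a)                              -- premise 2: Pa
        (h3 : ∀ (x : Person) (y : Claim), (P x ∧ C x y c) → W y)  -- premise 3
        : W b :=
      h3 a b ⟨h2, h1⟩                           -- instantiate x := a, y := b, then modus ponens

    If Lean accepts this term, the argument form is valid; whether the premises are actually true is the separate question of soundness.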

    Slippery Slope

    The slippery slope argument boils down to saying "A happened, therefore B must happen!" or "If A happens, then B must surely happen!" without justification. This fallacy is quite vague, for it is unclear exactly what constitutes a valid justification for "If A then B". This is the form of the argument:

    1. Event X has occurred (or will or might occur).
    2. Therefore event Y will inevitably happen.

    The nugget of truth

    Again, I'll start in quantifier logic, then move to an explanation with examples.

    Domain: Actions undertaken by humans.
    Let Ix = humans legally do x.
    Let Sxy = x is similar to y.
    Let Ax = x becomes more acceptable.
    Let Lx = x is more likely to occur.

    1. Ia P
    2. Sba P
    3. ∀xy[(Ix & Syx) → Ay] P
    4. ∀x[Ax → Lx] P
    5. ∀y[(Ia & Sya) → Ay] 3 UI
    6. (Ia & Sba) → Ab 5 UI
    7. Ia & Sba 1,2 Conj
    8. Ab 6,7 MP
    9. Ab → Lb 4 UI
    10. Lb 8,9 MP

    I started with 4 premises:
    1. Humans legally do a.
    "a" could be any action humans legally undertake. For instance, murder of combatants. I could refine this premise by saying humans of certain country or culture.
    2. b is similar to a.
    "b" stands for another action similar to a. Of course, it would be hard to prove that a and b are similar. Let us say, for instance, murder of non-combatants.
    3. If humans legally do any action, and there is another action similar to the legal one, then the other action becomes more acceptable.
    Again, this is a hard premise to prove (but I'm trying to show validity, not soundness). For example, if humans legally murder combatants, and if murder of non-combatants is similar to murder of combatants, then murder of non-combatants becomes more acceptable.
    4. If an action becomes more acceptable, then it becomes more likely to occur.
    For instance, if murder of non-combatants becomes more acceptable, then it is more likely to occur.

    From these premises I argue validly that action b becomes more likely to occur. This conclusion is only useful if action b was already somewhat likely to occur. And this conclusion is not the same as saying action b will definitely occur.
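
    As before, here is a minimal Lean 4 sketch of the derivation, under the assumption that a single type Action models the restricted domain; all names are illustrative labels only.

    -- A minimal Lean 4 sketch of the slippery slope derivation above.
    example {Action : Type}
        (I : Action → Prop)                     -- Ix:  humans legally do x
        (S : Action → Action → Prop)            -- Sxy: x is similar to y
        (A : Action → Prop)                     -- Ax:  x becomes more acceptable
        (L : Action → Prop)                     -- Lx:  x is more likely to occur
        (a b : Action)
        (h1 : I a)                              -- premise 1
        (h2 : S b a)                            -- premise 2
        (h3 : ∀ (x y : Action), (I x ∧ S y x) → A y)  -- premise 3
        (h4 : ∀ (x : Action), A x → L x)        -- premise 4
        : L b :=
      h4 b (h3 a b ⟨h1, h2⟩)                    -- steps 5-10 in one term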

    My position and conclusion for now

    I will attempt similar treatments of other logical "fallacies" later. I hope to stimulate discussion about the branding of arguments as "fallacious". I take the position that - because of the above - we cannot blindly brand an argument with a logical fallacy and leave it there. In fact, I question the use of the labels for the various logical fallacies. I encourage the use of more formal logic for both the presentation and criticism of arguments. Finally, I hold that instead of branding arguments with labels, critics should encourage the proposers of arguments to be clearer about their enthymemes.

  2. #2
    Registered User

    Re: Not so fallacious theory?

    Appeal to Authority

    The appeal to authority is widely recognised as a vague fallacy, because there are lots of cases in which it clearly isn't fallacious. The difficulty with this fallacy is deciding when someone is likely to be correct, and when they are not. Due to this, there are many differing conditions proposed for when it is valid to argue from authority, and if these conditions are not met the argument is normally considered fallacious. Before I get into those conditions, I will start by presenting the valid argument in quantifier logic, then explain which premises should be questioned. In this fallacy there isn't a "nugget of truth" because the whole thing is so vague.

    Domain: Unrestricted.
    Let Lx = x is more likely to be true.
    Let Ax = x has property A.
    Let Px = x is a person.
    Let Cxy = x makes claim y.
    Let FAx = x is in A's field of expertise.

    1. ∀xy[((Px & Ax) & Cxy) & FAy → Ly] P
    2. Pa & Aa P
    3. Cab P
    4. FAb P
    5. ∀y[((Pa & Aa) & Cay) & FAy → Ly] 1 UI
    6. ((Pa & Aa) & Cab) & FAb → Lb 5 UI
    7. (Pa & Aa) & Cab 2,3 Conj
    8. ((Pa & Aa) & Cab) & FAb 4,7 Conj
    9. Lb 6,8 MP

    The four premises:

    1. For every person who has property A and makes a claim y where y lies in A's field of expertise, that claim y is more likely to be true.
    2. Person a has property A.
    3. Person a makes claim b.
    4. b lies within A's field of expertise.

    From these premises it logically follows that claim b is more likely to be true. This argument could be repeated with the same person a and the same claim b, but with a different property A throughout. For instance, say that Nigel (person a) claims that e^(i*pi) = -1 (claim b). We could say that for every person who has a degree in maths and makes a claim y lying within a maths graduate's field of expertise, that claim y is more likely to be true. So if Nigel is a maths graduate and his claim lies within a maths graduate's expertise, we can judge his claim as more likely to be true. We could also say the same for every person who has read G.H.Hardy's book on pure mathematics and makes a claim within the scope of that book, and so on. By repeating the argument for different properties, we repeatedly arrive at the conclusion that the claim is more likely to be true. Taking all this together, we might say that the more times we can conclude that the claim is likely to be true (with different properties), the more likely the claim is to be true.
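
    As a sketch, the derivation can also be written in Lean 4; the single type Obj stands in for the unrestricted domain, and all names are my own labels.

    -- A minimal Lean 4 sketch of the appeal-to-authority derivation above.
    example {Obj : Type}
        (L : Obj → Prop)                        -- Lx:  x is more likely to be true
        (A : Obj → Prop)                        -- Ax:  x has property A
        (P : Obj → Prop)                        -- Px:  x is a person
        (C : Obj → Obj → Prop)                  -- Cxy: x makes claim y
        (FA : Obj → Prop)                       -- FAx: x is in A's field of expertise
        (a b : Obj)
        (h1 : ∀ (x y : Obj), (((P x ∧ A x) ∧ C x y) ∧ FA y) → L y)  -- premise 1
        (h2 : P a ∧ A a)                        -- premise 2
        (h3 : C a b)                            -- premise 3
        (h4 : FA b)                             -- premise 4
        : L b :=
      h1 a b ⟨⟨h2, h3⟩, h4⟩                     -- steps 5-9 in one term

    Repeating the argument with a different property simply amounts to running this sketch again with a different choice of A.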

    We do this regularly in day-to-day conversation; if our friend tells us a crazy fact, we judge them on various properties: do they lie often, are they an expert in that field of science, have they read a book on it, etc. Weighing up all of this evidence allows us to arrive at a conclusion, where we are more or less likely to put our trust in them. However, this process of weighing up is all very subjective, and thus different people may arrive at different conclusions given the same evidence.

    In my logical argument, premises 1 and 4 should be questioned closely as to the meaning of "field of expertise" and "more likely". The Nizkor Project provides six conditions for judging when an argument from authority is acceptable. Please check out these interesting conditions.

    Conclusion

    The appeal to authority fallacy is difficult to judge because it is judged subjectively. Although there are some attempts at objective measures, ultimately it is up to the individual to decide whether to accept an argument from authority or call it fallacious. In the end, I believe it is always best to provide the reasoning yourself where possible, rather than quote an authoritative source. For instance, in my maths example it is much better to prove that e^(i*pi) = -1 than to quote someone who says so. Proving it yourself avoids problems with appeals to authority and makes the argument your own rather than someone else's. And finally, providing your own reasoning is (in the end) more fun.

  3. #3
    Registered User

    Re: Not so fallacious theory?

    No True Scotsman Fallacy

    This fallacy has already been discussed somewhat here at ODN, but I wanted to show how it can actually contain some truth, so I am doing so here. For more background information on the fallacy, please see the Rational Wiki.

    The fallacy is well presented by @WhoamI in the ODN thread:

    -Apok replies to threads on Monday
    -No true debater replies to threads on Monday
    Therefore:
    -Apok is not a debater
    Therefore:
    -Apok is not a counter-example to the claim that no debater replies to threads on Mondays

    This is clearly fallacious reasoning. Or is it?

    Miraculous proof?

    Domain: People.
    Let Cx = x is/was a Christian
    Let Sx = x is/was a slave-owner
    Let a = Saint Augustine

    1. ∀x[Cx → ~Sx] P
    2. Sa P
    3. Ca → ~Sa 1 UI
    4. ~~Sa 2 DN
    5. ~Ca 3,4 MT

    The two premises:
    1. If anyone is/was a Christian, then they are/were not a slave-owner.
    2. Saint Augustine was a slave-owner.
    With these two premises we logically arrive at the conclusion that Saint Augustine was not a Christian. And this is a valid argument! What's going on?
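
    Before answering that question, it may help to see that the derivation really is formally valid. Here is a minimal Lean 4 sketch; Chr, S and the hypothesis names are my own labels.

    -- A minimal Lean 4 sketch of the two-premise argument above (names illustrative only).
    example {Person : Type}
        (Chr : Person → Prop)                   -- Cx: x is/was a Christian
        (S : Person → Prop)                     -- Sx: x is/was a slave-owner
        (a : Person)                            -- a:  Saint Augustine
        (h1 : ∀ (x : Person), Chr x → ¬ S x)    -- premise 1
        (h2 : S a)                              -- premise 2
        : ¬ Chr a :=
      fun hC => h1 a hC h2                      -- assuming Ca leads to ~Sa, contradicting Sa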

    Inconsistency not proof

    The fallacy occurs because debaters attempt a proof. A proof is not the correct tool in testing the rule. An inconsistency test is required. The two premises we want to test are the general rule (If anyone is/was Christian, then they are/were not a slave-owner) and the supposed counter-example (Saint Augustine was a slave owner). However, we also know that Saint Augustine was a Christian. So our truth tree looks like this (I put the numbers in to show the order in which I did things):

    ∀x[Cx → ~Sx]   ✓ 4
    Sa & Ca        ✓ 1
    Sa             ○ 2
    Ca             ○ 3
    Ca → ~Sa       ✓ 5

    Our tree then splits into two branches:

    Branch 1:
    ~Ca   ○ 6
    ✗ 7

    or

    Branch 2:
    ~Sa   ○ 8
    ✗ 9

    Both branches end in contradiction, so our two original premises are inconsistent. So it is a fallacy after all!
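
    The closed tree corresponds to a short refutation: from the general rule together with both facts about Saint Augustine we can derive a contradiction. Here is a minimal Lean 4 sketch, with illustrative names only.

    -- A minimal Lean 4 sketch of the refutation the closed tree represents.
    example {Person : Type}
        (Chr S : Person → Prop) (a : Person)
        (h1 : ∀ (x : Person), Chr x → ¬ S x)    -- the general rule
        (h2 : S a ∧ Chr a)                      -- Augustine was a slave-owner and a Christian
        : False :=
      h1 a h2.right h2.left                     -- Ca gives ~Sa, which contradicts Sa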

    The nugget of truth

    What if we use the "true" modifier, though? Let's say that there are Christians, but that there are also "true Christians", who form a subset of Christians (this is the third premise). So now we're saying that all true Christians aren't slave-owners. Is this inconsistent with Saint Augustine being a Christian and a slave-owner?

    Domain: People.
    Let Cx = x is/was a Christian
    Let Sx = x is/was a slave-owner
    Let Tx = x is/was a true Christian
    Let a = Saint Augustine

    ∀x[Tx → ~Sx]   ✓ 4
    Sa & Ca        ✓ 1
    ∀x[Tx → Cx]    ✓ 5
    Sa             ○ 2
    Ca             ○ 3
    Ta → ~Sa       ✓ 6
    Ta → Ca        ✓ 10

    Now we're branching again:

    Branch 1:
    ~Sa   ○ 7
    ✗ 8

    or

    Branch 2:
    ~Ta   ○ 9

    This branch is still open, so we continue along it and branch again:

    Branch 2i:
    Ca    ○ 11

    Branch 2ii:
    ~Ta   ○ 12

    Both these branches are still open, and we've circled, ticked or crossed everything, so we've found an interpretation under which all the premises are true. This means that the premises are consistent when Saint Augustine is a Christian and a slave-owner but not a true Christian. So we've defeated the No True Scotsman fallacy! Or have we?
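
    To make the open branches concrete, here is a sketch in Lean 4 of one interpretation (a one-element domain of my own choosing) in which all three premises hold while Saint Augustine is a Christian slave-owner but not a true Christian.

    -- A one-element interpretation (domain = Unit, its single element playing Augustine)
    -- under which all three premises hold. Names are illustrative only.
    example : ∃ Chr S T : Unit → Prop,
        (∀ x, T x → ¬ S x) ∧                    -- all true Christians aren't slave-owners
        (∀ x, T x → Chr x) ∧                    -- true Christians are a subset of Christians
        (S () ∧ Chr ()) :=                      -- Augustine is a slave-owner and a Christian
      ⟨fun _ => True, fun _ => True, fun _ => False,
        fun _ h => False.elim h,                -- vacuously true: nobody is a true Christian
        fun _ h => False.elim h,                -- vacuously true for the same reason
        True.intro, True.intro⟩                 -- Augustine is a slave-owner and a Christian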

    Definitions

    We haven't really defeated the fallacy, because at the moment we are defining true Christians simply as the subset of Christians who aren't slave-owners. This is a circular and unlikely definition. What we really need to do is find a proper definition of true Christians, and prove that all true Christians aren't slave-owners. Let's give that a go.

    Let's start by defining a true Christian as someone who reads the Bible correctly. First, I need to introduce the terms again:

    Domain: People.
    Let Cx = x is/was a Christian
    Let Sx = x is/was a slave-owner
    Let Tx = x is/was a true Christian
    Let Rx = x reads the Bible correctly
    Let a = Saint Augustine

    The definition takes the form of a premise:

    ∀x[Tx ⇔ Rx]. That is, all who read the Bible correctly are true Christians, and all who are true Christians read the Bible correctly.

    Now, to prove that all true Christians aren't slave owners, we need the premise that all people who read the Bible correctly aren't slave owners. So here's our proof:

    1. ∀x[Rx → ~Sx] P
    2. ∀x[Tx ⇔ Rx] P
    3. Rx → ~Sx 1 UI
    4. Tx ⇔ Rx 2 UI
    5. (Tx → Rx) & (Rx → Tx) 4 Equiv
    6. Tx → Rx 5 Simp
    7. Tx → ~Sx 3,6 HS
    8. ∀x[Tx → ~Sx] 7 UG

    So we proved it, which means (as before) that the statement "true Christians aren't slave-owners" is consistent with Saint Augustine being a slave-owner. However, it requires that Saint Augustine didn't read the Bible correctly.
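
    The same proof can be written as a Lean 4 sketch; Lean's ↔ plays the role of the ⇔ used above, and all names are my own labels.

    -- A minimal Lean 4 sketch of the proof above (names illustrative only).
    example {Person : Type}
        (R : Person → Prop)                     -- Rx: x reads the Bible correctly
        (S : Person → Prop)                     -- Sx: x is/was a slave-owner
        (T : Person → Prop)                     -- Tx: x is/was a true Christian
        (h1 : ∀ (x : Person), R x → ¬ S x)      -- premise 1
        (h2 : ∀ (x : Person), T x ↔ R x)        -- premise 2: the definition
        : ∀ (x : Person), T x → ¬ S x :=
      fun x hT => h1 x ((h2 x).mp hT)           -- unfold the definition, then chain (steps 3-8)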

    Conclusion

    So in general, the No True Scotsman fallacy isn't a fallacy when we can define "true X" precisely enough to prove that "all true X aren't Y". However, coming up with a sufficient definition has its own difficulties. In our example, we would find it hard to defend the premise that all people who read the Bible correctly aren't slave-owners. So although the No True Scotsman fallacy sometimes applies, we have seen how the underlying argument can be made valid.

  4. #4
    Walsingham (Guest)

    Re: Not so fallacious theory?

    Just read your ad hominem. Great explanation, I'm just starting up with logic. Just thinking though, ad hominem can still be proven an absolute fallacy if one of your premises is that the characteristic judged in the attack is unrelated to the question (I.e. You're a vegetarian, you're dumb) or something even more specific. Anyway it's late now but I can't wait to read the rest of your posts on this. Good work!

  5. #5
    ODN Community Regular

    Re: Not so fallacious theory?

    FYI, as the OP references, Nizkor is an excellent source for logical fallacies.
    "Those who can make you believe absurdities, can make you commit atrocities." --Voltaire

  6. #6
    Registered User

    Re: Not so fallacious theory?

    Quote Originally Posted by Walsingham
    Just thinking though, ad hominem can still be proven an absolute fallacy if one of your premises is that the characteristic judged in the attack is unrelated to the question (I.e. You're a vegetarian, you're dumb) or something even more specific.
    Thank you, Walsingham. And good point. In response, I think I'll try to modify my Ad hominem argument. I will start by stating clearly the meaning of a valid argument:

    Valid Argument: An argument in which if all the premises are true, then the conclusion must be true.

    Sound Argument: A valid argument whose premises are actually true.

    Valid Ad Hominem (not a fallacy)

    Firstly, I will demonstrate what premises are necessary if an ad hom is to be a valid argument.

    Domain: Unrestricted.
    Let Cxyz = x makes claim y in circumstances z.
    Let Px = x has property P.
    Let Wx = x is likely to be wrong.
    Let Rxy = x is related to y.
    Let p = property P.

    1. Cabc P
    2. Pa P
    3. Rbp P
    4. ∀xy[((Px & Cxyc) & Ryp) → Wy] P
    5. ∀y[((Pa & Cayc) & Ryp) → Wy] 4 UI
    6. ((Pa & Cabc) & Rbp) → Wb 5 UI
    7. Pa & Cabc 2,1 Conj
    8. (Pa & Cabc) & Rbp 3,7 Conj
    9. Wb 6,8 MP

    Essentially, I've added another criterion for deciding when a claim is likely to be wrong. These are the criteria now:

    • The person making the claim has a particular property
    • The claim is made in particular circumstances
    • The claim is related to the particular property


    Time for an example:

    Premise 1: Wittgenstein claims that logic is a pig riding on a tortoise's back whilst drunk.
    Premise 2: Wittgenstein is a crazy philosopher.
    Premise 3: Claiming that logic is a pig riding on a tortoise's back is related to being a crazy philosopher.
    Premise 4: If any crazy philosopher makes, whilst drunk, a claim related to being a crazy philosopher, then that claim is likely to be wrong.
    Conclusion: It is likely wrong that logic is a pig riding on a tortoise's back.

    This is a valid argument. However, for this ad hom to be a sound argument, all the premises must be proved/justified. Premise 3 may be slightly tricky to prove. But premise 4 is by far the hardest premise to prove in this argument.
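
    For completeness, here is a minimal Lean 4 sketch of this extended derivation, with type and hypothesis names of my own choosing.

    -- A minimal Lean 4 sketch of the extended ad hominem derivation above.
    -- Person, Claim, Circ, Quality and the hypothesis names are illustrative labels only.
    example {Person Claim Circ Quality : Type}
        (C : Person → Claim → Circ → Prop)      -- Cxyz: x makes claim y in circumstances z
        (P : Person → Prop)                     -- Px:  x has property P
        (W : Claim → Prop)                      -- Wx:  x is likely to be wrong
        (R : Claim → Quality → Prop)            -- Rxy: x is related to y
        (a : Person) (b : Claim) (c : Circ) (p : Quality)
        (h1 : C a b c)                          -- premise 1: Cabc
        (h2 : P a)                              -- premise 2: Pa
        (h3 : R b p)                            -- premise 3: Rbp
        (h4 : ∀ (x : Person) (y : Claim), ((P x ∧ C x y c) ∧ R y p) → W y)  -- premise 4
        : W b :=
      h4 a b ⟨⟨h2, h1⟩, h3⟩                     -- steps 5-9 in one term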

    Invalid Ad Hominem (a fallacy)

    Now I will demonstrate the conditions under which an ad hom is indeed a fallacy. Firstly, on @Walsingham's recommendation, we could include the premise that the claim is unrelated to the property of the person, or in QL: ~Rbp. If we include this premise, then the ad hom is a fallacy, because we no longer have the premise Rbp, which is vital to deriving the conclusion in the argument above. What this means is that any ad hominem attack on person A, in which the claim person A makes is unrelated to the property person A possesses (the property which is the subject of derision), is both an invalid and an unsound argument.

 

 
