
Saturday, June 6, 2015

Popularity contest

(This is not in reference to any particular incident.)

When big-name bloggers argue with each other, one of the ways they can insult each other is by belittling the size of the other's audience.  You do realize that most readers of your blogs have smaller audiences still?

I guess the thrust of the insult is to say that someone has little power, and their arguments have failed to persuade any significant number of people.  But yeah, this insult has problems, and speaks to the bias that popular bloggers have as a class of people.

Now, when the same insults are used by the blog commenters... I don't even know what to say about that.

Tuesday, May 19, 2015

Abusing skeptical tropes: case in point

The other day, I wrote something about the word "groupthink" and how it's often used in a rather meaningless way.  I didn't have any examples in mind, but what do you know, Ron Lindsay (head of the Center for Inquiry) provided an example just yesterday:
Unfortunately, at least in my experience, some humanists do treat certain views and principles as “sacred.” These principles appear to be adopted more out of reflex, emotion, or groupthink than evidence-based reasoning.
This also ties into what I said a couple months ago about how "sacred" is abused.  Over half of the essay appears to be built on this sort of rhetoric, claiming his opponents are failing to question everything, blindly accepting ideological principles, using empty rhetoric, and so on.  I could probably write a series about these sorts of tropes, but I would begin to repeat myself very quickly.

The worst part is that most of these are recognizably skeptical/atheist tools.  Part of skepticism 101 is learning about lots of logical fallacies.  Part of atheism 101 is questioning everything and rejecting faith and dogma.  The tools are meant to be applied across the board in hopes that we can better reach the correct conclusion of any argument.  But do they really work?  Or do they just lead to extraneous bloviation?

This, here, is why I all but stopped naming fallacies.  I use them to aid my thinking, but I try not to use them explicitly as argumentative shortcuts.  I am unconvinced that these skeptical tropes help me avoid being wrong, and unconvinced that they're effective at persuading others.

It's not that fallacies and cognitive biases aren't good to know about.  They just really need some quality standards.  The practice of naming fallacies or accusing opponents of bias is not really conducive to maintaining those standards.  Anyway, those are my current thoughts, though I realize I'm just asserting an opinion at this time.


(Via Pharyngula.  In case you're wondering, I oppose the death penalty, but I mostly don't care because it affects a very small number of people relative to other problems with California's prison system.)

Thursday, August 28, 2014

The dangers of meta

In the context of a discussion, going meta means to talk about how we're doing the discussion.  For example, in skeptical discourse, a common way of going meta is to talk about fallacies, and styles of argumentation.  In social justice discourse, a common way of going meta is to talk about the words we use in discussion, and whether these words are problematic.  Or we talk about "derailing", which is the way that people shut down discussions about social justice.

This post, of course, is going meta one level deeper.  I'm not talking about how we talk about things.  I'm talking about how we talk about how we talk about things.

For someone like me, there's a strong draw towards meta conversations.  And that's not a bad thing.  Meta-discussions are often what separate the advanced discussion from the basic discussion.  Meta-discussions inject a sense of self-awareness into our normal discussions.

Unfortunately, advanced discussion is not for everyone, and there's a reason the basic discussion is needed.  I know, for instance, that this blog will never have popular appeal.  And this doesn't just have to do with me.  Anything that involves reading thousands of words on such narrow topics will never have popular appeal.

And yet--I'm speaking for myself here--meta-discussions have a strong hold on me.  I can't let go of my self-awareness.  And self-awareness is just one step away from self-consciousness.  I am self-conscious about using arguments that I know are bullshit or political expedients (which is nearly everything).  As I've mentioned before, I am self-aware about concern trolling.  The self-awareness doesn't always stop me from doing it, but it changes how I do it (and possibly not for the better).

Another problem with meta-discussions is that it feels like meta is the answer to everything, and that everything needs meta, but this is not true.  Meta-discussion is inherently general discussion.  Most problems require case-by-case judgments, and knowledge of specific details.  For instance, many skeptical topics, even the small and pointless ones like bigfoot, require specific knowledge to effectively address.  They require research.  Pointing out logical fallacies will only get you so far.

Meta is not the answer to everything, nor does everything need meta.  In many of my discussions on the internet, I have strong opinions on the meta aspects.  Like, say I'm talking with someone, and they declare that bringing up a particular topic is "derailing".  I have strong opinions on whether certain things count as derailing or not.  But even if I disagree with someone, bringing up a whole meta-discussion about derailing is itself honest-to-goodness actual derailing.  It's distracting from the main point, and frankly condescending.

I see this happen with other skeptics.  I see people bring up logical fallacies when it's inappropriate.  Whether or not I agree with the point on fallacies, it comes off as condescending, and a distraction from the point.  Like Process Man, we're in danger of putting the arcane details of the process above the results.

Nonetheless, I would say that on some level meta-discussion really is indispensable.  The default, for many people, is to fall along partisan lines.  Not necessarily partisan political lines (as in left or right), but things like the partisan skeptical line, the partisan atheist line, or the partisan social justice line.  One of the best ways to break out of these partisan lines is to have strong opinions about what sorts of arguments are valid or invalid, regardless of who makes the argument.

Friday, August 1, 2014

Richard Dawkins is a Vulcan

Richard Dawkins posted a series of tweets that set off a firestorm.
X is bad. Y is worse. If you think that’s an endorsement of X, go away and don’t come back until you’ve learned how to think logically.

Mild pedophilia is bad. Violent pedophilia is worse. If you think that's an endorsement of mild pedophilia, go away and learn how to think.

Date rape is bad. Stranger rape at knifepoint is worse. If you think anybody who said that would thereby be endorsing date rape, go away and learn how to think.
Twitter is such a terrible format, I'd rather not even engage with it.  Instead, I will respond to Richard Dawkins' longer post on the subject, "Are there emotional no-go areas where logic dare not show its face?"  In effect, I am being charitable to Dawkins by considering his fuller justification, regardless of whether he actually deserves such charity.
Some people angrily failed to understand that it was a point of logic using a hypothetical quotation about rape. They thought it was an active judgment about which kind of rape was worse than which. Other people got the point of logic but attacked me, equally furiously, for choosing the emotionally loaded example of rape to illustrate it.  To quote one blogger, prominent in the atheist movement, ‘What would have been wrong with, “Slapping someone’s face is bad, breaking their nose is worse”? Why need to use rape?’
...
I hope I have said enough above to justify my belief that rationalists like us should be free to follow moral philosophic questions without emotion swooping in to cut off all discussion, however hypothetical. I’ve listed cannibalism, trapped miners, transplant donors, aborted poets, circumcision, Israel and Palestine, all examples of no-go zones, taboo areas where reason may fear to tread because emotion is king. Broken noses are not in that taboo zone. Rape is. So is pedophilia. They should not be, in my opinion. Nor should anything else.
In short, Dawkins was trying to make a point about logic.  But he wasn't really trying to make a point about logic; he was trying to make a point about how certain subjects are so taboo that we fail to apply logic.  According to Dawkins, people angrily failed to understand that he was just trying to make a point about logic.  How can they disagree with logic?  Except, apparently, he wasn't making a point just about logic.

I appreciate that there are taboos that block rational discussion of important topics.  Perhaps the one most important to Dawkins is the taboo against criticizing religious beliefs.

On the other hand, Dawkins is being that guy.  The guy who dumbly pretends he only understands literal meanings, and who acts shocked when people interpret his statements in any sense other than the literal one.  I'm not sure what the cool kids call it these days, but when I was a kid, we called this being a "smart alec".  Smart alecs were assholes.  Even smart alecs themselves knew they were being assholes.

I know Richard Dawkins has a science popularizer background, and not a Skeptical background (and I mean capital S Skepticism, as in the community of people actively interested in the subject).  And in some ways, capital S Skepticism is dying, or it is to me, because of dissatisfaction with the community.  But there was at least one advantage to the community, which was that it made us think about our collective image as skeptics.  We knew that we were perceived as smart alecs, as Vulcans, as people who only cared about logic.  We knew the stereotype that skeptics don't care about feelings, neither our own nor other people's.  We made sure to counter this stereotype whenever possible.

But Dawkins.  Dawkins is trying to be that guy.  He's trying to be the Vulcan.

Sorry, but being a Vulcan is neither endearing, nor is it the correct approach to critical thinking.

As for the fallacy that Dawkins mentioned, is it ironic that the politically-correct witch-hunting feminazis actually discuss this particular fallacy in greater depth than Dawkins ever could?  This fallacy is commonly discussed under the heading of "oppression olympics".

"Oppression olympics" is when people disingenuously compare how bad X and Y are, in order to just shut people up about X.  For instance, if people are talking about sexism in the UK, someone might dismiss the whole discussion by referring to sexism in the Middle East.*  But that's stupid.  Just because Y is worse than X doesn't mean we should ignore X.  X may still be pretty bad.

*I use this particular example because it's something Dawkins himself has infamously done.

At this point, the person who invoked the comparison gets defensive.  "I wasn't trying to say that X isn't bad.  I wasn't endorsing X."  But if what they said has no implications for X, why did they bring it up in a conversation about X?  Probably they subconsciously believe it has implications for X, even though they won't admit it.  In other words, people can still believe in the fallacy even when they say they don't.

Case in point, Dawkins says he doesn't believe in the fallacy, but people still remember his comments from last year when he said he "can’t find it in [him] to condemn" the "mild pedophilia" he experienced in his youth.  Just because it was "milder" than other people's experiences doesn't mean it wasn't bad, and doesn't mean we can't condemn it.  Does Dawkins himself understand the lesson of his own tweet?

Again, I appreciate the value of breaking taboos.  It's actually a pet peeve of mine when, because of taboos, people don't understand that different cases of sexual assault or rape may result in different degrees of trauma.  But it's not as simple as dividing rape and assault into different "kinds"; it's not that deterministic.  There are a lot more factors involved than just whether it was "date rape" or "violent rape".  The important message here is that people can react in different ways, and their feelings are valid.  And I will say that regardless of how taboo it is.

Thursday, December 19, 2013

Can analogies ever be arguments?

When I was at that gaming conference, populated as it was by English academics and the like, I was bothered by the overuse of analogies.  Sure, I liked the analogy between gamer shame and queer shame, for what it was worth, but that was one hit among several misses.  I felt much less enlightened by the analogy between video games and candy (and subsequently between candy and sex).

Indeed, one of my initial reactions was, "Wait, can analogies even be used as arguments ever?"  You can introduce and illustrate new ideas through analogies, but can you really demonstrate anything with an analogy?  If you observe that X and Y are similar to each other in some ways, this does not demonstrate that there are further similarities.  When two things are analogous to each other, you can never say where the analogy ends and the disanalogy begins--not without observing directly.

On the other hand, it would be wrong to categorically dismiss analogies as arguments in all circumstances.  I'm sure there are examples of proper arguments by analogy out there, even if I can't think of them in the moment, when I'm blinded by English professors.  It would be helpful to consider a few recent examples where I used analogies on this blog, and critically examine how I used them.

---------------------------------

In Oppression Olympics: A balanced perspective, I make an analogy between the way that people get excited about that one trading card they found in a booster pack, and the way that people get excited about that one argument that they were able to think up on their own.

This analogy fails as an argument.  It does not demonstrate that people have a tendency to get attached to arguments, the way they do to trading cards.  I was merely asserting that this is the way things are, and hoped that readers would agree.  If I wanted to present a real argument, I would refer to psychological research.

---------------------------------

In Negative may be better than the alternative, I made an analogy between the "negative" labels atheism and asexuality.  I said that people have come up with "positive" alternatives to atheism, and that these alternatives have had both costs and benefits.  I then argued that the costs and benefits would also apply to asexuality.

I think this comes closer to a valid argument from analogy.  The key point is that there are underlying patterns in the way we interact with identity labels.  Therefore we can predict a certain amount of similarity between them.  This is by no means a perfect argument, but then any argument about social trends is going to be messy.

---------------------------------

In Why video games are so flammable, I have a brief, simplistic discussion of the economics of video game consoles.  I model it as monopolistic competition with a strong economy of scale effect.

In a way, every argument based on a model is an argument from analogy, because I'm analogizing it to that model.  Even if I'm doing something as simple as adding up money, I'm making an argument by analogy because I'm analogizing money to the abstract mathematical concept of numbers.  This is a valid argument, because economic models are built on a certain number of premises, and we know those premises are approximately correct.
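To make the "premises" point concrete: the economy-of-scale premise can be written as a simple cost function (my own illustration, not a formula from the original post),

$$AC(q) = \frac{F}{q} + c$$

where $F$ is the fixed cost of developing a console and $c$ is the marginal cost of each additional unit.  Average cost falls as the quantity $q$ grows, which is all "economy of scale" means in this model.  The argument is then only as good as the premise that console development really is dominated by fixed costs.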

---------------------------------

Based on these few examples, I will draw some conclusions.  Analogies are often not used as arguments at all, but rather as tools to illustrate concepts.  However, there are cases where analogies can be used as arguments.  Arguments from analogy are at their best when they most resemble arguments from models.  If you want to argue that two things are similar, you can't just observe a few similarities and hope that other similarities follow.  Rather, you argue that the underlying patterns or laws are similar, and therefore the consequences of these laws should be similar.

Monday, April 8, 2013

If everything looks one-sided...

Earlier in the comments of The Uncredible Hallq, the topic was same-sex marriage, and someone was saying, "Policy Debates Should Not Appear One-Sided."  The point being that if same-sex marriage looks like such a one-sided issue, then we're obviously biased.

My reaction was, "Possibly true, definitely unhelpful."

I put "Policy Debates Should Not Appear One-Sided" in quotes, because this is the title of a well-known piece on Less Wrong.  In brief, you might expect issues of fact to be one-sided, because most of the strongest evidence will converge (though by chance there may be some weak contrary evidence).  But issues of policy generally should not be one-sided.*  Any proposed policy has costs and benefits.  The good policies are the ones where the benefits outweigh the costs.  But it would be very strange if it appeared that a policy literally had no costs whatsoever.  If policy issues look one-sided, then we must be biased.

*I recall one time Leah Libresco said "Arguments shouldn't look one-sided" in the context of atheism vs Catholicism.  This was an incorrect application of the idea, since atheism vs Catholicism is analogous to an issue of fact rather than an issue of policy.

I agree with the general point, but is this really something that can be used in any specific argument?  As I've said before, accusing your opponent of cognitive bias is a pretty shitty argument, even if it's true.  I mean, we all know that we have our own cognitive biases.  But that doesn't mean that any particular belief is wrong.

The argument is made worse by the fact that "Policy Debates Should Not Appear One-Sided" is not a universal rule.  You'd expect that most policies would have costs and benefits, but who is to say that this is true of any particular policy?  One could imagine, for instance, an issue of policy that reduces to a single issue of fact.  Or you could imagine an issue of policy that only has a few distinct effects, so it's not so outlandish that the effects could all happen to be positive.  Or you could imagine an issue of policy with one effect so big and important that it's not really necessary to consider the other, smaller effects.

Even when you're just trying to evaluate a policy for yourself, rather than trying to argue with someone else, "Policy Debates Should Not Appear One-Sided" still seems unhelpful.  If a policy debate looks one-sided, what are you supposed to do about it?  You can search for new evidence and arguments that oppose you.  But if you've already seen the opposing arguments and found them wanting, that's that.  There's no point in trying to fill a quota of disadvantages to your own side, just because you have the prior expectation that your side should have disadvantages.  You might as well just believe your prior expectations and ignore evidence.

Thursday, March 28, 2013

The fallacious slippery slope

I find that the best way to talk about fallacies and biases is to wait for a good example to come around.  Recent discussion of same-sex marriage offers some excellent examples of slippery slope arguments.  Chris Hallquist recently highlighted an example:
By turning marriage into a socially constructed reality that doesn’t have a nature, marriage can then be whatever you want it to be. Not just the union of a man and another man, but also even two men and a woman–three partners in marriage. Or it could be a man and a child. Or maybe even a man and his dog, if he feels close enough to his pet to want to marry it.

--William Lane Craig
(In context, William Lane Craig is arguing that long-term relationships are so uncommon among gay men that they couldn't really be fighting for marriage for its own sake.  Instead, their real goal must be to "deconstruct marriage".  But I will ignore this context to focus on the slippery slope only.)

I remember learning about logical fallacies in grade school English, and they would always include the "slippery slope fallacy" among others.  I think this is wrong.  A slippery slope argument is not necessarily fallacious.  It depends on how it's used.  Therefore, I call this a fallacious slippery slope argument to distinguish it from those slippery slope arguments which are not fallacious.

First, I wish to divide slippery slope arguments into two categories (which are my own creation):

1. The slippery slope of reasoning
2. The slippery slope of consequences

Example of a slippery slope of reasoning: Suppose I claimed that all the best things are green.  You could counter, "But if you follow that slippery slope, you must also believe that cats (which are not green) are not among the best things!  Clearly this is absurd."

Another example: "If you believe that same-sex marriage should be legal, then you should also believe that man-dog marriage should be legal.  This is clearly absurd."

This kind of slippery slope argument is no different from a reductio ad absurdum.  In a mathematical argument, we'd call it proof by contradiction.  It's a logically valid argument; it's just that it's often unsound (i.e., the conclusions follow from the premises, but the premises are wrong).  In the green example, the person forgot to show that it is absurd to believe that cats are not among the best things.  In the example of same-sex marriage, the person forgot to show that if same-sex marriage is legal, then man-dog marriage should be legal.  But if we granted the premises, the conclusions would follow.
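To spell out the logic (a sketch in standard notation; the letters are my labels, not anything from the original arguments): both examples share the form of modus tollens,

$$P \rightarrow Q, \qquad \neg Q, \qquad \therefore\ \neg P$$

The inference is valid whatever P and Q happen to be.  In the marriage example, P is "same-sex marriage should be legal" and Q is "man-dog marriage should be legal"; the missing piece is any support for the premise $P \rightarrow Q$.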

William Lane Craig uses a slippery slope of consequences.  He does not say that if we support same-sex marriage, we must logically also support man-dog marriage.  Rather, he says that if same-sex marriage is legal, this would lead people to also legalize man-dog marriage in the future.  The argument is not about what people should conclude from prior beliefs, it's about what people will do, how people will behave.

It's basically a moral argument.  Craig doesn't want people to marry dogs, so from his perspective it's worth taking actions to avoid this.

But there's something rather strange about this argument.  We are free agents.  We can either choose to allow man-dog marriage or not.  We will choose according to our preferences.  If we prefer to allow man-dog marriage, why should we stop ourselves from getting what we prefer?  If we prefer not to allow man-dog marriage, then why would we choose to allow it?

Mind you, the slippery slope of consequences still isn't necessarily fallacious.  There are some situations where we might worry that our future selves will be irrational.  Or that other people will be irrational.  Or that we'll have different preferences in the future.  And there are game-theoretic situations where it's better to have fewer options (I've recently been reading about decision theory).

But none of those situations are relevant here.  Making laws is a very deliberative process, not something that's decided on the spot on an irrational whim.  And it's hard to imagine a non-trivial game-theoretic situation.  Usually there are people who want a law, and people who don't, and that's that.

So despite the slippery slope argument being an acceptable argument in general, William Lane Craig manages to use a form that is completely fallacious.

--------------------------

On a related note, my boyfriend pointed out a slippery slope argument made by Supreme Court Justice Scalia, when he argued in favor of anti-sodomy laws in 2003:
"Today’s opinion dismantles the structure of constitutional law that has permitted a distinction to be made between heterosexual and homosexual unions, insofar as formal recognition in marriage is concerned. If moral disapprobation of homosexual conduct is 'no legitimate state interest' for purposes of proscribing that conduct; and if, as the Court coos (casting aside all pretense of neutrality), '[w]hen sexuality finds overt expression in intimate conduct with another person, the conduct can be but one element in a personal bond that is more enduring,' what justification could there possibly be for denying the benefits of marriage to homosexual couples exercising '[t]he liberty protected by the Constitution'?"

-Lawrence v. Texas, 539 U.S. 558, 605-06 (2003) (Scalia, J., dissenting) (internal citations omitted).
I'm sure by now Scalia has thought of a few justifications for denying same-sex marriage that do not rely on anti-sodomy laws.

Tuesday, February 19, 2013

A fallacy diagram

For some reason I was inspired to draw a diagram of fallacies as I understand them.

[Diagram: fallacies sorted into formal fallacies, fallacies of induction, and assertions, with a fuzzy boundary between proper and fallacious induction]

The main point of the diagram is that there are really two kinds of fallacies.  First, there are formal fallacies, which are inferences that are not deductively correct.  Second, there are fallacies of induction, which are inductive arguments that are faulty or especially weak.

Formal fallacies are a category that technically includes induction.  This can lead to errors when you hold an inductive argument up to deductive standards.  For example, it is correct to argue that since the sun has regularly come up every morning, it will probably come up tomorrow morning as well.  But the inference is not deductively valid, so you could call it a fallacy if you wanted to be a smartass.

The boundary between proper induction and fallacious induction is somewhat fuzzy, and I tried to represent this in the diagram.  For example, argument from authority is a bit of a fallacy, since authorities are frequently wrong.  However, if the authority is shown to represent expert opinion, and if our resources are limited enough that we cannot investigate very deeply for ourselves, the expert opinion might be acceptable.  It could be very difficult to determine when inductive reasoning crosses that line into fallacious reasoning.

I also put a third category in there: assertions.  Assertions are not fallacies or deductions, because they involve no inferences.  Assertions can be useful in saving resources, since there's no point in advancing evidence or arguments for points we all agree on.  They are also useful for understanding the differences between our positions.  But they are not substitutes for arguments.

Earlier I mentioned mounting a critique of "name that fallacy"-style arguments.  This is where my critique would begin.  If all you have is a list of fallacies, and you don't distinguish between fallacies of induction and formal fallacies, then you're going to spot fallacies everywhere, even in good inductive arguments.  Or rather, you're going to spot fallacies everywhere you cast a critical eye, which will mostly be on your opponents.  This is a good way to get entrenched in your own beliefs, regardless of whether those beliefs are true or not.

Monday, January 14, 2013

What fallacies are most common in real life?

When we talk about logical fallacies, there's a "canonical" set of fallacies we spend most of our time on.  But are these the same fallacies that occur most often in real life?  Are there any kinds of fallacies that you see in your day-to-day life that don't get talked about much?

I believe that by far the most common is the argument from vehement assertion.  Seriously, most people don't properly argue at all, they just state their opinions at each other.  Then they state them again more loudly.  Then they struggle to find another way to state their opinion so that other people can correctly understand it (because if they disagree, surely they've misunderstood).

Then there's the hasty generalization.

I also think the sunk-cost fallacy gets short shrift.  I believe in completely finishing the food I pay for, even if I don't want any more, but this is almost certainly irrational.

What do you think?

Tuesday, December 4, 2012

Privilege as bias

Some months ago, there was a comment scuffle on my other blog.  One person, let's call them Alice, complains about transphobia and ableism in a particular forum.  Another person, let's call them Bob, says that the allegations are too vague.  Bob asks for a link and talks about recollection bias.  Alice responds angrily, and among other things accuses Bob of being part of a privileged group.  Bob asserts that this is ad hominem, and confesses dislike for the very idea of "privilege".

Bob is likely not alone in this perception, that the main function of "privilege" is to discount people's opinions based on who they are.  When a privileged person says something an underprivileged person doesn't like, they can claim that the privileged person's opinion comes from privilege, and should therefore be discounted.  This allows you to reject other people's views independently of their content, and thus it cannot help you achieve a greater approximation of truth.

It's hard to argue with that, if "privilege" is just used to indiscriminately dismiss people that you disagree with.

But there's also a way to translate "privilege" into skeptical terms.  When someone is called out on their privilege, the translation is that they've been accused of bias.  The paradigmatic case is when a white person expresses their personal impression that racism isn't a big deal these days.  Of course, impressions come from personal experiences, and it hardly needs saying that people have different personal experiences.  People also suffer from inattentional blindness, and thus are unlikely to notice comments or actions which don't hurt them personally.

That said, accusing people of privilege-induced blindness is a pretty shitty argument, because accusing people of cognitive bias is a shitty argument.  Or at least, it's very hard to make a persuasive case out of it.

For example, suppose someone believes in chemtrail-related conspiracies, and your response is to talk about the systematic bias which causes people to perceive agenticity where there is none.  It may be interesting to discuss, especially to third parties, but your debate opponent will likely remain unconvinced.  For one thing, simply asserting a cognitive bias doesn't mean that it's there.  Why should your opponent accept your assertion just on your say-so?  And for another, just because they have a statistically higher probability of having a certain set of false beliefs does not mean that this particular belief is false.

Another example: suppose someone believes that a particular forum is terrible, and your response is to talk about recollection bias.  You're obviously not going to convince someone that their memory is wrong just because you've cited the fact that memories can be wrong.

(In general, I support the idea of privilege, but at some point I'd also like to enumerate its many problems and failings.  Also at some point I'd like to discuss the failings of "name that fallacy" style argumentation, but my thoughts aren't fully formed.)

Saturday, August 4, 2012

Atheism's foil

One common way atheists cope with stigmatization is to distinguish themselves from those atheists.  You know, the bad ones.  I call this creating a foil.  I'm using "foil" in the sense of a character foil.  When you create a foil, you describe a position that contrasts with your own, often to highlight what you think are your positive qualities.

One classic example is Richard Dawkins' scale of belief from 1 to 7 (from The God Delusion).  A 1 means you believe there is a 100% probability that God exists, and a 7 means a 0% probability.  Dawkins describes himself as a 6, and notes that "category 7 is in practice rather emptier than its opposite number, category 1, which has many devoted inhabitants."  Category 7 is a foil, used to explain that he is not certain that there is no god.

From another point of view, when you create a foil, you create a straw man.  After all, what is strawmanning, but attacking a position that no one holds?  Or perhaps there is more to it than that.  I propose that there is an additional component to a straw man: it must be an explicit or implied attempt to represent a real opponent.  Dawkins does not misrepresent anyone with category 7, because he's quite upfront about the fact that category 7 describes few people.  Therefore, Dawkins' foil is not a straw man.

There are some things I don't like about the foil strategy, but it is undeniably useful.  People have so many misconceptions about atheists: they're certain, they're dogmatic, they have faith in science, they're always getting up in your business, etc.  But even though people hold these misconceptions, they often don't put them into words.  So it's up to the atheist to put the misconceptions into words, and create foils out of them.

Take, for instance, the time it was reported in major newspapers that Dawkins isn't 100% certain, as if this were surprising.  People are incredibly ignorant, and foils are necessary.

But while foils are useful to spread a low-level understanding of atheism, they just aren't that good beyond that.

The foil strategy could mislead people into thinking that the main difference between different atheists is the degree of certainty.  In reality, most people in the movement don't care about that (and to the extent that they do care about it, I don't think they should).  What people actually argue about are goals and strategies.

Foils also set up a hierarchy of atheism.  Rather than thinking about our different backgrounds and motivations, the foil draws all attention towards a single dimension of atheism.  To our right is our fabricated foil, the absolutely certain atheists.  To our left are people less atheisty than us.  And then the people to our left will use us as a foil.  Their foil implicitly attempts to represent us, but they don't do it very accurately, because their purpose is to create a foil, not to actually argue with us.  Yep, it's a straw man!

This is frustrating, and magnifies divisions.  I don't know what we can do about it, but I hope that everyone is at least aware of what's going on.

Monday, July 9, 2012

Freethought and other non-literal words

Freethought Blogs kicked out two bloggers who were misbehaving.  Some people pithily said that it goes against freethought to fire people for thinking freely.  (See comments here for examples.)  Plenty of people have already piled on them for making such a stupid argument, but I wanted to put this in context.

"Freethought" is not the same as "free thought".  Freethought is a collection of ideas, a movement of people.  There's actually some substance to freethought beyond its name.

This is likewise true of words like "conservative" and "progressive".  In a political context, conservative doesn't actually mean cautious, slow-moving.  Progressive doesn't actually mean moving towards the future.  They describe a set of political stances, some of which have nothing to do with change vs tradition.  I'm sure that conservatives have at times supported forward-looking policies, and liberals the continuation of existing policies.

It is a shallow argument indeed to attack conservatism on the basis of its caution, or to defend conservatism on the same basis.  It is totally missing the point to attack "new atheism" for claiming to be new when it's not.  It is a lazy person's argument to say that the skeptical movement is right or wrong because doubt is right or wrong.  It represents a fundamental misunderstanding to act as if homophobia really means a fear of gays and lesbians.

Long story short, if all you can talk about is the name, I'm going to assume that you are incapable of making a substantive argument, and are instead substituting your impressions of words.

Monday, May 28, 2012

Hypothetical fallacies

My boyfriend pointed out these two fallacies, and now I feel like I see them everywhere!

Argument from future majority:  "In a few decades, everyone will look back and see how wrong you were."

Hidden assumptions: Will people in the future in fact see how wrong you were?  Is the majority opinion relevant?  Are future people's opinions necessarily better than present people's opinions?

Combines: appeal to future evidence and argument from majority

Argument from hypothetical hypocrisy: "If she were a Republican, the right-wing would be dismissing this scandal as a distraction."

Hidden assumptions: Would the right-wing in fact do that?  Does that necessarily mean that the right-wing's current actions are wrong, or could it just mean that their hypothetical actions are wrong?  Do hypothetical wrongs of the right-wing justify similar wrongs of the left-wing?

Combines: begging the question and tu quoque

The nature of these arguments is that even if all parties were to agree on the hypothetical, the conclusions still wouldn't follow.

Friday, February 17, 2012

Lying with pyramids

Blogging pace may be slow, as I am busy and/or just not investing the time.  So here's a short one.

This image was taken from Facebook, from the Being Liberal page.

There is a special place in math hell for people who use pyramids (or similar 3D objects) to represent percentages.  Observe, on the left, meat and dairy make up 73.8% of the height of the pyramid.  But they take up 98.2% of the volume.  This is because the base of the pyramid contains far more volume than the tip.

It's a classic way to lie with graphics.  It makes percentages appear different from what they really are.  I could have made a graphic with the order reversed, and it would lie in the opposite direction.
Or I could have just used a bar graph, as would have been appropriate.
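If you want to check the arithmetic, here's a quick sketch (the 73.8% figure is read off the graphic itself; the cube law is just the geometry of similar solids):

```python
def bottom_volume_share(height_share):
    """Volume fraction of the bottom section of a pyramid, given the
    fraction of the total height it occupies.  The tip above it is a
    scaled-down copy of the whole pyramid, so its volume scales as the
    cube of its height fraction."""
    return 1 - (1 - height_share) ** 3

print(bottom_volume_share(0.738))  # ~0.982, i.e. 98.2% of the volume
```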

I wasn't going to comment on the content of the graph, except that that is a lie too.  Those are not the Federal Nutrition Recommendations, not since 2005.  The old food pyramid was replaced in 2005, and replaced again in 2011.  This seems to be an oversight rather than an intentional lie, since I think the new recommendations are even more different from the subsidy percentages.

There are probably other problems with the graphic too, but I leave those as an exercise to the reader.

Thursday, September 15, 2011

Projection

I think the first time I ever heard of the idea of "projection" was in the context of a burn.

"You're such a hypocrite!"
"Nuh-uh, you're just projecting your own hypocrisy onto me!"

Since that's excessively silly, I think it was some time before I took projection seriously as an idea.

Projection is a fallacy in which we attribute certain characteristics to others because we see them in ourselves.  Most of the time, this is a very reasonable assumption to make, and one we make very often.  If you ever want to understand other humans, the first and best place to look is in the one human whose experiences you have most access to: yourself.  But we can take this too far, and when we do it's called projection.

I feel that it's almost too clinical to call projection a mere logical fallacy or cognitive bias.  Projection is pretty much a way of life.  No, the way of life.  Everyone does it nearly every day. (I know this because I do it every day.)

This was illustrated in a recent Subnormality comic: We Assume of Others What We Know of Ourselves.

I recognize projection as a major source of my own irrationality.  For example, I believe that if people only listened to my music, they would like it as much as I do.  Never mind that even I don't like much of the music I liked in the past.  I believe that because I don't get angry, neither does anyone else.  Sometimes, my initial reaction to anger is laughter, since surely they are expressing anger ironically.  I believe that since I don't like fashion or formalities, nobody else does either (they're just playing along).  I believe that people are much less sexually active than they are.  I have been enlightened by some statistics, but they still surprise me.  The surveyor says many of his students expressed surprise at how common virginity was, but since I'm less interested in sex, my own biases are in the opposite direction.

What about you?  What sort of things do you project onto others?

Tuesday, July 5, 2011

Large-scale critical thinking

There is a particular aspect of critical thinking that I will introduce by way of an analogy. Critical thinking is a way of spotting and fixing errors in an argument for a claim.  Debugging is a way of spotting and fixing mistakes in a computer program, such as a computer simulation.

For those who have never done any debugging, imagine this.  Writing computer code is never the hard part; 90% of programming is debugging.  Any moderately complex code has a pretty good chance of failing the first time you run it.  The best kind of failures are the ones that stop the program and tell you where it went wrong.  The worst failures are when the program appears to be fine, but gives incorrect results.

A major problem is that you don't always know where the bug is.  Now, you could locate the bug by reading the code line by line and checking all the syntax, control loops, and so forth.  But any program of moderate complexity will have thousands of lines of code.  And if we could spot the bug that easily, we would have spotted it as we wrote it in the first place.  A more effective way of locating the bug is to use tests to isolate it within a small region, and then look line by line.
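Here's a toy sketch of what that test-based isolation looks like (the pipeline-of-steps setup is hypothetical, and it assumes a single bad step whose effect shows up in any run that includes it):

```python
def find_bad_step(num_steps, output_ok):
    """Binary-search for the step that corrupts the output.
    output_ok(k) runs the first k steps of the program and reports
    whether the intermediate result still looks right."""
    lo, hi = 0, num_steps  # the bad step's index is in [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if output_ok(mid):
            lo = mid  # first `mid` steps are fine; the bug comes later
        else:
            hi = mid  # failure already visible; the bug comes earlier
    return lo  # index of the offending step, found in ~log2(n) tests
```

Instead of thousands of line-by-line checks, a handful of tests narrows the bug down to a small region, and only then do you read line by line.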

Most arguments don't involve thousands of steps, so most of the time you can find errors by just going through step by step.  But there are exceptions.

Many arguments, for instance, involve scientific papers.  As a blogger, I think laziness is a perfectly valid excuse to not examine a scientific paper in detail, but if you don't accept laziness, there are other reasons.  If there are problems in a paper, they may be impossible for a lay person to spot.  Or they may just be plain impossible to spot.  They may involve details omitted by the authors, or details in other papers.  We could also be cherry-picking a single study, but we'll never know unless we look outside the study.

Another example is conspiracy theorists and physics cranks.  Such people construct a whole universe of details to support their view.  The worst part is that often each universe is unique.  Even if you spent all that effort to locate the errors, you've only debunked the claims of... one person.

In these situations, and others, we may need to resort to larger-scale critical thinking.  How can we examine an argument for a claim without going into the details?  How do we find a bug without looking at code line by line?

A common technique is to execute the code with simpler inputs and watch what goes wrong with the output.  This would be analogous to using reductio ad absurdum.  For example, if I accept the argument that homosexuality is wrong because the Bible says so, mustn't I accept the same argument against wearing clothes with mixed fabrics?  Clearly there is an error in the argument, but we still need more work to pinpoint it.  Pinpointing the error might consist of discussing where the Bible came from, and why this is an inappropriate source.

Another example that I have actually blogged about is the ontological argument.  Most treatments of the ontological argument simply say that it is absurd to prove the existence of something with mere logic and without any investigation of the real world.  But this does not pinpoint the exact error in the ontological argument.  Knowing this, I thought it would be an interesting exercise in modal logic to pinpoint the error.  But there are other times on my blog where I take the opposite approach and look at the big picture only.

There are more critical thinking techniques that may not have any analogy to debugging.  Appealing to experts comes to mind.  Why locate an error when you can instead locate an expert who will locate it for you?  Of course, you have to trust that the expert really does know how to spot errors, and that the expert is not omitting errors in the other direction.  A good critical thinker tries to understand the nature of experts, and does not use them indiscriminately.

One major difference between critical thinking and debugging is that the debugger must locate the precise error to fix it.  The critical thinker only needs to show that there is an error somewhere, and does not need to pinpoint it.  And yet, if you can pinpoint the error, or show that there is none, this trumps all large-scale analysis.  It doesn't matter if an expert made the argument, if we can show the argument is incorrect.

And yet, small-scale critical thinking does not really trump large-scale critical thinking.  In such a detailed analysis, it is easy to make mistakes--to miss errors, or to see errors that aren't there.  Also, as with investigations of paranormal phenomena, the details can be completely lost to time.  Even in the absence of laziness, large-scale critical thinking is an important component of any analysis.

I think this is another one of my posts where the conclusion is, "Critical thinking is hard."

Thursday, June 2, 2011

Privilege is not ordered

One of the words commonly thrown around by social justice advocates is "privilege".  A privilege is simply some sort of benefit or advantage that a certain group has.  It's a fairly basic concept, but I'm not sure I'm a fan.  It seems to breed a lot of misconceptions, like the idea that privileges are always bad.

As an example, one privilege straight people have is the ability to go through life without labels for their sexual identity.  Arguably, this privilege is unavoidable, as long as straight people are in the majority.  All the same, if you are aware of this privilege, you should understand why it is insensitive to tell queer people not to bother with labels.

Another big misconception is the idea that groups are ordered from most privileged to least privileged.  In truth, two groups can each be privileged over the other in different ways.  For example, consider white women and black men.  It may be the case that one group has more privileges than the other (supposing that you found some way to quantify "more privileges"), but nonetheless, each group has at least a few privileges that the other does not.

As another example, consider aromantic and romantic asexuals.*  Romantics are privileged over aromantics because people are less likely to think they are devoid of all emotion.  Aromantics are privileged over romantics because in non-romantic relationships they generally aren't expected to be sexual.

*If you don't recall, romantic asexuals are the ones who are interested in romantic relationships, and aromantics are the ones who are not.

I feel this is a fairly obvious point, and if people miss it, it's because they just haven't taken a moment to think about it.  I guess this will be a short post!

For the fallacy geeks: What kind of logical fallacy is this?  I'm thinking it's a false dilemma: either group A has privileges over group B, or group B has privileges over group A.  Or maybe it's tu quoque:  "I have privileges?  You have privileges too!"  This is a fallacy because pointing out another person's privileges does nothing to refute the existence of one's own privileges.

Monday, March 14, 2011

On taking offense and derailing

I don't know how obvious it is, but over the past year, my views have been skewed towards a form of political correctness.  I know that PC is used nearly universally as a pejorative, and thus has come to describe many truly negative things.  This is especially true in skeptical circles, where it is thought to prioritize politeness over truth.  But sometimes PC is appropriate precisely because we value truth.

So that's my extraordinary claim, that critical thinking and political correctness sometimes align.  It is my new long-term blogging project to support this claim, and also attack it so that we may know its limits.  I'm very interested to know where my readers stand on this so I know what direction to go in future posts.

Exhibit A: The "easily offended"

On The Thinker, Jeffrey said, "It is easier for a camel to pass through the eye of a needle than for the 'easily offended' to become good critical thinkers."  Based on his personal experience, he proposed three (non-exhaustive) categories of easily offended people.
First are those who tend to use being offended as a manipulation tool to stifle discourse on a topic, thereby avoiding arguments they don’t wish to face.
...
The second form is related to emotions. Some people are thinkers; others are feelers. Feeler-types tend to become emotionally attached to their opinions.
...
The third category is comprised of people who actively seek opportunities to get offended by anything that appears (or can be made to appear) to run counter to their pet cause.
Now, I can see where Jeffrey is coming from.  I don't think Jeffrey is an atheist, but this is a pretty common response to atheists.  Do something as simple as put up a sign advertising an atheist student group, and it will offend people.

And yet these categories don't sit well with me, because they echo standard derailing arguments used against marginalized groups whenever they complain about said marginalization.  Just to name a few of these arguments:
You're Being Overemotional
You're Just Oversensitive
You Just Enjoy Being Offended
Being Offended Is Great For You
You're Taking Things Too Personally
I asked Jeffrey what he thought the difference was between the arguments he made and the ones appearing on "Derailing for Dummies", and he thought it was a matter of whether there was a legitimate basis for being offended.  Well yes.  People who have a legitimate basis for being offended have a legitimate basis for being offended, and those who don't don't.  But I think this is failing to get at the heart of the issue.

Instead, I would try to draw a distinction based on the question: Who is doing the derailing?  Is it the offended person who says, "I don't want to talk about it, and you ought to be ashamed of yourself for bringing this up in public"?  Or is it the other person who says, "You're taking this way too personally!  Talk to me after you've calmed down."

Guess what?  Both sides can derail the discussion.  It's not just the side that gets emotional about it (with or without a legitimate basis).  It can also be the person who can't see past their opponent's emotion, and decides to make that emotion the focus of the argument.

(For the fallacy geeks: This is a form of the Fallacy Fallacy.  That's when you spot something resembling a fallacy, such as an emotional argument, and proceed to ignore any real arguments the other person makes.)

But let's take it one step further.  Riddle me this: Is it really so counterproductive to derail an argument that would itself be unproductive?  I slam down trolls all the time, saving me time and energy for the discussions that at least have a chance of being productive.  Of course, this depends on me having a realistic view of which discussions would be productive and which would not.  I wouldn't want to fool myself into thinking that all discussions attacking my beliefs are unproductive.

What can we get out of this?  Not much, since we're talking pure generalities.  If someone is getting offended and trying to derail, you should say something to persuade them that the discussion will not be as unproductive as they think.  Start by avoiding cliched arguments.  And if you're going to derail the argument by focusing on your opponent's emotions, at the very least do it with some self-awareness.

Wednesday, January 19, 2011

Unattainable goals

In an earlier post, I commented that one goal many atheists have is to end religion.  I'm keenly aware that this is a goal that many object to.  One of the stranger objections people make is, "You don't actually think that's an attainable goal, do you?" as if to suggest that we should just give up any effort of any kind.  It's a form of concern trolling, advising people on how to accomplish their goals without actually sharing those goals.

I hesitate to label this as a logical fallacy, but it is a peculiar line of reasoning that I see applied in many contexts.  Why should we try to eliminate poverty if there will always be poor people?  Why try to eliminate war if there will always be war?  Why try to eliminate superstition if there will always be superstition?

It's a distracting argument.  We could easily get caught up in the issue of just how attainable our goals are.  It's quite possible that I really do think the reduction of religion is more attainable than some of my adversaries do.  But that's beside the point.  The point is that this is black-and-white thinking.  Superstition is not an all or nothing thing.  Superstition can have different degrees of prevalence.  There can be different degrees of disparity between poor and rich.  War can be reduced.

If I say, "I would want to end superstition", it's not an attempt to make a black-and-white statement.  I'm really saying that the closer in degree we are to ending superstition, the better.  I also happen to think that we can get at least a little closer to that goal if we try, but who knows, maybe we can't.  Maybe we're just running on a hopelessly fast treadmill.  That doesn't mean we should stop running.

There's one situation where the unattainable goal argument might work.  I think it could be used, at least at first, against a communist's goal of revolution.  The benefits of revolution don't exist until revolution is actually achieved.  And with communists being as few as they are, revolution seems like an unattainable goal.  But I'm sure a number of objections could be raised (and will be raised in the comments).  I could see it argued that revolution is attainable after all, though perhaps after a long sustained effort.  Revolution has happened countless times throughout history.  I could also see it argued that there is some way to approach revolution by degree, and that the closer the better.  My point is that the unattainable goal argument cannot be used to bypass substantive arguments, even if the subject is revolution.

A related argument is to say the end goal is undesirable.  If we were to completely eliminate religion, we would have unhealthy levels of uniformity of opinion, and effectively no freedom of religion.  I'm actually sympathetic to these arguments, but it's all moot.  Complete elimination is unattainable, and in the event it becomes attainable we can reconsider the issue.  Right now, the relevant issue is whether small reductions in religion relative to our current state are desirable.

Saturday, February 20, 2010

Privileged priors and God

On Less Wrong, someone thought up a new kind of fallacy, called "privileging the hypothesis".
Suppose that the police in Largeville, a town with a million inhabitants, are investigating a murder in which there are few or no clues - the victim was stabbed to death in an alley, and there are no fingerprints and no witnesses.

Then, one of the detectives says, "Well... we have no idea who did it... no particular evidence singling out any of the million people in this city... but let's consider the possibility that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln."
I would describe this fallacy as "questionable choice of prior probabilities".

Prior probabilities are something you need to consider if you wish to compare hypotheses with Bayesian statistics.  The prior probability of a hypothesis is the probability you assign before you consider the relevant evidence.  Prior probabilities are fundamentally arbitrary.  They're supposed to indicate your personal degree of belief.  Therefore, you can pick whatever numbers you like, and so can I.

And yet, some choices of prior probabilities are just more reasonable than others.  In the example of Largeville, we can reasonably say that there is a prior probability of one in a million that Mortimer Q. Snodgrass is the murderer.  Or perhaps we'd choose a higher probability, because we want to exclude young children.

But the detective has a much less reasonable choice of prior probabilities.  By simply pointing Mortimer out, the detective has biased our investigation significantly.  Because we have trouble conceptualizing numbers as small as one in a million, we can't help but overestimate the probability that Mortimer is the murderer.  The detective has wrongly privileged the hypothesis that Mortimer is the murderer.
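To see how much damage that does, here's a minimal Bayesian sketch (the likelihood ratio of 10 is a made-up number, standing in for some modest piece of evidence against Mortimer):

```python
def posterior(prior, likelihood_ratio):
    """Update a probability of guilt using Bayes' theorem in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# With the reasonable one-in-a-million prior, modest evidence
# barely moves us:
print(posterior(1e-6, 10))   # ~0.00001
# But if merely naming Mortimer inflates our prior to one in a
# hundred, the same evidence starts to look damning:
print(posterior(0.01, 10))   # ~0.092
```

The evidence is identical in both cases; only the prior changed.  That's the entire force of the fallacy.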

For fun, here's a gratuitously controversial example of "privileging the hypothesis": God.  Specifically, let us consider the cosmological argument for God (i.e., God was required for the universe to exist).

If we accept the cosmological argument, then we can conclude that something ("the first cause") allowed for the universe to exist.  We can also conclude (depending on which kind of cosmological argument we're using) that the first cause is eternal, non-contingent, and so forth.  But there's not really anything that requires the first cause to be a god, much less God with a capital G.

I propose that the first cause, if it exists, can be all sorts of things.  The vast majority of those things are not deities, indeed the vast majority don't even have consciousness or intentionality.  But we still know it as the cosmological argument for God.  The hypothesis that the first cause is God is a wrongly privileged hypothesis.

Furthermore, I discourage the use of "god" when it does not refer to any of the gods of major religions.  When we talk about a god, it could be all sorts of things.  But most religious people instantly think of their own god.  By simply using the same word, "god", we inadvertently privilege the hypothesis that the god in question is the god of a major religion.

(via The Thinker)