Monday, August 31, 2015

Between rationality and politics

Last year, there was an incident where Arthur Chu criticized LessWrong/Rationalists for abandoning political effectiveness in favor of being "rational".  This has been on my mind increasingly, partly because Arthur Chu went on to become a popular columnist whom I like, and I went on to learn more and more about how pathological the Rationalist community really is.

This has some personal significance, and is somewhat disillusioning.  A decade ago, I became interested in skepticism primarily because I liked thinking about how we think, and how to improve upon the process of thinking on a meta level.  I was also on board with the political project of skepticism (fighting bad beliefs), but became less interested in it over time, which left just the critical thinking component of skepticism.

But if I just have the critical thinking, these are basically the same values held by the Rationalist community.  In fact, some of the things I like to write about, the Rationalist community addresses better than I do.  And then, to discover how the Rationalist community behaves...

But let's go back to the issue of rationality vs politics.

Lies and Shunning

At first, it sounded like "politics" meant lying for a cause.  Part of the original context related to false or misleading statistics.  That seems difficult to justify.  I am in favor of, say, infographics which popularize information while eliding the truth just a little.  Certainly there are problems with clickbait pop science and with partisan "research", but we also can't spend forever getting every fact exactly right, sacrificing all accessibility for the sake of every detail.  So I see both sides, and, okay, it's really hard to justify misleading statistics.

But I think I see now that it's not really about lying or eliding truth.  It's about other rational values, such as...
  • Always hear out an argument and consider it in its strongest form before calmly coming to a conclusion.
  • Never shun people no matter how bad they may be.
The idea of not shunning people seems fairly innocuous, or even positive, until you see it taken to its extreme.  Take, for instance, the Neoreactionaries (NRx).  They're a small and inconsequential group who believe that we should get rid of democracy and return to the days of white supremacy (no, seriously).  For some reason, many NRx are welcomed in LessWrong, to the point that LessWrong enjoys a space in this map of neoreactionaries.

Scott Alexander, the second most popular LessWrong writer, has talked about how NRx have inspired his writing, even though he thinks they're wrong.  He contrasts them with feminists, who he says have correct "object-level" beliefs but bad meta-beliefs.  On its face, he's basically saying he likes neoreactionaries because they talk the Rationalist talk.  This is funny because there's a Rationalist saying, "rationality is about winning", which means that rationality isn't about how you sound, it's about the ultimate consequences.  Valuing people who talk the talk is basically a bias towards the in-group.  And what an in-group they've chosen!

It's hard to tell exactly what effect it's had for the community to have extended exposure to NRx ideas.  In my limited experience, my impression is that Rationalists are perfectly willing to argue for racist things, more so than the general public.  I think NRx may have moved the whole Overton window of the community, and maybe Rationalists just think they're immune.  To them, the only valid way to reject NRx ideas is by considering them at great length, and if you absorb some of their ideas in that time, hey, maybe they were good ideas worth adopting, because Rationalists couldn't possibly come to a wrong conclusion on an issue they've thought about.

EA and AI

I should mention that there appears to have been some sort of Rationalist diaspora.  From what I've heard, the community used to be more centralized on the LessWrong website, but has now spread out to new websites and to new ideas.  It is nearly certain that what I criticize does not apply uniformly across the diaspora.

Probably one of the best things to come out of Rationalism is the Effective Altruism movement (EA).  They believe in figuring out which charities do the most good for your dollar and then donating lots of money to them.  They're associated with organizations like GiveWell and Giving What We Can.

They're pretty hard to fault.  I mean, we can criticize the details, such as the particular things that they prioritize in their calculations.  I'm also really iffy on the idea of "earning to give".  But one of the problems with EA is that telling people that their donations are ineffective sometimes just discourages them from donating at all.  Likewise, if I criticize EA, I worry it might have the same discouraging effect.

More recently, EA came under fire because their EA Global conference prominently featured AI risk as one of their causes.  That means people were talking about donating to the Machine Intelligence Research Institute (MIRI) to do artificial intelligence research in order to prevent extinction by a malevolent AI.  Said research involves trying to build a benevolent AI.  In response to criticism, Jeff Kaufman, who is known for advocating against AI research within EA, called for pluralism.  (Scott Alexander, for his part, argued that AI risk was at least somewhat reasonable and anyway less than a third of his donations go to MIRI.  How inspiring.)

So this is another case of not shunning people, but instead welcoming them.  And as a consequence, some people in the community begin to regard it as correct, and most regard it as somewhat reasonable.  But really, what place does AI risk have in an evidence-based charity group?  It seems to be based more on philosophy--a very idiosyncratic take on utilitarianism, and a bunch of highly questionable probability estimates.

Incidentally, that particular kind of utilitarianism is the kind advocated by LessWrong, and more specifically, its founder Eliezer Yudkowsky.  Eliezer Yudkowsky has long argued for the importance of AI risk, and is the founder of MIRI.  In some ways, convincing people to donate to MIRI was the underlying motivation for teaching people his ideas about rationality.  He wanted to show people that his own cause was right, despite being contrarian.  And it worked!  Not only do a lot of Rationalists accept the value of MIRI, many have also absorbed Eliezer's other strange beliefs, including his favorable views towards cryonics and paleo diets.

So basically, the EA movement is weighed down by the history of the Rationalist community and its particular in-group biases.

Political rationality?

Given the way the Rationalist community has turned out, I'm glad I never got involved, despite my intellectual values clearly leaning in that direction.  One question is whether I can synthesize anything better.

I wish I could, but I don't think I can.  I feel conflicted about the whole thing.

On the one hand, I have these rational values.  Arguments should be treated based purely on content.  It's easy to be blanket-skeptical about things which are actually reasonable.  Ideas that sound too crazy to entertain can be right.  Even if something is too crazy, rebutting it point by point can be helpful to other people who find it somewhat reasonable.

On the other hand, I also believe arguments are about power.  If you stick to purely rational arguments, you'll lose your audience and miss the point.  And rational arguments aren't even very effective to help yourself come to the correct conclusions.  I believe in the Overton window, and I believe it's something we need to fight over--actually fight, not just debate.  I believe anger is so useful that I'll fake it if necessary.  Finally, I believe in the goodness of shunning people, and shutting down arguments.

I don't think I am always consistent about which tack I take.  And I don't think I have the ability, or commitment, to map out consistent rules for it.  Better to take it on a case-by-case basis and spend that time on other things.

This all makes me glad that I'm changing my blog title in a month.

Friday, August 28, 2015

A personal history with trans issues

Since I'm resolving, as a cis person, to talk about transgender issues more often, I wish to explain the personal trajectory that led to my current perspective.

I first started identifying as queer in 2009.  More specifically, I'm gay gray-A, meaning that I'm on the boundary between asexual and gay.  Back then, I was nominally accepting of trans people, but I didn't think about them that much.  And the reality is that in this society, transphobia and ignorance are so pervasive that if you haven't thought about it, you almost certainly hold many problematic views and engage in many problematic behaviors.

But I would say I was eager to learn, and here enters the asexual influence.   Asexuals were a small group that hardly anyone in the queer student groups understood or spoke of.  Transgender people were in the same boat.  Clearly we should be friends.

Essential commentators

I didn't actually have a personal friend who was trans until 2010.  We were both affiliated with queer-themed housing and were outcasts of sorts from the main cliques (which consisted of gay men and straight women, naturally).  We ended up talking a lot about trans issues, often so she could vent about things that other people in the house were saying.

This was of course very eye-opening.  But the most eye-opening stuff I learned was not trans 101, but the turbulence of trans politics.  The very first thing I learned about was trans-exclusive feminists.  And then the real kicker was learning about transphobia among trans people.  The overall impression I had was of a bunch of people on a boat that's been hit with a missile.  As it sinks, everyone shouts over whose fault it was and then tries to throw each other off the boat.

That isn't what you'd think of as an ideal introduction to a subject.  But I tend to think inter- and intra-community conflict is intriguing and hashes out a lot of details that would otherwise go undeveloped.  And sometimes skipping to the advanced issues makes the basic issues seem all the more obvious and urgent.

Since then, I've found trans writers to be essential commentators.  I don't mean to treat trans people like magical social justice wizards, but over the years I happened to like a lot of social justice critics who were trans activists.  I don't know if I could really pin down the emotional reasons why.  I would say... progressive movements are absolutely essential to trans people, but trans people also tend to have a healthy degree of cynicism about the same movements.  I also need those progressive movements, and need that cynicism.

Non-binary aces

Most of the trans writers I'm thinking of are trans women.  But my central image of a trans person is someone who is non-binary.  That's because in the ace community, non-binary people are everywhere.  There are more non-binary people than there are men.

A few anecdotes might establish that non-binary people weren't simply present, but were taking on important roles.  Back in 2011, I wrote a short history of the Livejournal asexuality community.  That history was based on an interview I had with the founder, Nat Titman; Nat is non-binary.  They're like the dark knight of asexuality.  They played a very important role, but dropped out of public view for many years, partially out of concern that people would confuse asexuality with gender.

Also in 2011, I conducted an interview (not available online) with Charlie, one of the figures in the Transyada community.  The Transyadas were a big deal in 2011.  They started out as a massive thread on the AVEN forums, but later decamped to the Transyada forums.  Note that the reason they decamped was dissatisfaction with the amount of transphobia on AVEN, so that's a hint that while non-binary people have always been around, ace communities haven't always been friendly to them.

Aside from that, I've had many colleagues, cobloggers, copanelists, interviewees, and friends who were non-binary.

That said, I've never made any concerted effort to learn about non-binary issues.  What I know about non-binary people is mostly from osmosis over the years.  I understand pronouns, and much of the vocabulary, but that's very much on a different level from being able to blog about it extensively.

On blog focus

When you have a personal blog, your choice of topics is a very personal decision, and one that I don't need to defend.  But I'll briefly comment on why I haven't blogged much about trans issues in the past despite considering them important.

The basic reason is that I mostly use this blog to share original thoughts.  When it comes to trans issues, the most appropriate thing is not to share my original thoughts, but to amplify trans voices.  And I don't have very much power to amplify, so what's the point?

I feel the same way about Black Lives Matter.  It's a very important movement but also I don't know what to say about it.  When I comment on an issue, I tend to complicate things and add nuance.  But do I really need to bring any nuance to the issue of police being violently racist?  That strikes me as straightforward.

But now I want to talk more about trans issues. Following my usual blogging style, that means adding nuance.  But I'm keenly aware that as a cis person, my ability to add nuance is limited, and I will ultimately make mistakes.  I hope I have enough trans readers around that they'll poke me if I say something wrong.

TL;DR

  • Trans issues are important to me, and I also find trans women activists to be great social critics in general.
  • In my experience with the ace community, I interact with a lot of non-binary people, but I don't necessarily understand their issues in great depth.
  • As I begin to comment more on trans issues, I try to be aware of the limitations in my cis perspective.

Tuesday, August 25, 2015

Discontinuity of self

I like to observe the LessWrong community from the sidelines, because sometimes they have such strange consensus beliefs.  Roko's Basilisk is not believed by most LessWrongers, but it is a rather amusing introduction to some of their beliefs.

Roko's Basilisk is the idea that a benevolent AI could take over the world in the future, and then torture a clone of you unless you donate more money to building the AI now.  The idea is absurd on its face, but becomes even more absurd when you learn that it sort of makes sense, given a bunch of beliefs that many LessWrongers have:
  1. An AI takeover in the future is highly likely, and it will resemble LW predictions (e.g., it will follow their particular brand of utilitarianism and have the ability to clone people).
  2. If someone clones your state of mind, then you are the clone.
  3. It is rational to provide incentives for past actions that have already occurred.  This is all part of Timeless Decision Theory, a utilitarian philosophy based on gazing deeply at Newcomb's Paradox and trying to rigorously justify the one-boxer position.
Note that this is unlike Pascal's Wager, in that the only people who get tortured are the true believers.  If you don't believe in Roko's basilisk or aren't aware of it, then no good could come out of the threat of torture.

There are good counterarguments to Roko's Basilisk, even within LessWrong assumptions, but for me it's all moot since I find the AI predictions to be implausible.

----------------------------------------------------

I also disagree with the idea that I am my clone, for idiosyncratic reasons.

I believe the me of right now and the me of a minute from now are different people.  We are in different space-time locations, we have different brain configurations, why would we be the same person?  Yes, clearly we are the same person, falling along the same continuous line, but we're not the same same, we're not identical.

Since I am unquestionably different from the person I was a minute ago, the question is why should I particularly care about this other person?  He's not so special, you see.  Maybe I shouldn't particularly care about him, maybe I should care about everyone equally.  But the fact of the matter is, curse this material body, I care a lot about future me even though he is not me.  I would act against the interests of everyone else to favor this one random guy, I really would.

If someone clones my exact state of mind, that clone is not me.  Like my future self, the clone would be a lot like me, but still not be identical.  But unlike my future self, I don't particularly care about my clone.  Why should I?  I may care a lot about my future self, but that favoritism is a necessary evil.  I see no reason to extend that evil any further to my clone.

Saturday, August 22, 2015

Flibanserin/Addyi: Potential problems for asexuals

The FDA recently approved Flibanserin/Addyi as a medication to treat low sexual desire in women.  Let's talk about what that means for asexuals.

Background

I apologize for repeating some background, but it's necessary because of the many misconceptions about Addyi.

Addyi is the very first drug approved for low sexual desire.  No, Viagra was never a treatment for low sexual desire.  Furthermore, Addyi is taken daily, while Viagra is taken as needed.  Addyi also has serious side effects.  Most significantly, it absolutely does not mix with alcohol.  It's also considered too dangerous to take in the daytime.

Additionally, the effects of Addyi are extremely marginal.  In a trial, women with sexual desire disorders started with a baseline of ~2.7 "sexually satisfying events" per month.  Placebo increased this number by about one per month, and Addyi increased it by less than two per month.  So we're talking about an increase over placebo of about 20%, which for all I know could be achieved by Addyi's sedative side effects.
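To make that arithmetic explicit (my own back-of-the-envelope, taking the trial figures above at face value): $$\frac{(2.7 + 2) - (2.7 + 1)}{2.7 + 1} < \frac{1}{3.7} \approx 27\%$$ The drug's advantage over placebo comes to less than one extra event per month, for a relative increase somewhere around 20%.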

Hypothetically, a drug like Addyi could do some good for some people, but let's talk about the drug we actually have.  And regardless of the good that it might hypothetically do, we also need to weigh that good against the bad.

Asexuals vs Desire Disorders

The basic problem is that there may be some people with sexual desire disorders, but asexuals exist too, and not everyone can distinguish between them.  Asexuality as an orientation has only come to public awareness over the past decade or so, and it's extremely common for asexuals themselves to be unaware of it.  When confronted with asexuality, many doctors, asexuals, and people in general deny what is unfamiliar to them.

Even if we were to approach the problem from a clear-minded perspective, no simple rules can be used to make the distinction.  You can't use distress as a distinction because asexuals may be distressed about their orientation, and often are.  Even if the distress is due to society, the asexuals themselves may not make that connection until later.  You can't use a sudden change in sexual desire as a distinction, because sexuality can fluctuate.  Someone who was happy with higher sexual desire may also find happiness with low sexual desire.

Some people think that there are at least a few clear cases, such as women with low sexual desire due to other medical conditions, or as a side effect from other drugs.  But Addyi is explicitly not approved for such cases, because the risks haven't been assessed.

Predicted problems

These are my predictions for what problems will occur affecting people on the asexual spectrum.

Ad campaigns - There will likely be ads for Addyi which encourage people to view low sexual desire as a problem, even if they wouldn't otherwise.

Unaware asexuals - People who don't experience sexual attraction might be even less likely to learn about asexuality.

Disbelieving public - Partners, friends, relatives, and the general public are often already predisposed to disbelieve asexuality, and might be even more encouraged by Addyi and its marketing.

Pushy doctors - Doctors who do not recognize asexuality, or are simply unaware of it, might encourage their patients to take Addyi without making them aware of the asexual spectrum.

Disbelieving doctors - Doctors who are aware of Addyi may be less likely to believe patients who disclose their asexuality.

Pushy partners - People whose partners have a higher degree of sexual desire might be persuaded or even blackmailed into seeking Addyi.  It could be used as a tool for control in abusive relationships.

If you can think of any other predictions, I'd like to hear about them.

Proposed solutions

Since Addyi is already approved, nothing can be done about that for now.  But to counteract some of the negative consequences, I propose that:
  • Doctors certified to prescribe Addyi should be educated about asexuality.
  • Addyi marketing materials should be criticized and mocked.
  • There should be more mainstream articles thoughtfully addressing Addyi and sexual desire disorders in relation to asexuality.

Wednesday, August 19, 2015

Consistency and the material conditional


This is part of my series on debugging the ontological argument.

In the previous post of this series, I introduced Gödel's Ontological Argument (GOA) by discussing all the things Gödel got right.  I also used that discussion as a means of gradually breaking down the argument without throwing all of it in your face at once.1
The three major parts of the GOA are:
  1. God is consistent (proven in earlier steps, to be discussed in next post).
  2. If something is consistent, then it is possible.
  3. If God is possible, then God is necessary (for reasons already discussed).
Here I will discuss the second step.  At first it is highly counter-intuitive, but upon understanding the logic, you will find that the reasoning is valid, and even trivial, but still unsatisfying.

Material consistency

The intuitive definition of consistency is that there are no contradictions.  To say proposition S is consistent is to say that S does not lead to any contradictions.  But there's a lot of work done by the phrase "lead to".  In logic, we might translate this to logical implication. $$\lnot \exists Q ( S \Rightarrow (Q \wedge \lnot Q) )\tag{1}\label{1}$$ In English, this is "There does not exist a proposition Q such that S implies both Q and not-Q."  Here, "implies" is logical implication, also sometimes called the "material conditional".

The material conditional is notorious for causing countless errors in students of math and logic.2  The statement "If S, then R" means that either R is true, or S is false (or both), and it says nothing about the conceptual connection between R and S.  For instance, I can say, "If Mars is a giant egg, then the moon is made of cheese." and it would be literally true, because Mars is not in fact a giant egg.
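The material conditional is also simple enough to check exhaustively.  Here is a minimal sketch in Python (my own illustration; the variable names are just made up for the Mars example):

    # Material conditional: "if s then r" is false only when s is true and r is false.
    def implies(s, r):
        return (not s) or r

    # Enumerate the full truth table.
    for s in (True, False):
        for r in (True, False):
            print("s =", s, "| r =", r, "| s => r:", implies(s, r))

    # "If Mars is a giant egg, then the moon is made of cheese."
    mars_is_a_giant_egg = False
    moon_is_made_of_cheese = False
    print(implies(mars_is_a_giant_egg, moon_is_made_of_cheese))  # True, vacuously

The last line prints True no matter what we say about the moon, because the antecedent is false.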

The material conditional makes for a particular definition of consistency, which I will call material consistency.

If S is materially consistent, then S must be true!

The reasoning is actually quite trivial.  The only way for a material conditional "If S, then R" to be false is if S is true and R is false.  Thus, if S does not imply a contradiction, then S must be true.  In fact, if there is anything whatsoever that S does not imply, then S must be true.
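To spell out the contrapositive (my own formulation of the same step): if S were false, then S would materially imply everything, contradictions included. $$\lnot S \;\Rightarrow\; \big( S \Rightarrow (Q \wedge \lnot Q) \big) \text{ for any } Q$$ So any S that fails to imply a contradiction cannot be false.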

You should find this a rather unsatisfying definition of consistency.  Literally everything that is untrue is materially inconsistent.  It is inconsistent for me to live in Southern California.  It is inconsistent for you to be standing next to me.  It is inconsistent for my pen to be out of ink.  Is that really what we want to say?

Strict consistency

The major reason that so many people find the material conditional counterintuitive is that we are accustomed to so many other kinds of conditionals in natural language.  For example, I am much entertained by these examples documented by Language Log:
If you want to know, 4 isn't a prime number.
If Eskimos have dozens of words for snow, Germans have as many for bureaucracy.
It's all perfectly normal — if troublesome to varying degrees.
Language Log refers to these as the biscuit conditional, bleached conditional, and concessive conditional respectively.  In my personal experience, it's a standard joke among math and logic enthusiasts to naively interpret a conditional statement using the material conditional even when it doesn't make sense.

But even when people are using a more logical kind of conditional, they are often thinking of more than just the material conditional.  For example, the statement, "If Mars is a giant egg, then the moon is made of cheese," might mean that Mars being a giant egg might somehow physically cause the moon to be made of cheese.  Or perhaps it means that in the counterfactual universe where Mars is a giant egg, the moon would also be made of cheese.

Since this series is primarily concerned with what can be translated to symbolic logic, we will take a particular conditional called the "strict conditional", also called entailment.  Symbolically, I'll distinguish between implication and entailment by using different kinds of arrows: $$S \Rightarrow R\tag{2}\label{2}$$ $$S \rightarrow R \tag{3}\label{3}$$ Statement \ref{2} means "S implies R", while statement \ref{3} means "S entails R".  S is said to entail R if S implies R in all possible worlds.  We've already built modal logic to make sense of the concept of possibility, so we might as well use it.
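In standard modal notation (my own gloss, using the box operator for necessity), the strict conditional is just the material conditional prefixed by necessity: $$ (S \rightarrow R) \;\equiv\; \Box (S \Rightarrow R) $$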

The strict conditional leads to another definition of consistency, which I will call strict consistency.  "S is strictly consistent" means $$\lnot \exists Q ( S \rightarrow (Q \wedge \lnot Q) )\tag{4}\label{4}$$ If S is strictly consistent, then S must be possible.

This follows trivially, since if S does not entail a contradiction, then there must be at least one possible world where S does not imply a contradiction.  As argued before, if S does not imply a contradiction in some possible world, then S is true in that possible world.
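Written as a chain, using the diamond operator for possibility and picking any particular Q (again, my own formulation): $$\lnot \big( S \rightarrow (Q \wedge \lnot Q) \big) \;\equiv\; \Diamond \lnot \big( S \Rightarrow (Q \wedge \lnot Q) \big) \;\equiv\; \Diamond \big( S \wedge \lnot (Q \wedge \lnot Q) \big) \;\Rightarrow\; \Diamond S$$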

Why this is unsatisfying

In my history of arguing about the ontological argument, I find that some people find it "obvious" that God is consistent.  There's simply no contradiction to be had in the definition of a perfect being.  And then, you hardly need the rest of the GOA, you just assert that God therefore exists.

The problem I have with this argument is, what kind of consistency do you think is so obvious?  The very idea of strict consistency has nothing whatsoever to do with the definition of God, so it doesn't matter how sensible (or insensible) the definition of God is.  All that matters is whether God exists in a possible world.  If not, then "God exists" is false in every possible world, and by the material conditional a false proposition implies absolutely everything, contradictions included.

Furthermore, the idea of strict consistency relies on the idea of possibility, which in turn relies on our choice of modal logic semantics.

For example, consider the proposition, "The moon is made of cheese."  Let's consider "possibility" to refer to all possible pasts and futures.  I don't believe that the moon is made of cheese in any possible past or future; therefore, the very idea is strictly inconsistent.

Now, let's consider "possibility" to instead refer to all universes with the same physical laws.  I believe it's physically possible to have a moon with the same chemical makeup as cheese, and thus we would say that the idea is strictly consistent.  But which is it?  Is a moon made of cheese consistent or inconsistent?

In the next post, I will discuss how God's consistency is proven in the context of the GOA.

----------------------------------------------

1. If for some reason you really do want the entire argument thrown at you all at once, I wrote up Gödel's ontological argument step by step in 2009.

2. A good exercise, if you've never seen it, is to try the Wason selection task.  The task is, given a number of cards in front of you, to decide which cards need to be flipped to verify a particular hypothesis about them.
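For the classic version of the task (my own sketch of the standard vowel/even-number setup, not anything from the original exercise), the material conditional tells you exactly which cards need flipping:

    # Rule to verify: "If a card has a vowel on one side,
    # then it has an even number on the other side."
    # Visible faces of the four cards:
    cards = ["E", "K", "4", "7"]

    def must_flip(face):
        # A card needs flipping iff its hidden side could falsify the rule.
        if face.isalpha():
            # Letter showing: only a vowel constrains the hidden number.
            return face in "AEIOU"
        # Number showing: only an odd number can reveal a violation,
        # since the rule says nothing about what lies behind an even number.
        return int(face) % 2 == 1

    print([card for card in cards if must_flip(card)])  # ['E', '7']

The common mistake is flipping the "4": an even number is consistent with the rule no matter what letter is on its back.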

Tuesday, August 18, 2015

Meta-issues with petition

Earlier, I wrote about a petition to the FDA to disapprove the drug Flibanserin (brand name Addyi), and expressed my (lukewarm) support of it. The petition is no longer circulating, since the FDA has already made its decision. It was approved.

Previously, I made an error: I said that Flibanserin was a treatment for Female Sexual Interest/Arousal Disorder (FSIAD).  In fact, it was tested for Hypoactive Sexual Desire Disorder (HSDD), which is an out-of-date diagnosis.  That's bad because HSDD doesn't have a loophole for asexuals.  Furthermore, HSDD is defined to include people with "interpersonal difficulties", which means that if their partner is unhappy with their level of desire, they could be diagnosed.

So now that it's done, I have to say, I really don't get the point of the petition.  A drug should be approved or disapproved on scientific grounds.  And a petition is not exactly proper scientific protocol.

The basic problem with petitions is that they only contain information about how many people agreed, and not how many people disagreed.  Circulate a petition widely enough, and you can get as many signatures as you want.  Even petitions signed by scientists are pretty useless, as parodied by Project Steve.

Many people signing the petition dwelt at length on arguments over whether the clinical trials show that Flibanserin is effective.  I don't see how that is relevant to the petition.  The FDA already knows about the clinical trials.  It doesn't need thousands of people on the internet to offer their own opinions.  That's why, in my argument in favor of the petition, I waved away the results of clinical trials in favor of discussing the social ramifications of the drug.

At the same time, I'm highly doubtful that the FDA even considers social ramifications in its approval process.  I'm also doubtful that it should.  Say the FDA is deciding on a contraceptive drug: do I really want them to even consider arguments that birth control ruins our culture?  I don't think so.

The Ace Flibanserin Task Force was also plugging another petition, which urges the FDA to stick to the science and ignore all the pro-Flibanserin PR.  The thing about that petition is that its message is inconsistent with the first petition's.  One petition says, just look at the science and nothing else.  The other petition says, also look at these social ramifications.  Well.  That's politics I guess.

Really, the main point of this petition seems to be to get people in the asexual community to talk about Flibanserin.  Fine, it got me to talk about it.  It didn't get me to stop being a cynic.

Now that it's been approved, I suppose we'll see what its social ramifications are.

Sunday, August 16, 2015

What is idolatry?

There are some Christian ideas that are easily translatable into secular ideas.  For example, a sin is just something that's morally wrong.  And so when Christians say homosexuality is a sin but they're not judging, nobody is fooled.

Idolatry, on the other hand, is a mystery.  You shall have no other gods before YHWH... why?

I'm not up on my Biblical history, but my understanding is that at the time, polytheism was common, and early Jews believed their god was just one of many.  So the rule about idolatry basically expresses the jealousy and pettiness of their god.  It is also an expression of ethnocentrism--you shall never leave our group or ever adopt practices from other groups.  This law is basically awful and a force for evil.

In many modern interpretations, idolatry is not so much about other gods, but about other "gods".  You're not supposed to hold any idea higher than the one god.  For example, any form of addiction can be described as idolatry since the object of addiction is being held higher than God.  For another example, atheists are frequently described as idolatrous because they're supposedly replacing God with reason or science.

This frankly leads to a very poor understanding of addiction or atheism.  It's a peculiarly Christian-centric worldview, to say that everyone is just like you only sometimes they deviate from the platonic ideal in certain ways.  This is the sort of thing that makes people think it's appropriate for Alcoholics Anonymous to refer to God "as you understand him".  Have you considered that some of us just don't have a God-analogue in our lives?

I also find it strange to have a rule which basically says, the beliefs specific to our group are the most important.  Nothing else in your life should be as important.  This is just more ethnocentrism, and it should not be an explicit value.