Most of my explicit thought about moral philosophy is indebted to fellow blogger The Barefoot Bum, who argues for an underlying utilitarian philosophy from which deontological principles emerge (e.g. see here). However, my views are my own. Since utilitarianism and deontology have come up in a recent post of mine, I should outline my perspective.
Utilitarianism and its problems
Consequentialism is the philosophy that actions should be judged by their consequences. The most common form of consequentialism is utilitarianism, which says that the particular consequence that matters is utility, the sum benefit to everyone.
I don't know of any way to "prove" or even argue in favor of a fundamental ethical philosophy, but there are definitely ways to argue against them. I will highlight two particular objections to utilitarianism:
- It is impossible to calculate the utility of any given action. If stepping on a butterfly causes a hurricane in a hundred years, then stepping on that butterfly seems morally equivalent to causing all the damage of that hurricane. Clearly, we can't know the consequences. So the question is, how far into the future are we obliged to calculate?
- Utilitarianism does not seem to reproduce many of our moral intuitions. There is no distinction between consequences through action and consequences through inaction. There is no distinction between laudatory and obligatory actions. There are no situations where selfishness (or even favoring kin) is justified. There is no notion of rights. We must either bite the bullet and reject our intuition, or come up with utilitarian justifications for these ideas.
One appealing resolution is to say that these two problems solve each other. It is true that a naive utilitarianism does not account for uncertainty. But when we do account for uncertainty, we will reproduce most of our moral intuitions. Perhaps we will not reproduce every moral intuition, but this provides a useful way to distinguish between intuitions which are correct and intuitions which are incorrect.
For example, whenever we make decisions, we are more certain of the consequences to ourselves than we are of the consequences to people far away. In the face of uncertainty, the far-off consequences tend to wash out in expectation, so it is reasonable to prioritize ourselves.
Another example. Whenever we drop a brick off the roof of a building, we cannot distinguish beforehand the cases where the brick will hit someone from the cases where it won't do anything. Therefore, we must judge all brick-dropping the same way. We must make a rule against the action of dropping bricks in random places. This reproduces deontological ethics, which makes rules about particular actions based on the qualities of those actions.
This also neatly solves one of the problems with deontological ethics, which is that there isn't a clear way to generate new rules about actions. This framework suggests that the correct way to generate new rules is to consider the probabilistic consequences of a class of actions.
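The rule-generation idea above can be sketched in a few lines of code. This is only a toy illustration with made-up probabilities and utilities (the numbers and outcome list are my own assumptions, not anything derivable from actual ethics): because we can't tell in advance which brick drops are harmful, we evaluate the expected utility of the whole class of actions and derive a rule from its sign.

```python
# Toy sketch: judging a *class* of actions by its expected consequences.
# All probabilities and utilities here are hypothetical, for illustration only.

def expected_utility(outcomes):
    """Expected utility of an action class: sum of probability * utility."""
    return sum(p * u for p, u in outcomes)

# Dropping a brick in a random place: we can't distinguish beforehand
# the drops that hit someone from the drops that do nothing.
brick_drop = [
    (0.999, 0.0),      # brick lands harmlessly
    (0.001, -1000.0),  # brick hits someone
]

# The expected utility of the class is negative, so we adopt a rule
# against the entire class of actions: "don't drop bricks off roofs."
print(expected_utility(brick_drop))
```

On this picture, the deontological rule is just a cached verdict about the probabilistic consequences of the class, computed once rather than per-action.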
I like the ethical framework explained above, although I am not committed to it. There are also a number of complications which I won't go into right now. For now, I'd like to compare and contrast the emergence of deontology from utilitarianism to the many examples of emergence in physics.
How ethics is different from physics
Physics is reductionist, in the sense that if you were given all the basic laws of the universe as well as its initial state, then you could calculate what happens with a sufficiently advanced simulation. In practice, this is too hard to calculate for anything with more than a few particles, and we rightly make extensive use of emergent concepts such as temperature and atoms. However, as far as the universe is concerned, the "sufficiently advanced simulation" is in fact how it works. The universe simulates itself, down to every reductive detail.
Utilitarianism is also reductionist, in the sense that if you could calculate all the possible worlds given different actions, then you could exactly determine which actions are right or wrong. In practice, this is too hard to calculate for anything but the most artificial of dilemmas, and we make extensive use of emergent concepts such as rights and obligations.
But we cannot say that utilitarianism is in fact how the universe really works. Descriptively speaking, the way morality really works is that we have a bunch of intuitions which came from evolution, or society, or random variation. Sometimes we apply the intuitions directly to actions. Other times, we create moral philosophies which we "test" by trying to reproduce our intuitions.
Normatively speaking, the results of utilitarian calculations depend on how much calculating we do. That is to say, making successively more precise calculations doesn't merely improve the precision of the results, but actively changes what is right and wrong. If we discover with certainty that dropping a brick at a particular time won't hurt anyone, and will instead kill a butterfly and stop a hurricane in a hundred years, then that action literally goes from unacceptable to acceptable.
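The precision-dependence claim can be made concrete with the same toy expected-utility arithmetic (again, all numbers are hypothetical assumptions of mine): the very same brick drop flips from unacceptable to acceptable once a perfect calculation replaces the class-level estimate.

```python
# Toy sketch: how more precise knowledge changes the verdict on one
# particular brick drop. All numbers are hypothetical.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# Before calculating: this drop looks like any other member of the class,
# so it inherits the class's small chance of serious harm.
naive_estimate = [(0.999, 0.0), (0.001, -1000.0)]

# After a (fanciful) perfect calculation: this particular drop hurts no one,
# kills a butterfly, and averts a hurricane a century from now.
perfect_calculation = [(1.0, 500.0)]

print(expected_utility(naive_estimate) < 0)       # unacceptable under the rule
print(expected_utility(perfect_calculation) > 0)  # acceptable once fully known
```

Nothing about the world changed between the two lines; only the precision of the calculation did, and with it the verdict.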
The claim, then, is that if we achieve reasonable precision, but not total precision, utilitarianism will reproduce many intuitions about rights, obligations, etc.
However, given that I don't personally know how to derive most specific rights or obligations, I don't think I have even achieved this reasonable degree of precision. My state of ignorance may literally change what is right and wrong for me. For instance, I'm not sure it's good for me to apply utilitarian calculations given that my calculations are apparently so naive they cannot even reproduce rights.