Thursday, July 5, 2012

Anti-natalism on a personal level

Let us suppose a set of conditions designed such that people with characteristic X of a certain value can live happily. Suppose that X has some variance, but the conditions work reasonably well for people within about 2 standard deviations of the average X value. Suppose also that X is heritable and that your particular X value is 3 standard deviations from the mean, outside that tolerable range.

You expect regression to the mean in your children, but even if they wind up within 2 standard deviations of the norm, they won't be as happy as someone with an average value of X. Furthermore, you expect unhappiness to result from significant differences between yourself and your children, especially if characteristic X happens to be something you value highly. But wait, how can X be something you highly value if it is maladaptive? This isn't so surprising: it means that, given the choice between altering yourself (X's value) or altering the world so that X is no longer maladaptive, you choose the latter.
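The setup above can be made concrete with a toy simulation. Everything quantitative here is an assumption for illustration: X is standardized to mean 0 and SD 1, the "livable" band is |X| ≤ 2, the parent sits at X = 3, and heritability is set to an arbitrary 0.5. The post specifies none of these numbers except the 2-SD and 3-SD figures.

```python
import random
import statistics

random.seed(0)

# Illustrative assumptions (not from the post, except the 2- and 3-SD figures):
# trait X is standardized to population mean 0, SD 1; society "works"
# for anyone with |X| <= 2; the prospective parent sits at X = 3.
PARENT_Z = 3.0
H2 = 0.5                        # assumed heritability of X
NOISE_SD = (1 - H2**2) ** 0.5   # keeps X at unit variance across the population

# Child X regresses toward the mean: expected value H2 * PARENT_Z, plus noise.
children = [H2 * PARENT_Z + random.gauss(0, NOISE_SD) for _ in range(100_000)]

mean_child = statistics.fmean(children)
within_tolerance = sum(abs(c) <= 2 for c in children) / len(children)

print(f"expected child X: {mean_child:.2f} SD (parent was {PARENT_Z:.0f} SD)")
print(f"fraction of children inside the 2-SD band: {within_tolerance:.0%}")
```

Under these made-up numbers, the children regress roughly halfway back toward the mean, so most land inside the tolerable band, but a substantial gap between parent and child values of X remains, which is the second source of unhappiness the post points to.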

If X is itself something negative, you have further motivation not to inflict it on your children.

Someone with several such characteristics might consider it not worthwhile to have children.

Tuesday, June 26, 2012


I vaguely remember a quote along the lines of "humans do not prove anything; we merely decide which side of an argument we will hold to a higher standard of proof." This comment by Vladimir_M on utilitarianism reminds me of it. Let me unpack a bit.

Morality seems to be a way of signalling what sort of ally you would be in the absence of strong or feasible enforcement of social rules. Moral arguments fall along the lines of "I will do things that are in the group's interest even when they harm me." This is a very useful thing to convince a group of. But smart monkeys know that they themselves make such claims to improve their own lot, so they suspect you of doing the same. Your argument must be subtle enough to convince the smartest monkey in the group. Utilitarianism is very good at this, but it is ultimately just another example of a smart monkey trying to convince another smart monkey that no, they really will help the group.