
jasoncrawford

444 karma

Bio

Author, The Roots of Progress (rootsofprogress.org)

Comments (54)

Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.

I wouldn't say speed limits are for no one in particular; I'd say they are for everyone in general, because they are a case where a preference (not dying in car accidents) is universal. But many preferences are not universal.

I know that egoism is technically an ethical framework, but I don't see how it could ever yield meaningful rules that I think we'd agree we want as a society. It would be hard even to come up with rules like "You shouldn't murder others" if your starting point is your own ego and maximizing your own self-interest.

Thanks… I would like to write more about this sometime. As a starting point, think through in vivid detail what would actually happen to you and your life if you committed murder. Would things go well for you after that? Does it seem like a path to happiness and success in life? Would you advise a friend to do it? If not, then I think you have egoistic reasons against murder.

I'm not using purely deontological reasoning, that is true. I have issues with deontological ethics as well.

I can understand not prioritizing these issues for grant-making, because of tractability. But if something is highly important, and no one is making progress on it, shouldn't there at least be a lot of discussion about it, even if we don't yet see tractable approaches? Like, shouldn't there be energy in trying to find tractability? That seems missing, which makes me think that the issues are underrated in terms of importance.

Yes, but I don't see why we have to evaluate any of those things on the basis of arguments or thinking like the population ethics thought experiments.

Increased immigration is good because it gives people freedom to improve their lives, increasing their agency.

The demographic transition (including falling fertility rates) is good because it results from increased wealth and education, which indicates that it is about women becoming better-informed and better able to control their own reproduction. If in the future fertility rates rise because people become wealthy enough to make child-rearing less of a burden, that would also be good. In each case people have more information and ability to make choices for themselves and create the life they want. That is what is good, not the number of people or whether the world is better in some impersonal sense with or without them.

Policies to accelerate or decelerate the demographic transition could be good or bad depending on how they operate. If they increase agency, they could be good; if they decrease it, they are bad (e.g., China's “one child” policy; or bans on abortion or contraception).

We don't need the premises or the framework of population ethics to address these questions.

Not sure, maybe both? I am at least somewhat sympathetic to consequentialism, though.

“What is the algorithm that we would like legislators to use to decide which legislation to support?”

I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to parentalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.

Re the China/US scenario: this does seem implausible; why would the US AI prevent almost all future progress, forever? Setting that aside, though, if this scenario did happen, it would be a very tough call. However, I wouldn't make it on the basis of counting people and adding up happiness. I would make it on the basis of something like the value of progress vs. the value of survival.

Abortion policy is a good example. I don't see how you can decide this on the basis of counting people. What matters here is the wishes of the parents, the rights of the mother, and your view on whether the fetus has rights.

“I can't imagine a way to guide my actions in a normative sense without thinking about whether the future states my actions bring about are preferable or not.”

Preferable to whom? Obviously you could think about whether they are preferable to yourself. I'm against the notion that there is such a thing as “preferable” to no one in particular.

“Of course many people de facto think about their preferences when making a decision and they often give that a lot of weight, but I see ethics as standing outside of that…”

Hmm, I don't. I see egoism as an alternative ethical framework, rather than as non-ethical.

These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.

  • Senator case: I think social engineering through the tax code is a bad idea, and I wouldn't do it. I would not decide on the tax reform based on its effect on birth rates. (If I had to decide separately whether such effects would be good, I would ask what the nature of the extra births is: is the tax reform going to make hospitals and daycare cheaper, or is it going to make contraception and abortion more expensive? Those are very different things.)
  • Advice columnist: I would advise people to start a family if they want kids and can afford them. I might encourage it in general, but only because I think parenting is great, not because I think the world is better with more people in it.
  • Pastor: I would realize that I'm in the wrong profession as an atheist, and quit. Modulo that, this is the same as the advice columnist.
  • Redditor: I don't think people should put pressure on their kids, or anyone else, to have children, because it's a very personal decision.

All of this is about the personal decision of the parents (and whether they can reasonably afford and take care of children). None of it is about general world-states or the abstract/impersonal value of extra people.
