Vidur Kapur

128 karma · Joined Jan 2015 · Working (0-5 years) · Manchester, UK

Bio

Forecasting (at Good Judgment, Swift Centre and Samotsvety), biosecurity research, animal welfare, AI risk, utilitarianism. I studied medicine and public health.

Comments (37)

Hi Stephen. I’m also lacto-vegetarian. I take Vitamin D supplements (mainly for the same reasons they’re recommended for everyone) and an occasional Vitamin B complex or B3 supplement. I’ve considered taking algae-based Omega-3 supplements (in the form of DHA and EPA), but I don’t think the evidence is strong enough to justify the expense. My iron levels have consistently been fine without supplementation. I’ve found VeganHealth.org to be useful (I’d vouch for the quality of their evidence reviews). Ginny Messina is also worth reading (https://www.theveganrd.com/vegan-nutrition-101/vegan-nutrition-primers/recommended-supplements-a-vegan-nutrition-primer/).

In addition to Fin's considerations and the excellent post by Jacy Anthis, I find Michael Dickens' analysis to be useful and instructive. What We Owe the Future also contains a discussion of these issues.

Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why people who (as you said in many of your comments) regard some capacities, like the capacity to suffer, as morally relevant can nevertheless describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.

That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
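As a rough sketch of that comparison (my own notation, not Rethink Priorities'): write w for a species-level intensity correction and Δp for the pleasure gained from satisfying the desire. The species-neutral rule is then simply

\[ w_{\text{fish}}\,\Delta p_{\text{fish}} > w_{\text{human}}\,\Delta p_{\text{human}} \;\Rightarrow\; \text{satisfy the fish's desire first}, \]

with the species label doing no work beyond whatever it implies about w and Δp.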

Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on its own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).

I’m sorry to hear that you’ve been feeling this way, Linch. I’ve also been facing some of the difficulties that you describe. I’ll try to do the best I can but would welcome the input of people who are more knowledgeable than me!

In the professional work of the English Utilitarians, what stands out to me is perseverance. When Bentham’s Panopticon project (which was meant to be an improvement on the often cruel treatment of prisoners) failed to get off the ground, he moved on to other things such as education reform (advocating for an end to corporal punishment, for example). Similarly, when the ‘Philosophical Radicals’ (a loosely knit group of parliamentarians and writers associated with utilitarianism) split in the early 1840s, Mill took the opportunity to do some “deep work” and publish A System of Logic, which had been on the back burner for over a decade.

Friendship and companionship were also important. Mill, over the same period, deepened his companionship and collaboration with Harriet Taylor, which was to be a source of great happiness to him for the rest of his life. After her death in Avignon, he would spend six months a year working close to the cemetery where she was buried. Meanwhile, Henry Sidgwick’s efforts to improve the higher education of women, which he sometimes felt did not progress rapidly enough, were supported by his wife Eleanor, and whenever he had a crisis he would always seek the company of his friends in the Cambridge Apostles (a discussion group in which he felt he could freely express his views), particularly John Addington Symonds. Symonds happened to be gay, and so Sidgwick (who often advised Symonds about what to publish) regularly had to confront dilemmas about how quickly the established moral order should be challenged. (His personal experiences here may have influenced his noticeably cautious approach to the utilitarian reform of public morality in The Methods of Ethics.) Again, friendship and open discussion were indispensable to him here.

I’d be interested to learn more about the Benthamite Edwin Chadwick’s life after he was forced to retire from the Civil Service following his stint as Commissioner of the General Board of Health (which came after the passage of the Public Health Act of 1848, itself inspired by his report on sanitation). He seems to have attracted a great deal of backlash from various interest groups. One thing he did do was correspond with Florence Nightingale, who wanted to resurrect his efforts, so he did not entirely give up (even though the direct effect of the 1848 Public Health Act was modest at best, partly due to lax enforcement).

There has historically been some overlap between ACE's recommendations and the charities that Open Phil and the Animal Welfare Fund have supported, which suggests a degree of consensus. See also the discussion here, in which some endorse the changes that ACE has made to its methodology.

(Crossposted from FB)

Some initial thoughts: hedonistic utilitarians ultimately wish to maximise pleasure, and in the ideal case suffering would be eliminated at the same time. In the real world, things are a lot fuzzier, and we do have to consider pleasure/suffering tradeoffs. Because it's difficult to measure pleasure and suffering directly, preferences are used as a proxy.

But I aver that we're not very good at considering these tradeoffs. Most are framed as thought-experiments, in which we are asked to imagine two 'real-world' situations. Some people may be willing to take five minutes of having a dust-speck in the eye for ten minutes of eating delicious food, whereas others may only be willing to take 30 seconds of the dust-speck. It's likely that, when we are asked to do this, we aren't considering the pleasure and suffering on their own, but taking other things into consideration too (perhaps thinking about our memories of similar situations in the past). The variance may also arise because a speck of dust in the eye *will* cause some people to suffer more than others.
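To make the implied exchange rate concrete (my arithmetic, not part of the original thought experiment): if someone is exactly indifferent at five minutes of dust-speck for ten minutes of delicious food, and we treat the per-minute intensities s (suffering) and p (pleasure) as constant, then

\[ 5s = 10p \;\Rightarrow\; s = 2p, \qquad 0.5\,s' = 10p \;\Rightarrow\; s' = 20p, \]

so the person who will only tolerate 30 seconds is implicitly reporting a suffering intensity ten times higher, or is folding other considerations into their answer, which is exactly the ambiguity described above.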

Ideally, we'd be able to just consider the pleasure and the suffering on their own. That's very difficult to do, though. I think there are right answers to these tradeoff questions, but that our brains aren't able to answer the questions precisely enough. But in extreme cases, the hedonistic utilitarian could argue that anyone who would rather not have a blissful life at all, if it comes at the cost of being pricked by a pin, is simply wrong. It is the pleasure and the suffering that matter, no matter what people *say* they prefer. (See the 'Future Tuesday Indifference' argument promulgated by Parfit and Singer).

Sidgwick's definition of pleasure is after all "a feeling which the sentient individual at the time of feeling it implicitly or explicitly apprehends to be desirable – desirable, that is, when considered merely as feeling." The feeling, as it were, cannot be unfelt, even if an individual makes certain claims about the desirability (or lack thereof) of the feeling later on.

On that note, have you read Derek Parfit's 'On What Matters' (particularly Parts 1 and 6, in Volumes One and Two respectively)? In my view, he makes some convincing arguments against preference-based theories. Singer and de Lazari-Radek, in 'The Point of View of the Universe', build on his arguments to mount a defence of hedonistic utilitarianism against other normative theories, including preference utilitarianism.

Moral realists who endorse hedonistic utilitarianism, such as Singer, posit that the very nature of what Sidgwick describes as pleasure gives us reason to increase it, and that nothing else in the universe gives us similar reasons.

The experience machine is another example of where hedonistic utilitarians would postulate that people's preferences are plagued by bias. Joshua Greene and Peter Singer have both argued that people's objections to entering the experience machine are the result of status quo bias, for instance.

See: https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.757889?journalCode=cphp20 and https://en.wikipedia.org/wiki/Experience_machine#Counterarguments

Thank you for this piece. I enjoyed reading it and I'm glad that we're seeing more people being explicit about their cause-prioritization decisions and opening up discussion on this crucially important issue.

I know that it's a weak consideration, but before reading this I hadn't considered the argument that the scale of values spreading is larger than the scale of AI alignment (perhaps because, as you pointed out, the numbers involved in both are huge), so thanks for bringing that up.

I'm in agreement with Michael_S that hedonium and dolorium should be the most important considerations when we're estimating the value of the far future, and from my perspective the higher probability of hedonium likely does make the far future robustly positive, despite the valid points you bring up. This doesn't necessarily mean that we should focus on AIA over MCE (I don't), but it does make it more likely that we should.
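One rough way to frame that claim (my notation, not the post's): if the far future's expected value is approximately

\[ \mathbb{E}[V] \approx P_{\text{hedonium}}\,V_{\text{hedonium}} + P_{\text{dolorium}}\,V_{\text{dolorium}} + (\text{everything else}), \]

and the magnitudes of the hedonium and dolorium terms are comparable and dwarf everything else, then even a modestly higher probability of hedonium pushes the whole expectation positive, which is the intuition behind calling the far future robustly positive.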

Another useful contribution, though others may disagree, was the section on biases: the biases that could potentially favour AIA did resonate with me, and they are worth keeping in mind.
