
Simon_Knutsson

55 karma · Joined Aug 2014


A few updates: I have e-mailed the Open Philanthropy Project to ask about their activities, in particular about whether anyone at the Open Philanthropy Project has tried to influence which ideas about, for example, moral philosophy, value theory or the value of the future a grant recipient or potential grant recipient talks or writes about in public. I have also asked whether I can share their replies in public, so hopefully there will be more public information about this. They have not replied yet, but I have elaborated on this issue in the following section: https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Troublesome_work_behind_the_scenes_including_censoring_research_and_suppressing_ideas_and_debates

I have corresponded by e-mail with Bostrom about his claim that his “Undergraduate performance set national record in Sweden,” and I have talked to the university he studied at. Again, this is a less important issue, but it looks strange to me, it looks like part of a broader pattern, and it feels valuable to check. My latest published information on the issue can be found at https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/#Potentially_dishonest_self-promotion. Part of that information is the following: On Oct. 23, 2019, Bostrom replied and gave me permission to share his reply in public, the relevant part of which reads as follows:

The record in question refers to the number of courses simultaneously pursued at one point during my undergraduate studies, which – if memory serves, which it might not since it is more than 25 years ago – was the equivalent of about three and a half programs of full time study, I think 74 ’study points’. (I also studied briefly at Umea Univ during the same two-year period I was enrolled in Gothenburg.) The basis for thinking this might be a record is simply that at the time I asked around in some circles of other ambitious students, and the next highest course load anybody had heard of was sufficiently lower than what I was taking that I thought statistically it looked like it was likely a record.

Part of my e-mail reply to Bostrom on Oct. 24, 2019:

My impression is that it may be difficult to confirm that no one else had done what you did. One would need to check what a vast number of students did at different universities potentially over many years. I don’t even know if that data is accessible before the 1990s, and to search all that data could be an enormous task. My picture of the situation is as follows: You pursued unusually many courses at some point in time during your undergraduate studies. You asked some students and the next highest course load anyone of them had heard of was sufficiently lower. You didn’t and don’t know whether anyone had done what you did before. (I do not know either; we can make guesses about whether someone else had done what you did, but that would be speculation.) Then you claim on your CV “Undergraduate performance set national record in Sweden.” I am puzzled by how you can think that is an honest and accurate claim. Will you change your CV so that you no longer claim that you set a record?

Information about university studies seems publicly available in Sweden. When I called the University of Gothenburg on Oct. 21, 2019, the person there was not aware of any such national records and said they have the following information for Niklas Boström, born 10 March 1973: One bachelor’s degree (Swedish: fil. kand.) from University of Gothenburg awarded in January 1995. Coursework included theoretical philosophy. One master’s degree (Swedish: magister or fil. mag.) from Stockholm University. He also did some additional coursework. He started to study at university in Lund in fall 1992. I asked Bostrom whether this is him, but he did not reply. More information that I noted from my call with the university includes that the person could see information from different universities in Sweden, and there are in total 367.5 higher education credits in the system (from different Swedish universities) for Boström, according to the current method for counting credits. 60 credits is a normal academic year (assuming one does not, e.g., take summer courses). Boström’s bachelor’s degree corresponds to 180 credits, which is the exact requirement for a bachelor’s degree. The total number of credits (367.5) corresponds to 6.125 years of full-time study (again, assuming, e.g., no summer courses or extra evening courses). According to the university, he started studying in 1992 and, according to Bostrom’s CV, he studied at Stockholm University until 1996. I asked Bostrom, and I gather he confirmed that he only has one bachelor’s degree. Overall, I doubt he set such a record (I think no one knows, including Bostrom himself), and I think he presents the situation in a misleading way.
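For transparency, here is the simple arithmetic behind the figures above, assuming the current Swedish convention of 60 higher education credits per full-time academic year (the credit totals are the numbers the university reported; nothing else is assumed):

```python
# Sanity check of the credit figures reported by the university,
# under the current Swedish convention of 60 credits per full-time year.
total_credits = 367.5        # total credits on record for Boström
credits_per_year = 60        # one academic year of full-time study
bachelors_credits = 180      # exact requirement for a fil. kand.

years_full_time = total_credits / credits_per_year
print(years_full_time)                        # 6.125 years of full-time study
print(bachelors_credits / credits_per_year)   # 3.0 years for the bachelor's degree
```

So the total on record corresponds to roughly six years of full-time study over the 1992–1996 period, which is what makes the course load unusually heavy but hard to verify as a national record.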

Sure. I’ll use traditional total act-utilitarianism defined as follows as the example here so that it’s clear what we are talking about:

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.

I gather the metaethical position you describe is something like one of the following three:

(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act I perform is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

This (1) was about which of your actions will be right. Alternatively, the metaethical position could be as follows:

(2) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act anyone performs is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

Or perhaps formulating it in terms of want or preference instead of rightness, like the following, better describes your metaethical position (using utilitarianism as just an example):

(3) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will want or have a preference that everyone act in a way that results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

My impression is that in the academic literature, metaethical theories/positions are usually, always or almost always formulated as general claims about what, for example, statements such as ‘one ought to be honest’ mean; the metaethical theories/positions do not have the form ‘when I say “one ought to be honest” I mean …’ But, sure, talking, as you do, about what you mean when you say ‘I think utilitarianism is right’ sounds fine.

The new version of your thought experiment sounds fine, which I gather would go something like the following:

Suppose almost all humans adopt utilitarianism as their moral philosophy and fully colonize the universe, and then someone invents the technology to kill humans and replace humans with beings of greater well-being. (Assume it would be optimal, all things considered, to kill and replace humans.) Utilitarianism seems to imply that at least humans who are utilitarians should commit mass suicide (or accept being killed) in order to bring the new beings into existence, because that's what utilitarianism implies is the optimal and hence morally right action in that situation.

Very interesting :) I don’t mean to be assuming moral realism, and I don’t think of myself as a realist. Suppose I am an antirealist and I state some consequentialist criterion of rightness: ‘An act is right if and only if…’. When stating that, I do not mean or claim that it is true in a realist sense. I may be expressing my feelings, I may be encouraging others to act according to the criterion of rightness, or whatever. At least I would not merely be talking about how I prefer to act. I would mean or express roughly ‘everyone, your actions and mine are right if and only if …’. But regardless of whether I would be speaking about myself or everyone, we can still talk about what the criterion of rightness (the theory) implies, in the sense that one can check which actions satisfy the criterion. So we can say: according to the theory formulated as ‘an act is right if and only if…’, this act X would be right (simply because it satisfies the criterion). A simpler example is if we understand the principle ‘lying is wrong’ from an antirealist perspective. Assuming we specify what counts as lying, we can still talk about whether an act is a case of lying and hence wrong, according to this principle. And then one can discuss whether the theory or principle is appealing, given which acts it classifies as right and wrong. If repugnant action X is classified as right, or if some obviously admirable act is classified as wrong, we may want to reject the theory/criterion, regardless of realism or antirealism.

Maybe all I’m saying is obvious and compatible with what you are saying.

Yea, one can formulate many variants. I can't recall seeing yours before. The following might seem like nitpicking, but I think it is quite important: In academia, it seems standard to formulate utilitarianism and other consequentialist theories so that they apply to everyone. For example,

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.

These theories are not formulated as 'traditional utilitarians ought to ...'. I can't recall ever seeing a version of utilitarianism or consequentialism formulated as 'utilitarians/consequentialists ought to ...'.

So when you write "Utilitarianism seems to imply that humans who are utilitarians should" I would rephrase as 'Utilitarianism seems to imply that humans should' since utilitarianism applies to all agents not only utilitarians. But perhaps you mean 'utilitarianism seems to imply that humans, including those who are utilitarians, should...' which would make sense.

Why does my nitpicking matter? One reason is when thinking about scenarios or thought experiments. For example, I don't think one can reply to world destruction or replacement arguments by saying 'a consequentialist ought to not kill everyone because ...'. We can picture a dictator who has never heard of consequentialism, and who is just about to act out of hatred. And we can ask, 'According to the traditional total act-utilitarian criterion of rightness (i.e. an act is right if and only if ...), would the dictator taking action X (say, killing everyone) be right?'

Another reason the nitpicking matters is when thinking about the plausibility of the theories. A theory might sound nicer and more appealing if it merely says 'those who endorse this theory act in way X' rather than, as such theories are usually roughly formulated, 'everyone act in way X, regardless of whether you endorse this theory or not'.

Carl, you write that you are “more sympathetic to consequentialism than the vast majority of people.” The original post by Richard is about utilitarianism and replacement thought experiments but I guess he is also interested in other forms of consequentialism since the kind of objection he talks about can be made against other forms of consequentialism too.

The following you write seems relevant to both utilitarianism and other forms of consequentialism:

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?

Even if these other pragmatic considerations you mention would not be removed by having control of Earth, the question remains whether they (together with other considerations) are sufficient to make it suboptimal to kill and replace everyone. What if the likelihood that they are in a simulation is not high enough? What if new scientific discoveries about the universe or multiverse indicate that taking into account agents far away from Earth is not so important?

You say,

But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.

I don’t mean that the only way to object to the form of consequentialism under consideration is to stipulate away such things and assume they do not exist. One can also object that what perhaps make it suboptimal to kill and replace everyone are complicated and speculative considerations about living in a simulation or what beings on other planets will do. Maybe your reasoning about such things is flawed somewhere or maybe new scientific discoveries will speak against such considerations. In which case (as I understand you) it may become optimal for the leader we are talking about to kill and replace everyone.

You bring up negative utilitarianism. As I write in my paper, I don’t think negative utilitarianism is worse off than traditional utilitarianism when it comes to these scenarios that involve killing everyone. The same goes for negative vs. traditional consequentialism or the comparison negative vs. traditional consequentialist-leaning morality. I would be happy to discuss that more, but I guess it would be too off-topic given the original post. Perhaps a new separate thread would be appropriate for that.

You write,

That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation.

In that case the ideology (I would say morality) is not restricted to forms of utilitarianism but also includes many forms of consequentialism and views that are consequentialist-leaning. It may also include views that are non-consequentialist but open to the idea that killing is sometimes right if it is done to accomplish a greater goal, and that, for example, place such huge importance on the far future that far-future concerns make what happens to the few billion humans on Earth a minor consideration. My point is that I think it’s a mistake to talk merely about utilitarianism or consequentialism here. The range of views about which one can reasonably ask ‘would it be right to kill everyone in this situation, according to this theory?’ is much wider.

To bite the bullet here would be to accept that it would be morally right to kill and replace everyone with other beings who, collectively, have a (possibly only slightly) greater sum of well-being. If someone could do that.

The following are two similar scenarios:

Traditional Utilitarian Elimination: The sum of positive and negative well-being in the future will be negative if humans or sentient life continues to exist. Traditional utilitarianism implies that it would be right to kill all humans or all sentient beings on Earth painlessly.

Suboptimal Paradise: The world has become a paradise with no suffering. Someone can kill everyone in this paradise and replace them with beings with (possibly only slightly) more well-being in total. Traditional utilitarianism implies that it would be right to do so.

To bite the bullet regarding those two scenarios would be to accept that killing everyone would be morally right in those scenarios.

If we are concerned with how vulnerable moral theories such as traditional total act-utilitarianism and various other forms of consequentialism are to replacement arguments, I think much more needs to be said. Here are some examples.

1. Suppose the agent is very powerful, say, the leader of a totalitarian society on Earth that can dominate the other people on Earth. This person has access to technology that could kill and replace either everyone on Earth or perhaps everyone except a cluster of the leader’s close, like-minded allies. Roughly, this person (or the group of like-minded people the leader belongs to) is so powerful that the wishes of others on Earth who disagree can essentially be ignored from a tactical perspective. Would it be optimal for this agent to kill and replace either everyone or, for example, at least everyone in other societies who might otherwise get in the way of the maximization of the sum of well-being?

2. You talk about modifying one’s ideology, self-binding and committing, but there are questions about whether humans can do that. For example, if some agent in the future were about to become able to kill and replace everyone, can you guarantee that this agent would be able to change ideology, self-bind and commit to not killing? It would not be sufficient that some or most humans could change ideology, self-bind and commit.

3. Would it be optimal for every agent in every relevant future situation to change ideology, self-bind or commit to not killing and replacing everyone or billions of individuals? Again, we can consider a powerful, ruthless dictator or totalitarian leader. Assume this person has so far neither modified their ideology nor committed to non-violence. This agent is then in a situation in which the agent could kill and replace everyone. Would it at that time be optimal for the leader to change ideology, self-bind or commit to not killing and replacing everyone?

Hi Richard. You ask, “People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?”

Those are good questions, and I hope utilitarians or similar consequentialists reply.

It may be difficult to find out what utilitarians and consequentialists really think of such cases. Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does.

Regarding the opinions among academic philosophers who subscribe to traditional utilitarianism: I don’t know of many such people who are alive, but a few are Torbjörn Tännsjö, Peter Singer, Yew-Kwang Ng (is my impression), and Katarzyna de Lazari-Radek (is also my impression). And Toby Ord has written, “I am very sympathetic towards Utilitarianism, carefully construed.” Tännsjö (2000) says, “Few people today seem to believe that utilitarianism is a plausible doctrine at all.” Perhaps others could list additional currently living academic philosophers who are traditional utilitarians, but otherwise it’s a very small population when talking about a distribution. Here is a list: https://en.wikipedia.org/wiki/List_of_utilitarians#Living. But it includes people who are not academic philosophers, like Krauss, Layard, Lindström, Matheny and Reese, it lists the negative utilitarian David Pearce, and I doubt it is correct regarding the academic philosophers it does include.

I can’t think of any traditional utilitarian who has discussed the replacement argument (i.e., the one that involves killing and replacing everyone). Tännsjö has bitten a bullet on another issue that involves killing. As I write here https://www.simonknutsson.com/the-world-destruction-argument/#Appendix_Reliability_of_intuitions, Tännsjö thinks that a doctor ought to kill one healthy patient to give her organs to five other patients who need them to survive (if there are no bad side effects). He argues that if this is counterintuitive, that intuition is unreliable, partly because it is triggered by something that is not directly morally relevant. The intuition stems from an emotional reluctance to kill in an immediate manner using physical force, which is a heuristic device selected for us by evolution, and we should realize that it is morally irrelevant whether the killing is done using physical force (Tännsjö 2015b, 67–68, 205–6, 278–79). And as I also write in my paper, he has written, among other things, “Let us rejoice with all those who one day hopefully … will take our place in the universe.” I like his way of writing. It is illuminating, he comes across as straightforward, and he often writes as if he teaches (in a good way). But I could only speculate about what he thinks about the replacement argument against his form of utilitarianism.

Thank you cdc482 for raising the topic. I agree that describing EA as having only the goal of minimizing suffering would be inaccurate, as would saying that it has the goal of “maximizing the difference between happiness and suffering.” Both would be inaccurate simply because EAs disagree about what the goal should be. William MacAskill’s (a) is reasonable: “to ‘do the most good’ (leaving what ‘goodness’ is undefined).” But ‘do the most good’ would need to be understood broadly, or perhaps rephrased into something roughly like ‘make things as much better as possible,’ to also cover views like ‘only reduce as much badness as possible.’

Julia Wise pointed to Toby Ord's essay “Why I'm not a negative utilitarian” related to negative utilitarianism in the EA community. Since I strongly disagree with that text, I want to share my thoughts on it: http://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

Summary: In 2013, Toby Ord published an essay called “Why I’m Not a Negative Utilitarian” on his website. One can regard the essay as an online text or blog post about his thinking about negative utilitarianism (NU) and his motives for not being a negative utilitarian. After all, the title is about why he is not one. It is fine to publish such texts, and regarded in that way, it is an unusually thoughtful and well-structured text. In contrast, I will discuss the content of the essay regarded as statements about NU that can be illuminating or confusing, true or false. Regarded in that way, the essay is an inadequate place for understanding NU and the pros and cons of NU.

The main reason is that the essay makes strong claims without making sufficient caveats or pointing the reader to existing publications that challenge the claims. For clarity and to avoid creating misconceptions, Ord should either have added caveats of the kind “I am not an expert on NU. This is my current thinking, but I haven’t looked into the topic thoroughly.” Or, if he was aware of the related literature, pointed the reader to it. (I also disagree with many of the statements and arguments that his essay presents, but that is a different question.)

[End of summary]

There are also other commentaries on or replies to Ord’s essay:

Pearce, David. “A response to Toby Ord's essay Why I Am Not A Negative Utilitarian”

Contestabile, Bruno. “Why I’m (Not) a Negative Utilitarian – A Review of Toby Ord’s Essay”
