Brian_Tomasik

1628 karma · Joined Aug 2014

Comments (287)

Thanks for the question. :)

eliminativism (qualia are just a kind of physical process, not a different essence or property existing separate from physical reality)

That sounds like a definition of physicalism in general rather than eliminativism specifically?

I agree with the analogies in Tim's comment. As he says, the idea is that eliminativism puts all physical processes on roughly the same footing with respect to not containing (the philosophically laden version of) consciousness. So it's more plausible that we'd treat all physical processes as being in the same boat rather than drawing sharp dividing lines.

Great summaries/comments!

I think this is an original argument of Tomasik’s.

The specific calculations are probably original, but the basic idea that being in a simulation would probably reduce the importance of long-term outcomes had already been discussed by others, such as the people mentioned in this section.

It's great to have these quotes all in one place. :)

In addition to the main point you made -- that the futures containing the most suffering are often the ones that it's too late to stop -- I would also argue that even reflective, human-controlled futures could be pretty terrible because a lot of humans have (by my lights) some horrifying values. For example, human-controlled futures might accept enormous s-risks for the sake of enormous positive value, might endorse strong norms of retribution, might severely punish outgroups or heterodoxy, might value giving agents free will more than preventing harm (cf. the "free will theodicy"), and so on.

The option-value argument works best when I specifically am the one whose options are being kept open (although even then there can be concerns about losing my ideals, becoming selfish, being corrupted by other influences, etc.). But humanity as a whole is a very different agent from myself, and I don't trust humanity to make the same choices I would; often it would make the exact opposite ones.

If paperclip maximizers wait to tile the universe with paperclips because they want to first engage in a Long Reflection to figure out if those paperclips should be green or blue, or whether they should instead be making staples, this isn't exactly reassuring.

Thanks for the kind words. :)

things like the dark tetrad traits (narcissism, machiavellianism, psychopathy, sadism) are adaptive even on a group level

Yup. And how adaptive they are depends on the distribution of other agent types. For example, against a population of pure pacifists, Dark Tetrad traits may be pretty effective. In a population of agents who cooperate with one another but punish rule-breakers, Dark Tetrad traits are probably less adaptive. Hopefully present-day society is somewhat close to the latter case, although human reproduction isn't very constrained by material resources in the developed world, so I'm unsure how much society's punishment of some Dark Tetrad people actually affects their reproductive fitness. Also, some narcissists "succeed" greatly in our society (Donald Trump, Elon Musk, and too many others to list).
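As a toy illustration of that population-dependence (a minimal sketch with entirely made-up payoff numbers, not real data):

```python
# Toy model: expected per-interaction payoff for an exploitative agent,
# as a function of how much of the population punishes rule-breakers.
# "gain" and "punishment" are hypothetical illustrative values.

def exploiter_payoff(frac_punishers, gain=3.0, punishment=5.0):
    """Exploit non-punishers for `gain`; pay `punishment` when meeting a punisher."""
    return (1 - frac_punishers) * gain - frac_punishers * punishment

for frac in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"punisher share {frac:.2f}: expected payoff {exploiter_payoff(frac):+.2f}")
```

Against pure pacifists (punisher share 0), exploitation pays; once enough of the population punishes rule-breakers, the expected payoff goes negative.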

I take your point that darkness and hate can lead to love/reduction in hatred

I mainly mentioned those points in case someone would quibble with the absoluteness of King's quote. As they say, "all generalizations are false". In our present world, where we generally don't kill or tyrannize everyone who disagrees with us, I think it's generally more effective to move further in King's direction than our primate instincts incline us to. If you can't massacre your enemies, you have to find a way to make peace with them.

Thanks for the post! There's a lot of deep food for thought in it. I agree it's nice to know that you're not alone in having these kinds of feelings.

reading an article by Brian Tomasik one night[...]. It was the most painful experience of my life.

Sorry about that! Several people have had strong adverse reactions to my discussions of suffering. On the whole I think it's better for people to be exposed to such ideas, although some particular people may be more debilitated than motivated by thinking about extreme suffering.

I notice a trend for news and Google results to be increasingly scrubbed of violent and gory content. For example, in 2021 there was a news story about Darrell E. Brooks Jr. driving an SUV into a crowd of people, and the footage of the incident -- shown from far away -- was blurred and had no audio. Viewers couldn't really see what happened at all. Such censorship is plausibly good for viewers' mental health, and it's very likely good for advertisers' brand safety. But it's plausibly bad for the victims of violence if it reduces motivation to address their suffering.

"Darkness cannot drive out darkness: only light can do that. Hate cannot drive out hate: only love can do that."

That's a great quote. It's applicable to so many misguided and harmful actions in the world today, from Israel's flattening of entire neighborhoods in Gaza to the vitriolic rhetoric used by some parts of the SJW community. I suspect the world would be more peaceful and advocates would be more effective if they embraced King's "love your enemies" approach more on the margin. That said, there are some cases where darkness and hate lead to love or at least a reduction in hate:

  • Stockholm syndrome, abusive relationships, loyalty to oppressive dictators, worship of gods who threaten hell, etc. The general phenomenon is that a powerful alpha male is asserting supremacy through ruthlessness, and in that case, it may be more adaptive for primate brains to accept defeat and love Big Brother than to keep fighting him.
  • If censorship is severe enough, it may be possible to reduce the exposure of future generations to certain hateful ideas. Also, if you murder enough of your enemies that their population becomes small, you can decrease the number of them who remain to hate you.

I haven't looked into sheep and goats specifically, but I imagine their wild-animal impacts would be fairly similar to those for cattle. Unfortunately, they're smaller, so there's more suffering and death per kg than for cattle, but they're still much better than chicken/fish/etc.
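As a back-of-the-envelope sketch of the per-kg point (the edible-yield figures below are rough illustrative guesses of mine, not careful estimates):

```python
# Deaths per kg of meat scale roughly inversely with edible yield per animal,
# which is why smaller animals imply more deaths (and more farmed life-years)
# per kg consumed. Yields here are rough illustrative guesses.

edible_kg_per_animal = {"cattle": 220.0, "sheep/goats": 20.0, "chickens": 1.5}

for animal, kg in edible_kg_per_animal.items():
    print(f"{animal}: ~{1 / kg:.3f} animals killed per kg of meat")
```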

Dairy is another lower-impact option, and I guess a lot of Hindus are ok with dairy.

there's no sense in which asking dumb questions can plausibly have very significant downsides for the world (other than opportunity costs)

I think the opportunity costs are the key issue. :) There's a reason that companies use FAQs and automated phone systems to reduce the number of customer-support calls they receive. There have been several times in my life when I've asked questions of someone who was sort of busy, and it was clear the person was annoyed.

At one of my previous employers (not an EA organization), I asked a lot of questions during meetings, which apparently other people didn't like, because it was distracting. During one meeting, people didn't even bother to answer my questions. A few weeks later, my boss told me that he overheard someone saying: "Don't invite Brian to this meeting; he'll slow us down with too many questions." I was accustomed to a school environment in which teachers would always say "There's no such thing as a dumb question", and I didn't realize that people outside of school may not feel the same way.

The situation might be better among altruists. I think one reason people at that organization didn't want to answer my questions was that they had no career incentive to do so, since they were evaluated based on what they individually produced, not based on helping coworkers. That said, lack of time can still apply in EA contexts. I often fail to reply to people who ask me questions, not because I think the questions are dumb but just because I'm slow and lazy and get asked questions frequently.

Thanks! It's worth noting that the rainforest and Cerrado numbers in that piece are very rough guesses based on limited and noisy data. As one friend of mine would say, I basically pulled those numbers out of my posterior (...distribution). :) Also, even if that comparison is accurate, it's just for one region of the world; it may not apply to the difference between, e.g., temperate forests and grasslands. All of that said, my impression is that crop fields do tend to have fewer mammals and birds than wild grassland or forest. For birds, see the screenshot of a table in this section.

Great post! In addition to biases that increase antagonism, there are also biases that reduce antagonism. For example, the fact that most EAs see each other as friends can blind us to how opposed we may actually be on some important questions. Plausibly this is a good thing, because friendship is a form of cooperation that tends to work in the real world. But I think friendship does make us less likely to notice or worry about large value differences.

As an example, it's plausible to me that the EA movement overall somewhat increases expected suffering in the far future, though there's huge uncertainty about that. Because EAs tend to be friends with one another and admire each other's intellectual contributions, most negative-utilitarian EAs don't worry much about this possibility and don't seem to, e.g., try to avoid promoting EA to new people out of concern that doing so may be net bad. It's much easier to just get along with your friends and not rock the boat, especially when people with values opposed to yours are the "cool kids" in EA. Overall, I think this friendliness is good, and it would be worse if EAs with different values spent more time trying to fight each other. I myself don't worry much about whether helping the EA movement is net good by my values, in part because it seems more cooperative not to worry about it too much. But I think it's sensible to at least check every once in a while that you're not massively harming your own values or being taken advantage of.

I think a lot of this comes down to one's personality. If you're extremely agreeable and conflict-averse, you probably shouldn't update even more in that direction from Magnus's article. Meanwhile, if you tend to get into fights a lot, you probably should lower your temperature, as Magnus suggests.

Thanks! Good to know. If you're just buying eyeballs, then there's roughly unlimited room for more funding (unless you were to get a lot bigger), so presumably there'd be less reason to expect funging dynamics. (And I assume you don't receive much, if any, money from big EA animal donors anyway.)
