Comment author: Buck 12 May 2018 06:40:44PM 3 points [-]

Two points about prediction markets:

  • I think it's interesting that in the limit, prediction market prices don't converge to probabilities; they converge to risk-adjusted prices.
  • I think the strongest case for prediction markets is that they're unbiased and hard to manipulate in the limit. See this cached old blog post. Your post doesn't take that into account.
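The first bullet can be illustrated with a toy model (the log-utility trader and all of the numbers are my own assumptions, not from the comment): a risk-averse trader's indifference price for a $1 binary contract is a risk-neutral probability, which only equals their subjective probability when the event is uncorrelated with their wealth.

```python
# Sketch: a trader with marginal utility u'(w) is indifferent to a small
# bet on an event at price q = p*u'(w1) / (p*u'(w1) + (1-p)*u'(w0)),
# where w1, w0 are their wealth if the event does / doesn't happen.

def indifference_price(p, w_event, w_no_event, marginal_utility):
    mu1 = marginal_utility(w_event)
    mu0 = marginal_utility(w_no_event)
    return p * mu1 / (p * mu1 + (1 - p) * mu0)

log_mu = lambda w: 1.0 / w  # marginal utility of log utility

# A hypothetical recession contract: the trader believes p = 0.2, but a
# recession also halves their wealth, so dollars in that state are worth more.
p = 0.2
price = indifference_price(p, w_event=50_000, w_no_event=100_000,
                           marginal_utility=log_mu)
print(round(price, 3))  # 0.333 > 0.2: the price overstates the probability
```

With state-independent wealth the two marginal utilities cancel and the price recovers the probability exactly; the divergence only appears for events that hedge (or amplify) traders' other risks.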
Comment author: RandomEA 03 May 2018 06:30:24AM *  9 points [-]

The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:

Type 1:

  1. Causes: global health, farm animal welfare

  2. Moral patienthood is hard to seriously dispute

  3. Evidence is more direct (RCTs, corporate pledges)

  4. Charity evaluators exist (because evidence is more direct)

  5. Earning to give is a way to contribute

  6. Direct work can be done by people with general competence

  7. Economic reasoning is more important (partly due to donations being more important)

  8. More emotionally appealing (partly due to being more able to feel your impact)

  9. Some public knowledge about the problem

  10. More private funding and a larger preexisting community

Type 2:

  1. Causes: AI alignment, biosecurity

  2. Moral patienthood can be plausibly disputed (if you're relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)

  3. Evidence is more speculative (making prediction more important)

  4. Charity evaluation is more difficult (because impact is harder to measure)

  5. Direct work is the way to contribute

  6. Direct work seems to benefit greatly from specific skills/graduate education

  7. Game theory reasoning is more important (of course, game theory is technically part of economics)

  8. Less emotionally appealing (partly due to being less able to feel your impact)

  9. Little public knowledge about the problem

  10. Less private funding and a smaller preexisting community

Comment author: Buck 04 May 2018 06:00:51PM 3 points [-]

I don't think my experience matches this split. For example, I don't think it's obvious that the causes you specify match the attributes in points 2, 5, and 6.

Comment author: Buck 19 December 2017 12:00:53AM *  3 points [-]

TL;DR: Don’t worry about any of this, just treat world-splitting the same way you treat classical randomness.

I don’t want to give a full explanation right now, but I don’t think you should be very worried about this.

I think the right way to take many-worlds into account as a utilitarian is to say that your utility over the universal wavefunction is just a weighted sum over Everett branches, with each branch having weight according to the Born rule. If you take this approach, then it adds up to normality and you don’t care about the difference between classical dice and quantum dice.

If you take this approach, then none of the issues you mention come up.
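The Born-weighted approach above can be sketched concretely (the quantum-die setup is my own toy example): if utility over the wavefunction is a Born-weighted sum over branches, a "quantum die" and a classical die with the same outcome probabilities yield identical expected utility.

```python
import math

def born_weights(amplitudes):
    """Squared-magnitude weight of each branch, normalized."""
    norms = [abs(a) ** 2 for a in amplitudes]
    total = sum(norms)
    return [n / total for n in norms]

def expected_utility(weights, utilities):
    return sum(w * u for w, u in zip(weights, utilities))

# Six branches with equal amplitude 1/sqrt(6): the Born weights are 1/6
# each, exactly the classical probabilities of a fair die.
amps = [1 / math.sqrt(6)] * 6
utilities = [1, 2, 3, 4, 5, 6]

quantum = expected_utility(born_weights(amps), utilities)
classical = expected_utility([1 / 6] * 6, utilities)
print(round(quantum, 6), round(classical, 6))  # 3.5 3.5
```

This is the "adds up to normality" claim in miniature: the weighting scheme makes quantum and classical randomness interchangeable for a utilitarian.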

If you instead believe that every Everett branch matters equally regardless of its measure (which is basically what is required for the quantum suicide argument to go through), then your morality ends up being totally incoherent, because the universal wavefunction is nonzero everywhere that isn't impossible for a geometric reason (e.g. fermionic exclusion: the wavefunction always has zero amplitude for any configuration with two electrons with the same spin in the same position).

Either way, I don't think there's any good argument that many-worlds implies any of the conclusions you mentioned.

I know a reasonable number of people who have a good understanding of anthropics and QM, and I think all of them agree with me that many-worlds adds up to normality in this way.

You might be interested in looking at theories of anthropics like UDASSA.

Comment author: Peter_Hurford  (EA Profile) 18 December 2017 05:21:22PM 1 point [-]

Thanks Carl, it's good to know that there are RFMF opportunities in topping up AI grants.

My reasoning for not donating to AI projects right now is based much less on an RFMF argument and more on not knowing enough about the space. I think I know enough about opportunities in global poverty, animal welfare, and EA community building to recommend projects there with confidence, but not for AI. I expect it would take me a good deal of time to develop the relevant expertise in AI to consider it properly. I have thought about working to develop that expertise, but so far I have not prioritized doing so.

Comment author: Buck 18 December 2017 09:31:14PM 2 points [-]

I don't understand how that logic leads to thinking it's a good idea to donate to the causes you're thinking of donating to. Donating to a cause area because you can identify good projects within it seems like the streetlight effect.

If you think that AI stuff is plausibly better, shouldn't you either want to learn more about it or enter a donor lottery so that it's more cost-effective for you to learn about it?

Comment author: thebestwecan 28 October 2017 02:34:35PM 0 points [-]

Another (possibly bad, but want to put it out there) solution is to list names of people who downvoted. That of course has downsides, but it would have more accountability, especially when it comes to my suspicion that it's a few people doing a lot of the downvoting against certain people/ideas.

Another is to have downvotes 'cost' karma, e.g. if you have 500 total karma, that allows you to make 50 downvotes.

Comment author: Buck 29 October 2017 12:51:52AM 1 point [-]

This would make it harder for people to downvote on topics like this one where it's really risky to admit disagreeing with people.

Comment author: xccf 28 October 2017 01:58:36AM 1 point [-]

[Edit: I appreciate that I should generally behave as though my community will behave well, and as such I should not have requested that people upvote if they find the post helpful. I want to be sure to flag in this response though the incredibly poor way in which people who disagree with claims and arguments in favor of diversity and inclusion are using their votes, in comments and on the whole post.]

Thanks.

I'm also finding the voting in this thread frustrating.

I appreciate your suggestions a lot, but caution you to be careful of your own assumptions. For instance, I never suggested that a Diversity & Inclusion Officer should be the person most passionate about the role instead of most smart about it.

Sorry about that.

To emphasize though, so it doesn't get lost behind those critical thoughts: I thoroughly appreciate the suggestions you've contributed here.

Glad to hear it :)

[Edit: Apologies for some excessive editing. I readily acknowledge that in an already a hostile environment, my initial reaction to criticism regarding an important issue that is causing a lot of harm is too defensive.]

I'm an excessive editor too; I'm not sure it's something you need to apologize for :)

Comment author: Buck 28 October 2017 06:14:45AM 1 point [-]

xccf, I'd be interested to hear examples of comments which you think were excessively downvoted.

Comment author: xccf 28 October 2017 12:51:26AM 4 points [-]

I think you're overstating your case.

I don't think it is, at all, any more than Daryl Bem's research updates me towards thinking ESP is real.

This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don't believe in ESP is because your prior for ESP is very low. But I think hearing about Bem's research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.

if you think that the scientists would have published these papers regardless of their truth

I don't think social psychologists are that dishonest. Even 36% replicability suggests some relationship between paper-publishing and truth.

Furthermore, I think the fact that social psychologists are so liberal should cause some update in the direction that studying humans causes you to realize liberal views about human nature are correct.

Comment author: Buck 28 October 2017 01:16:45AM 3 points [-]

This strikes me as a misunderstanding of how Bayesian updates work. The reason you still don't believe in ESP is because your prior for ESP is very low. But I think hearing about Bem's research should still cause you to update your estimate in favor of ESP a tiny amount. In a world with ESP, Bem finds it easier to discover ESP effects.

I think you slightly misunderstand me. What I'm saying is that Bem's work isn't really a Bayesian update for me, because I think Bem is approximately as likely to publish papers in the world where (extremely weak) ESP works as the worlds where it doesn't. The strength of my prior doesn't feel relevant to me.

I think you're right that I slightly overstated my case.
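The disagreement can be put in Bayesian terms (the likelihood numbers below are purely illustrative, not anyone's stated estimates): the size of the update is governed by the likelihood ratio P(publish | ESP) / P(publish | no ESP), not by the prior.

```python
# posterior odds = prior odds * likelihood ratio

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    odds = (prior / (1 - prior)) * (p_evidence_given_h / p_evidence_given_not_h)
    return odds / (1 + odds)

prior = 1e-6  # tiny prior that (extremely weak) ESP is real

# xccf's picture: publication is somewhat more likely if ESP is real,
# so the evidence nudges the posterior upward.
print(posterior(prior, 0.9, 0.6))   # ~1.5e-6, a small but real update

# Buck's picture: Bem publishes either way, so the likelihood ratio is
# ~1 and the posterior barely moves, whatever the prior was.
print(posterior(prior, 0.9, 0.89))  # ~1.01e-6, essentially no update
```

On the second picture the strength of the prior genuinely is irrelevant: with a likelihood ratio of 1, the posterior equals the prior exactly.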

Comment author: xccf 28 October 2017 12:16:57AM 3 points [-]

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument. I can understand how it might be frustrating for people to tell you you need to up your paper scrutinizing game while you are busy trying to respond to an entire thread full of people expressing disagreement.

Comment author: Buck 28 October 2017 12:35:03AM *  5 points [-]

I am disinclined to be sympathetic when someone's problem is that they posted so many bad arguments all at once that they're finding it hard to respond to all the objections.

Comment author: MichaelPlant 28 October 2017 12:03:40AM 0 points [-]

I think we should stop having downvotes on the EA Forum

I agree with this. Contra Buck, I think people use downvotes to express things they ultimately disagree with, rather than because they genuinely find someone's comments 'unhelpful', i.e. malicious, lazy, or something like that. I might also prompt people to say what they didn't like with the other person's vote, rather than just voting anonymously (and snarkily) with karma points.

Comment author: Buck 28 October 2017 12:06:41AM 8 points [-]

I might also prompt people to say what they didn't like with the other person's vote, rather than just voting anonymously (and snarkily) with karma points.

The problem is that this takes a lot of time, and people with good judgement are more likely to have a high opportunity cost of time; you want to make it as cheap as possible for people with good judgement to discourage bad comments; I think that the current downvoting system is working pretty well for that purpose. (One suggestion that's better than yours is to only allow a subset of people (perhaps those with over 500 karma) to downvote; Hacker News for example does this.)

Comment author: Buck 27 October 2017 11:25:04PM 21 points [-]

Even after clarification, your sentence is misleading. The true thing you could say is "Among outsiders to projects, women are more likely to have their contributions accepted than men. Both men and women are less likely to have their contributions accepted when their genders are revealed; the effect was measured to be a percentage point different between the genders and may or may not be statistically significant. There are also major differences between the contribution patterns of men and women."

As a side note, I find the way you're using social science quite frustrating. You keep claiming that social science supports many of your particular beliefs, and then other people keep digging into the evidence and pointing out the specific reason that the evidence you've presented isn't very convincing. But it takes a lot of time to rebut all of your evidence that way, much more time than it takes for you to link to another bad study.

Comment author: Buck 27 October 2017 11:37:58PM *  18 points [-]

This is similar to an issue going on in another thread, where people feel you're cherrypicking results rather than sampling randomly in a way that will paint an accurate picture. Perhaps this dialogue can help to explain the concerns that others have expressed:

Person One: Here are 5 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption.

Person Two: Actually, if you do a comprehensive survey of the literature, you'll find 3 studies showing that coffee causes cancer, 17 showing no effect, and 3 showing that coffee prevents cancer. On balance there's no stronger evidence that coffee causes cancer than that it prevents it, and in fact it probably has no effect.

Person One: Thanks for the correction! [Edits post to say: "Here are 3 studies showing that coffee causes cancer, which suggests we should limit our coffee consumption."]

Person Two: I mean... that's technically true, but I don't feel the problem is solved.
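The dialogue's point can be made with a toy simulation (entirely my own construction, not from the thread): even when the true effect is zero, a literature of noisy studies contains some "positive" results, so quoting only those misleads, while the full tally does not.

```python
import random

random.seed(0)
studies = [random.gauss(0, 1) for _ in range(23)]  # true effect = 0

positive = [s for s in studies if s > 1.0]    # "coffee causes cancer"
negative = [s for s in studies if s < -1.0]   # "coffee prevents cancer"
null = [s for s in studies if -1.0 <= s <= 1.0]

# Cherrypicking reports only the positive count; a comprehensive survey
# reports all three counts, whose balance reflects the (null) truth.
print(len(positive), len(null), len(negative))
```

Person One's edited claim is "technically true" in exactly this sense: the positive studies exist, but stripped of the surrounding tally they carry the wrong conclusion.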
