Comment author: rohinmshah  (EA Profile) 03 January 2018 03:44:51AM 4 points [-]

Crucial Premise: Necessarily, the more someone is willing to pay for a good, the more welfare they get from consuming that good.

It seems to me that this premise as you've stated it is in fact true. The thing that is false is a stronger statement:

Strengthened Premise: Necessarily, if person A is willing to pay more for a good than person B, then person A gets more welfare from that good than person B.

For touting/scalping, you also need to think about the utility of people besides Pete and Rich -- for example, the producers of the show and the scalper (who is trading his time for money). Then there are also more diffuse effects: if tickets go for $1000 instead of $50, there will be more Book of Mormon performances in the future, since it is more lucrative, and more people can watch it. The main benefit of markets comes through these sorts of effects.
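To make the interpersonal failure vivid, here is a minimal numeric sketch. It assumes log utility of money, a standard but purely illustrative choice, and made-up wealth figures: two people get exactly the same welfare from a ticket, yet their willingness to pay differs by three orders of magnitude.

```python
import math

def willingness_to_pay(wealth, welfare_gain):
    # With log utility of money, WTP x solves ln(wealth - x) + v = ln(wealth),
    # giving x = wealth * (1 - exp(-v)): proportional to wealth for a fixed v.
    return wealth * (1 - math.exp(-welfare_gain))

v = 0.05  # identical welfare gain from the ticket for both people (illustrative)
for name, wealth in [("Pete", 1_000), ("Rich", 1_000_000)]:
    print(f"{name}: WTP = ${willingness_to_pay(wealth, v):,.2f}")

# Pete: WTP = $48.77
# Rich: WTP = $48,770.58
```

Within one person, higher WTP still tracks higher welfare (the Crucial Premise holds); across people it mostly tracks wealth, which is exactly how the Strengthened Premise fails.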

Comment author: Halstead 03 January 2018 11:51:00AM 0 points [-]

Thanks, yes, my formulation was meant in the interpersonal way you suggest. Your formulation is more precise, so it is preferable, and I will update the main text.

I agree that the effects you mention are also important.


Economics, prioritisation, and pro-rich bias  

  tl;dr: Welfare economics is highly relevant to effective altruism, but tends to rely on a flawed conception of social welfare, which holds that the more someone is willing to pay for a good, the more utility or welfare they would get from consuming that good. (I use ‘welfare’ and... Read More

We're hiring! Founders Pledge is seeking a new researcher

  In brief: We're hiring a researcher with quantitative skills. Initial salary is £30k, but negotiable upwards for exceptional candidates. You will mainly work on longer reports and shorter briefs on effective donation opportunities for our pledgers.    *** Founders Pledge is a global community of founders and investors who... Read More
Comment author: Halstead 03 December 2017 07:54:03PM *  8 points [-]

One factor pushing against climate change being a >0.1% existential risk is that >6 degrees of warming would most probably take 150+ years to happen, because the oceans would absorb a large portion of the warming generated. By that time, it's plausible that we will have developed artificial superintelligence, which will either (a) have killed us already or (b) enable us to solve the climate change problem by developing new forms of clean energy and carbon dioxide removal technology. Indeed, we are likely to get the tech mentioned in (b) even if we don't develop artificial superintelligence. This suggests that inferring existential risk from estimates of climate sensitivity overstates the existential risk of global warming, because those estimates include warming over 100+ year timescales.

This suggests most of the risk comes from abrupt runaway irreversible warming. It's not clear what the risk of that is.

Comment author: Halstead 23 November 2017 07:05:10PM *  4 points [-]

Thanks for this. Founders Pledge has recently completed a report on mental health, and we found that Strong Minds' cost-effectiveness has improved markedly due to significant declines in cost. We have it at around the $220/DALY mark. Depending on how much you think DALYs underweight mental health, this makes Strong Minds look highly cost-effective. There is, I should note, large uncertainty, and their intervention is less well-tested than other global health interventions.

The report will be on our website in the next few months, but I'm happy to share it with anyone interested.

As we're recommending Strong Minds, this will probably mean that significant funds will be directed to them in the next year or so.

Comment author: Gregory_Lewis 30 October 2017 07:14:21PM 1 point [-]

Hello John (and Michael - never quite sure how to manage these sorts of 'two-to-one' replies).

I would reject epistemic chauvinism. In cases where you disagree on P with your epistemic peer, and you take some set of object-level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of those reasons rather than demote your interlocutor from epistemic peerhood. I'd want to support that using some set of considerations about [2]: among others, the reference class where you demote people from peerhood (or superiority) on disagreement goes predictably much worse than the 'truly modest' one where you downgrade your confidence in the reasons that lead you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from the disagreement that mathematicians are all fools).

For the supervaluation case, I don't know whether it is the majority view on vagueness, but pretend it were a consensus. I'd say the right thing in such a situation is to be a supervaluationist yourself, even if it appears to you that it is false. Indicting apparent peers/superiors for object-level disagreement involves retrenchment, and so seems to go poorly.

In the AI case, I'd say you'd have to weigh up (which is tricky) degrees of expertise re. AI. I don't see it as a cost for my view to update towards the more sceptical AI researchers even if you don't think the object level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with the non-expert opinion.

In essence, the challenge modesty would make is, "Why do you back yourself to have the right grasp on the object-level reasons?" Returning to a supervaluation consensus, it seems one needs to offer a story as to why the object-level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be the case they're all going systemically wrong (and so you should demote them), but it seems more likely that you have mistaken the object-level balance of reason. Using the former as an assumption looks overconfident.

What I take Sumner to be saying is that he does adopt the agnosticism you suggest he should, maybe something like this:

My impression is that my theory is right, but I don't believe my impression is more likely to be right than Paul Krugman's (or others'). So if you put a gun to my head and I had to give my best guess on economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge.

Of course, maybe you can investigate the object-level reasons, per Michael's example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find one of them isn't an epistemic peer of the other (or of him). Yet in cases where Oliver forms his own view about the object-level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for parallel reasons to the original case where he was an outsider (suppose we imagine Penelope, who is an outsider to this conversation, etc.).

Comment author: Halstead 01 November 2017 05:19:39PM 1 point [-]

Hi Greg. So your view is that it's OK to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn't apply to me. And on your view their verdict on p could never be evidence of their bias. This last seems wrong in many cases.

Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can't both not be true; it's wrong to torture people for fun, etc.). Suppose I and some other equally intelligent person have been thinking about P for an equal amount of time. I learn that she believes that not-P. It seems entirely appropriate for me to demote her in this case. If you deny this, suppose now we are deciding on some proposition Q and I know only that she got P wrong. As you would agree, her past performance (on P) is pro tanto reason to demote with respect to Q. How can it then not also be pro tanto reason to demote with respect to P? [Aside: the second example of an obvious truth I gave is denied by supervaluationists.] In short, how could epistemic peerhood not be in part determined by performance on the object-level reasons?
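One way to see why a miss on P bears on peerhood is to treat the other person's reliability as an unknown and update on the miss. A minimal Bayes sketch; the prior and the error rates are invented for illustration, not taken from the discussion:

```python
# Prior: 90% confident the other person is a reliable reasoner.
p_reliable = 0.9
# Illustrative error rates on obvious truths like P.
p_wrong_given_reliable = 0.05
p_wrong_given_unreliable = 0.5

# Observation: she got P wrong. Update via Bayes' rule.
p_wrong = (p_reliable * p_wrong_given_reliable
           + (1 - p_reliable) * p_wrong_given_unreliable)
posterior_reliable = p_reliable * p_wrong_given_reliable / p_wrong

print(f"P(reliable | wrong about P) = {posterior_reliable:.2f}")  # ~0.47
```

On these numbers, a single miss on an obvious truth roughly halves the estimate of her reliability, and that lowered estimate bears on Q and on P alike.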

In some of these cases, it also seems that in order to justifiably demote, one doesn't need to offer an account of why the other party is biased that is independent of the object-level reasons.

A separate point: it seems that today and historically there are and have been pockets of severe epistemic error. E.g., in the 19th century, almost all of the world's most intelligent philosophers thought that idealism was true; a large chunk of political philosophers believe that public reason is true; I'm sure there are lots of examples outside philosophy.

In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There's still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.

Comment author: Halstead 29 October 2017 11:51:52PM *  6 points [-]

Hi Greg, thanks for this post; it was very good. I thought it would help future discussion to separate the following claims, between which your argument is ambiguous:

  1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
  2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.

In places, you seem to identify modesty with 1, in others with the conjunction of 1 and a subset of the claims in 2. 1 doesn't seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can't reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.
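Claim 1 has a standard formal reading: linear pooling with equal weights over yourself and everyone you identify as a peer. A minimal sketch with made-up credences:

```python
def equal_weight(credences):
    # The equal weight view as linear pooling: average your credence on p
    # with those of everyone you count as an epistemic peer on p.
    return sum(credences) / len(credences)

my_credence = 0.9
peer_credences = [0.3, 0.5]  # illustrative
print(equal_weight([my_credence] + peer_credences))  # 0.5666...
```

Note that the output depends entirely on who makes it into the peer list, which is why the claims in 2, that peers exist and can be reliably identified, do the real work.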

This noted, I wonder how different the conjunction of 1 and 2 is to epistemic chauvinism. It seems to me that I could accept 1 and 2, but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I've read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I receive the evidence that they disagree. Very good, I reply, they are not my epistemic peers with respect to this question for object-level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons; they are not that I believe that supervaluationism is false. Generally, the fact that I believe p is usually not my reason to believe that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.

In this spirit, I find Scott Sumner's quote deeply strange. If he thinks that "there is no objective reason to favor my view over Krugman's", then he shouldn't believe his view over Krugman's (even though he (Sumner) does). If I were in Sumner's shoes after reasoning about p and reading the object-level reasons about p, then I would EITHER become agnostic or demote Krugman from my epistemic peer group.

Comment author: Halstead 30 October 2017 12:27:44AM *  7 points [-]

I thought I'd offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I've heard from AI researchers have been very weak and haven't shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.

With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject of AI risk. If you accept this, then it's hard to see what difference there is between immodesty and modesty.

Comment author: Habryka 26 October 2017 10:18:01PM *  22 points [-]

As a relevant piece of data:

I looked into the 4 sources you cite in your article as showing that diverse teams are more effective, and found the following:

  • One didn't replicate, and the replication, which you link to in your article, found the opposite effect with a much larger sample size

  • One is a Forbes article that cites a variety of articles; the two I looked into didn't say at all what the Forbes article claimed, with the articles usually reporting "we found no significant effects"

  • One study you cited directly found the opposite of the result you seemed to imply, with its results table looking like this:

https://imgur.com/a/dRms0

And the results section of the study explicitly says:

"whereas background diversity displayed a small negative, yet nonsignificant, relationship with innovation (.133)."

(the thing that did have a positive relationship was "job-related diversity", which is very much not the kind of diversity the top-level article is talking about)

  • The only study you cited that did seem to find some positive effects was one with the following results table:

https://imgur.com/a/tgS6q

It found some effects on innovation, though overall it found very mixed effects of diversity, with its conclusion stating:

"Based on the results of a series of meta-analyses, we conclude that cultural diversity in teams can be both an asset and a liability. Whether the process losses associated with cultural diversity can be minimized and the process gains be realized will ultimately depend on the team’s ability to manage the process in an effective manner, as well as on the context within which the team operates."

Comment author: Halstead 27 October 2017 10:42:20AM 16 points [-]

I find this troubling. If a small sample of the evidence cited has been misreported or is weak, this seems to cast serious doubt on the evidence cited in the rest of the piece. Also, my prior is that pointing to lots of politically amenable social psychology research is a big red flag.

Comment author: Halstead 18 October 2017 04:49:44PM 1 point [-]

Thanks for posting this. I agree this is a hugely neglected issue. It would be good to see a more coherent and sustained effort to reduce this problem.

Anyone wanting to learn more should read Dan Kahan.
