Comment author: Halstead 03 December 2017 07:54:03PM *  7 points [-]

One factor pushing against climate change being a >0.1% existential risk is that >6 degrees of warming would most probably take 150+ years to happen, because the oceans would absorb a large portion of the warming generated. By that time, it's plausible that we will have developed artificial superintelligence, which will either (a) have killed us already or (b) enable us to solve the climate change problem by developing new forms of clean energy and carbon dioxide removal technology. Indeed, we are likely to get the tech mentioned in (b) even if we don't develop artificial superintelligence. This suggests that inferring the existential risk of global warming from estimates of climate sensitivity overstates it, because those estimates include warming over 100+ year timescales.

This suggests most of the risk comes from abrupt runaway irreversible warming. It's not clear what the risk of that is.

Comment author: Halstead 23 November 2017 07:05:10PM *  4 points [-]

Thanks for this. Founders Pledge has recently completed a report on mental health, and we found that Strong Minds' cost-effectiveness has improved substantially because its costs have fallen significantly. We have it at around the $220/DALY mark. Depending on how much you think DALYs underweight mental health, this makes Strong Minds look highly cost-effective. There is, I should note, large uncertainty, and their intervention is less well-tested than other global health interventions.

The report will be on our website in the next few months, but I'm happy to share it with anyone interested.

As we're recommending Strong Minds, this will probably mean that significant funds will be directed to them in the next year or so.

Comment author: Gregory_Lewis 30 October 2017 07:14:21PM 1 point [-]

Hello John (and Michael - I'm never quite sure how to manage these sorts of 'two to one' replies)

I would reject epistemic chauvinism. In cases where you disagree on P with an epistemic peer, and you take some set of object-level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of those reasons rather than demote the other person from epistemic peerhood. I'd want to support that with considerations like [2]: among others, the reference class in which you demote people from peerhood (or superiority) on disagreement goes predictably much worse than the 'truly modest' one in which you downgrade your confidence in the reasons that led you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from disagreement that mathematicians are all fools).

For the supervaluation case, I don't know whether it is the majority view on vagueness, but suppose it were a consensus. I'd say the right thing in such a situation is to be a supervaluationist yourself, even if it appears to you to be false. Indicting apparent peers/superiors for object-level disagreement involves retrenchment, and so seems to go poorly.

In the AI case, I'd say you'd have to weigh up (which is tricky) degrees of expertise regarding AI. I don't see it as a cost for my view to update towards the more sceptical AI researchers even if you don't think the object-level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with non-expert opinion.

In essence, the challenge modesty would make is: "Why do you back yourself to have the right grasp of the object-level reasons?" Returning to a supervaluationist consensus, it seems one needs to offer a story as to why the object-level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be that they're all going systematically wrong (and so you should demote them), but it seems more likely that you have mistaken the object-level balance of reasons. Taking the former as an assumption looks overconfident.

What I take Sumner to be doing is adopting the agnosticism you suggest he should, maybe something like this:

My impression is that my theory is right, but I don't believe my impression is more likely to be right than Paul Krugman's (or others'). So if you put a gun to my head and I had to give my best guess on the economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge.

Of course, maybe you can investigate the object-level reasons, per Michael's example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find that one of them isn't an epistemic peer of the other (or of him). Yet in cases where Oliver forms his own view about the object-level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for reasons parallel to the original case where he was an outsider (suppose we imagine Penelope, who is an outsider to this conversation, etc.).

Comment author: Halstead 01 November 2017 05:19:39PM 1 point [-]

Hi Greg. So, your view is that it's OK to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn't apply to me. And on your view, their verdict on p could never be evidence of their bias. This last claim seems wrong in many cases.

Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can't both not be true; it's wrong to torture people for fun, etc.). Some other equally intelligent person and I have been thinking about P for an equal amount of time. I learn that she believes not-P. It seems entirely appropriate for me to demote her in this case. If you deny this, suppose now that we are deciding on some proposition Q and I know only that she got P wrong. As you would agree, her past performance (on P) is pro tanto reason to demote her with respect to Q. How can it then not also be pro tanto reason to demote her with respect to P? [Aside: the second example of an obvious truth I gave is denied by supervaluationists.] In short, how could epistemic peerhood not be determined in part by performance on the object-level reasons?

In some of these cases, it also seems that in order to justifiably demote, one doesn't need to offer an account of why the other party is biased that is independent of the object-level reasons.

A separate point: it seems that, today and historically, there are and have been pockets of severe epistemic error. E.g. in the 19th century, almost all of the world's most intelligent philosophers thought that idealism was true; a large chunk of political philosophers believe that public reason is true; I'm sure there are lots of examples outside philosophy.

In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There's still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.

Comment author: Halstead 29 October 2017 11:51:52PM *  6 points [-]

Hi Greg, thanks for this post, it was very good. I thought it would help future discussion to separate these claims, which leave your argument ambiguous:

  1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
  2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.

In places, you seem to identify modesty with 1; in others, with the conjunction of 1 and a subset of the claims in 2. 1 doesn't seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can't reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.

This noted, I wonder how different the conjunction of 1 and 2 is from epistemic chauvinism. It seems to me that I could accept 1 and 2, but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I've read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I receive the evidence that they disagree. Very good, I reply, they are not my epistemic peers with respect to this question, for object-level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons; they are not that I believe that supervaluationism is false. Generally, the fact that I believe p is not my reason to believe that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.

In this spirit, I find Scott Sumner's quote deeply strange. If he thinks that "there is no objective reason to favor my view over Krugman's", then he shouldn't believe his view over Krugman's (even though he (Sumner) does). If I were in Sumner's shoes after reasoning about p and reading the object-level reasons about p, then I would either become agnostic or demote Krugman from my epistemic peer group.

Comment author: Halstead 30 October 2017 12:27:44AM *  7 points [-]

I thought I'd offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I've heard from AI researchers have been very weak and haven't shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.

With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject. If you accept this, then it's hard to see what difference there is between immodesty and modesty.

Comment author: Habryka 26 October 2017 10:18:01PM *  22 points [-]

As a relevant piece of data:

I looked into the 4 sources you cite in your article as evidence that diversity improves team effectiveness, and found the following:

  • One didn't replicate, and the replication (which you link to in your article) found the opposite effect with a much larger sample size
  • One is a Forbes article that cites a variety of articles; I looked into two of these and they did not say what the Forbes article claimed they say, usually reporting "we found no significant effects"

  • One study you cited directly found the opposite of the result you seemed to imply, with its results table looking like this:

https://imgur.com/a/dRms0

And the results section of the study explicitly says:

"whereas background diversity displayed a small negative, yet nonsignificant, relationship with innovation (.133)."

(the thing that did have a positive relation was "job-related diversity", which is very much not the kind of diversity the top-level article is talking about)

  • The only study you cited that did seem to find some positive effects was one with the following results table:

https://imgur.com/a/tgS6q

This found some effects on innovation, though overall it found very mixed effects of diversity; its conclusion states:

"Based on the results of a series of meta-analyses, we conclude that cultural diversity in teams can be both an asset and a liability. Whether the process losses associated with cultural diversity can be minimized and the process gains be realized will ultimately depend on the team’s ability to manage the process in an effective manner, as well as on the context within which the team operates."

Comment author: Halstead 27 October 2017 10:42:20AM 16 points [-]

I find this troubling. If a small sample of the evidence cited has been misreported or is weak, this seems to cast serious doubt on the evidence cited in the rest of the piece. Also, my prior is that pointing to lots of politically amenable social psychology research is a big red flag.

Comment author: Halstead 18 October 2017 04:49:44PM 1 point [-]

Thanks for posting this. I agree this is a hugely neglected issue. It would be good to see a more coherent and sustained movement towards reducing this problem.

Anyone wanting to learn more should read Dan Kahan.

Comment author: Geuss 15 September 2017 10:11:44AM *  -4 points [-]

"This does not mean that capitalism is bad because capitalism is not conceptually tied to selfishness. The question of which system of economic ownership we ought to have is entirely separate to the question of which ethos we ought to follow."

This is almost solipsistic - it sounds like you're denying that a complex social world exists out there with powerful and entrenched systems of causation. Only for the most remote, cerebral idealist are these two things possibly separate. What's the point of this kind of philosophy?

Comment author: Halstead 15 September 2017 12:25:03PM *  1 point [-]

I'm not making a claim about what capitalism causally produces. I think that is fairly clear from the fact that I say that I am making a conceptual distinction. These are separate questions:

  1. Capitalism is defined as a system in which people are selfish.
  2. Capitalism causally makes people more selfish than socialism.
  3. Capitalism is all-things-considered the best system.

I am arguing against 1, and saying that we need different information to evaluate its truth than we need for 2 and 3. I'm not denying anything about social systems causing different motivations in people; it is obviously true that they do.

The point: you don't get to argue against capitalism by suggesting that, as a conceptual matter, it is all about selfishness. Rather, you'd need to show that it makes people more selfish than socialism does, or that it produces worse outcomes overall than socialism. So you'd need to present empirical evidence.

Comment author: Geuss 15 September 2017 10:04:41AM *  0 points [-]

I don't want to be too harsh, but this is the apotheosis of obtuse Oxford-style analytic philosophy. You can make whatever conceptual distinctions you like, but you should really be starting from the historical and sociological reality of capitalism. The case for why capitalism generates selfish motivations is not obscure.

Capitalism is a set of property relations that emerged in early modern England because its weak feudal aristocracy had no centralised apparatus by which to extract value from peasants, and so turned to renting out land to the unusually large number of tenants in the country - generating (a) competitive market pressures to maximise productivity; (b) landless peasants that were suddenly deprived of the means of subsistence farming. The peasants were forced to sell their labour - the labour they had heretofore been performing for themselves, on their own terms - to the emerging class of agrarian capitalists, who extracted a portion of their product to re-invest in their holdings.

The capitalists have to maximise productivity through technological innovation, wage repression, and so forth, or they are run into the ground and bankrupted by market competition. There is, as such, a set of self-interested motivations which one acquires if one wants to be a successful and lasting capitalist. It is a condition of the role within the structure of the market. The worker has to, on the other hand, sell themselves to those with a monopoly of the means of subsistence or face starvation. To do so they have to acquire the skills, comportment, and obedience to be attractive to the capitalist class. Again, one has to acquire certain self-interested motivations as a condition of the role within the market. Finally, capitalism requires a sufficiently self-interested culture such that it can sustain compounding capital accumulation through the sale of ever-greater commodities.

Comment author: Halstead 15 September 2017 12:16:21PM *  5 points [-]

Thanks for the comment.

  1. In the first paragraph you suggest that I have argued that it is not the case that capitalism generates selfish motives. I have not argued for this or against it. I have just argued that capitalism is not defined as a system in which people are selfish. This is entirely separate to the question of whether capitalism causally produces more selfishness than socialism. If you accept that it is an empirical question whether capitalism or socialism causally produces more selfishness, then you agree with my argument since you don't need to do empirical work to find out whether a conceptual claim is true.

General point: conceptual distinctions are very useful. It is difficult to have debates about things when the concepts are not clearly defined. And conceptual distinctions are not made, nor were they in my piece, to the exclusion of history and sociology. Indeed, they make historical and sociological arguments better by making them more precise.

  2. I'm not sure whether capitalism causally produces more selfishness than socialism. In 'Why Not Capitalism?' and in the blog I linked to, Brennan argues that market societies actually produce more virtuous people than socialist societies, though I haven't looked into this very deeply. Studies of traditional (hunter-gatherer and other) societies show that people in market societies are nicer in ultimatum games and that kind of thing, though I'm not sure how much weight to put on this.

You have presented an abstract argument that capitalism creates incentives for selfishness. But we really want to know whether capitalism creates greater incentives for virtuous conduct than socialism does. To answer that question, we'd need to compare how people are in actual socialist societies with how people are in actual capitalist ones. E.g. you could look at how nice people are in capitalist countries and compare that to how nice people are/were in Venezuela now, in China during the Cultural Revolution, in communist Russia or Cambodia, etc.

Comment author: Geuss 14 September 2017 07:10:26PM *  0 points [-]

I meant socialist in broad terms. One can be a socialist and not think much of a project for change based on the 'voluntaristic' exchange of money without demolishing capitalist social relations. It pushes back to your philosophy of society, and whether you think capitalism operates as a systemic whole to generate those things which you think need to be changed.

I'm not sure that you're not building a strawman, either. The defining problem of anti-capitalist thought since the failure of the Bolshevik Revolution to spread to Germany has been why it isn't obvious. And it's worth saying that no one wants to abolish private property altogether, just the historically specific property relations that emerged in the early modern period and made it such that peasants could not earn a living except by selling themselves to those who owned the means of production. Even more ambitious forms of social anarchism allow for usufruct.

Comment author: Halstead 15 September 2017 08:17:28AM 0 points [-]

Sorry, I should've been clearer. I meant the socialist argument as used in criticisms of EAs by Leiter and Srinivasan etc. They talk as though EAs are missing something painfully obvious by not advocating for the destruction of extensive private property ownership. This shows a lack of epistemic awareness.
