Comment author: brianwang712 04 March 2018 04:58:46PM 7 points

I'd like to hear more about your estimate that another non-human civilization may appear on Earth on the order of 100 million years from now; is this mostly based on the fact that our civilization took ~100 million years to spring up from the first primates?

If there is a high probability that another non-human species with moral value would reach our level of technological capacity on Earth within ~100 million years conditional on our own extinction, then this could lessen the expected "badness" of x-risks in general, and could also have implications for how we prioritize reducing some x-risks over others (e.g., risks from superintelligent AI vs. risks from pandemics). The magnitudes of these implications remain unclear to me, though.

Comment author: brianwang712 20 July 2017 01:56:28AM 6 points

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be (and correct me if I'm wrong) that in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started off with people genuinely concerned about the suffering of factory-farmed animals. The same goes for the movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.

We reach the same conclusion – that the future is likely to be good – but I think for slightly different reasons.

Comment author: Owen_Cotton-Barratt 30 October 2016 05:19:29PM 9 points

Although there are dangers in having norms under which dedicated EAs are less likely to pledge, because then not-pledging might become higher status in the community.

Comment author: brianwang712 30 October 2016 06:42:36PM * 1 point

This is a good point; however, I would also like to point out that it could be the case that a majority of "dedicated donors" don't end up taking the pledge without this becoming a norm. The norm instead could be "each individual should think through for themselves, given their own unique situation, whether or not taking the pledge is likely to be valuable," which could lead to a situation where "dedicated donors" tend not to take the pledge, but not necessarily to a situation where, if you are a "dedicated donor," you are expected not to take the pledge.

(I am highly uncertain as to whether this is how norms work; that is to say, whether a norm connecting a group of people to a certain action could fail to develop even when a majority of that group takes that action.)

Comment author: Carl_Shulman 06 October 2016 05:03:53AM 0 points

I don't think I'm following your argument. Are you saying that we should care about the absolute size of the difference in effort in the two areas rather than proportions?

Research has diminishing returns because of low-hanging fruit. Going from $1 MM to $10 MM makes a much bigger difference than going from $10,001 MM to $10,010 MM.

Comment author: brianwang712 06 October 2016 06:52:12AM 0 points

I guess the argument is that, if it takes (say) the same amount of effort/resources to speed up AI safety research by 1000% and to slow down general AI research by 1% via spreading norms of safety/caution, then plausibly the latter is more valuable due to the sheer volume of general AI research being done (with the assumption that slowing down general AI research is a good thing, which as you pointed out in your original point (1) may not be the case). The tradeoff might be more like going from $1 million to $10 million in safety research, vs. going from $10 billion to $9.9 billion in general research.

This does seem to assume that absolute size in difference is more important than proportions. I'm not sure how to think about whether or not this is the case.
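A minimal sketch of why the answer turns on the returns model, assuming (purely for illustration) that the value of research grows with the logarithm of spending — an assumption neither commenter makes explicit:

```python
import math

def research_value(spending_millions):
    # Toy diminishing-returns model: value grows with the log of spending.
    # The logarithmic form is an illustrative assumption, not something
    # either commenter specifies.
    return math.log(spending_millions)

# Safety research: $1 MM -> $10 MM (a 10x proportional increase)
safety_gain = research_value(10) - research_value(1)              # ~ +2.30

# General AI research: $10,000 MM -> $9,900 MM (a 1% proportional decrease)
general_change = research_value(9_900) - research_value(10_000)   # ~ -0.01

print(f"safety gain:             {safety_gain:+.3f}")
print(f"general research change: {general_change:+.3f}")
```

Under a logarithmic model, proportional changes are what matter, so the tenfold increase in safety spending dominates the 1% decrease in general research spending; under a roughly linear model, the absolute sizes would matter instead and the comparison could flip.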

Comment author: Carl_Shulman 04 October 2016 06:17:58PM * 5 points

Maybe not a great analogy

Indeed.

1) It's not obvious how the speed of AI progress affects global risk and sustainability. E.g. getting to powerful AI faster through more AI research would reduce the time spent exposed to various state risks. It would also reduce the amount of computing hardware available at the time, which could make for a less disruptive transition. If you think the odds are 60:40 that one direction is better than the other (with equal magnitudes), then you get a fifth of the impact (a rough sketch of this arithmetic appears after point 4 below).

2) AI research overall is huge relative to work focused particularly on AI safety, by orders of magnitude. So the marginal impact of a change in research effort is much greater for the latter. Combined with the first point, it looks at least hundreds of times more effective to address safety than to speed up or slow down software progress with given resources, and not at all worthwhile to risk the former for the latter.

3) AI researchers aren't some kind of ogres or tyrants: they are smart scientists and engineers with a lot of awareness of uninformed and destructive technophobia (consider GMO crops, opposition to using gene drives to wipe out malaria, opposition to the Industrial Revolution, panics about books/film/cars/video games, anti-vaccine movements, anti-nuclear sentiment). And they are very aware of the large benefits their work could produce. There actually is a very foolish technophobic response to AI that doesn't care about the immense benefits, one that it is important not to be confused with (and one that, understandably, people might conflate with the position of someone like Bostrom, even though he has written a lot about the great benefits of AI and holds that its expected value is good).

4) If you're that worried about the dangers of offending people (some of whose families may have fled the Soviet Union, and other places with gulags), don't make needlessly offensive analogies about them. It is AI researchers who will solve the problems of AI safety.
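A minimal sketch of the expected-value arithmetic behind the "fifth of the impact" figure in point 1), taking the 60:40 odds and equal magnitudes at face value:

```python
p_better = 0.60   # probability that pushing in your chosen direction helps
p_worse  = 0.40   # probability that it hurts instead
magnitude = 1.0   # equal-sized effect either way (normalized)

# Expected impact is the probability-weighted sum of the two outcomes.
expected_impact = p_better * magnitude - p_worse * magnitude
print(expected_impact)  # 0.2, i.e. one fifth of the full magnitude
```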

Comment author: brianwang712 06 October 2016 02:54:08AM 0 points

Regarding your point (2), couldn't this count as an argument for trying to slow down AI research? I.e., given that the amount of general AI research done is so enormous, even changing community norms around safety a little bit could result in dramatically narrowing the gap between the rates of general AI research and AI safety research?

Comment author: brianwang712 13 August 2016 04:56:24PM 1 point

Quick feedback forms for workshops/discussion groups would be nice; I think most of the workshops I attended didn't offer any opportunity for feedback, and I would have had comments to give.

Comment author: brianwang712 13 August 2016 04:53:37PM 6 points

A guarantee that all the talks/panels will be recorded.

The booklet this year stated that "almost" all the talks would be recorded, which left me worried that, if I missed a talk, I wouldn't be able to watch it in the future (this might just be me). I probably would have skipped more talks and talked to more people if I had a guarantee that all the talks would be recorded.

Also, it would be nice to have a set schedule that didn't change so much during the conference. The online schedule was pretty convenient and was (for the most part) up to date, but people using the physical booklet may have been confused.

Comment author: brianwang712 18 May 2016 07:33:09PM * 0 points

I think that adopting your first resolution, together with the assumption made by commenters that being a child with malaria is a net negative experience, can rescue some of the value of AMF.

Say in situation 1, a family has a child, Afiya, who eventually gets malaria and dies, and thus has a net negative experience. Because of this, the family decides to have a second child, Brian, who does not get malaria and lives a full and healthy life. In situation 2, where AMF is taken to have made a contribution, a family has just one child, Afiya, who is prevented from getting malaria and lives a full and healthy life. The family does not decide to have a second child.

Taking into account only the utility of the people directly affected by malaria, and not that of the family, it seems to me that situation 1 is worse than situation 2 by an amount equivalent to Afiya's net negative experience of getting malaria; the reverse of this could be said to be AMF's contribution. So while this is not the same as 35 QALYs, it still seems like a net positive.

EDIT: Note of clarification: the above is in particular a response to the statement, "Because AMF hardly changes humans' lifespans, it does not have a clear beneficial effect for humans," which was presented as a problem for GiveWell in adopting the first resolution.
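A toy bit of bookkeeping for the comparison above, using placeholder utility numbers (both figures are hypothetical; only their signs and relative sizes matter):

```python
u_full_life = 35.0      # utility of a full, healthy life (placeholder)
u_malaria_death = -5.0  # net-negative experience of getting malaria and dying
                        # (the commenters' assumption; magnitude is made up)

# Situation 1: Afiya gets malaria and dies; the family then has Brian.
situation_1 = u_malaria_death + u_full_life   # Afiya + Brian

# Situation 2 (with AMF): Afiya is protected and lives a full life; no Brian.
situation_2 = u_full_life                     # Afiya only

# Situation 2 comes out ahead by exactly the magnitude of Afiya's
# net-negative experience, as argued above.
print(situation_2 - situation_1)  # 5.0
```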

Comment author: brianwang712 11 October 2015 03:08:17AM 1 point

I have a question for those who donate to meta-charities like Charity Science or REG to take advantage of their multiplier effect (these charities typically raise ~$5-10 per dollar of expenditure). Do you donate directly towards the operating expenses of these meta-charities? For example, REG's donations page has a default split in which 80% of your donation goes towards object-level charities (and other meta-charities), while 20% goes towards REG's operating expenses, which include the fundraising efforts that the multiplier presumably comes from. It seems to me that in order to get the best multiplier for your donation, you would donate 100% towards operating expenses, since any dollar not spent on operating expenses wouldn't have any multiplier. Is this right?
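A back-of-the-envelope sketch of the comparison being asked about, assuming a hypothetical 5x multiplier on each dollar of operating expenses (a figure picked from the ~$5-10 range above) and that the multiplier holds at the margin:

```python
donation = 100.0   # total donation in dollars
multiplier = 5.0   # hypothetical: dollars raised per dollar of operating expenses

def money_moved(share_to_operations):
    # Dollars going straight to object-level charities, plus dollars
    # raised by the operations-funded fundraising.
    direct = donation * (1 - share_to_operations)
    leveraged = donation * share_to_operations * multiplier
    return direct + leveraged

print(money_moved(0.20))  # default 80/20 split:      80 + 100 = 180
print(money_moved(1.00))  # everything to operations:  0 + 500 = 500
```

On these assumptions the all-to-operations option moves more money, which matches the intuition in the question; whether it holds in practice depends on whether the marginal dollar of operating expenses really keeps earning that multiplier.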
