Comment author: Askell 27 March 2017 08:40:48AM 3 points

I suspect that the distinction here between "philosophical analysis" and "concrete research" is actually less bright than it appears. I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision-relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work. Another example that's close to my heart is value of information work. There are open problems in how to identify high and low value of information, when to explore vs. exploit, and so on. I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means alone. So my inclination is to take this on a case-by-case basis. We see radical leaps forward sometimes generated by theoretical work and sometimes by novel empirical discoveries. It seems odd not to draw on both of these highly successful methods.

What, then, about pure a priori work like mathematics and conceptual work? I think I agree with Owen that this kind of work is important for building solid foundations. But I'd also go further in saying that if you find good, novel foundational work to do, then it can often bear fruit later. E.g. early work in economics and game theory was of this sort, and yet a lot of concepts from game theory are now very useful for analyzing real-world situations. It would have been a shame if this work had been dismissed early on as not decision-relevant.

Comment author: RyanCarey 27 March 2017 07:07:09PM 0 points

I suspect that the distinction here between "philosophical analysis" and "concrete research" is actually less bright than it appears. I can think of theoretical work that is consistent with doing what you call (i)-(iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision-relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work... I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means alone... So my inclination is to take this on a case-by-case basis... What, then, about pure a priori work like mathematics and conceptual work?

I don't think I'm arguing what you think I'm arguing. To be clear, I wouldn't claim a bright dividing line, nor would I claim that more philosophical work or pure mathematics has no use at all. Nor would I claim that we should avoid theory altogether. I agree that there are cases where theoretical work could be useful. For example, there is AI safety, and there may be some important crossover work to be done in ethics and in understanding human experience and human values. But that doesn't mean we need to throw up our hands and say that everything must be taken on a case-by-case basis, if in fact we have good reasons to say we're overall overinvesting in one kind of research rather than another. The aim has to be to do some overall prioritization.

Another example that's close to my heart is value of information work. There are open problems in how to identify high and low value of information, when to explore vs. exploit, and so on... If you find good, novel foundational work to do, then it can often bear fruit later. E.g. early work in economics and game theory was of this sort, and yet a lot of concepts from game theory are now very useful for analyzing real-world situations. It would have been a shame if this work had been dismissed early on as not decision-relevant.

I agree that thinking about exploration vs. exploitation tradeoffs is both interesting and useful. However, the Gittins index was discovered in 1979, and much of the payoff of that discovery came decades afterward. We have good reasons to apply pretty high discount rates, such as i) the returns on shaping research communities that are growing at high double-digit percentages, and ii) a double-digit chance of human-level AI in the next 15 years.

There's very little empirical research going into important concrete issues such as how to stage useful policy interventions for risky emerging technologies (Allan Dafoe and Mathias Mass notwithstanding), how to build better consensus among decision-makers, how to get people to start more good projects, how to recruit better, and so on, even though many important decisions by EAs will depend on them. It's tempting to say that many EAs have wholly forgotten what ambitious business plans and literature reviews on future-facing technologies are even supposed to look like! I would love to write that off as hyperbole, but I haven't seen any recent examples. And it seems critical that theory should be feeding into this kind of concrete work.

I'd be interested to know if people have counterconsiderations at the level of which of these kinds of work should be the higher priority.

Comment author: AGB 26 March 2017 10:04:11AM 10 points

I'm sympathetic to this view, though I think the EA funds have some EA-Ventures-like properties; charities in each of the fund areas presumably can pitch themselves to the people running the funds if they so choose.

One difference that has been pointed out to me in the past is that for (e.g.) EA Ventures you have to put a lot of up-front work into your actual proposal. That's time-consuming and costly if you don't get anything out of it. That's somewhat different to handing some trustworthy EA an unconditional income and saying 'I trust your commitment and your judgment, go and do whatever seems most worthwhile for 6/12/24 months'. It's plausible to me that the latter involves less work on both donor and recipient side for some (small) set of potential recipients.

With that all said, better communication of credible proposals still feels like the relatively higher priority to me.

In response to comment by AGB on Concrete project lists
Comment author: RyanCarey 26 March 2017 06:38:52PM 0 points

Agreed!

Comment author: Peter_Hurford 26 March 2017 04:07:24AM 7 points

I think it could be possible to set up a general EA Fund for this sort of thing, similar to the one that exists for political activism. That could fill in a missing step in our quest to turn money into talent.

How long do you think someone would have the basic income for before they could either "prove" their project and attract actual donations / fundraising on its merits, or go back to a day job? How much funding do you think this would take?

Comment author: RyanCarey 26 March 2017 06:33:01AM 4 points

You could do an unconditional basic income, but why would you start with that when we haven't even created a facility for people to fund credible proposals yet? It seems better to reboot EA Ventures or Impact Certificates first (given that the EA community is a bit bigger now, and that some of the reasons for the previous failure were circumstantial).

Comment author: RyanCarey 25 March 2017 11:53:25PM * 1 point

For an example of the view that the EA movement should, on the margin, be doing less philosophical analysis, see Nick Beckstead's research advice from back in 2014:

I think most highly abstract philosophical research is unlikely to justify making different decisions. For example, I am skeptical of the “EA upside” of most philosophical work on decision theory, anthropics, normative ethics, disagreement, epistemology, the Fermi paradox, and animal consciousness—despite the fact that I’ve done a decent amount of work in the first few categories. If someone was going to do work in these areas, I’d probably be most interested in seeing a very thorough review of the Fermi Paradox, and second most interested in a detailed critique of arguments for the overwhelming importance of the very long-term future.

I’m also skeptical of developing frameworks for making comparisons across causes right now. Rather than, e.g., trying to come up with some way of trying to trade off IQ increases per person with GDP per capita increases, I would favor learning more about how we could increase IQ and how we could increase GDP per capita. There are some exceptions to this; e.g., I see how someone could make a detailed argument that, from a long-run perspective, human interests are much more instrumentally important than animal interests. But, for the most part, I think it makes more sense to get information about promising causes now, and do this kind of analysis later. Likewise, rather than developing frameworks for choosing between career areas, I’d like to see people just gather information about career paths that look particularly promising at the moment.

Other things being equal, I strongly prefer research that involves less guesswork. This is less because I’m on board with the stuff Holden Karnofsky has said about expected value calculations—though I agree with much of it—and more because I believe we’re in the early days of effective altruism research, and most of our work will be valuable in service of future work. It is therefore important that we do our research in a way that makes it possible for others to build on it later. So far, my experience has been that it’s really hard to build on guesswork. I have much less objection to analysis that involves guesswork if I can be confident that the parts of the analysis that involve guesswork factor in the opinions of the people who are most likely to be informed on the issues.

Comment author: redmoonsoaring 18 March 2017 05:38:04PM 13 points

While I see some value in detailing commonly-held positions like this post does, and I think this post is well-written, I want to flag my concern that it seems like a great example of a lot of effort going into creating content that nobody really disagrees with. This sort of armchair, heavily qualified writing doesn't seem to me like a very cost-effective use of EA resources, and I worry that we do a lot of it, partly because it's easy to do and gets a lot of positive social reinforcement, to a much greater degree than bold, empirical writing tends to get.

Comment author: RyanCarey 25 March 2017 10:53:14PM * 3 points

I doubly agree here. The title "Hard-to-reverse decisions destroy option value" is hard to disagree with because it is pretty tautological.

Over the last couple of years, I've found it to be a widely held view among researchers interested in the long-run future that the EA movement should, on the margin, be doing less philosophical analysis. It seems to me that it would be beneficial for more of that effort to go, on the margin, into i) writing proposals for concrete projects, ii) reviewing empirical literature, and iii) analyzing technological capabilities and fundamental limitations.

Philosophical analysis, such as much of EA Concepts and these characterizations of how to think about counterfactuals and optionality, is less useful than (i)-(iii) because it does not very strongly change how we will try to affect the world. Suppose I want to write some EA project proposals. In such cases, I am generally not very interested in citing these generalist philosophical pieces. Rather, I usually want to build from a concrete scientific/empirical understanding of related domains and similar past projects. Moreover, I think "customers" like me who are trying to propose concrete work are usually not asking for these kinds of philosophical analysis and are more interested in (i)-(iii).

Comment author: RyanCarey 25 March 2017 07:59:13PM * 9 points

It would be good to have a longer list of important research areas, so that we can all walk around with a cache of such topics in case we run into EAs working in nearby areas. Then it can become common knowledge that it is useful for such people to perform literature reviews or to settle into those fields. Personally, I'm interested in the domain of risky emerging technologies, so I can list some related areas:

Comment author: MichaelDello 14 March 2017 11:18:38AM 0 points

Thanks for this, John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co. don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment author: RyanCarey 14 March 2017 10:07:53PM * 3 points

People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value under likely moral systems. There are a bunch of arguments for this. One can argue from improvements to welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons. One way of thinking about this, if you are a symmetric utilitarian, is that we don't have much reason to think either pain or pleasure is more energy-efficient than the other (see "Are pain and pleasure equally energy efficient?": https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html). Since a singleton's values would be at least somewhat correlated with the relevant values, it should produce much more pleasure than pain, so the future should be strongly net positive. I think that, to the extent that we can research this question, we can say with considerable confidence that for usual value systems the future has positive expectation.

The reason I think people tend to shy away from public debates on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between their position and some very destructive positions, which would be very harmful.

Comment author: the_jaded_one 14 March 2017 07:23:06PM * 1 point

you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

A post which simply quotes a news source could be criticized as not containing anything original and therefore not worth posting. Someone has already complained that this post is superfluous since a discussion already exists on Facebook.

Actually, if I had to criticize my own post, I would say its weakness is that it lacks in-depth analysis and research. Unfortunately, in-depth analysis takes a lot of time...

Comment author: RyanCarey 14 March 2017 08:34:08PM 0 points

Posting news together with analysis, arguments, and a few opinions is great. If you find yourself posting news and polemics together, you should think really hard about whether they should be split.

I don't think this post is too bad.

Comment author: ThomasSittler 11 March 2017 10:51:07AM 0 points

Hey Ryan, I'm following up about the idea of using a Medium blog. Medium is beautiful, and allows commenting on particular portions of the document, which is the main advantage of Google Docs commenting. However, you need to create an account to comment, and it seems like that will be too much trouble for most people. Also, it seems like there isn't a simple way to embed Medium into Squarespace (https://support.squarespace.com/hc/en-us/articles/205814558-Connecting-Medium-with-Squarespace). What are your thoughts?

Comment author: RyanCarey 11 March 2017 07:41:41PM 0 points

I guess you'd get more shares, views, and hence comments on a Medium blog, even accounting for the small inconvenience of signup. Traffic is almost all through sharing nowadays; e.g. the EA Forum gets 70% of its referrals from Facebook, >80% if you include other social media, and >90% if you include other blogs.

The proposal would not require embedding anything inside a Squarespace site. You can just put the blog on a subdomain with the right logos and a link back to the main page, as in the recent EA example: https://blog.ought.com/

Comment author: ThomasSittler 11 March 2017 10:40:23AM 4 points

I think I've only ever seen cause-neutrality used to mean cause-impartiality.

Comment author: RyanCarey 11 March 2017 07:28:11PM * 4 points

I think one aim here is to stop people from conflating other things with cause-impartiality, which people occasionally do, consciously or subconsciously, and which does seem unhelpful.
