If it's supposed to be introductory then it should probably focus on the simpler reasons why animal advocacy looks important. Plus, my understanding is that most animal advocates don't focus as much on the far future, so even if the author does, it probably makes sense for the article to use the arguments that convinced most people rather than the arguments that convinced him personally.
OK. Personally I would prefer the convention that everybody at the EA forum gives the reasons they actually believe in themselves. I think that is more in line with the EA credo of evidence and reason, and with intellectual honesty.
This is a nice article. Thanks for writing it.
Regarding: "Consideration of the far future is the strongest factor in favor of prioritizing animal advocacy for many long-time EAs, including myself."
How do you see animal advocacy as a cause area stacking up against work on existential risks?
It is a bit surprising that such a small part of the article is explicitly concerned with the far future, if consideration of the far future is the strongest factor in favour of prioritising animal advocacy. In general, the amount of space one spends on a consideration should probably be at least roughly proportionate to its significance.
Transhumanism seems to have a decently large Russian presence. Any ideas why that might be?
Yes I have noticed that too. My impression/hunch is that parts of the Russian intellectual class seem to be interested in ideas seen to be far out (and have been so for a long time). That might be one explanatory factor.
Thanks. I agree that we should do cross-cutting work that addresses several or all catastrophic risks. At the same time, the catastrophic risks are so dissimilar (e.g. asteroids, AI, and synthetic biology have little in common) that many of the more effective interventions will be risk-specific.
It is also worth noting that prevention work in general seems more risk-specific than recovery work (response work might be somewhere in between). Also, note that for some risks (e.g. AI, asteroids), there is a risk that there would be no chance of recovery after a disaster.
Another relevant distinction is that between object-level interventions, which reduce X-risk directly, and meta-level/capacity-building interventions (e.g. setting up new X-risk institutions, raising awareness about X-risk among policy-makers), which reduce X-risk because we anticipate that they will enable us to do object-level work more effectively later on. Capacity-building is more often cross-cutting, and is plausibly quite important relative to object-level work at this point in time.
Primarily much lower property and living costs, meaning people can live there for less (= more donations, longer runways for startups, lower expenses for researchers, etc) while still retaining high quality of living and being around interesting people. Hubs in higher cost countries would likely be valuable as well, but they cater to a different group of people, would require higher initial investment for a comparable property, and generally have stricter visa requirements.
I don't think this counts as a sufficient reason. You need to list all of the major pro and con considerations and weigh them up against each other in order to show that a low-cost EA hub is better than one in a developed country.
Evan's comments below are interesting, though.
I'm glad you're doing work on this - it's a potentially very valuable project. I think we could go about it in a different way though. There's a risk of analysis paralysis in trying to find the optimal location in advance so that we can commit to something as big as buying and converting property. Instead we could just find the people who are likely to move somewhere cheaper in the next few months (I'm one of those people) and see if we can do it together. We might also want to drop the framing of it as 'A new EA hub' at this stage because that makes the task seem big, important, and intimidating. Let's just experiment with some locations and see how it goes. We'll learn something about living abroad and we'll be able to observe existing coworking and coliving setups to see what works.
I think the "analysis paralysis" objection is exaggerated. Even if you run it as an experiment, you need to put a lot of thought into where you decide to run that experiment. It's unlikely that you'll test very many places, so you'd better do some thinking in advance.
I'd be interested in hearing arguments for why a hub in a low-cost country would be better than a new hub in a developed country with more potential EAs (e.g. Australia, the East Coast).
Happy to discontinue posting about research position openings, if these are not of interest, or the EA forum is no longer an appropriate venue. Thanks!
In my view, they very much are of interest.
Good thought, Stefan, thanks for identifying the meta-issue here. I think I didn't notice the possibly provocative nature of the term simply because I learned it in the context of acquiring expertise on marketing, and took it in as just the specific technical term used there. Probably a bit of a curse of knowledge for me on not identifying the possible pejorative connotations this term might inspire, and something to watch out for.
Back from the meta-level, for the object-level issue, perhaps a term like "using breaking news stories" instead of newsjacking would serve. I'll retitle the post.
I think that you want to "sell", and that this is why you come up with these eye-catching terms. Selling or using eye-catching terms is not necessarily a problem, but you need to be cautious about which eye-catching terms you use, and in which contexts.
Owen, I hear you about the term!
I use it because it's actually the technical term for this sort of activity, but other EAs have found the term somewhat distasteful as well. There's a trade-off to be made between using a technical term that some non-experts find distasteful and using a more friendly term. That's one reason why CT scans were not called "intense X-ray" machines, so we certainly have historical precedents for this :-)
Do you think using a more friendly, non-technical term would be better for this activity? If so, what suggestions do you or others have?
Gleb, I think that you should think a bit more about exactly which terms to use, and for what reason, in general. Cf. the previous discussion about "softcore EA". Provocative metaphors are generally to be avoided in sensitive areas.