mhpage

312 karma · Joined April 2015 · Posts: 3 · Comments: 66
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others parted ways with Sam in early 2018. I'll leave it to them to share more if/when they want to, but I think it's fair to say they left at least in part due to concerns about Sam's business ethics. She's had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam's actions were used to tarnish Tara.

[Disclosure: Tara is my wife]

Related (and perhaps of interest to EAs looking for rhetorical hooks): there are a bunch of constitutions (though not the US Constitution) that recognize the rights of future generations. I believe they're primarily modeled after South Africa's constitution (see http://www.fdsd.org/ideas/the-south-african-constitution-gives-people-the-right-to-sustainable-development/ & https://en.wikipedia.org/wiki/Constitution_of_South_Africa).

I haven't read about this case, but some context: This has been an issue in environmental cases for a while. It can manifest in different ways, including "standing," i.e., who has the ability to bring lawsuits and what types of injuries are actionable. If you google some combination of "environmental law," "standing," and "future generations," you'll find references to this literature, e.g.: https://scholarship.law.uc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1272&context=fac_pubs

Last I checked, this was the key case in which a court (from the Philippines) actually recognized a right of future generations: http://heinonline.org/HOL/LandingPage?handle=hein.journals/gintenlr6&div=29&id=&page=

Also, people often list parties as plaintiffs for PR reasons, even though there's basically no chance that a court would recognize that the named party has legal standing.

This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.

Variant on this idea: I'd encourage a high status person and a low status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.

Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they're a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).

I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.

My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.

My advice would be:

  1. When assessing someone's talent, focus on the content of what they're saying/writing, not the general feeling you get from them.

  2. When discussing how talented someone is, always explain the basis of your view (e.g., I read a paper they wrote; or Bob told me).

Thanks for doing these analyses. I find them very interesting.

Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry it reflects a more-fundamental misunderstanding within the EA community:

  1. I don't think AI is a "cause area."
  2. I don't think there will be a non-AI far future.

Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make progress on, like climate change or pandemic risk. But that's not all of what EAs are doing (or should be doing) with respect to AI.

This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.

So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there's a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I'd love to see more EAs foraging those carrots.

Max's point can be generalized to mean that the "talent" vs. "funding" constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn't just put them to work.

"...and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline."

This is my concern (which is not to say it's Open Phil's responsibility to solve it).

Hey Josh,

As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.

I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn’t fall squarely within any team’s jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.

Regarding measuring CEA activities, internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people deeply engaged. As we recently did with EAG and EAGx, we’ll periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.

Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.

Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we’re still in the process of evaluating current projects. I don’t think the EA Concepts project is to the contrary: that was simply an output of the research team, which it put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion might be the result of using "project" in different ways). Whether we invest much more in that project going forward will depend on the reception and use of this minimum version.

Regards, Michael
