Chris Leong

Organiser @ AI Safety Australia and NZ
6157 karma · Joined Nov 2015 · Sydney NSW, Australia

Bio

Currently doing local AI safety Movement Building in Australia and NZ.

Comments

Very interesting report. It provided a lot of visibility into how these funders think.

Geographically, India may be a standout opportunity for getting talent to do research/direct work in a counterfactually cheap way.


I would have embraced that more in the past, but I'm a lot more skeptical of it these days. For many tasks, EA wants the best talent available, and top talent is best placed to access overseas opportunities, so its price is largely independent of current location.

In terms of age prioritization, it is suboptimal that EA focuses more on outreach to university students or young professionals as opposed to mid-career people with greater expertise and experience.

I agree that there is a lot of alpha in reaching out to mid-career professionals - if you are able to achieve this successfully. This work is a lot more challenging: mid-career professionals are often harder to reach, less able to pivot and have less time available to upskill. Fewer people are able to do this kind of outreach because these professionals may take current students or recent grads less seriously.

The report writes: "80,000 Hours in particular, has been singled out for its work not having a clear positive impact, despite the enormous sums they have spent."

As an on-the-ground community builder, I'm skeptical of your take: so many people I've talked to became interested in EA or AI safety through 80,000 Hours. Regarding "misleading the community that it is cause neutral while being almost exclusively focused on AI risk", I was more concerned about this in the past, but I feel that they're handling it pretty well these days. I'd be quite interested to hear if you have specific ways you think they should change. Regarding the claim that it causes early-career EAs to choose suboptimal early career jobs that mess up their CVs, I'd love to hear more detail on this if you can share it. Has anyone written up a post on this?

On funding Rethink Priorities specifically – a view floated is that there is value in RP as a check on OP, and yet if OP doesn't think RP is worth funding beyond a certain point, it's hard to gainsay them

Rethink Priorities seems to be a fairly big org - especially taking into account that it operates on the meta-level - so I understand why OpenPhil might be reluctant to spend even more money there. I wouldn't take that as a strong signal.

In practice, it's unclear if the community building actually translates to money/talent moved, as opposed to getting people into EA socially.

As someone on the ground, I can say that community building has translated into talent moved at the very least. Money moved is less visible to me: a lot of people aren't shouting about their pledges from the rooftops, and donations are very heavy-tailed. I'd love to hear some thoughtful discussion of how this could be better measured.

What would interest OP might be a project about getting in more people who are doing the best things in GHW (e.g. Buddy Shah, Eirik Mofoss).

Very curious about what impressed Open Philanthropy about these people.

At least from an AI risk perspective, it's not at all clear to me that this would improve things as it would lead to a further dispersion of this knowledge outward.

For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of general hypotheses such as the hinge of history hypothesis, the vulnerable world hypothesis and the technological completion conjecture; the development of conceptual tools such as the concepts of existential risk, of a crucial consideration and of differential progress; and the analysis of the impacts and capabilities of future technologies such as artificial general intelligence, whole brain emulation and atomically precise manufacturing, but considered at a higher level of abstraction than is generally the case in cause prioritization research.

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

Disagree, because the post in question is sitting at -36 karma.

Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.

That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I'd have to have a concrete example in front of me to figure out how to balance these views.

I didn't vote, but maybe people are worried about the EA forum being filled up with a bunch of logistics questions?

This post makes some interesting points about EA's approach to philanthropy, but I certainly have mixed feelings about "please support at least one charity run by someone in the global south that just so happens to be my own".

It might be more useful if you explained why the arguments weren't persuasive to you.


So my position is that most of your arguments are worth some "debate points", but that mitigating potential x-risks outweighs them.

Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world

In the past, I've made the mistake of thinking that the Overton window was narrower than it actually was. So even though such laws may not seem viable now, my strong expectation is that this will quickly change. At the same time, my intuition is that if we're going to pursue the liability route, strict liability at least has the advantage of keeping the developer focused on preventing the issue from occurring rather than on taking actions to avoid legal responsibility - under strict liability, those actions won't help, so developers have to focus on preventing the issue in the first place.

I know that I wrote above:

In any case my main worry about strong liability laws is that we may create a situation where AI developers end up thinking primarily about dodging liability more than actually making the AI safe.

and that this is in tension with what I'm writing now. I guess upon reflection I now feel that my concerns about strong liability laws only apply to strong fault-based liability laws, not to strict liability laws, so in retrospect I wouldn't have included this sentence.

Regarding your discussion in point 1 - apologies for not addressing this in my initial reply - I just don't buy that courts being able to handle chainsaws or medical or actuarial evidence means they're equipped to handle transformative AI, given how fast the situation is changing and how disputed many of the key questions are. The stakes also make me reluctant to place an unnecessary bet on the capabilities of the courts: even if there were a 90% chance that the courts would be fine, I'd prefer to avoid the 10% chance that they aren't.

I find the idea of a reverse burden of proof interesting, but tbh I wasn't really persuaded by the rest of your arguments. I guess the easiest way to respond to most of them would be "Sure, but human extinction kind of outweighs that"; you'd then reraise that these risks are abstract/speculative, and I'd respond that sorting risks into two boxes, speculative and non-speculative, hinders clear thinking more than it helps. Anyway, that's just how I see the argument playing out.

~~In any case my main worry about strong liability laws is that we may create a situation where AI developers end up thinking primarily about dodging liability more than actually making the AI safe.~~

I have very mixed views on Richard Hanania.

On the one hand, some of his past views were pretty terrible (even though I believe you've exaggerated the extent of these views).

On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories, and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. Because he sits quite far to the right, he's able to make these points far more credibly than a moderate or liberal ever could.

So I guess I feel he's kind of a necessary voice, at least at this particular point in time when there are few alternatives.
