
Chris Leong

Organiser @ AI Safety Australia and NZ
6134 karma · Joined Nov 2015 · Sydney NSW, Australia

Bio


Currently doing local AI safety Movement Building in Australia and NZ.

Comments (1014)

For anyone wondering about the definition of macrostrategy, the EA forum defines it as follows:

Macrostrategy is the study of how present-day actions may influence the long-term future of humanity.[1]

Macrostrategy as a field of research was pioneered by Nick Bostrom, and it is a core focus area of the Future of Humanity Institute.[2] Some authors distinguish between "foundational" and "applied" global priorities research.[3] On this distinction, macrostrategy may be regarded as closely related to the former. It is concerned with the assessment of general hypotheses such as the hinge of history hypothesis, the vulnerable world hypothesis and the technological completion conjecture; the development of conceptual tools such as the concepts of existential risk, of a crucial consideration and of differential progress; and the analysis of the impacts and capabilities of future technologies such as artificial general intelligence, whole brain emulation and atomically precise manufacturing, but considered at a higher level of abstraction than is generally the case in cause prioritization research.

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

Disagree because it is at -36.

Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.

That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I'd have to have a concrete example in front of me to figure out how to balance these views.

I didn't vote, but maybe people are worried about the EA forum being filled up with a bunch of logistics questions?

This post makes some interesting points about EA's approach to philanthropy, but I certainly have mixed feelings on "please support at least one charity run by someone in the global south that just so happens to be my own".

It might be more useful if you explained why the arguments weren't persuasive to you.


So my position is that most of your arguments are worth some "debate points", but that mitigating potential x-risks outweighs them.

Our interest is in a system of liability that can meet AI safety goals and at the same time have a good chance of success in the real world.

In the past, I've personally made the mistake of assuming the Overton Window was narrower than it actually was. So even though such laws may not seem viable now, my strong expectation is that this will change quickly. At the same time, my intuition is that if we're going to pursue the liability route, strict liability at least has the advantage of keeping developers focused on preventing the issue from occurring rather than on taking actions to avoid legal responsibility: under strict liability, those actions won't help, so their only option is to prevent the issue in the first place.

I know that I wrote above:

In any case my main worry about strong liability laws is that we may create a situation where AI developers end up thinking primarily about dodging liability more than actually making the AI safe.

and that this is in tension with what I'm writing now. On reflection, I now feel that my concerns about strong liability laws only apply to strong fault-based liability laws, not to strict liability laws, so in retrospect I wouldn't have included that sentence.

Regarding your discussion in point 1 - apologies for not addressing this in my initial reply - I just don't buy that courts being able to handle chainsaws, or medical or actuarial evidence, means they're equipped to handle transformative AI, given how fast the situation is changing and how disputed many of the key questions are. The stakes involved also make me reluctant to place an unnecessary bet on the capabilities of the courts: even if there were a 90% chance that the courts would be fine, I'd prefer to avoid the 10% chance that they aren't.

I find the idea of a reverse burden of proof interesting, but to be honest I wasn't really persuaded by the rest of your arguments. The easiest way to respond to most of them would be "Sure, but human extinction kind of outweighs that". You'd then re-raise that these risks are abstract/speculative, and I'd respond that sorting risks into two boxes, speculative and non-speculative, hinders clear thinking more than it helps. Anyway, that's just how I see the argument playing out.

~~In any case my main worry about strong liability laws is that we may create a situation where AI developers end up thinking primarily about dodging liability more than actually making the AI safe.~~

I have very mixed views on Richard Hanania.

On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).

On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories, and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more credibly than a moderate or liberal ever could.

So I guess I feel he's kind of a necessary voice, at least at this particular point in time when there are few alternatives.

Yeah, it's possible I'm taking a narrow view of what a professional organisation is. I don't have a good sense of the landscape here.

I guess I'm a bit skeptical of this proposal.

I don't think we'd have much credibility as a professional organisation. We could require people to do the intro and perhaps even the advanced fellowship, but that's hardly rigorous training.

I'm worried that trying to market ourselves as a professional organisation might backfire if people end up seeing us as just a faux one.

I suspect that this kind of association might be more viable for specific cause areas than for EA as a whole, but there might not be enough people except in a couple of countries.
