Comment author: capybaralet 27 January 2017 02:31:26AM 3 points [-]

(cross-posted on Facebook):

I was thinking of applying... it's a question I'm quite interested in. The deadline is the same as ICML's, though!

I had an idea I'll mention here: funding pools.

1. You and your friends, whose values and judgement you trust and who all have small-scale funding requests, join together.
2. A potential donor evaluates one funding opportunity at random, and funds all or none of them on the basis of that evaluation.
3. You have now increased the funding-to-evaluation ratio available to a potential donor by a factor of the number of projects (see the sketch below).
4. There is an incentive for you NOT to include people in your pool if you think their proposals are clearly inferior to yours; however, you might be incentivized to include somewhat inferior proposals in order to reach a threshold where the combined funding opportunity is large enough to attract more potential donors.
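Here's a minimal sketch of the mechanism in points 2 and 3, purely to illustrate the arithmetic; the project names, amounts, and the trivial evaluation function are all hypothetical placeholders:

```python
import random

# Hypothetical sketch of the "funding pool" mechanism described above.
# The donor evaluates ONE randomly chosen proposal and funds the whole pool
# (or nothing) on the basis of that single evaluation, so one evaluation's
# worth of effort now moves funding for every project in the pool.

pool = [
    {"name": "project_a", "request": 4000},  # placeholder projects and amounts
    {"name": "project_b", "request": 6000},
    {"name": "project_c", "request": 5000},
]

def donor_decision(pool, evaluate):
    """Evaluate one random proposal; fund all or none based on that evaluation."""
    sampled = random.choice(pool)
    fund_everything = evaluate(sampled)
    total_funded = sum(p["request"] for p in pool) if fund_everything else 0
    return sampled["name"], total_funded

# A placeholder evaluation that always approves; a real donor would apply judgement here.
evaluated, funded = donor_decision(pool, evaluate=lambda proposal: True)
print(f"Evaluated {evaluated}; funded {funded} across {len(pool)} projects")
print(f"Funding / evaluation ratio improved by a factor of {len(pool)}")
```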


EA essay contest for <18s

I am planning to sponsor an Effective Altruism essay contest for people <18 years old. See this document for details and prompts. The inspiration is the Ayn Rand Institute essay contest. The goal is to motivate young people to learn about and get involved with EA, thus...
Comment author: capybaralet 17 January 2017 05:38:13AM 0 points [-]

I was overall a bit negative on Sarah's post because it demanded a bit too much attention (e.g. the title) and seemed somewhat polemical. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of honesty as some unquestionable ideal. I think doing so as a consequentialist, without a very strong justification, itself smacks of disingenuousness and seems motivated by the same phony and manipulative attitude towards PR that Sarah's article attacks.

What I would find more interesting is a thoughtful survey of potential EA perspectives on honesty, but an honest treatment of the subject does seem risky from a PR standpoint. And it's not clear that it would bring enough benefit to justify the cost. We would probably all just end up agreeing with common moral intuitions.

Comment author: MichaelDickens  (EA Profile) 24 December 2016 08:38:08PM 13 points [-]

I'm glad that you write this sort of thing. 80K is one of the few organizations that I see writing "why you should donate to us" articles. I believe more organizations should do this because they generally know more about their own accomplishments than anyone else. I wouldn't take an organization's arguments as seriously as a third party's because they're necessarily biased toward themselves, but they can still provide a useful service to potential donors by presenting the strongest arguments in favor of donating to them.

I have written before about why I'm not convinced that I should donate to 80K (see the comments on the linked comment thread). I have essentially the same concerns that I did then. Since you're giving more elaborate arguments than before, I can respond in more detail about why I'm still not convinced.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

Below I give a few of the concerns I have with 80K, and what could convince me to donate.

Highly indirect impact. A lot of 80K's claims to impact rely on long causal chains, such that the actual effect is pretty indirect. For example, the claim that an IASPC (impact-adjusted significant plan change) is worth £7500 via getting people to sign the GWWC pledge relies on assuming:

  • These people would not have signed the pledge without 80K.
  • These people would not have done something similarly or more valuable otherwise.
  • The GWWC pledge is as valuable as GWWC claims it is.

I haven't seen compelling evidence that any of these is true, and they all have to be true for 80K to have the impact here that it claims to have.
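To make the conjunction concrete, here is a hypothetical back-of-the-envelope discount; the probabilities are placeholders made up for illustration, not estimates from 80K or GWWC:

```python
# Hypothetical sketch: discounting the claimed per-IASPC value by the probability
# that each link in the chain of assumptions actually holds. All probabilities
# below are illustrative placeholders, not real estimates.

claimed_value_gbp = 7500           # claimed value of one IASPC via GWWC pledges

p_counterfactual_signing = 0.5     # pledge would not have been signed without 80K
p_no_better_alternative = 0.6      # person would not have done something as/more valuable anyway
p_pledge_worth_as_claimed = 0.5    # the GWWC pledge is as valuable as GWWC claims

discount = (p_counterfactual_signing
            * p_no_better_alternative
            * p_pledge_worth_as_claimed)
adjusted_value = claimed_value_gbp * discount

print(f"Discount factor: {discount:.2f}")         # 0.15 with these placeholders
print(f"Adjusted value: £{adjusted_value:,.0f}")  # ~£1,125 rather than £7,500
```

Even moderately optimistic guesses for each link shrink the headline figure substantially, which is why evidence for each assumption matters.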

Problems with counterfactuals.

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats. The only way these can both be good is if 80K is moving people toward their comparative advantages, which is a much harder claim to justify. I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

Questionable marketing tactics.

This is somewhat less of a concern, but I might as well bring it up here. 80K uses very aggressive marketing tactics (invasive browser popups, repeated asks to sign up for things, frequent emails) that I find abrasive. 80K justifies these by claiming that it increases sign-ups, and I'm sure it does, but these metrics don't account for the cost of turning people off.

By comparison, GiveWell does essentially no marketing but has still attracted more attention than any other EA organization, and it has among the best reputations of any EA org. It attracts donors by producing great content rather than by cajoling people to subscribe to its newsletter. For most orgs I don't believe this would work because most orgs just aren't capable of producing valuable content, but like GiveWell, 80K produces plenty of good content.

Perhaps 80K's current marketing tactics are a good idea on balance, but we have no way of knowing. 80K's metrics can only observe the value its marketing produces and not the value it destroys. It may be possible to get better evidence on this; I haven't really thought about it.

Past vs. future impact.

80K has made a bunch of claims about its historical impact. I'm skeptical that the impact has been as big as 80K claims, but I'm also skeptical that the impact will continue to be as big. For example, 80K claims substantial credit for about a half dozen new organizations. Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past? 80K's writeup claims that it will but doesn't give much justification. Similarly, 80K claims that a lot of benefit comes from its articles, but writing new articles has diminishing utility as you start to cover the most important ideas.


In summary, to persuade me to donate to 80K, you need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and it has higher leverage than any other meta org. More importantly, you need to find strong evidence that 80K actually has the impact it claims to have, or better demonstrate that the existing evidence is sufficient.

Comment author: capybaralet 07 January 2017 02:05:59AM 1 point [-]

Do you have any info on how reliable self-reports are wrt counterfactuals about career changes and GWWC pledging?

I can imagine that people would not be very good at predicting that accurately.

Comment author: capybaralet 05 January 2017 07:38:16PM 3 points [-]

People are motivated both by (1) competition and status, and (2) cooperation and identifying with the successes of a group. I think we should aim to harness both of these forms of motivation.

Comment author: John_Maxwell_IV 23 December 2016 12:00:32PM 1 point [-]

9) seems pretty compelling to me. To use some analogies from the business world: it wouldn't make sense for a company to hire lots of people before it had a business model figured out, or run a big marketing campaign while its product was still being developed. Sometimes it feels to me like EA is doing those things. (But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people.)

Comment author: capybaralet 05 January 2017 06:13:17PM 0 points [-]

"But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people."

Care to elaborate (or link to something)?

Comment author: capybaralet 05 January 2017 08:06:26AM 0 points [-]

"This is something the EA community has done well at, although we have tended to focus on talent that current EA organization might wish to hire. It may make sense for us to focus on developing intellectual talent as well."

Definitely!! Are there any EA essay contests or similar? More generally, I've been wondering recently if there are many efforts to spread EA among people under the age of majority. The only example I know of is SPARC.

Comment author: capybaralet 04 January 2017 10:07:39PM *  2 points [-]

EDIT: I forgot to link to the Google group: https://groups.google.com/forum/#!forum/david-kruegers-80k-people

Hi! David Krueger (from Montreal and 80k) here. The advice others have given so far is pretty good.

My #1 piece of advice is: start doing research ASAP!
Start acting like a grad student while you are still an undergrad. This is almost a requirement to get into a top program afterwards. Find a supervisor and ideally try to publish a paper at a good venue before you graduate.

Stats is probably a bit more relevant than CS, but some of both is good. I definitely recommend learning (some) programming. In particular, focus on machine learning (esp. Deep Learning and Reinforcement Learning). Do projects, build a portfolio, and solicit feedback.

If you haven't already, please check out these groups I created for people wanting to get into AI Safety. There are a lot of resources to get you started in the Google Group, and I will be adding more in the near future. You can also contact me directly (see https://mila.umontreal.ca/en/person/david-scott-krueger/ for contact info) and we can chat.

Comment author: cflexman 28 September 2016 10:30:29PM 11 points [-]

I don't think the issue is that we don't have any people willing to be radicals and lose credibility. I think the issue is that radicals on a certain issue tend to also mar the reputations of their more level-headed counterparts. Weak men are superweapons, and groups like PETA and Greenpeace and Westboro Baptist Church seem to have attached lasting stigma to their causes because people's pattern-matching minds associate their entire movement with the worst example.

Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don't tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.

Comment author: capybaralet 04 January 2017 07:49:42PM 0 points [-]

Sure, but the examples you gave are more about tactics than content. What I mean is that there are a lot of people who are downplaying their level of concern about Xrisk in order not to turn off people who don't appreciate the issue. I think that can be a good tactic, but it also risks reducing the sense of urgency people have about AI-Xrisk, and it can lead to incorrect strategic conclusions, which could even be disastrous when they are informing crucial policy decisions.

TBC, I'm not saying we are lacking in radicals ATM; the level is probably about right. I just don't think that everyone should be moderating their stance in order to maximize their credibility with the (currently ignorant, but increasingly less so) ML research community.

Comment author: Paul_Christiano 20 December 2016 03:20:18AM *  2 points [-]

(effectively) prematurely settling on a utility function whose goodness depends heavily on the nature of qualia

This feels extremely unlikely; I don't think we have plausible paths to obtaining a non-negligibly good outcome without retaining the ability to effectively deliberate about e.g. the nature of qualia. I also suspect that we will be able to solve the control problem, and if we can't then it will be because of failure modes that can't be avoided by settling on a utility function. Of course "can't see any way it can happen" is not the same as "am justifiably confident it won't happen," but I think in this case it's enough to get us to pretty extreme odds.

More precisely, I'd give 100:1 against: (a) we will fail to solve the control problem in a satisfying way, (b) we will fall back to a solution which depends on our current understanding of qualia, (c) the resulting outcome will be non-negligibly good according to our view about qualia at the time that we build AI, and (d) it will be good because we hold that view about qualia.

(My real beliefs might be higher than 1% just based on "I haven't thought about it very long" and peer disagreement. But I think it's more likely than not that I would accept a bet at 100:1 odds after deliberation, even given that reasonable people disagree.)

(By "non-negligibly good" I mean that we would be willing to make some material sacrifice to improve its probability compared to a barren universe, perhaps $1000 per 1% increase. By "because" I mean that the outcome would have been non-negligibly worse according to that view if we had not held it.)

I'm not sure if there is any way to turn the disagreement into a bet. Perhaps picking an arbiter and looking at their views in a decade (e.g. Toby, Carl Shulman, Wei Dai)? This would obviously involve less extreme odds.

Probably more interesting than betting is resolving the disagreement. This seems to be a slightly persistent disagreement between me and Toby; I have never managed to really understand his position, but we haven't talked about it much. I'm curious about what kind of solutions you see as plausible; it sounds like your view is based on a more detailed picture rather than an "anything might happen" view.

Comment author: capybaralet 04 January 2017 07:32:22PM *  2 points [-]

I think I was too terse; let me explain my model a bit more.

I think there's a decent chance (OTTMH, let's say 10%) that without any deliberate effort we make an AI which wipes out humanity but is anyhow more ethically valuable than us (although not more than something which we deliberately design to be ethically valuable). This would happen, e.g., if it were the default outcome (e.g. if it turns out to be the case that intelligence ~ ethical value). This may actually be the most likely path to victory.*

There's also some chance that all we need to do to ensure that AI has (some) ethical value (e.g. due to having qualia) is X. In that case, we might increase our chance of doing X by understanding qualia a bit better.

Finally, my point was that I can easily imagine a scenario in which our alternatives are:

1. Build an AI with a 50% chance of being aligned and a 50% chance of just being an AI (where P(AI has property X) = 90% if we understand qualia better, and 10% otherwise).
2. Allow our competitors to build an AI with ~0% chance of being ethically valuable.

So then we obviously prefer option 1, and if we understand qualia better, option 1 becomes even better.
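A quick, entirely hypothetical expected-value sketch of that comparison; the values assigned to an aligned AI and to an unaligned-but-X-having AI are placeholders, not claims:

```python
# Hypothetical expected-value sketch of the two options above.
# V_ALIGNED and V_UNALIGNED_WITH_X are illustrative placeholders for the value
# of an aligned AI and of an unaligned AI that still has property X (e.g. qualia).

V_ALIGNED = 1.0            # normalize the value of an aligned AI to 1
V_UNALIGNED_WITH_X = 0.3   # placeholder: some ethical value, but far less than aligned

def option1_expected_value(p_aligned: float, p_x_given_unaligned: float) -> float:
    """Expected value of building the AI ourselves (option 1)."""
    return (p_aligned * V_ALIGNED
            + (1 - p_aligned) * p_x_given_unaligned * V_UNALIGNED_WITH_X)

without_qualia_work = option1_expected_value(0.5, 0.10)  # P(X) = 10% if we don't understand qualia
with_qualia_work = option1_expected_value(0.5, 0.90)     # P(X) = 90% if we do
competitor_ai = 0.0                                      # option 2: ~0% chance of ethical value

print(without_qualia_work)  # 0.515
print(with_qualia_work)     # 0.635
print(competitor_ai)        # 0.0
```

Under any non-negative placeholder values, option 1 dominates option 2, and understanding qualia raises option 1's expected value further.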

* I notice as I type this that this may have some strange consequences RE high-level strategy; e.g. maybe it's better to just make something intelligent ASAP and hope that it has ethical value, because this reduces its X-risk, and we might not be able to do much to change the distribution of the ethical value the AI we create produces anyhow. I tend to think that we should aim to be very confident that the AI we build is going to have lots of ethical value, but this may only make sense if we have a pretty good chance of succeeding.
