Comment author: sdspikes 01 March 2017 01:50:13AM 1 point

As a Stanford CS (BS/MS '10) grad who took AI/machine learning courses in college from Andrew Ng, worked at Udacity with Sebastian Thrun, etc., I have mostly been unimpressed by non-technical folks trying to convince me that AI safety (not caused by explicit human malfeasance) is a credible issue.

Maybe I have "easily corrected, false beliefs" but the people I've talked to at MIRI and CFAR have been pretty unconvincing to me, as was the book Superintelligence.

My perception is that MIRI has focused in on an extremely specific kind of AI that to me seems unlikely to do much harm unless someone is recklessly playing with fire (or intentionally trying to set one). I'll grant that that's possible, but that's a human problem, not an AI problem, and requires a human solution.

You don't try to prevent nuclear disaster by making friendly nuclear missiles, you try to keep them out of the hands of nefarious or careless agents or provide disincentives for building them in the first place.

But maybe you do make friendly nuclear power plants? Not sure if this analogy worked out for me or not.

Comment author: capybaralet 01 March 2017 11:34:16PM 2 points

I'm also very interested in hearing you elaborate a bit.

I guess you are arguing that AIS is a social rather than a technical problem. Personally, I think there are aspects of both, but that the social/coordination side is much more significant.

RE: "MIRI has focused in on an extremely specific kind of AI", I disagree. I think MIRI has aimed to study AGI in as much generality as possible and mostly succeeded in that (although I'm less optimistic than them that results which apply to idealized agents will carry over and produce meaningful insights in real-world resource-limited agents). But I'm also curious what you think MIRIs research is focusing on vs. ignoring.

I also would not equate technical AIS with MIRI's research.

Is it necessary to be convinced? I think the argument for AIS as a priority is strong so long as the concerns have some validity to them, and cannot be dismissed out of hand.

Comment author: capybaralet 27 January 2017 02:31:26AM 3 points

(cross posted on facebook):

I was thinking of applying... it's a question I'm quite interested in. The deadline is the same as ICML, though!

I had an idea I will mention here: funding pools:

1. You and your friends whose values and judgement you trust and who all have small-scale funding requests join together.
2. A potential donor evaluates one funding opportunity at random, and funds all or none of them on the basis of that evaluation.
3. You have now increased the ratio of funding / evaluation available to a potential donor by a factor of #projects.
4. There is an incentive for you to NOT include people in your pool if you think their proposal is quite inferior to yours... however, you might be incentivized to include somewhat inferior proposals in order to reach a threshold where the combined funding opportunity is large enough to attract more potential donors.
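A minimal sketch of the pooling mechanism described above (all project names, amounts, and the quality-bar evaluation are hypothetical, just to make the incentive structure concrete):

```python
import random

def donor_decision(pool, evaluate, rng=random.Random(0)):
    """Evaluate ONE randomly chosen proposal; fund the whole pool or nothing."""
    sampled = rng.choice(pool)
    if evaluate(sampled):
        return sum(p["request"] for p in pool)  # fund every proposal in the pool
    return 0  # fund none of them

# Hypothetical pool of small-scale funding requests.
pool = [
    {"name": "project_a", "request": 5_000, "quality": 0.9},
    {"name": "project_b", "request": 3_000, "quality": 0.8},
    {"name": "project_c", "request": 4_000, "quality": 0.85},
]

# Stub evaluation: the donor funds if the sampled proposal clears a quality bar.
funded = donor_decision(pool, evaluate=lambda p: p["quality"] >= 0.7)

# One evaluation now gates len(pool) projects' worth of funding, which is the
# "factor of #projects" improvement in the funding / evaluation ratio.
print(funded)  # 12000 here, since every proposal clears the bar
```

The random sampling is what creates the incentive in point 4: since any member's proposal might be the one evaluated, admitting a clearly inferior proposal lowers the whole pool's chance of being funded.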


EA essay contest for <18s

I am planning to sponsor an Effective Altruism essay contest for people <18 years old. See this document for details and prompts. The inspiration is the Ayn Rand Institute essay contest. The goal is to motivate young people to learn about and get involved with EA, thus...
Comment author: capybaralet 17 January 2017 05:38:13AM 0 points

I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g. the title), and seemed somewhat polemic. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of honesty as some unquestionable ideal. I think doing so as a consequentialist, without a very strong justification, itself smacks of disingenuousness and seems motivated by the same phony and manipulative attitude towards PR that Sarah's article attacks.

What would be more interesting to me would be a thoughtful survey of potential EA perspectives on honesty, but an honest treatment of the subject does seem to be risky from a PR standpoint. And it's not clear that it would bring enough benefit to justify the cost. We probably will all just end up agreeing with common moral intuitions.

Comment author: MichaelDickens (EA Profile) 24 December 2016 08:38:08PM 13 points

I'm glad that you write this sort of thing. 80K is one of the few organizations that I see writing "why you should donate to us" articles. I believe more organizations should do this because they generally know more about their own accomplishments than anyone else. I wouldn't take an organization's arguments as seriously as a third party's because they're necessarily biased toward themselves, but they can still provide a useful service to potential donors by presenting the strongest arguments in favor of donating to them.

I have written before about why I'm not convinced that I should donate to 80K (see the comments on the linked comment thread). I have essentially the same concerns that I did then. Since you're giving more elaborate arguments than before, I can respond in more detail about why I'm still not convinced.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

Below I give a few of the concerns I have with 80K, and what could convince me to donate.

Highly indirect impact. A lot of 80K's claims to impact rely on long chains such that your actual effect is pretty indirect. For example, the claim that an IASPC is worth £7500 via getting people to sign the GWWC pledge relies on assuming:

  • These people would not have signed the pledge without 80K.
  • These people would not have done something similarly or more valuable otherwise.
  • The GWWC pledge is as valuable as GWWC claims it is.

I haven't seen compelling evidence that any of these is true, and they all have to be true for 80K to have the impact here that it claims to have.
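To make the compounding concrete: since the claimed value is only realized if every link in the chain holds, even moderately uncertain assumptions discount it multiplicatively. The probabilities below are entirely hypothetical, chosen only to illustrate the arithmetic:

```python
claimed_value = 7500  # 80K's claimed value of an IASPC, in pounds

# Hypothetical probabilities that each assumption in the chain holds.
p_counterfactual_pledge = 0.7    # would not have pledged without 80K
p_no_better_alternative = 0.7    # would not have done something as valuable anyway
p_pledge_value_as_claimed = 0.7  # GWWC pledge is worth what GWWC claims

# Treating the assumptions as independent, expected value shrinks multiplicatively.
expected_value = (claimed_value
                  * p_counterfactual_pledge
                  * p_no_better_alternative
                  * p_pledge_value_as_claimed)
print(round(expected_value))  # 2572 -- roughly a third of the headline figure
```

Three 70%-confident assumptions already cut the figure by about two thirds; more pessimistic (or correlated) assumptions would cut it further.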

Problems with counterfactuals.

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats. The only way these can both be good is if 80K is moving people toward their comparative advantages, which is a much harder claim to justify. I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

Questionable marketing tactics.

This is somewhat less of a concern, but I might as well bring it up here. 80K uses very aggressive marketing tactics (invasive browser popups, repeated asks to sign up for things, frequent emails) that I find abrasive. 80K justifies these by claiming that it increases sign-ups, and I'm sure it does, but these metrics don't account for the cost of turning people off.

By comparison, GiveWell does essentially no marketing but has still attracted more attention than any other EA organization, and it has among the best reputations of any EA org. It attracts donors by producing great content rather than by cajoling people to subscribe to its newsletter. For most orgs I don't believe this would work because most orgs just aren't capable of producing valuable content, but like GiveWell, 80K produces plenty of good content.

Perhaps 80K's current marketing tactics are a good idea on balance, but we have no way of knowing. 80K's metrics can only observe the value its marketing produces and not the value it destroys. It may be possible to get better evidence on this; I haven't really thought about it.

Past vs. future impact.

80K has made a bunch of claims about its historical impact. I'm skeptical that the impact has been as big as 80K claims, but I'm also skeptical that the impact will continue to be as big. For example, 80K claims substantial credit for about a half dozen new organizations. Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past? 80K's writeup claims that it will but doesn't give much justification. Similarly, 80K claims that a lot of benefit comes from its articles, but writing new articles has diminishing utility as you start to cover the most important ideas.

In summary, to persuade me to donate to 80K, you need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and it has higher leverage than any other meta org. More importantly, you need to find strong evidence that 80K actually has the impact it claims to have, or better demonstrate that the existing evidence is sufficient.

Comment author: capybaralet 07 January 2017 02:05:59AM 1 point

Do you have any info on how reliable self-reports are wrt counterfactuals about career changes and GWWC pledging?

I can imagine that people would not be very good at predicting that accurately.

Comment author: capybaralet 05 January 2017 07:38:16PM 3 points

People are motivated both by (1) competition and status, and (2) cooperation and identifying with the successes of a group. I think we should aim to harness both of these forms of motivation.

Comment author: John_Maxwell_IV 23 December 2016 12:00:32PM 1 point

9) seems pretty compelling to me. To use some analogies from the business world: it wouldn't make sense for a company to hire lots of people before it had a business model figured out, or run a big marketing campaign while its product was still being developed. Sometimes it feels to me like EA is doing those things. (But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people.)

Comment author: capybaralet 05 January 2017 06:13:17PM 0 points

"But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people."

Care to elaborate (or link to something)?

Comment author: capybaralet 05 January 2017 08:06:26AM 0 points

"This is something the EA community has done well at, although we have tended to focus on talent that current EA organization might wish to hire. It may make sense for us to focus on developing intellectual talent as well."

Definitely!! Are there any EA essay contests or similar? More generally, I've been wondering recently if there are many efforts to spread EA among people under the age of majority. The only example I know of is SPARC.

Comment author: capybaralet 04 January 2017 10:07:39PM * 2 points

EDIT: I forgot to link to the Google group:!forum/david-kruegers-80k-people

Hi! David Krueger (from Montreal and 80k) here. The advice others have given so far is pretty good.

My #1 piece of advice is: start doing research ASAP!
Start acting like a grad student while you are still an undergrad. This is almost a requirement to get into a top program afterwards. Find a supervisor and ideally try to publish a paper at a good venue before you graduate.

Stats is probably a bit more relevant than CS, but some of both is good. I definitely recommend learning (some) programming. In particular, focus on machine learning (esp. Deep Learning and Reinforcement Learning). Do projects, build a portfolio, and solicit feedback.

If you haven't already, please check out these groups I created for people wanting to get into AI Safety. There are a lot of resources to get you started in the Google Group, and I will be adding more in the near future. You can also contact me directly (see for contact info) and we can chat.

Comment author: cflexman 28 September 2016 10:30:29PM 11 points

I don't think the issue is that we don't have any people willing to be radicals and lose credibility. I think the issue is that radicals on a certain issue tend to also mar the reputations of their more level-headed counterparts. Weak men are superweapons, and groups like PETA and Greenpeace and Westboro Baptist Church seem to have attached lasting stigma to their causes because people's pattern-matching minds associate their entire movement with the worst example.

Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don't tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.

Comment author: capybaralet 04 January 2017 07:49:42PM 0 points

Sure, but the examples you gave are more about tactics than content. What I mean is that there are a lot of people who are downplaying their level of concern about Xrisk in order to not turn off people who don't appreciate the issue. I think that can be a good tactic, but it also risks reducing the sense of urgency people have about AI-Xrisk, and can also lead to incorrect strategic conclusions, which could even be disastrous when they are informing crucial policy decisions.

TBC, I'm not saying we are lacking in radicals ATM, the level is probably about right. I just don't think that everyone should be moderating their stance in order to maximize their credibility with the (currently ignorant, but increasingly less so) ML research community.
